Research Outputs | WoS | Scopus | TR-Dizin | PubMed
Permanent URI for this community: https://hdl.handle.net/20.500.14719/1741
3 results
Multipose face detection using Zernike moment invariants (Zernike moment değişmezleri ile pozdan bağımsız yüz tespiti) (2012)
Karaali, Ali; Erdem, Cigdem Eroglu; Ulukaya, Sezer (all Bahçeşehir Üniversitesi, Istanbul, Turkey)
We propose a new, efficient technique for localizing faces in arbitrary images. The technique first segments the image into skin-colored blobs, then computes scale-, translation-, and rotation-invariant moment-based features to learn a statistical model of face and non-face regions. Its advantage over state-of-the-art face detection methods is its ability to detect non-frontal faces in a person-independent way. Experimental results on the CVL database show that the proposed algorithm achieves higher true-positive rates than the well-known Viola-Jones face detector. © 2012 IEEE.

Region growing on Frangi vesselness values in 3-D CTA data (3-boyutlu BT anjiyografi verilerinde Frangi damarlık yöntemi üzerine bölge büyütme yaklaşımı) (2013)
Oksuz, Ilkay; Ünay, Devrim; Kadipaşaoğlu, Kâmuran A. (all Bahçeşehir Üniversitesi, Istanbul, Turkey)
In cardiac diagnostics, the shape and curvature of the coronary arteries are essential. Consequently, one of the most important requirements for Computer-Aided Diagnosis (CAD) systems is automated segmentation of the vasculature. In this paper, we propose a new hybrid algorithm that segments the coronary arterial tree in CTA images by merging two methodologies, namely region growing and the Frangi vesselness approach. The algorithm first runs region growing on Frangi vesselness values and subsequently optimizes the results over several threshold values. Comparison of the present results with the best results of existing segmentation algorithms shows that the proposed approach outperforms its predecessors. The diagnostic accuracy of the algorithm will next be validated on the segmentation of coronary arteries from real CT data. © 2013 IEEE.

A watershed and active contours based method for dendritic spine segmentation in 2-photon microscopy images (2-foton mikroskopi görüntülerindeki dendritik dikenlerin bölütlenmesi için watershed ve etkin çevritlere dayalı bir yöntem) (2013)
Erdil, Ertunç (Faculty of Engineering and Natural Sciences, Sabancı Üniversitesi, Tuzla, Turkey); Argunşah, Ali Özgür (Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal); Ünay, Devrim (Bahçeşehir Üniversitesi, Istanbul, Turkey); Çetin, Müjdat (Faculty of Engineering and Natural Sciences, Sabancı Üniversitesi, Tuzla, Turkey)
Analyzing the morphological and volumetric properties of dendritic spines in 2-photon microscopy images has attracted the interest of neuroscientists in recent years. Developing robust and reliable tools for such automatic analysis depends on segmentation quality. In this paper, we propose a new algorithm for dendritic spine segmentation based on watershed and active contour methods. The method first segments the dendritic spine area coarsely with the watershed algorithm; these results are then refined with a region-based active contour approach. We compare our results, and those of existing methods in the literature, against manual delineations by a domain expert. Experimental results demonstrate that the proposed method is more accurate than existing algorithms for dendritic spine segmentation. © 2013 IEEE.
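The Zernike moment invariants used in the first entry can be illustrated with a minimal, pure-Python sketch: pixels are mapped onto the unit disk and projected onto a Zernike basis function, and the magnitude |Z_nm| is unchanged by rotation. The toy image and the order (n, m) = (2, 2) below are assumptions for illustration; the paper's skin-blob segmentation and the translation/scale normalization it provides are omitted.

```python
import cmath
import math

def radial_poly(n, m, rho):
    # Zernike radial polynomial R_{n,|m|}(rho)
    m = abs(m)
    total = 0.0
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s) * math.factorial(n - s) / (
            math.factorial(s)
            * math.factorial((n + m) // 2 - s)
            * math.factorial((n - m) // 2 - s))
        total += c * rho ** (n - 2 * s)
    return total

def zernike_moment(img, n, m):
    # Map the image grid onto the unit disk and accumulate the moment
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    scale = max(cx, cy) or 1.0
    z = 0.0 + 0.0j
    for y in range(h):
        for x in range(w):
            dx, dy = (x - cx) / scale, (y - cy) / scale
            rho = math.hypot(dx, dy)
            if rho > 1.0:
                continue  # only pixels inside the unit disk contribute
            theta = math.atan2(dy, dx)
            z += img[y][x] * radial_poly(n, m, rho) * cmath.exp(-1j * m * theta)
    return (n + 1) / math.pi * z

# |Z_nm| is rotation invariant: a 90-degree rotation permutes pixels
# at equal radii, so the magnitude is unchanged
img = [[0, 1, 2], [0, 1, 0], [3, 0, 0]]
rot90 = [list(row) for row in zip(*img[::-1])]
a = abs(zernike_moment(img, 2, 2))
b = abs(zernike_moment(rot90, 2, 2))
```

Only the magnitude is rotation invariant; the complex phase of Z_nm rotates with the image, which is why moment-magnitude features suit pose-independent detection.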
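The hybrid pipeline in the second entry pairs the Frangi vesselness filter with region growing. A minimal sketch of the region-growing half, on a hand-made toy vesselness map (the Hessian-based Frangi computation itself is omitted, and the map, seed, and threshold are illustrative assumptions): grow breadth-first from a seed, accepting 4-connected neighbours whose vesselness meets the threshold.

```python
from collections import deque

def region_grow(vesselness, seed, threshold):
    # BFS region growing over a 2-D vesselness map: a pixel joins the
    # region if it is reachable from the seed and meets the threshold
    h, w = len(vesselness), len(vesselness[0])
    seen = {seed}
    region = set()
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if vesselness[y][x] < threshold:
            continue  # rejected pixels do not propagate the growth
        region.add((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen:
                seen.add((ny, nx))
                q.append((ny, nx))
    return region

# toy vesselness map with one bright curvilinear structure
v = [[0.1, 0.9, 0.1, 0.1],
     [0.1, 0.8, 0.7, 0.1],
     [0.1, 0.1, 0.6, 0.1],
     [0.1, 0.1, 0.9, 0.1]]
region = region_grow(v, (0, 1), 0.5)
```

Sweeping the threshold, as the paper's optimization step suggests, trades leakage into background against gaps in the recovered vessel tree.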
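The region-based active contour refinement in the third entry minimizes an energy of the Chan-Vese kind: each region is scored by how far its pixels deviate from that region's mean intensity. The sketch below shows only this fitting term on an assumed toy image; the watershed initialization and the curve-evolution machinery are not shown.

```python
def region_energy(img, mask):
    # Chan-Vese style fitting term: squared deviation of each pixel
    # from the mean intensity of its region (inside vs. outside)
    inside, outside = [], []
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            (inside if mask[y][x] else outside).append(v)
    c1 = sum(inside) / len(inside)
    c2 = sum(outside) / len(outside)
    return (sum((v - c1) ** 2 for v in inside)
            + sum((v - c2) ** 2 for v in outside))

# toy image with one bright "spine" blob; a mask aligned with the blob
# scores lower energy than the same mask shifted off it
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
aligned = [[0, 0, 0, 0],
           [0, 1, 1, 0],
           [0, 1, 1, 0],
           [0, 0, 0, 0]]
shifted = [[0, 0, 0, 0],
           [0, 0, 1, 1],
           [0, 0, 1, 1],
           [0, 0, 0, 0]]
```

Because the energy drops as the contour separates homogeneous regions, gradient-style updates pull a coarse watershed mask toward the true spine boundary.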
