Research Outputs | WoS | Scopus | TR-Dizin | PubMed
Permanent URI for this community: https://hdl.handle.net/20.500.14719/1741
Search Results
7 results
Publication (Metadata only): Face recognition-based IMDB plug-in for movies / Filmler için yüz tanıma tabanlı IMDB eklentisi (2011)
Authors: Ulukaya, Sezer (Bahçeşehir Üniversitesi, Istanbul, Turkey); Kayim, Güney (Bahçeşehir Üniversitesi, Istanbul, Turkey); Ekenel, Hazim Kemal (Boğaziçi Üniversitesi, Bebek, Turkey)
In this paper, we present an initial study of an IMDB plug-in for cast identification in movies. In the system, training face images are collected using Google image search. While watching a movie, the user clicks on the face of the person about whom he or she wants information. The system first tries to detect near-frontal faces; if it cannot find any, it runs a profile face detector. The detected face is then tracked backwards and forwards within the shot to obtain a face sequence. Matching is performed between the face sequence extracted from the movie and the face image sets collected from the web. IMDB page links of the three closest persons resulting from the matching process are then presented to the user. In this study, we address three points of interest: matching between a face sequence and face image sets, the effect of automatically collected noisy web training images on performance, and the performance effect of using prior information from the cast list to restrict classification to a limited number of classes. Experiments have shown that matching between a face sequence and face image sets is a difficult problem. © 2011 IEEE.
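The abstract above describes matching a tracked face sequence against per-identity image sets collected from the web and returning the three closest matches. A minimal sketch of that set-to-set matching step, assuming faces have already been reduced to feature vectors (the paper's actual distance measure and aggregation rule are not specified in the abstract):

```python
import numpy as np

def rank_identities(sequence_feats, identity_sets, top_k=3):
    """Rank identities by the minimum distance between any face in the
    tracked sequence and any face image in each identity's web-collected
    set. Illustrative only; feature extraction is assumed done elsewhere."""
    scores = {}
    for name, set_feats in identity_sets.items():
        # Pairwise Euclidean distances: (seq_len, 1, d) vs (1, set_len, d).
        d = np.linalg.norm(
            sequence_feats[:, None, :] - set_feats[None, :, :], axis=-1
        )
        scores[name] = d.min()  # best single pair decides the set distance
    # Return the top_k closest identities, nearest first.
    return sorted(scores, key=scores.get)[:top_k]
```

Taking the minimum pairwise distance makes the match robust to noisy images inside a web-collected set, at the cost of being sensitive to a single accidental near-duplicate; a robust aggregate (e.g. a low percentile of the distances) is a common alternative.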
Publication (Metadata only): A comparison of geometrical facial features for affect recognition / Duygu tanıma için geometrik yüz özniteliklerinin karşılaştırılması (2011)
Authors: Ulukaya, Sezer (Bahçeşehir Üniversitesi, Istanbul, Turkey); Erdem, Cigdem Eroglu (Bahçeşehir Üniversitesi, Istanbul, Turkey)
In this work, we compare two geometric feature extraction methods derived from the coordinates of facial points tracked by Active Appearance Models. The compared methods differ in whether they use coordinates or distances between facial points, and in whether they use information from a neutral facial expression. Experiments on the extended Cohn-Kanade database show that coordinate-based features using neutral-frame information give the best emotion recognition result (94%) with an SVC classifier using a polynomial kernel. © 2011 IEEE.
Publication (Metadata only): Feature extraction for facial expression recognition by canonical correlation analysis / Kanonik korelasyon analizi ile yüz ifadesinden duygu tanıma için öznitelik çıkarımı (2012)
Authors: Sakar, C. Okan (Bahçeşehir Üniversitesi, Istanbul, Turkey); Kursun, Olcay (Istanbul Üniversitesi, Istanbul, Turkey); Karaali, Ali (Bahçeşehir Üniversitesi, Istanbul, Turkey); Erdem, Cigdem Eroglu (Bahçeşehir Üniversitesi, Istanbul, Turkey)
Although several methods have been proposed for fusing different image representations obtained by different preprocessing methods for emotion recognition from the facial expression in a given image, the dependencies and relations among those representations have not been investigated much. In this study, we show that covariates obtained by Canonical Correlation Analysis (CCA), which extracts relations between different representations, have high predictive power for emotion recognition.
As high prediction accuracy can be achieved using only a small number of the features it extracts, CCA can be considered a good dimensionality reduction method. In our simulations, we used the CK+ database and showed that covariates obtained from the difference-image and geometric-feature representations yield high prediction accuracy. © 2012 IEEE.
Publication (Metadata only): A hybrid facial expression recognition method based on neutral face shape estimation / Yüz ifadesi tanıma için nötr yüz şeklinin kestirilmesine dayalı hibrit bir yöntem (2012)
Authors: Ulukaya, Sezer (Boğaziçi Üniversitesi, Bebek, Turkey; Bahçeşehir Üniversitesi, Istanbul, Turkey); Erdem, Cigdem Eroglu (Bahçeşehir Üniversitesi, Istanbul, Turkey)
To recognize the facial expression of a person, knowledge of that person's neutral facial expression is useful but may not always be available. We present a method based on Gaussian mixture models (GMM) to estimate the unknown neutral facial expression of an expressive face. The estimated neutral face is then subtracted from the features of the expressive image, and the result is classified using support vector classifiers (SVC). Experimental results on the extended Cohn-Kanade (CK+) database give an emotion recognition rate of 88% using geometric features only, and 92% when appearance-based features are also included. © 2012 IEEE.
Publication (Metadata only): Multipose face detection using Zernike moment invariants / Zernike moment değişmezleri ile pozdan bağımsız yüz tespiti (2012)
Authors: Karaali, Ali (Bahçeşehir Üniversitesi, Istanbul, Turkey); Erdem, Cigdem Eroglu (Bahçeşehir Üniversitesi, Istanbul, Turkey); Ulukaya, Sezer (Bahçeşehir Üniversitesi, Istanbul, Turkey)
We propose a new, efficient technique for the localization of faces in arbitrary images.
The technique is based on segmenting images into skin-colored blobs, followed by the computation of scale-, translation-, and rotation-invariant moment-based features to learn a statistical model of face and non-face regions. The advantage of the method over state-of-the-art face detectors is its ability to detect non-frontal faces in a person-independent way. Experimental results on the CVL database show that the proposed algorithm gives higher true positive rates than the well-known Viola-Jones face detector. © 2012 IEEE.
Publication (Metadata only): A method for extraction of affective audio-visual facial clips from movies / Filmlerden duygusal yüz ifadeleri içeren video klipleri elde etmek için bir yöntem (2013)
Authors: Turan, Çigdem (Bahçeşehir Üniversitesi, Istanbul, Turkey); Kansin, Can (Bahçeşehir Üniversitesi, Istanbul, Turkey); Zhalehpour, Sara (Bahçeşehir Üniversitesi, Istanbul, Turkey); Aydin, Zafer (Bahçeşehir Üniversitesi, Istanbul, Turkey); Erdem, Cigdem Eroglu (Bahçeşehir Üniversitesi, Istanbul, Turkey)
To design algorithms for affect recognition from facial expressions and speech, audio-visual databases are needed. The affective databases used by researchers today are generally recorded in laboratory environments and contain acted expressions. In this work, we present a method for extracting audio-visual facial clips from movies. The database collected using the proposed method contains English and Turkish clips and can easily be extended to other languages. We also provide facial expression recognition results obtained using local phase quantization-based feature extraction and a support vector machine. Because the number of features is larger than the number of examples, affect recognition accuracy improves significantly when feature selection is also performed. © 2013 IEEE.
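The last abstract notes that with more features than training examples, feature selection improves affect recognition accuracy. A minimal sketch of one simple univariate selection criterion (a scale-normalized class-mean gap, standing in for whichever selection method the paper actually used, which the abstract does not name):

```python
import numpy as np

def select_features(X, y, k):
    """Rank features of a two-class problem by the absolute difference
    between class means, normalized by feature scale, and keep the top k.
    Illustrative univariate filter; not the paper's specific method."""
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    # Larger normalized gap = more discriminative feature.
    score = np.abs(mu0 - mu1) / (X.std(axis=0) + 1e-12)
    keep = np.argsort(score)[::-1][:k]  # indices of the k highest scores
    return np.sort(keep)
```

Filters like this are attractive in the features-greater-than-examples regime because they are cheap and do not overfit a joint model to few samples, though they ignore interactions between features.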
Publication (Metadata only): An hierarchical approach for human computer interaction using eyelid movements / Göz kapağı hareketleriyle insan-bilgisayar etkileşimi için sıradüzensel yaklaşım (IEEE Computer Society, 2014)
Authors: Çelik, Anıl (Bahçeşehir Üniversitesi, Istanbul, Turkey); Arica, Nafiz (Bahçeşehir Üniversitesi, Istanbul, Turkey)
This work proposes a method to achieve real-time human-computer interaction through eyelid movements in low-resolution video. Classification of left- and right-eye states as closed or open is performed with a hierarchical tracking-by-detection approach. After the initial detection of the face area, an efficient face tracking algorithm is used to reduce the search space for detecting the eye region. By separating the eye region into two overlapping pieces, the left and right eyes are detected and classified as closed or open. The proposed method is robust against facial mimics and multiple faces in the frame, and is unaffected by the negative effects of aliasing and resizing. Thus, people whose medical conditions limit their physical movement can interact with computers through eyelid movements. © 2014 IEEE.
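The last abstract classifies each eye per frame as open or closed to drive interaction. A minimal sketch of how such per-frame states might be turned into commands; the gesture set, debounce threshold, and command names below are hypothetical, since the abstract does not specify the mapping:

```python
# Hypothetical mapping: single-eye winks held for a few frames trigger
# commands; closing both eyes is treated as a natural blink and ignored.
WINK_FRAMES = 3  # frames an eye must stay closed to count as deliberate

def eyelid_commands(states):
    """Translate a sequence of per-frame (left_open, right_open) booleans
    into a list of commands, firing once per held single-eye wink."""
    commands, run, gesture = [], 0, None
    for left_open, right_open in states:
        current = None
        if not left_open and right_open:
            current = "left_click"
        elif left_open and not right_open:
            current = "right_click"
        if current is not None and current == gesture:
            run += 1
        else:
            gesture, run = current, 1 if current else 0
        if run == WINK_FRAMES:  # fire exactly once when the hold threshold is met
            commands.append(gesture)
    return commands
```

Requiring a multi-frame hold is a standard debouncing choice for blink-based interfaces: it separates deliberate winks from the brief closures of spontaneous blinking.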
