Publication:
BAUM-1: A Spontaneous Audio-Visual Face Database of Affective and Mental States

dc.contributor.author: Zhalehpour, Sara
dc.contributor.author: Önder, Onur
dc.contributor.author: Akhtar, Zahid
dc.contributor.author: Erdem, Cigdem Eroglu
dc.contributor.institution: Zhalehpour, Sara, Centre Énergie Matériaux Télécommunications, Varennes, Canada
dc.contributor.institution: Önder, Onur, Arçelik A.S., Istanbul, Turkey
dc.contributor.institution: Akhtar, Zahid, Department of Mathematics and Computer Science, Università degli Studi di Udine, Udine, Italy
dc.contributor.institution: Erdem, Cigdem Eroglu, Department of Electrical and Electronic Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey
dc.date.accessioned: 2025-10-05T16:16:34Z
dc.date.issued: 2017
dc.description.abstract: In affective computing applications, access to labeled spontaneous affective data is essential for testing the designed algorithms under naturalistic and challenging conditions. Most databases available today are acted or do not contain audio data. We present a spontaneous audio-visual face database of affective and mental states. The video clips in the database are obtained by recording the subjects from the frontal view using a stereo camera and from the half-profile view using a mono camera. The subjects are first shown a sequence of images and short video clips, which are carefully selected and timed to evoke a set of emotions and mental states. Then, they express their ideas and feelings about the images and video clips they have watched in an unscripted and unguided way, in Turkish. The target emotions include the six basic ones (happiness, anger, sadness, disgust, fear, surprise) as well as boredom and contempt. We also target several mental states: unsure (including confused and undecided), thinking, concentrating, and bothered. Baseline experimental results on the BAUM-1 database show that recognition of affective and mental states under naturalistic conditions is quite challenging. The database is expected to enable further research on audio-visual affect and mental state recognition under close-to-real scenarios. © 2017 IEEE. All rights reserved.
dc.identifier.doi: 10.1109/TAFFC.2016.2553038
dc.identifier.endpage: 313
dc.identifier.issn: 1949-3045
dc.identifier.issue: 3
dc.identifier.scopus: 2-s2.0-85029943602
dc.identifier.startpage: 300
dc.identifier.uri: https://doi.org/10.1109/TAFFC.2016.2553038
dc.identifier.uri: https://hdl.handle.net/20.500.14719/12032
dc.identifier.volume: 8
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.relation.source: IEEE Transactions on Affective Computing
dc.subject.authorkeywords: Affective Computing
dc.subject.authorkeywords: Audio-visual Affective Database
dc.subject.authorkeywords: Dynamic Facial Expression Database
dc.subject.authorkeywords: Emotion Recognition From Speech
dc.subject.authorkeywords: Emotional Corpora
dc.subject.authorkeywords: Facial Expression Recognition
dc.subject.authorkeywords: Mental State Recognition
dc.subject.authorkeywords: Spontaneous Expressions
dc.subject.indexkeywords: Cameras
dc.subject.indexkeywords: Database systems
dc.subject.indexkeywords: Human computer interaction
dc.subject.indexkeywords: State estimation
dc.subject.indexkeywords: Stereo image processing
dc.subject.indexkeywords: Video cameras
dc.subject.indexkeywords: Affective Computing
dc.subject.indexkeywords: Audio-visual
dc.subject.indexkeywords: Dynamic facial expression
dc.subject.indexkeywords: Emotion recognition from speech
dc.subject.indexkeywords: Emotional corpora
dc.subject.indexkeywords: Facial expression recognition
dc.subject.indexkeywords: Mental state
dc.subject.indexkeywords: Spontaneous expressions
dc.subject.indexkeywords: Speech recognition
dc.title: BAUM-1: A Spontaneous Audio-Visual Face Database of Affective and Mental States
dc.type: Article
dcterms.references:
- Sebe, Niculae, Multimodal approaches for emotion recognition: A survey, Proceedings of SPIE - The International Society for Optical Engineering, 5670, pp. 56-67, (2005)
- Zeng, Zhihong, A survey of affect recognition methods: Audio, visual, and spontaneous expressions, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, 1, pp. 39-58, (2009)
- Ryan, Andrew, Automated facial expression recognition system, Proceedings - International Carnahan Conference on Security Technology, pp. 172-177, (2009)
- Littlewort, Gwen C., Automatic coding of facial expressions displayed during posed and genuine pain, Image and Vision Computing, 27, 12, pp. 1797-1803, (2009)
- Ashraf, Ahmed Bilal, The painful face - Pain expression recognition using active appearance models, Image and Vision Computing, 27, 12, pp. 1788-1796, (2009)
- Consulting Psychologists Press, Palo Alto, CA, (1976)
- Bassili, John N., Emotion recognition: The role of facial movement and the relative importance of upper and lower areas of the face, Journal of Personality and Social Psychology, 37, 11, pp. 2049-2058, (1979)
- Lucey, Patrick, The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression, pp. 94-101, (2010)
- Blueprint for Affective Computing: A Sourcebook, (2010)
- Savran, Arman, Bosphorus database for 3D face analysis, Lecture Notes in Computer Science, 5372 LNCS, pp. 47-56, (2008)
dspace.entity.type: Publication
local.indexed.at: Scopus
person.identifier.scopus-author-id: 58343878100
person.identifier.scopus-author-id: 24765325900
person.identifier.scopus-author-id: 46661628200
person.identifier.scopus-author-id: 55807016900