Publication: Deep Covariance Feature and CNN-based End-to-End Masked Face Recognition
| dc.contributor.author | Junayed, Masum Shah | |
| dc.contributor.author | Sadeghzadeh, Arezoo | |
| dc.contributor.author | Islam, Md Baharul | |
| dc.contributor.editor | Struc, V | |
| dc.contributor.editor | Ivanovska, M | |
| dc.contributor.institution | Bahcesehir University | |
| dc.date.accessioned | 2025-10-09T11:01:16Z | |
| dc.date.issued | 2021 | |
| dc.description.abstract | With the emergence of the global COVID-19 epidemic, face recognition systems have attracted considerable attention as contactless identity verification methods. However, masks covering a considerable part of the face pose severe challenges for conventional face recognition systems. This paper proposes an automated Masked Face Recognition (MFR) system based on the combination of a mask-occlusion-discarding technique and a deep-learning model. Initially, a pre-processing step is carried out in which the images pass through three filters. Then, a Convolutional Neural Network (CNN) model is proposed to extract features from the unoccluded regions of the faces (i.e., eyes and forehead). These feature maps are employed to obtain covariance-based features. Two extra layers, i.e., Bitmap and Eigenvalue, are designed to reduce the dimension of and concatenate these covariance feature matrices. The deep covariance features are quantized into codebooks combined based on the Bag-of-Features (BoF) paradigm. Finally, a global histogram is created from these codebooks and used to train an SVM classifier. The proposed method is trained and evaluated on the Real-World-Masked-Face-Recognition-Dataset (RMFRD) and the Simulated-Masked-Face-Recognition-Dataset (SMFRD), achieving accuracies of 95.07% and 92.32%, respectively, which shows its competitive performance compared to the state-of-the-art. Experimental results demonstrate that our system is highly robust against noisy data and illumination variations. | |
| dc.identifier.conferenceDate | DEC 15-18, 2021 | |
| dc.identifier.conferenceHost | TIH iHub Drishti | |
| dc.identifier.conferenceName | 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG) | |
| dc.identifier.conferencePlace | TIH iHub Drishti, ELECTR NETWORK | |
| dc.identifier.conferenceSponsor | IEEE,IEEE Photon Soc,IEEE Biometr Council,Google,NVIDIA,CCS Comp,Mukh Technologies,IEEE Comp Soc | |
| dc.identifier.isbn | 978-1-6654-3176-7 | |
| dc.identifier.issn | 2326-5396 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.14719/15958 | |
| dc.identifier.wos | WOS:000784811600078 | |
| dc.identifier.woscitationindex | Conference Proceedings Citation Index - Science (CPCI-S) | |
| dc.language.iso | en | |
| dc.publisher | IEEE | |
| dc.relation.fundingName | Scientific and Technological Research Council of Turkey (TUBITAK)(Turkiye Bilimsel ve Teknolojik Arastirma Kurumu (TUBITAK)) | |
| dc.relation.fundingOrg | Scientific and Technological Research Council of Turkey (TUBITAK) [118C301] | |
| dc.relation.fundingText | This work is supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under 2232 Outstanding Researchers program, Project No. 118C301. | |
| dc.relation.source | 2021 16TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2021) | |
| dc.relation.source | IEEE International Conference on Automatic Face and Gesture Recognition and Workshops | |
| dc.subject.wos | Computer Science, Artificial Intelligence | |
| dc.subject.wos | Computer Science, Software Engineering | |
| dc.subject.wos | Engineering, Electrical & Electronic | |
| dc.subject.wos | Imaging Science & Photographic Technology | |
| dc.title | Deep Covariance Feature and CNN-based End-to-End Masked Face Recognition | |
| dc.type | Proceedings Paper | |
| dspace.entity.type | Publication | |
| local.indexed.at | WOS | |
| person.identifier.rid | Junayed, Masum Shah/P-7375-2019 | |
| person.identifier.rid | Islam, Md Baharul/R-3751-2019 | |
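The abstract describes extracting covariance-based features from CNN feature maps and reducing their dimension via an eigenvalue step. A minimal sketch of that general idea (a region covariance descriptor over channel activations, summarized by its eigenvalue spectrum) is shown below; this is an illustration of the covariance-feature concept only, not the authors' exact Bitmap/Eigenvalue layers, and the function names and shapes are assumptions for the example.

```python
import numpy as np

def covariance_descriptor(feature_maps):
    """Region covariance of CNN feature maps (illustrative sketch).

    feature_maps: array of shape (C, H, W) -- C channel activations
    over the unoccluded face region (e.g., eyes and forehead).
    Returns a (C, C) covariance matrix of pairwise channel statistics.
    """
    C, H, W = feature_maps.shape
    X = feature_maps.reshape(C, H * W)        # one row per channel
    X = X - X.mean(axis=1, keepdims=True)     # zero-mean each channel
    return (X @ X.T) / (H * W - 1)            # sample covariance

def eigenvalue_features(cov):
    """Summarize a covariance matrix by its sorted eigenvalue spectrum,
    one common way to reduce such matrices to compact vectors."""
    vals = np.linalg.eigvalsh(cov)            # real eigenvalues, ascending
    return vals[::-1]                         # largest first

# Toy example: 8 hypothetical feature maps of size 16x16.
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((8, 16, 16))
cov = covariance_descriptor(fmaps)
feats = eigenvalue_features(cov)
print(cov.shape, feats.shape)                 # (8, 8) (8,)
```

In the paper's pipeline, vectors of this kind would then be quantized into codebooks under the BoF paradigm and pooled into a global histogram fed to the SVM; those stages are omitted here.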
