Research Outputs | WoS | Scopus | TR-Dizin | PubMed

Permanent URI for this community: https://hdl.handle.net/20.500.14719/1741

Search Results

  • Publication
    Deep Covariance Feature and CNN-based End-to-End Masked Face Recognition
    (IEEE, 2021) Junayed, Masum Shah; Sadeghzadeh, Arezoo; Islam, Md Baharul; Struc, V; Ivanovska, M; Bahcesehir University
    With the emergence of the global epidemic of COVID-19, face recognition systems have attracted considerable attention as contactless identity verification methods. However, the mask covering a considerable part of the face poses severe challenges for conventional face recognition systems. This paper proposes an automated Masked Face Recognition (MFR) system based on the combination of a mask occlusion discarding technique and a deep-learning model. Initially, a pre-processing step is carried out in which the images pass through three filters. Then, a Convolutional Neural Network (CNN) model is proposed to extract the features from unoccluded regions of the faces (i.e., eyes and forehead). These feature maps are employed to obtain covariance-based features. Two extra layers, i.e., Bitmap and Eigenvalue, are designed to reduce the dimension and concatenate these covariance feature matrices. The deep covariance features are quantized to codebooks that are combined based on the Bag-of-Features (BoF) paradigm. Finally, a global histogram is created from these codebooks and utilized for training an SVM classifier. The proposed method is trained and evaluated on the Real-World-Masked-Face-Recognition-Dataset (RMFRD) and the Simulated-Masked-Face-Recognition-Dataset (SMFRD), achieving accuracies of 95.07% and 92.32%, respectively, which shows its competitive performance compared to the state-of-the-art. Experimental results show that our system is highly robust against noisy data and illumination variations.
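    The covariance-feature stage of the pipeline above can be sketched in a few lines of numpy. This is an illustrative sketch only, not the paper's implementation: the function names are mine, and the paper's Bitmap layer and codebook training are not reproduced; the sketch only shows the core steps of turning CNN feature maps into a covariance descriptor, reducing it via eigenvalues, and quantizing descriptors into a global BoF histogram.

    ```python
    import numpy as np

    def covariance_features(feature_maps):
        """Covariance matrix over CNN feature maps.

        feature_maps: array of shape (C, H, W) -- C channels over an H x W region.
        Each pixel contributes one C-dimensional channel-response vector;
        the result is the C x C covariance of those vectors.
        """
        c, h, w = feature_maps.shape
        x = feature_maps.reshape(c, h * w)   # columns are per-pixel channel vectors
        return np.cov(x)                     # C x C covariance matrix

    def eigen_reduce(cov, k):
        """Compress a covariance matrix to its k largest eigenvalues."""
        eigvals = np.linalg.eigvalsh(cov)    # ascending order for symmetric input
        return eigvals[-k:][::-1]            # keep the k largest, descending

    def bof_histogram(descriptors, codebook):
        """Assign each descriptor to its nearest codeword; build a global histogram."""
        # pairwise distances: (n_descriptors, n_codewords)
        d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
        words = d.argmin(axis=1)
        hist = np.bincount(words, minlength=len(codebook)).astype(float)
        return hist / hist.sum()             # normalized histogram, fed to the SVM
    ```

    In the full system the normalized histogram, not the raw covariance matrix, is what the SVM classifier is trained on, which keeps the final feature length fixed regardless of how many local descriptors an image produces.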
  • Publication
    An Effective Multi-Camera Dataset and Hybrid Feature Matcher for Real-Time Video Stitching
    (IEEE, 2021) Hosen, Md Imran; Islam, Md Baharul; Sadeghzadeh, Arezoo; Cree, MJ; Bahcesehir University
    Multi-camera video stitching combines several videos captured by different cameras into a single video with a wide Field-of-View (FOV). In this paper, a novel dataset is developed for video stitching, consisting of 30 video sets captured by four static cameras in various environmental scenarios. Then, a new video stitching method is proposed based on a hybrid matcher for stitching four videos with an FOV of over 200 degrees. The keypoints and descriptors are obtained by the Scale-Invariant Feature Transform (SIFT) and RootSIFT, respectively. These keypoint descriptors are matched by applying a hybrid matcher, a combination of the Brute Force (BF) and Fast Library for Approximate Nearest Neighbors (FLANN) matchers. After geometric verification and elimination of outlier matching points, a one-time homography is estimated based on Random Sample Consensus (RANSAC). The proposed method is implemented and evaluated in different indoor/outdoor video settings. Experimental results demonstrate the capability, high accuracy, and robustness of the proposed method.
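    Two of the building blocks named above, RootSIFT descriptor normalization and brute-force matching with a ratio test to discard ambiguous correspondences, can be sketched in plain numpy. This is a hedged illustration under my own function names, not the paper's code; the FLANN side of the hybrid matcher and the RANSAC homography step are omitted for brevity.

    ```python
    import numpy as np

    def root_sift(desc):
        """Convert non-negative SIFT descriptors to RootSIFT:
        L1-normalize each row, then take the element-wise square root,
        so Euclidean distance on the result approximates the Hellinger kernel."""
        d = desc / (desc.sum(axis=1, keepdims=True) + 1e-12)
        return np.sqrt(d)

    def bf_ratio_match(desc1, desc2, ratio=0.75):
        """Brute-force matching with Lowe's ratio test.

        desc1, desc2: (n, d) arrays of keypoint descriptors.
        A pair (i, j) is kept only if desc1[i]'s best match in desc2 is
        clearly closer than its second-best, filtering ambiguous matches
        before any geometric verification.
        """
        matches = []
        for i, d in enumerate(desc1):
            dists = np.linalg.norm(desc2 - d, axis=1)
            order = np.argsort(dists)
            best, second = order[0], order[1]
            if dists[best] < ratio * dists[second]:
                matches.append((i, int(best)))
        return matches
    ```

    The surviving matches would then be passed to a RANSAC homography estimator, which repeatedly fits a homography to random 4-point subsets and keeps the model with the most inliers, so a single bad correspondence cannot corrupt the final warp.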