Research Outputs | WoS | Scopus | TR-Dizin | PubMed
Permanent URI for this community: https://hdl.handle.net/20.500.14719/1741
Search Results
37 results
Publication Metadata only Forecasting Electricity Consumption Using Deep Learning Methods with Hyperparameter Tuning, Hiperparametre Ayarlı Derin Öğrenme Yöntemleri ile Elektrik Tüketiminin Tahmini (Institute of Electrical and Electronics Engineers Inc., 2020) Ayvaz, Serkan; Onur Arslan; Ayvaz, Serkan, Department of Software Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey; Onur Arslan, Department of Software Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey. In this study, one-day-ahead electricity consumption is estimated using deep learning methods on a dataset that captures the change in electricity consumption over time. After the time series components and machine learning concepts are explained, an overview of previous studies on electricity consumption estimation is provided. Since the dataset is a time series, all of its features are examined in detail, and necessary operations such as resampling and differencing are performed before proceeding to the modeling. Tuning was applied to the hyperparameters that significantly affect the performance of the algorithms used in the modeling stage, and the most suitable parameters were searched for each method. The best results were then compared with each other, and the method with the lowest error rate was determined.
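As a rough illustration of the time-series preprocessing this abstract describes, a minimal pandas sketch (the series values, dates, and frequencies are invented stand-ins, not the paper's actual dataset):

```python
import numpy as np
import pandas as pd

# Hypothetical hourly consumption series; the paper's dataset is not included here
rng = pd.date_range("2020-01-01", periods=96, freq="h")
consumption = pd.Series(10.0 + np.sin(np.arange(96) * 2 * np.pi / 24), index=rng)

# Resample hourly readings to daily totals before modeling
daily = consumption.resample("D").sum()

# First-order differencing removes the trend, bringing the series closer to stationary
diff = daily.diff().dropna()

# The transform is invertible, so forecasts made on the differenced scale
# can be mapped back to consumption values
restored = daily.iloc[0] + diff.cumsum()
assert np.allclose(restored.values, daily.values[1:])
```

A hyperparameter search of the kind the abstract mentions would then loop over candidate settings (e.g. learning rate, number of units) and keep the configuration with the lowest validation error on this transformed series.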
Publication Metadata only Real-Time Image Stitching for Multiple Camera Panoramic Video Shoot: A Case Study in Football Matches, Gerçek Zamanlı Çoklu Kamera Panoramik Video Çekimi için İmge Birleştirme: Futbol Maçı Örnek Çalışması (Institute of Electrical and Electronics Engineers Inc., 2020) Bayrak, Mehmet; Kilinç, Orkun; Arica, Nafiz; Bayrak, Mehmet, Bilgisayar Mühendisliği Bölümü, Bahçeşehir Üniversitesi, Istanbul, Turkey; Kilinç, Orkun, Rotechvision Bilisim, Istanbul, Turkey; Arica, Nafiz, Bilgisayar Mühendisliği Bölümü, Bahçeşehir Üniversitesi, Istanbul, Turkey. In this study, a real-time image stitching method is proposed for cost-effective, high-resolution, wide-angle video shooting. In the first stage, the images taken from multiple cameras with fixed positions are stitched with a classical algorithm. After the parameters calculated in the first stage are stored, the image pixels are mapped using the stored parameters and ArUco markers. Once this mapping has been calculated for a particular camera setup, it is used for real-time panoramic video shooting. The images taken from the multiple cameras are combined by remapping with a GPU-based approach. Therefore, the registered mapping can be reused in different environments as long as the positions and lenses of the cameras remain unchanged. As a case study, real-time panoramic video was shot with two cost-effective cameras at football matches. Deep learning-based autonomous pilot video shooting was then performed on the resulting high-resolution panoramic video. In experiments, a speed of 36 FPS was reached using a standard desktop computer, and image quality measurements were found to be at reasonable levels.
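The precompute-once, remap-per-frame idea in this abstract can be sketched with a toy mapping (a pure horizontal shift stands in for the homography the authors estimate once with a classical stitcher and ArUco markers; a real setup would use OpenCV's GPU remap):

```python
import numpy as np

# Toy calibration result: the right camera's image lands in the panorama
# shifted by +80 px (an invented stand-in for the stored stitching parameters)
shift = 80
h, w = 120, 160
left = np.full((h, w, 3), 50, np.uint8)    # synthetic left frame
right = np.full((h, w, 3), 200, np.uint8)  # synthetic right frame

# Precompute the pixel mapping once; per-frame stitching is then a pure
# gather operation (a remap), which is what makes a GPU port straightforward
pano_w = w + shift
map_x = np.arange(pano_w) - shift          # panorama column -> right-image column

pano = np.zeros((h, pano_w, 3), np.uint8)
pano[:, :w] = left                         # copy left frame into place
valid = (map_x >= 0) & (map_x < w)         # columns covered by the right frame
pano[:, valid] = right[:, map_x[valid]]    # remap right frame into the panorama
```

Per frame, only the last three lines run; the mapping itself is never recomputed.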
Publication Metadata only Real-time Restoration of Quality Distortions in Mobile Images using Deep Learning (Institute of Electrical and Electronics Engineers Inc., 2020) Kocak, Taskin; Ciloglu, Cagkan; Kocak, Taskin, University of New Orleans Dr. Robert A. Savoie College of Engineering, New Orleans, United States; Ciloglu, Cagkan, Department of Computer Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey. Frames provided by cameras on mobile devices may be distorted by camera defects and/or weather conditions such as rain and snow. These distortions affect image classifiers. This paper proposes using deep learning architectures to restore quality distortions in real-time mobile video for image classifiers. An iOS-based app is developed using CoreML to show that deep convolutional autoencoder (CAE) based methods can be used to restore picture quality.
Publication Metadata only SkNet: A Convolutional Neural Networks Based Classification Approach for Skin Cancer Classes (Institute of Electrical and Electronics Engineers Inc., 2020) Jeny, Afsana Ahsan; Sakib, Abu Noman Md; Junayed, Masum Shah; Lima, Khadija Akter; Ahmed, Ikhtiar; Islam, Md Baharul; Jeny, Afsana Ahsan, Department of CSE, Daffodil International University, Dhaka, Bangladesh; Sakib, Abu Noman Md, Department of CSE, Khulna University of Engineering and Technology, Khulna, Bangladesh; Junayed, Masum Shah, Department of Computer Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey; Lima, Khadija Akter, Department of CSE, Daffodil International University, Dhaka, Bangladesh; Ahmed, Ikhtiar, Department of CSE, Daffodil International University, Dhaka, Bangladesh; Islam, Md Baharul, Department of Computer Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey, American University of Malta, Cospicua, Malta. Skin cancer is one of the most common types of cancer.
A solution for this globally recognized health problem is much needed. Machine learning techniques have brought revolutionary changes to the field of biomedical research. Previously, detecting skin cancers took a significant amount of time and effort. In recent years, many works using deep learning have made the process much faster and more accurate. In this paper, we propose a novel Convolutional Neural Network (CNN) based approach that can classify four different types of skin cancer. Our model, SkNet, consists of 19 convolution layers. In previous works, the highest accuracy obtained on 1000 images was 80.52%. Our proposed model exceeds that performance, achieving an accuracy of 95.26% on a dataset of 4800 images, the highest accuracy acquired so far.
Publication Open Access A Deep CNN Model for Skin Cancer Detection and Classification (Vaclav Skala Union Agency, 2021) Junayed, Masum Shah; Anjum, Nipa; Sakib, Abu Noman Md; Islam, Md Baharul; Skala, V.; Junayed, Masum Shah, Bahçeşehir Üniversitesi, Istanbul, Turkey; Anjum, Nipa, Khulna University of Engineering and Technology, Khulna, Bangladesh; Sakib, Abu Noman Md, Khulna University of Engineering and Technology, Khulna, Bangladesh; Islam, Md Baharul, American University of Malta, Cospicua, Malta. Skin cancer is one of the most dangerous types of cancer, affecting millions of people every year. The detection of skin cancer in its early stages is an expensive and challenging process. In recent studies, machine learning-based methods have helped dermatologists classify medical images. This paper proposes a deep learning-based model to detect and classify skin cancer using a deep Convolutional Neural Network (CNN). Initially, we collected a dataset of four types of skin cancer images and applied augmentation techniques to increase the dataset size.
Then, we designed a deep CNN model and trained it on our dataset. On the test data, our model achieves 95.98% accuracy, exceeding the two pre-trained models GoogleNet and MobileNet by 1.76% and 1.12%, respectively. The proposed deep CNN model also beats other contemporaneous models while being computationally comparable.
Publication Metadata only Deep Covariance Feature and CNN-based End-to-End Masked Face Recognition (Institute of Electrical and Electronics Engineers Inc., 2021) Junayed, Masum Shah; Sadeghzadeh, Arezoo; Islam, Md Baharul; Struc, V.; Ivanovska, M.; Junayed, Masum Shah, Department of Computer Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey; Sadeghzadeh, Arezoo, Department of Computer Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey; Islam, Md Baharul, Department of Computer Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey, College of Data Science and Engineering, American University of Malta, Cospicua, Malta. With the emergence of the global COVID-19 epidemic, face recognition systems have attracted much attention as contactless identity verification methods. However, covering a considerable part of the face with a mask poses severe challenges for conventional face recognition systems. This paper proposes an automated Masked Face Recognition (MFR) system based on the combination of a mask-occlusion discarding technique and a deep learning model. Initially, a pre-processing step is carried out in which the images pass through three filters. Then, a Convolutional Neural Network (CNN) model is proposed to extract features from the unoccluded regions of the faces (i.e., eyes and forehead). These feature maps are used to obtain covariance-based features. Two extra layers, Bitmap and Eigenvalue, are designed to reduce the dimension of and concatenate these covariance feature matrices. The deep covariance features are quantized into codebooks that are combined based on the Bag-of-Features (BoF) paradigm.
Finally, a global histogram is created from these codebooks and used to train an SVM classifier. The proposed method is trained and evaluated on the Real-World Masked Face Recognition Dataset (RMFRD) and the Simulated Masked Face Recognition Dataset (SMFRD), achieving accuracies of 95.07% and 92.32%, respectively, and showing competitive performance compared to the state of the art. Experimental results show that our system is highly robust against noisy data and illumination variations.
Publication Metadata only A Review of Spam Detection in Social Media (Institute of Electrical and Electronics Engineers Inc., 2021) Yurtseven, Ilke; Bağriyanik, Selami; Ayvaz, Serkan; Yurtseven, Ilke, Bahçeşehir Üniversitesi, Istanbul, Turkey; Bağriyanik, Selami, Department of Software Engineering, Nişantaşı Üniversitesi, Istanbul, Turkey; Ayvaz, Serkan, Department of Computer Engineering, Yıldız Teknik Üniversitesi, Istanbul, Turkey. With the significant use of social media to socialize in virtual environments, bad actors are now able to use these platforms to spread malicious activities such as hate speech, spam, and even phishing to very large crowds. Twitter in particular is suitable for these types of activities because it is one of the most common social media platforms for microblogging, with millions of active users. Moreover, since the end of 2019, COVID-19 has changed the lives of individuals in many ways. While it increased social media usage due to extra free time, the number of cyber-attacks soared too. To prevent these activities, detection is a crucial phase. Thus, the main goal of this study is to review the state of the art in the detection of malicious content and the contribution of AI algorithms to detecting spam and scams effectively in social media.
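As a toy illustration of the kind of spam classifier this review surveys, a minimal scikit-learn pipeline (the corpus and labels are invented; real systems train on large labeled collections of posts):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented corpus purely for illustration
posts = [
    "win a free iphone click this link now",
    "limited offer claim your prize today",
    "had a great time at the conference",
    "looking forward to the match tonight",
]
labels = ["spam", "spam", "ham", "ham"]

# TF-IDF features feeding a multinomial Naive Bayes classifier,
# one common baseline among the AI approaches such reviews cover
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(posts, labels)

prediction = model.predict(["claim your free prize link"])[0]
```

Deep learning approaches replace this feature-plus-classifier pair with learned text embeddings, but the train/predict workflow stays the same.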
Publication Metadata only Comparing Fusion Methods for 3D Object Detection (Springer Science and Business Media Deutschland GmbH, 2022) Arıcan, Erkut; Aydin, Tarkan; Kahraman, C.; Cebi, S.; Cevik Onar, S.; Oztaysi, B.; Tolga, A.C.; Sari, I.U.; Arıcan, Erkut, Faculty of Engineering and Natural Sciences, Bahçeşehir Üniversitesi, Istanbul, Turkey; Aydin, Tarkan, Faculty of Engineering and Natural Sciences, Bahçeşehir Üniversitesi, Istanbul, Turkey. Object detection is one of the main problems in computer vision that leads digital technologies to transform our business and social lives. Solutions to the problem have various application areas, such as security systems, surveillance, shopping applications, and much more. Significant performance gains have been achieved using popular deep learning methods applied to 2D RGB images. With the availability of low-cost 3D sensors, methods that incorporate 3D data with 2D RGB data using existing deep network architectures have become more popular as a way to further increase performance. In this work, different data-level and feature-level fusion strategies are analyzed to incorporate 3D depth and 2D RGB data into existing architectures and assess their effects on performance. These methods were tested on real RGB-D benchmark datasets available in the literature, and the accuracy results were compared with each other by object type group.
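The data-level versus feature-level distinction the abstract draws can be sketched in a few lines of NumPy (the shapes and the stand-in "backbones" are illustrative assumptions, not the paper's architectures):

```python
import numpy as np

rgb = np.random.rand(1, 64, 64, 3).astype(np.float32)    # 2D color image batch
depth = np.random.rand(1, 64, 64, 1).astype(np.float32)  # aligned depth map

# Data-level (early) fusion: stack depth as a fourth input channel,
# then feed the combined tensor to a single network
early = np.concatenate([rgb, depth], axis=-1)            # (1, 64, 64, 4)

# Feature-level fusion: run a separate backbone per modality, then merge
# the extracted features; global average pooling stands in for the backbones
feat_rgb = rgb.mean(axis=(1, 2))                         # (1, 3)
feat_depth = depth.mean(axis=(1, 2))                     # (1, 1)
late = np.concatenate([feat_rgb, feat_depth], axis=-1)   # (1, 4)
```

Early fusion shares all layers between modalities; feature-level fusion lets each modality keep its own backbone at the cost of extra computation, which is the trade-off the paper's comparison probes.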
Publication Metadata only Stereoscopic Video Quality Assessment Using Modified Parallax Attention Module (Springer Science and Business Media Deutschland GmbH, 2022) Imani, Hassan; Zaim, Selim; Islam, Md Baharul; Junayed, Masum Shah; Durakbasa, N.M.; Gençyılmaz, M.G.; Imani, Hassan, Computer Vision Lab, Bahçeşehir Üniversitesi, Istanbul, Turkey; Zaim, Selim, Faculty of Engineering and Natural Sciences, Bahçeşehir Üniversitesi, Istanbul, Turkey; Islam, Md Baharul, Computer Vision Lab, Bahçeşehir Üniversitesi, Istanbul, Turkey; Junayed, Masum Shah, Computer Vision Lab, Bahçeşehir Üniversitesi, Istanbul, Turkey. Deep learning techniques are used for most computer vision tasks. In particular, Convolutional Neural Networks (CNNs) have shown great performance in detection and classification tasks. Recently, in the field of Stereoscopic Video Quality Assessment (SVQA), 3D CNNs have been used to extract spatial and temporal features from stereoscopic videos, but disparity information, despite its importance, has not been well considered. Most recently proposed deep learning-based methods use cost-volume approaches to produce the stereo correspondence for large disparities. Because disparities can differ considerably for stereo cameras with different configurations, the Parallax Attention Mechanism (PAM) was recently proposed to capture the stereo correspondence regardless of disparity changes. In this paper, we propose a new SVQA model using a base 3D CNN network and a modified PAM-based left-right feature fusion model. First, we use 3D CNNs and residual blocks to extract features from the left and right views of a stereo video patch. Then, we modify the PAM model to fuse the left and right features while considering the disparity information and, using several fully connected layers, calculate the quality score of a stereoscopic video.
We divided the input videos into cube patches for data augmentation and removed from the training dataset some cubes that confuse our model. Two standard stereoscopic video quality assessment benchmarks, LFOVIAS3DPh2 and NAMA3DS1-COSPAD1, are used to train and test our model. Experimental results indicate that our proposed model is very competitive with the state-of-the-art methods on the NAMA3DS1-COSPAD1 dataset and is the state-of-the-art method on the LFOVIAS3DPh2 dataset.
Publication Metadata only A Deep-Learning Based Automated COVID-19 Physical Distance Measurement System Using Surveillance Video (Springer Science and Business Media Deutschland GmbH, 2022) Junayed, Masum Shah; Islam, Md Baharul; Santosh, K.; Hegadi, R.; Pal, U.; Junayed, Masum Shah, Department of Computer Engineering, Bahçeşehir Üniversitesi, Istanbul, Turkey, Department of CSE, Daffodil International University, Dhaka, Bangladesh; Islam, Md Baharul, College of Data Science and Engineering, American University of Malta, Cospicua, Malta. Transmission of the contagious coronavirus (COVID-19) can be reduced by following and maintaining physical distancing (also known as COVID-19 social distancing). The World Health Organization (WHO) recommends it to prevent COVID-19 from spreading in public areas. However, people may not maintain the mandated 2-m physical distance in shopping malls and public places. The spread of the disease may be slowed by an active monitoring system capable of identifying the distances between people and alerting them. This paper introduces a deep learning-based system for automatically detecting physical distance using video from security cameras. The proposed system employs a fine-tuned YOLO v4 for object detection and classification and Deepsort for tracking the detected people using bounding boxes from the video.
Pairwise L2 vectorized normalization was utilized to generate a three-dimensional feature space for tracking physical distances and computing the violation index, which determines the number of individuals who follow the distance rules. For training and testing, we used the MS COCO and Oxford Town Centre (OTC) datasets. We compared the proposed system with two well-known object detection models, YOLO v3 and Faster RCNN. Our method obtained a weighted mAP score of 87.8% and an FPS score of 28, both computationally comparable.
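The distance-checking step described above can be sketched independently of the detector and tracker (the box coordinates are invented and assumed to already lie in a metric ground plane; the real system derives them from tracked YOLO v4 detections):

```python
import numpy as np
from itertools import combinations

# Hypothetical per-frame detections: bounding boxes as (x, y, w, h),
# with 1 unit taken to be roughly 1 metre in the ground plane
boxes = [(0.0, 0.0, 0.5, 1.8), (1.0, 0.0, 0.5, 1.8), (6.0, 5.0, 0.5, 1.8)]

def centroid(box):
    """Centre point of a bounding box."""
    x, y, w, h = box
    return np.array([x + w / 2, y + h / 2])

# Flag every pair of people closer than the 2-m physical distancing threshold
violations = [
    (i, j)
    for (i, a), (j, b) in combinations(enumerate(boxes), 2)
    if np.linalg.norm(centroid(a) - centroid(b)) < 2.0
]
print(violations)  # -> [(0, 1)]: the first two people are about 1 m apart
```

The violation index the paper reports would then be derived per frame from counts like `len(violations)`, aggregated over the tracked video.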
