Research Outputs | WoS | Scopus | TR-Dizin | PubMed

Permanent URI for this community: https://hdl.handle.net/20.500.14719/1741

Search Results

Now showing 1 - 6 of 6
  • Publication
    An efficient end-to-end deep neural network for interstitial lung disease recognition and classification
    (Tubitak Scientific & Technological Research Council Turkey, 2022) Junayed, Masum Shah; Jeny, Afsana Ahsan; Islam, Md Baharul; Ahmed, Ikhtiar; Shah, Afm Shahen; Bahcesehir University; Daffodil International University; Dortmund University of Technology; Yildiz Technical University
    The automated classification of Interstitial Lung Diseases (ILDs) is essential for assisting clinicians during the diagnosis process. Detecting and classifying ILD patterns is a challenging problem. This paper introduces an end-to-end deep convolutional neural network (CNN) for classifying ILD patterns. The proposed model comprises four convolutional layers with different kernel sizes and the Rectified Linear Unit (ReLU) activation function, followed by batch normalization and max-pooling with a size equal to the final feature map size, as well as four dense layers. We used the ADAM optimizer to minimize categorical cross-entropy. A dataset consisting of 21328 image patches from 128 CT scans with five classes is used to train and assess the proposed model. A comparison study with five-fold cross-validation on the same dataset showed that the presented model outperformed pre-trained CNNs. For ILD pattern classification, the proposed approach achieved an accuracy of 99.09% and an average F score of 97.9%, outperforming three pre-trained CNNs. These outcomes show that the proposed model is competitive with the state of the art in precision, recall, F score, and accuracy.
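    The abstract names the training objective (categorical cross-entropy minimized with ADAM) but not its form; below is a minimal numpy sketch of that loss for the paper's five-class setting. The logits and batch size are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(logits):
    # Subtract the row max before exponentiating, for numerical stability.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def categorical_cross_entropy(logits, labels, eps=1e-12):
    # labels: integer class indices; loss is averaged over the batch.
    probs = softmax(logits)
    picked = probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(picked + eps))

# Five ILD classes, as in the paper's dataset; logits here are made up.
logits = np.array([[4.0, 0.1, 0.1, 0.1, 0.1],
                   [0.2, 3.5, 0.2, 0.2, 0.2]])
labels = np.array([0, 1])
loss = categorical_cross_entropy(logits, labels)
```

    In a real training loop this scalar would be minimized by ADAM with respect to the network weights; confident predictions on the correct class drive the loss toward zero.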
  • Publication
    SkNet: A Convolutional Neural Networks Based Classification Approach for Skin Cancer Classes
    (IEEE, 2020) Jeny, Afsana Ahsan; Sakib, Abu Noman Md; Junayed, Masum Shah; Lima, Khadija Akter; Ahmed, Ikhtiar; Islam, Md Baharul; Daffodil International University; Khulna University of Engineering & Technology (KUET); Bahcesehir University; Daffodil International University
    Skin cancer is one of the most common types of cancer, and a solution for this globally recognized health problem is much needed. Machine learning techniques have brought revolutionary changes to the field of biomedical research. Previously, detecting skin cancers took a significant amount of time and effort. In recent years, many works have used deep learning, which made the process much faster and more accurate. In this paper, we propose a novel Convolutional Neural Network (CNN) based approach that can classify four different types of skin cancer. We developed our model, SkNet, consisting of 19 convolution layers. In previous works, the highest accuracy obtained on 1000 images was 80.52%. Our proposed model exceeded that performance and achieved an accuracy of 95.26% on a dataset of 4800 images, the highest accuracy acquired.
  • Publication
    Improving Image Compression With Adjacent Attention and Refinement Block
    (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2023) Jeny, Afsana Ahsan; Islam, Md Baharul; Junayed, Masum Shah; Das, Debashish; Daffodil International University; Bahcesehir University; Birmingham City University
    Recently, learned image compression algorithms have shown remarkable performance compared to classic hand-crafted image codecs. Despite these considerable achievements, a fundamental disadvantage is that they are not optimized for retaining local redundancies, particularly non-repetitive patterns, which has a detrimental influence on the reconstruction quality. This paper introduces an efficient autoencoder-style network-based image compression method, which contains three novel blocks, i.e., an adjacent attention block, a Gaussian merge block, and a decoded image refinement block, to improve overall image compression performance. The adjacent attention block allocates the additional bits required to capture spatial correlations (both vertical and horizontal) and effectively removes worthless information. The Gaussian merge block assists rate-distortion optimization, while the decoded image refinement block corrects defects in low-resolution reconstructed images. A comprehensive ablation study analyzes and evaluates the qualitative and quantitative capabilities of the proposed model. Experimental results on two publicly available datasets reveal that our method outperforms state-of-the-art methods on the KODAK dataset (by around 4 dB and 5 dB) and the CLIC dataset (by about 4 dB and 3 dB) in terms of PSNR and MS-SSIM.
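    The reported gains are given in dB of PSNR. As a reminder of the metric, here is the standard PSNR definition for 8-bit images in numpy; this is the generic formula, not code from the paper, and the test images are invented for illustration.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative: a reconstruction offset from the reference by 5 gray levels.
ref = np.full((8, 8), 128, dtype=np.uint8)
rec = ref + 5
db = psnr(ref, rec)
```

    Higher is better: halving the MSE raises PSNR by about 3 dB, which gives a sense of scale for the 3-5 dB improvements quoted above.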
  • Publication
    ScarNet: Development and Validation of a Novel Deep CNN Model for Acne Scar Classification With a New Dataset
    (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2022) Junayed, Masum Shah; Islam, Md Baharul; Jeny, Afsana Ahsan; Sadeghzadeh, Arezoo; Biswas, Topu; Shah, A. F. M. Shahen; Daffodil International University; Bahcesehir University; Multimedia University; Yildiz Technical University
    Acne scarring occurs in 95% of people with acne vulgaris due to collagen loss or gain while the body heals the damage to the skin caused by acne inflammation. Accurate classification of acne scars is a vital factor in providing a timely, effective treatment protocol. Dermatologists mainly recognize the type of acne scars manually based on visual inspection, which is time- and energy-consuming and subject to intra- and inter-reader variability. In this paper, a novel automated acne scar classification system is proposed based on a deep Convolutional Neural Network (CNN) model. First, a dataset of 250 images from five different classes is collected and labeled by four well-experienced dermatologists. The pre-processed input images are fed into our proposed model, namely ScarNet, for deep feature map extraction. The optimizer, loss function, activation functions, filter and kernel sizes, regularization methods, and the batch size of the proposed architecture are tuned so that the classification performance is maximized while the computational cost is minimized. Experimental results demonstrate the feasibility of the proposed method with an accuracy, specificity, and kappa score of 92.53%, 95.38%, and 76.7%, respectively.
  • Publication
    AN EFFICIENT END-TO-END IMAGE COMPRESSION TRANSFORMER
    (IEEE, 2022) Jeny, Afsana Ahsan; Junayed, Masum Shah; Islam, Md Baharul; Bahcesehir University
    Image and video compression have received significant research attention and expanded their applications. Existing entropy estimation-based methods combine a hyperprior with local context, limiting their efficacy. This paper introduces an efficient end-to-end transformer-based image compression model, which generates a global receptive field to tackle long-range correlation issues. A hyper encoder-decoder-based transformer block employs a multi-head spatial reduction self-attention (MHSRSA) layer to minimize the computational cost of the self-attention layer and enable rapid learning of multi-scale and high-resolution features. A Causal Global Anticipation Module (CGAM) is designed to construct highly informative adjacent contexts utilizing channel-wise linkages and to identify global reference points in the latent space for end-to-end rate-distortion optimization (RDO). Experimental results on the KODAK dataset demonstrate the effectiveness and competitive performance of the proposed model.
  • Publication
    DeepPyNet: A Deep Feature Pyramid Network for Optical Flow Estimation
    (IEEE, 2021) Jeny, Afsana Ahsan; Islam, Md Baharul; Aydin, Tarkan; Cree, MJ; Bahcesehir University
    Recent advances in optical flow prediction have been made possible by using feature pyramids and iterative refinement. However, downsampling in feature pyramids may cause foreground objects to merge with the background, and the iterative processing can introduce errors in optical flow estimation. In particular, the motion of narrow and tiny objects can become nearly invisible in the flow scene. We introduce a novel method called DeepPyNet for optical flow estimation that includes a feature extractor, a multi-channel cost volume, and a flow decoder. In this method, we propose a deep recurrent feature pyramid-based network for end-to-end optical flow estimation. The feature extraction from each pixel of the feature map keeps essential information without modifying the feature receptive field. Then, a multi-scale 4D correlation volume is built from the visual similarity of each pair of pixels. Finally, we utilize the multi-scale correlation volumes to continuously update the flow field through an iterative recurrent method. Experimental results demonstrate that DeepPyNet significantly reduces flow errors and provides state-of-the-art performance on various datasets. Moreover, DeepPyNet is less complex, using only 6.1M parameters, 81% and 35% fewer than the popular FlowNet and PWC-Net+, respectively.
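    The 4D correlation volume is described only at a high level; a minimal numpy sketch of the basic all-pairs construction common to recent flow networks follows. The shapes, the scaling by sqrt(C), and the use of the same frame twice are illustrative assumptions, not details from the paper.

```python
import numpy as np

def correlation_volume(feat1, feat2):
    # feat1, feat2: (H, W, C) feature maps from the two input frames.
    # Returns a 4D volume (H, W, H, W): the dot-product similarity of
    # every pixel in frame 1 with every pixel in frame 2.
    h, w, c = feat1.shape
    f1 = feat1.reshape(h * w, c)
    f2 = feat2.reshape(h * w, c)
    corr = f1 @ f2.T / np.sqrt(c)  # scaled all-pairs similarity
    return corr.reshape(h, w, h, w)

# Illustrative: correlate a random feature map with itself, so each
# pixel's strongest natural match is its own position.
rng = np.random.default_rng(0)
f1 = rng.standard_normal((4, 4, 8))
vol = correlation_volume(f1, f1)
```

    A multi-scale variant would pool `feat2` to several resolutions and build one such volume per scale before the recurrent flow updates.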