Displaying all 14 publications

  1. Kaplan E, Baygin M, Barua PD, Dogan S, Tuncer T, Altunisik E, et al.
    Med Eng Phys, 2023 May;115:103971.
    PMID: 37120169 DOI: 10.1016/j.medengphy.2023.103971
    PURPOSE: The classification of medical images is an important priority for clinical research and helps to improve the diagnosis of various disorders. This work aims to classify the neuroradiological features of patients with Alzheimer's disease (AD) using an automatic hand-modeled method with high accuracy.

    MATERIALS AND METHOD: This work uses two (private and public) datasets. The private dataset consists of 3807 magnetic resonance imaging (MRI) and computed tomography (CT) images belonging to two (normal and AD) classes. The second, public (Kaggle AD) dataset contains 6400 MR images. The presented classification model comprises three fundamental phases: feature extraction using an exemplar hybrid feature extractor, neighborhood component analysis-based feature selection, and classification utilizing eight different classifiers. The novelty of this model lies in the feature extraction phase, which is inspired by vision transformers; hence, 16 exemplars (patches) are generated. Histogram of oriented gradients (HOG), local binary pattern (LBP), and local phase quantization (LPQ) feature extraction functions are applied to each exemplar/patch and to the raw brain image. Finally, the created features are merged, and the best features are selected using neighborhood component analysis (NCA). These features are fed to eight classifiers to obtain the highest classification performance using our proposed method. The presented image classification model uses exemplar histogram-based features; hence, it is called ExHiF.

    RESULTS: We have developed the ExHiF model with a ten-fold cross-validation strategy using two (private and public) datasets with shallow classifiers. We obtained 100% classification accuracy using cubic support vector machine (CSVM) and fine k-nearest neighbor (FkNN) classifiers for both datasets.

    CONCLUSIONS: Our developed model is ready to be validated with more datasets and has the potential to be employed in mental hospitals to assist neurologists in confirming their manual screening of AD using MRI/CT images.
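
As a rough illustration of the exemplar feature extraction described above, the sketch below splits an image into a grid of patches and computes histogram-based features per patch and for the undivided image. This is not the authors' ExHiF code: LPQ is omitted (it has no standard scikit-image implementation), and the 4x4 grid and descriptor parameters are illustrative assumptions.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def exemplar_features(image, grid=4):
    """HOG + LBP histogram features from each patch and from the raw image."""
    h, w = image.shape
    ph, pw = h // grid, w // grid
    patches = [image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
               for r in range(grid) for c in range(grid)]
    features = []
    for region in patches + [image]:          # 16 exemplars + undivided image
        features.append(hog(region, pixels_per_cell=(8, 8), cells_per_block=(2, 2)))
        lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        features.append(hist)                 # LBP code histogram per region
    return np.concatenate(features)           # merged vector, before NCA selection
```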

  2. Horry MJ, Chakraborty S, Pradhan B, Paul M, Zhu J, Loh HW, et al.
    Sensors (Basel), 2023 Jul 21;23(14).
    PMID: 37514877 DOI: 10.3390/s23146585
    Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging mode with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening.
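
As a rough illustration of the debiasing step described above, the sketch below applies per-image histogram equalization so that systematic brightness and contrast differences between source datasets cannot act as shortcut features. The scikit-image calls and 8-bit rescaling are illustrative assumptions, not the authors' pipeline (which additionally uses lung field segmentation, close cropping, and rib/bone suppression).

```python
import numpy as np
from skimage import io, exposure

def debias_cxr(path):
    """Load a chest X-ray, equalize its intensity histogram, rescale to 8-bit."""
    img = io.imread(path, as_gray=True)
    eq = exposure.equalize_hist(img)   # flattens the cumulative histogram
    return (eq * 255).astype(np.uint8)
```
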
  3. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, et al.
    Comput Methods Programs Biomed, 2024 Jan;243:107880.
    PMID: 37924769 DOI: 10.1016/j.cmpb.2023.107880
    Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. The success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of the individual modalities when used in isolation for different malignancies. However, manual image interpretation requires extensive disease-specific knowledge, and it is a time-consuming part of physicians' daily routines. Deep learning algorithms, akin to a practitioner during training, extract knowledge from images to support the diagnosis process through symptom detection and image enhancement. Existing review papers on PET/CT imaging either include additional modalities or survey various types of AI applications broadly; a comprehensive investigation focused specifically on deep learning applied to PET/CT images has been lacking. This review aims to fill that gap by investigating the characteristics of the approaches used in papers that employed deep learning for PET/CT imaging. We identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images. We also identified the best pre-processing algorithms and the most effective deep learning models reported for PET/CT while highlighting current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and modality-specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features, enhancing accuracy and efficiency in diagnosis. However, limitations arise from the scarcity of annotated datasets and challenges in explainability and uncertainty quantification. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and for predicting patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.
  4. Tuncer I, Barua PD, Dogan S, Baygin M, Tuncer T, Tan RS, et al.
    Inform Med Unlocked, 2023;36:101158.
    PMID: 36618887 DOI: 10.1016/j.imu.2022.101158
    BACKGROUND: Chest computed tomography (CT) has a high sensitivity for detecting COVID-19 lung involvement and is widely used for diagnosis and disease monitoring. We proposed a new image classification model, swin-textural, which combines swin-based patch division with textural feature extraction for automated diagnosis of COVID-19 on chest CT images. The main objective of this work is to evaluate the performance of the swin architecture in feature engineering.

    MATERIAL AND METHOD: We used a public dataset comprising 2167, 1247, and 757 (total 4171) transverse chest CT images belonging to 80, 80, and 50 (total 210) subjects with COVID-19, other non-COVID lung conditions, and normal lung findings, respectively. In our model, resized 420 × 420 input images were divided using uniform square patches of incremental dimensions, which yielded ten feature extraction layers. At each layer, local binary pattern and local phase quantization operations extracted textural features from the individual patches as well as from the undivided input image. Iterative neighborhood component analysis was used to select the most informative set of features to form ten selected feature vectors, and also to select an 11th vector from among the top feature vectors with accuracy >97.5%. The downstream kNN classifier calculated 11 prediction vectors; from these, iterative hard majority voting generated another nine voted prediction vectors. Finally, the best result among the twenty was determined using a greedy algorithm.

    RESULTS: Swin-textural attained 98.71% three-class classification accuracy, outperforming published deep learning models trained on the same dataset. The model has linear time complexity.

    CONCLUSIONS: Our handcrafted, computationally lightweight swin-textural model can detect COVID-19 accurately on chest CT images with low misclassification rates. The model can be implemented in hospitals for efficient automated screening of COVID-19 on chest CT images. Moreover, our findings demonstrate that swin-textural is a self-organized, highly accurate, and lightweight image classification model that outperforms the compared deep learning models on this dataset.
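
As a rough illustration of the voting stage described above, the sketch below implements iterative hard majority voting: prediction vectors are sorted by accuracy and the top 3, 4, ..., n vectors are combined by per-sample mode, yielding nine voted vectors when n = 11. Tie-breaking and the exact ordering used in the paper may differ; this is an illustrative sketch.

```python
import numpy as np
from scipy.stats import mode

def iterative_hard_majority_vote(preds, labels):
    """preds: list of 1-D predicted-label arrays; returns the voted vectors."""
    accs = [np.mean(p == labels) for p in preds]
    ranked = np.stack([preds[i] for i in np.argsort(accs)[::-1]])  # best first
    voted = []
    for k in range(3, len(preds) + 1):         # vote over top-3 ... top-n
        v, _ = mode(ranked[:k], axis=0, keepdims=False)
        voted.append(np.asarray(v))
    return voted                               # 9 voted vectors when n == 11
```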

  5. Erten M, Tuncer I, Barua PD, Yildirim K, Dogan S, Tuncer T, et al.
    J Digit Imaging, 2023 Aug;36(4):1675-1686.
    PMID: 37131063 DOI: 10.1007/s10278-023-00827-8
    Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel Arnold Cat Map (ACM)- and fixed-size patch-based mixer algorithm with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer to generate mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) DenseNet201 pre-trained on ImageNet1K to extract 1,920 features each from the raw input image and its six corresponding mixed images, which were concatenated to form a final feature vector of length 13,440; (3) iterative neighborhood component analysis to select the most discriminative feature vector of optimal length 342, determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) shallow kNN-based classification with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model was both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
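
As a rough illustration of the mixing layer described above, the sketch below applies an Arnold Cat Map pixel shuffle independently to each fixed-size 16 × 16 patch of the input image. The classic (x, y) -> (x + y, x + 2y) mod n map and the single iteration are illustrative assumptions; the paper's mixer may iterate differently.

```python
import numpy as np

def arnold_cat_map(patch, iterations=1):
    """Shuffle a square patch with the ACM permutation (x, y) -> (x+y, x+2y) mod n."""
    n = patch.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = patch.copy()
    for _ in range(iterations):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def acm_mix(image, patch=16, iterations=1):
    """Apply ACM mixing to every 16x16 patch of a (224, 224) grayscale image."""
    mixed = image.copy()
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            mixed[r:r + patch, c:c + patch] = arnold_cat_map(
                image[r:r + patch, c:c + patch], iterations)
    return mixed
```
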
  6. Xu S, Deo RC, Soar J, Barua PD, Faust O, Homaira N, et al.
    Comput Methods Programs Biomed, 2023 Nov;241:107746.
    PMID: 37660550 DOI: 10.1016/j.cmpb.2023.107746
    BACKGROUND AND OBJECTIVE: Obstructive airway diseases, including asthma and Chronic Obstructive Pulmonary Disease (COPD), are two of the most common chronic respiratory health problems. Both conditions require health professional expertise for diagnosis. Hence, this process is time intensive for healthcare providers, and the diagnostic quality is subject to intra- and inter-operator variability. In this study, we investigate the role of automated detection of obstructive airway diseases in reducing cost and improving diagnostic quality.

    METHODS: We investigated the existing body of evidence and applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to search records in the IEEE, Google Scholar, and PubMed databases. We identified 65 papers published from 2013 to 2022, covering 67 different studies. The review process was structured according to the type of medical data used for disease detection. We identified six main categories, namely air flow, genetics, imaging, signals, medical records, and miscellaneous. For each of these categories, we report both the disease detection methods and their performance.

    RESULTS: We found that medical imaging was used in 14 of the reviewed studies as data for automated obstructive airway disease detection. Genetics and physiological signals were used in 13 studies. Medical records and air flow were used in 9 and 7 studies, respectively. Most papers were published in 2020 and we found three times more work on Machine Learning (ML) when compared to Deep Learning (DL). Statistical analysis shows that DL techniques achieve higher Accuracy (ACC) when compared to ML. Convolutional Neural Network (CNN) is the most common DL classifier and Support Vector Machine (SVM) is the most widely used ML classifier. During our review, we discovered only two publicly available asthma and COPD datasets. Most studies used private clinical datasets, so data size and data composition are inconsistent.

    CONCLUSIONS: Our review results indicate that Artificial Intelligence (AI) can improve both the decision quality and the efficiency of health professionals during COPD and asthma diagnosis. However, we found several limitations in the reviewed work, such as a lack of dataset consistency, limited dataset sizes, and insufficient exploration of remote monitoring. We appeal to society to accept and trust computer-aided diagnosis of obstructive airway diseases, and we encourage health professionals to work closely with AI scientists to promote automated detection in clinical practice and hospital settings.

  7. Loh HW, Ooi CP, Oh SL, Barua PD, Tan YR, Molinari F, et al.
    Comput Methods Programs Biomed, 2023 Nov;241:107775.
    PMID: 37651817 DOI: 10.1016/j.cmpb.2023.107775
    BACKGROUND AND OBJECTIVE: Attention Deficit Hyperactivity Disorder (ADHD) is a common neurodevelopmental disorder in children and adolescents that can lead to long-term challenges in life outcomes if left untreated. ADHD is frequently associated with Conduct Disorder (CD), and multiple studies have found similarities in clinical signs and behavioral symptoms between the two disorders, making differentiation between ADHD, ADHD comorbid with CD (ADHD+CD), and CD a subjective diagnosis. Therefore, the goal of this pilot study is to create the first explainable deep learning (DL) model for objective ECG-based ADHD/CD diagnosis, as an objective biomarker may improve diagnostic accuracy.

    METHODS: The dataset used in this study consists of ECG data collected from 45 ADHD, 62 ADHD+CD, and 16 CD patients at the Child Guidance Clinic in Singapore. The ECG data were segmented into 2-s epochs and used directly to train our 1-dimensional (1D) convolutional neural network (CNN) model.

    RESULTS: The proposed model yielded 96.04% classification accuracy, 96.26% precision, 95.99% sensitivity, and 96.11% F1-score. The Gradient-weighted class activation mapping (Grad-CAM) function was also used to highlight the important ECG characteristics at specific time points that most impact the classification score.

    CONCLUSION: In addition to the model performance achieved with our proposed DL method, the Grad-CAM implementation offers vital temporal information that clinicians and other mental healthcare professionals can use to make informed medical judgments. We hope that this pilot study will encourage larger-scale research with larger biosignal datasets, allowing biosignal-based computer-aided diagnostic (CAD) tools to be implemented in healthcare and ambulatory settings, as ECG can be easily obtained via wearable devices such as smartwatches.
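
As a rough illustration of the classifier described above, the sketch below is a small PyTorch 1-D CNN for three-class classification of ECG epochs. The layer sizes, the assumed 512 samples per 2-s epoch, and all training details are illustrative assumptions, not the authors' architecture; Grad-CAM would then be applied to the last convolutional layer to highlight influential time points.

```python
import torch
import torch.nn as nn

class ECG1DCNN(nn.Module):
    """Toy 1-D CNN: three conv blocks, global pooling, linear classifier."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),               # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                          # x: (batch, 1, n_samples)
        return self.classifier(self.features(x).squeeze(-1))

logits = ECG1DCNN()(torch.randn(8, 1, 512))        # 8 dummy 2-s epochs
```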

  8. Kaplan E, Chan WY, Altinsoy HB, Baygin M, Barua PD, Chakraborty S, et al.
    J Digit Imaging, 2023 Dec;36(6):2441-2460.
    PMID: 37537514 DOI: 10.1007/s10278-023-00889-8
    Detecting neurological abnormalities such as brain tumors and Alzheimer's disease (AD) using magnetic resonance imaging (MRI) images is an important research topic in the literature. Numerous machine learning models have been used to detect brain abnormalities accurately. This study addresses the problem of detecting neurological abnormalities in MRI. The motivation behind this problem lies in the need for accurate and efficient methods to assist neurologists in the diagnosis of these disorders. In addition, many deep learning techniques have been applied to MRI to develop accurate brain abnormality detection models, but these networks have high time complexity. Hence, a novel hand-modeled feature-based learning network is presented to reduce the time complexity and obtain high classification performance. The model proposed in this work uses a new feature generation architecture named pyramid and fixed-size patch (PFP). The main aim of the proposed PFP structure is to attain high classification performance using essential feature extractors with both multilevel and local features. Furthermore, the PFP feature extractor generates low- and high-level features using a handcrafted extractor. To obtain high discriminative feature extraction ability, we have used the histogram of oriented gradients (HOG) with PFP; hence, it is named PFP-HOG. Furthermore, the iterative Chi2 (IChi2) is utilized to choose the clinically significant features. Finally, the k-nearest neighbors (kNN) with tenfold cross-validation is used for automated classification. Four MRI neurological databases (AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged dataset) have been utilized to develop our model. The PFP-HOG and IChi2-based models attained 100%, 94.98%, 98.19%, and 97.80% accuracy using the AD dataset, brain tumor dataset 1, brain tumor dataset 2, and the merged brain MRI dataset, respectively. These findings not only provide accurate and robust classification of various neurological disorders using MRI but also hold the potential to assist neurologists in validating manual MRI brain abnormality screening.
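
As a rough illustration of the selection stage described above, the sketch below pairs a chi-squared feature ranking with a kNN evaluator: successively larger top-k subsets are scored by cross-validated accuracy and the best subset is kept. The search range, step size, and kNN settings are illustrative assumptions about how an iterative Chi2 selector can be built; they are not the authors' exact procedure.

```python
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def iterative_chi2_select(X, y, k_min=100, k_max=1000, step=100):
    """X must be non-negative (e.g., HOG features). Returns best feature indices."""
    scores, _ = chi2(X, y)                    # rank features by chi-squared score
    order = np.argsort(scores)[::-1]
    best_idx, best_acc = order[:k_min], 0.0
    for k in range(k_min, min(k_max, X.shape[1]) + 1, step):
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                              X[:, order[:k]], y, cv=10).mean()
        if acc > best_acc:
            best_idx, best_acc = order[:k], acc
    return best_idx
```
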
  9. Barua PD, Chan WY, Dogan S, Baygin M, Tuncer T, Ciaccio EJ, et al.
    Entropy (Basel), 2021 Dec 08;23(12).
    PMID: 34945957 DOI: 10.3390/e23121651
    Many learning techniques coupled with optical coherence tomography (OCT) images have been developed to diagnose retinal disorders. This work aims to develop a novel framework for extracting deep features from 18 pre-trained convolutional neural networks (CNNs) and to attain high performance using OCT images. In this work, we have developed a new framework for automated detection of retinal disorders using transfer learning. The model consists of three phases: deep fused and multilevel feature extraction using 18 pre-trained networks and tent maximal pooling; feature selection with ReliefF; and classification using the optimized classifier. The novelty of this proposed framework lies in generating features with widely used CNNs and selecting the most suitable features for classification. The features extracted by our proposed intelligent feature extractor are fed to iterative ReliefF (IRF) to automatically select the best feature vector. The quadratic support vector machine (QSVM) is utilized as the classifier in this work. We have developed our model using two public OCT image datasets, named database 1 (DB1) and database 2 (DB2). The proposed framework attained 97.40% and 100% classification accuracy on the two OCT datasets, DB1 and DB2, respectively. These results illustrate the success of our model.
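
As a rough illustration of the fused deep feature extraction described above, the sketch below concatenates globally pooled features from two pre-trained torchvision backbones. The paper fuses 18 networks with tent maximal pooling and iterative ReliefF selection; using two networks and plain concatenation here is an illustrative simplification.

```python
import torch
import torch.nn as nn
from torchvision import models

backbones = [models.resnet18(weights="DEFAULT"),
             models.densenet201(weights="DEFAULT")]
extractors = [nn.Sequential(*list(net.eval().children())[:-1]) for net in backbones]
pool = nn.AdaptiveAvgPool2d(1)                 # unify spatial outputs to (N, C, 1, 1)

@torch.no_grad()
def fused_features(batch):                     # batch: (N, 3, 224, 224)
    feats = [pool(ext(batch)).flatten(1) for ext in extractors]
    return torch.cat(feats, dim=1)             # one fused deep vector per image
```
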
  10. Dogan S, Barua PD, Baygin M, Tuncer T, Tan RS, Ciaccio EJ, et al.
    Cogn Neurodyn, 2024 Oct;18(5):2503-2519.
    PMID: 39555305 DOI: 10.1007/s11571-024-10104-1
    This paper presents an innovative feature engineering framework based on lattice structures for the automated identification of Alzheimer's disease (AD) using electroencephalogram (EEG) signals. Inspired by the Shannon information entropy theorem, we apply a probabilistic function to create the novel Lattice123 pattern, generating two directed graphs with minimum and maximum distance-based kernels. Using these graphs and three kernel functions (signum, upper ternary, and lower ternary), we generate six feature vectors for each input signal block to extract textural features. Multilevel discrete wavelet transform (MDWT) was used to generate low-level wavelet subbands. Our proposed model mirrors deep learning approaches, facilitating feature extraction in frequency and spatial domains at various levels. We used iterative neighborhood component analysis to select the most discriminative features from the extracted vectors. An iterative hard majority voting and a greedy algorithm were used to generate voted vectors to select the optimal channel-wise and overall results. Our proposed model yielded a classification accuracy of more than 98% and a geometric mean of more than 96%. Our proposed Lattice123 pattern, dynamic graph generation, and MDWT-based multilevel feature extraction can detect AD accurately as the proposed pattern can extract subtle changes from the EEG signal accurately. Our prototype is ready to be validated using a large and diverse database.
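
As a rough illustration of two ingredients described above, the sketch below computes MDWT subbands with PyWavelets and extracts signum-kernel binary-pattern histograms from the raw signal and each subband. The Lattice123 graph construction itself is not reproduced; the wavelet, decomposition level, and 8-neighbor comparison window are illustrative assumptions.

```python
import numpy as np
import pywt

def signum_pattern_histogram(signal, width=8):
    """Signum-kernel comparisons against a sliding neighborhood -> code histogram."""
    codes = np.zeros(len(signal) - width, dtype=int)
    for j in range(width):
        bits = (signal[j:len(signal) - width + j] > signal[width:]).astype(int)
        codes += bits << j                     # pack comparisons into binary codes
    hist, _ = np.histogram(codes, bins=2 ** width, range=(0, 2 ** width), density=True)
    return hist

def mdwt_features(eeg, wavelet="db4", level=4):
    """Histograms from the raw EEG block and every wavelet subband, concatenated."""
    subbands = pywt.wavedec(eeg, wavelet, level=level)   # [cA4, cD4, ..., cD1]
    return np.concatenate([signum_pattern_histogram(b) for b in [eeg] + subbands])
```
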
  11. Barua PD, Muhammad Gowdh NF, Rahmat K, Ramli N, Ng WL, Chan WY, et al.
    Int J Environ Res Public Health, 2021;18(15).
    PMID: 34360343 DOI: 10.3390/ijerph18158052
    COVID-19 and pneumonia detection using medical images is a topic of immense interest in medical and healthcare research. Various advanced medical imaging and machine learning techniques have been presented to detect these respiratory disorders accurately. In this work, we have proposed a novel COVID-19 detection system using an exemplar and hybrid fused deep feature generator with X-ray images. The proposed Exemplar COVID-19FclNet9 comprises three basic steps: exemplar deep feature generation, iterative feature selection, and classification. The novelty of this work is the feature extraction using three pre-trained convolutional neural networks (CNNs). These networks (AlexNet, VGG16, and VGG19) share the common property of having three fully connected layers. The fully connected layers of these networks are used to generate deep features with an exemplar structure, yielding nine deep feature generation methods. The loss values of these feature extractors are computed, and the best three extractors are selected. The features from the top three fully connected layers are merged, and an iterative selector is used to select the most informative features. The chosen features are classified using a support vector machine (SVM) classifier. The proposed COVID-19FclNet9 thus applies nine deep feature extraction methods using three deep networks together; the most appropriate deep feature generation models and iterative feature selection are employed jointly to exploit their combined advantages, improving the image classification ability of the three deep networks. The presented model is developed using four X-ray image corpora (DB1, DB2, DB3 and DB4) with two, three and four classes. The proposed Exemplar COVID-19FclNet9 achieved classification accuracies of 97.60%, 89.96%, 98.84% and 99.64% on the four datasets, respectively, using the SVM classifier with 10-fold cross-validation. Our developed Exemplar COVID-19FclNet9 model has achieved high classification accuracy on all four databases and may be deployed for clinical application.
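
As a rough illustration of the fully-connected-layer feature generation described above, the sketch below taps the penultimate 4096-dimensional FC layer of a pre-trained AlexNet (VGG16 and VGG19 can be tapped the same way). Which of the three FC layers to use, and the omission of the exemplar division, are illustrative simplifications.

```python
import torch
import torch.nn as nn
from torchvision import models

alexnet = models.alexnet(weights="DEFAULT").eval()
# keep everything up to (and including) the second 4096-unit FC layer
fc_extractor = nn.Sequential(alexnet.features, alexnet.avgpool, nn.Flatten(1),
                             *list(alexnet.classifier.children())[:-1])

@torch.no_grad()
def fc_features(batch):                        # batch: (N, 3, 224, 224)
    return fc_extractor(batch)                 # (N, 4096) deep feature vectors
```
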
  12. Barua PD, Baygin N, Dogan S, Baygin M, Arunkumar N, Fujita H, et al.
    Sci Rep, 2022 Oct 14;12(1):17297.
    PMID: 36241674 DOI: 10.1038/s41598-022-21380-4
    Pain intensity classification using facial images is a challenging problem in computer vision research. This work proposed a patch and transfer learning-based model to classify various pain intensities using facial images. The input facial images were segmented into dynamic-sized horizontal patches, or "shutter blinds". A lightweight deep network, DarkNet19, pre-trained on ImageNet1K, was used to generate deep features from the shutter blinds and from the undivided resized segmented input facial image. The most discriminative features were selected from these deep features using iterative neighborhood component analysis and then fed to a standard shallow fine k-nearest neighbor classifier for classification using tenfold cross-validation. The proposed shutter blinds-based model was trained and tested on datasets derived from two public databases, the University of Northern British Columbia-McMaster Shoulder Pain Expression Archive Database and the Denver Intensity of Spontaneous Facial Action Database, both of which comprised four pain intensity classes labeled by human experts using validated facial action coding system methodology. Our shutter blinds-based classification model attained more than 95% overall accuracy on both datasets. The excellent performance suggests that the automated pain intensity classification model can be deployed to assist doctors in the non-verbal detection of pain using facial images in various situations (e.g., non-communicative patients or during surgery). This system can facilitate timely detection and management of pain.
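
As a rough illustration of the "shutter blinds" segmentation described above, the sketch below slices a face image into horizontal strips and resizes each strip (and the whole image) to the network input size. The four equal-height strips and the 224x224 target are illustrative assumptions; the paper uses dynamic-sized patches fed to DarkNet19.

```python
import numpy as np
from skimage.transform import resize

def shutter_blinds(face, n_strips=4, size=(224, 224)):
    """face: 2-D grayscale array. Returns resized full image + each strip."""
    h = face.shape[0] // n_strips
    strips = [face[i * h:(i + 1) * h] for i in range(n_strips)]
    return [resize(face, size)] + [resize(s, size) for s in strips]
```
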
  13. Gudigar A, Kadri NA, Raghavendra U, Samanth J, Maithri M, Inamdar MA, et al.
    Comput Biol Med, 2024 Apr;172:108207.
    PMID: 38489986 DOI: 10.1016/j.compbiomed.2024.108207
    Artificial Intelligence (AI) techniques are increasingly used in computer-aided diagnostic tools in medicine. These techniques can also help to identify hypertension (HTN), a global health issue, in its early stage. Automated HTN detection uses socio-demographic data, clinical data, and physiological signals. Additionally, signs of secondary HTN can be identified using various imaging modalities. This systematic review examines related work on automated HTN detection. We identify the datasets, techniques, and classifiers used to develop AI models from clinical data, physiological signals, and fused data (a combination of both). Image-based models for assessing secondary HTN are also reviewed. The majority of the studies utilized single-modality approaches, such as biological signals (e.g., electrocardiography, photoplethysmography) or medical imaging (e.g., magnetic resonance angiography, ultrasound). Surprisingly, only a small portion of the studies (22 out of 122) utilized a multi-modal fusion approach combining data from different sources. Even fewer investigated integrating clinical data, physiological signals, and medical imaging to understand the intricate relationships between these factors. Future research directions are discussed that could build better healthcare systems for early HTN detection through more integrated modeling of multi-modal data sources.
  14. Gudigar A, Raghavendra U, Nayak S, Ooi CP, Chan WY, Gangavarapu MR, et al.
    Sensors (Basel), 2021 Dec 01;21(23).
    PMID: 34884045 DOI: 10.3390/s21238045
    The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihood of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is crucial to stopping the spread of the SARS-CoV-2 virus. The proven value of artificial intelligence (AI) in various fields of science has encouraged researchers to apply it to this problem. AI techniques used with various medical imaging modalities, including X-ray, computed tomography (CT), and ultrasound (US), have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review of state-of-the-art AI techniques applied to X-ray, CT, and US images to detect COVID-19. In this paper, we discuss the approaches used by various authors, the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.