Displaying publications 81 - 85 of 85 in total

  1. Xu S, Deo RC, Soar J, Barua PD, Faust O, Homaira N, et al.
    Comput Methods Programs Biomed, 2023 Nov;241:107746.
    PMID: 37660550 DOI: 10.1016/j.cmpb.2023.107746
    BACKGROUND AND OBJECTIVE: Obstructive airway diseases, including asthma and Chronic Obstructive Pulmonary Disease (COPD), are among the most common chronic respiratory health problems. Both conditions require health professional expertise to diagnose, so the process is time intensive for healthcare providers and the diagnostic quality is subject to intra- and inter-operator variability. In this study, we investigate the role of automated detection of obstructive airway diseases in reducing cost and improving diagnostic quality.

    METHODS: We investigated the existing body of evidence and applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to search records in the IEEE, Google Scholar, and PubMed databases. We identified 65 papers published from 2013 to 2022, covering 67 different studies. The review process was structured according to the medical data used for disease detection. We identified six main categories, namely air flow, genetic, imaging, signals, medical records, and miscellaneous data. For each of these categories, we report both the disease detection methods and their performance.

    RESULTS: We found that medical imaging was used as the data for automated obstructive airway disease detection in 14 of the reviewed studies. Genetics and physiological signals were used in 13 studies. Medical records and air flow were used in 9 and 7 studies, respectively. Most papers were published in 2020, and we found three times more work on Machine Learning (ML) than on Deep Learning (DL). Statistical analysis shows that DL techniques achieve higher Accuracy (ACC) than ML. Convolutional Neural Network (CNN) is the most common DL classifier and Support Vector Machine (SVM) is the most widely used ML classifier. During our review, we discovered only two publicly available asthma and COPD datasets; most studies used private clinical datasets, so data size and data composition are inconsistent.
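
    As a concrete illustration of the ML pipeline most often reported in the reviewed studies, the minimal sketch below trains the most widely used ML classifier, an SVM, on synthetic tabular data; the feature set (spirometry-style measurements plus age) and all values are hypothetical stand-ins for the clinical data a real study would use.

    # Minimal sketch (not taken from any reviewed study): an SVM classifier on
    # hypothetical spirometry-style features for obstructive airway disease detection.
    import numpy as np
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 4))              # hypothetical FEV1, FVC, FEV1/FVC, age
    y = rng.integers(0, 2, size=n)           # 0 = healthy, 1 = obstructive disease

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print("ACC:", accuracy_score(y_te, clf.predict(X_te)))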

    CONCLUSIONS: Our review results indicate that Artificial Intelligence (AI) can improve both the decision quality and the efficiency of health professionals during COPD and asthma diagnosis. However, we also found several limitations, such as a lack of dataset consistency, limited dataset sizes, and insufficient exploration of remote monitoring. We appeal to society to accept and trust computer-aided diagnosis of airflow obstructive diseases, and we encourage health professionals to work closely with AI scientists to promote automated detection in clinical practice and hospital settings.

  2. Loh HW, Ooi CP, Oh SL, Barua PD, Tan YR, Molinari F, et al.
    Comput Methods Programs Biomed, 2023 Nov;241:107775.
    PMID: 37651817 DOI: 10.1016/j.cmpb.2023.107775
    BACKGROUND AND OBJECTIVE: Attention Deficit Hyperactivity Disorder (ADHD) is a common neurodevelopmental disorder in children and adolescents that can lead to long-term challenges in life outcomes if left untreated. ADHD is also frequently associated with Conduct Disorder (CD), and multiple studies have found similarities in clinical signs and behavioral symptoms between the two conditions, making differentiation between ADHD, ADHD comorbid with CD (ADHD+CD), and CD a subjective diagnosis. Therefore, the goal of this pilot study is to create the first explainable deep learning (DL) model for objective ECG-based ADHD/CD diagnosis, as an objective biomarker may improve diagnostic accuracy.

    METHODS: The dataset used in this study consists of ECG data collected from 45 ADHD, 62 ADHD+CD, and 16 CD patients at the Child Guidance Clinic in Singapore. The ECG data were segmented into 2 s epochs and used directly to train our one-dimensional (1D) convolutional neural network (CNN) model.
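
    A minimal sketch of the kind of 1D CNN described here is shown below; the layer sizes, kernel widths, assumed 256 Hz sampling rate (512 samples per 2 s epoch), and single-lead input are illustrative choices rather than the authors' exact architecture.

    # Minimal sketch (architecture details are assumptions, not the authors' model):
    # a 1D CNN mapping 2 s ECG epochs to the three classes ADHD, ADHD+CD and CD.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_ecg_cnn(epoch_len=512, n_classes=3):
        return models.Sequential([
            layers.Input(shape=(epoch_len, 1)),            # single-lead 2 s ECG epoch
            layers.Conv1D(16, kernel_size=7, activation="relu"),
            layers.MaxPooling1D(2),
            layers.Conv1D(32, kernel_size=5, activation="relu"),
            layers.MaxPooling1D(2),
            layers.Conv1D(64, kernel_size=3, activation="relu"),
            layers.GlobalAveragePooling1D(),
            layers.Dense(64, activation="relu"),
            layers.Dense(n_classes, activation="softmax"),
        ])

    model = build_ecg_cnn()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])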

    RESULTS: The proposed model yielded 96.04% classification accuracy, 96.26% precision, 95.99% sensitivity, and a 96.11% F1-score. Gradient-weighted Class Activation Mapping (Grad-CAM) was also used to highlight the important ECG characteristics at specific time points that most influence the classification score.
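
    The sketch below shows the generic Grad-CAM recipe for a 1D CNN rather than the authors' implementation; it assumes a Keras model built on an Input layer and takes the name of its last convolutional layer as an argument. Gradients of the class score with respect to the convolutional feature maps are averaged over time and used to weight those maps, yielding a per-time-step importance curve over the ECG epoch.

    # Minimal Grad-CAM sketch for a 1D CNN (generic recipe, not the authors' code).
    import numpy as np
    import tensorflow as tf

    def grad_cam_1d(model, ecg_epoch, conv_layer_name, class_idx):
        # Model that returns both the last conv feature maps and the predictions.
        grad_model = tf.keras.Model(model.inputs,
                                    [model.get_layer(conv_layer_name).output,
                                     model.output])
        x = tf.convert_to_tensor(ecg_epoch[np.newaxis, :, np.newaxis], tf.float32)
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(x)
            score = preds[:, class_idx]
        grads = tape.gradient(score, conv_out)             # shape (1, T, channels)
        weights = tf.reduce_mean(grads, axis=1)            # per-channel weights
        cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
        return (cam / (tf.reduce_max(cam) + 1e-8)).numpy() # importance in [0, 1]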

    CONCLUSION: In addition to the strong model performance achieved with our proposed DL method, the Grad-CAM implementation offers vital temporal information that clinicians and other mental healthcare professionals can use to make informed medical judgments. We hope that this pilot study will encourage larger-scale research with larger biosignal datasets, allowing biosignal-based computer-aided diagnostic (CAD) tools to be implemented in healthcare and ambulatory settings, as ECG can be easily obtained via wearable devices such as smartwatches.

  3. Mak NL, Ng WH, Ooi EH, Lau EV, Pamidi N, Foo JJ, et al.
    Comput Methods Programs Biomed, 2024 Jan;243:107866.
    PMID: 37865059 DOI: 10.1016/j.cmpb.2023.107866
    BACKGROUND AND OBJECTIVES: Thermochemical ablation (TCA) is a cancer treatment that utilises the heat released from the neutralisation of acid and base to raise tissue temperature to levels sufficient to induce thermal coagulation. Computational studies have demonstrated that the coagulation volume produced by sequential injection is smaller than that produced by simultaneous injection. When the reagents are injected sequentially, the region of contact between acid and base is limited to a thin layer sandwiched between the acid and base distributions. It is hypothesised that increasing the frequency of acid-base injections into the tissue, by shortening the injection interval for each reagent, can increase the effective area of contact between acid and base, thereby intensifying neutralisation and the exothermic heat released into the tissue.

    METHODS: To verify this hypothesis, a computational model was developed to simulate the thermochemical processes involved during TCA with sequential injection. Four major processes that take place during TCA were considered, namely the flow of acid and base, their neutralisation, the release of exothermic heat, and the formation of thermal damage inside the tissue. Equimolar acid and base at 7.5 M were injected into the tissue intermittently. Six injection intervals, namely 3, 6, 15, 20, 30 and 60 s, were investigated.
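
    To make the two heat-related processes concrete, the sketch below evaluates them with generic, literature-style values: the exothermic heat released by neutralising a given amount of reagent (about 57.3 kJ/mol for a strong acid-strong base reaction) and the Arrhenius damage integral commonly used in ablation modelling to mark tissue as coagulated once the damage index reaches 1. The frequency factor and activation energy are assumed liver-like values, not the parameters of this paper.

    # Illustrative sketch only; parameter values are assumptions, not the paper's.
    import numpy as np

    R = 8.314                # J/(mol K), universal gas constant
    DH_NEUT = 57.3e3         # J/mol, approx. enthalpy of strong acid-base neutralisation
    A_FREQ = 7.39e39         # 1/s, assumed Arrhenius frequency factor (liver-like tissue)
    E_A = 2.577e5            # J/mol, assumed activation energy

    def neutralisation_heat(moles_reacted):
        """Heat (J) released by neutralising the given amount of acid/base (mol)."""
        return DH_NEUT * moles_reacted

    def arrhenius_damage(temps_kelvin, dt):
        """Arrhenius damage index integrated over a tissue temperature history."""
        temps = np.asarray(temps_kelvin, dtype=float)
        return float(np.sum(A_FREQ * np.exp(-E_A / (R * temps)) * dt))

    # Example: 0.2 mL of 7.5 M reagent fully neutralised -> 1.5e-3 mol reacted.
    print("Heat released:", neutralisation_heat(7.5 * 0.2e-3), "J")

    # Example: tissue held at 60 degrees C for 60 s.
    temps = np.full(60, 273.15 + 60.0)
    print("Damage index:", arrhenius_damage(temps, dt=1.0), "(>= 1 means coagulated)")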

    RESULTS: Shortening the injection interval led to enlargement of the coagulation volume. If one considers only the coagulation volume as the determining factor, then a 15 s injection interval was found to be optimum. Conversely, if one places priority on safety, then a 3 s injection interval resulted in the lowest amount of reagent residue inside the tissue after treatment. With a 3 s injection interval, the coagulation volume was found to be larger than that of simultaneous injection with the same treatment parameters; it also surpassed that of radiofrequency ablation (RFA), a conventional thermal ablation technique commonly used for liver cancer treatment.

    CONCLUSION: The numerical results verified the hypothesis that shortening the injection interval leads to the formation of a larger thermal coagulation zone during TCA with sequential injection. More importantly, a 3 s injection interval was found to be optimum for both efficacy (large coagulation volume) and safety (least amount of reagent residue).

  4. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, et al.
    Comput Methods Programs Biomed, 2024 Jan;243:107880.
    PMID: 37924769 DOI: 10.1016/j.cmpb.2023.107880
    Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. This success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of the individual modalities when used in isolation for different malignancies. However, manual image interpretation requires extensive disease-specific knowledge and is a time-consuming aspect of physicians' daily routines. Deep learning algorithms, akin to a practitioner during training, extract knowledge from images and support the diagnostic process through symptom detection and image enhancement. The available review papers on PET/CT imaging either include additional modalities or examine various types of AI applications, and a comprehensive investigation focused specifically on the use of deep learning on PET/CT images has been lacking. This review aims to fill that gap by investigating the characteristics of the approaches used in papers that employed deep learning for PET/CT imaging. We identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images. We also identified the best pre-processing algorithms and the most effective deep learning models reported for PET/CT, while highlighting current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features and at enhancing accuracy and efficiency in diagnosis. However, limitations arise from the scarcity of annotated datasets and from challenges in explainability and uncertainty. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and for predicting patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.
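
    As a small illustration of how hybrid PET/CT data are commonly presented to a deep learning model, the sketch below stacks co-registered PET and CT slices as two input channels of a compact 2D CNN for slice-level classification; the input size, layer configuration, and two-class output are illustrative assumptions rather than a model taken from the reviewed studies.

    # Minimal sketch (a generic pattern, not a specific model from the review):
    # co-registered PET and CT slices as two channels of a small 2D CNN.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_petct_classifier(height=128, width=128, n_classes=2):
        return models.Sequential([
            layers.Input(shape=(height, width, 2)),        # channel 0: PET, channel 1: CT
            layers.Conv2D(16, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(n_classes, activation="softmax"),
        ])

    model = build_petct_classifier()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
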
  5. Zhang K, Ting HN, Choo YM
    Comput Methods Programs Biomed, 2024 Mar;245:108043.
    PMID: 38306944 DOI: 10.1016/j.cmpb.2024.108043
    BACKGROUND AND OBJECTIVE: Conflict may happen when more than one classifier is used to perform prediction or classification, because recognition model errors lead to conflicting evidence. These conflicts can cause decision errors in baby cry recognition and further decrease its recognition accuracy. Thus, the objective of this study is to propose a method that can effectively minimize the conflict among deep learning models and improve the accuracy of baby cry recognition.

    METHODS: An improved Dempster-Shafer evidence theory (DST) method based on Wasserstein distance and Deng entropy was proposed to reduce conflicts among the results by combining the credibility degree between pieces of evidence and the uncertainty degree of the evidence. To validate the effectiveness of the proposed method, examples were analyzed and the method was applied to baby cry recognition. The Whale Optimization Algorithm-Variational Mode Decomposition (WOA-VMD) method was used to optimally decompose the baby cry signals, the deep features of the decomposed components were extracted using the VGG16 model, and Long Short-Term Memory (LSTM) models were used to classify the baby cry signals. The improved DST decision method was then used to fuse the classification results.
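
    A minimal sketch of this style of weighted evidence fusion is given below. It is an illustrative re-implementation rather than the authors' exact algorithm: the classifier outputs are treated as basic probability assignments over singleton classes only, the Wasserstein distance is approximated by a simple L1 distance between mass vectors, credibility and Deng-entropy-based uncertainty weight the evidence, and the weighted-average evidence is fused with Dempster's rule. The cry class names and example outputs are hypothetical.

    # Illustrative DST fusion sketch (singleton-only BPAs; not the authors' exact code).
    import numpy as np

    CLASSES = ["hunger", "pain", "discomfort"]       # hypothetical cry classes

    def deng_entropy(m):
        """Deng entropy of a singleton-only BPA (reduces to Shannon entropy)."""
        p = np.asarray(m, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def credibility(bpas):
        """Credibility of each BPA from its average L1 distance to the others."""
        bpas = np.asarray(bpas, dtype=float)
        n = len(bpas)
        dist = np.array([[np.abs(bpas[i] - bpas[j]).sum() for j in range(n)]
                         for i in range(n)])
        support = np.exp(-dist.sum(axis=1) / max(n - 1, 1))
        return support / support.sum()

    def dempster_combine(m1, m2):
        """Dempster's rule for two singleton-only BPAs over the same frame."""
        joint = np.outer(m1, m2)
        agreement = np.diag(joint).copy()
        conflict = joint.sum() - agreement.sum()
        return agreement / (1.0 - conflict)

    def fuse(bpas):
        bpas = np.asarray(bpas, dtype=float)
        weights = credibility(bpas) * np.exp([deng_entropy(m) for m in bpas])
        weights = weights / weights.sum()
        avg = (weights[:, None] * bpas).sum(axis=0)  # weighted-average evidence
        fused = avg
        for _ in range(len(bpas) - 1):               # combine n copies via Dempster's rule
            fused = dempster_combine(fused, avg)
        return fused

    # Hypothetical softmax outputs from three LSTM classifiers for one cry segment.
    outputs = [[0.70, 0.20, 0.10],
               [0.10, 0.80, 0.10],                   # conflicting classifier
               [0.65, 0.25, 0.10]]
    print(dict(zip(CLASSES, np.round(fuse(outputs), 3))))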

    RESULTS: The proposed fusion method achieves an accuracy of 90.15% in classifying three types of baby cry. Improvements of between 2.90% and 4.98% were obtained over existing DST fusion methods, and recognition accuracy was improved by between 5.79% and 11.53% compared with the latest methods used in baby cry recognition.

    CONCLUSION: The proposed method optimally decomposes baby cry signals, effectively reduces the conflict among the results of the deep learning models, and improves the accuracy of baby cry recognition.
