  1. Loh HW, Ooi CP, Oh SL, Barua PD, Tan YR, Molinari F, et al.
    Comput Methods Programs Biomed, 2023 Nov;241:107775.
    PMID: 37651817 DOI: 10.1016/j.cmpb.2023.107775
    BACKGROUND AND OBJECTIVE: Attention Deficit Hyperactivity Disorder (ADHD) is a common neurodevelopmental disorder in children and adolescents that can lead to long-term challenges in life outcomes if left untreated. ADHD is also frequently associated with Conduct Disorder (CD), and multiple studies have found similarities in clinical signs and behavioral symptoms between the two disorders, making differentiation between ADHD, ADHD comorbid with CD (ADHD+CD), and CD a subjective diagnosis. Therefore, the goal of this pilot study is to create the first explainable deep learning (DL) model for objective ECG-based ADHD/CD diagnosis, as an objective biomarker may improve diagnostic accuracy.

    METHODS: The dataset used in this study consists of ECG data collected from 45 ADHD, 62 ADHD+CD, and 16 CD patients at the Child Guidance Clinic in Singapore. The ECG data were segmented into 2 s epochs and used directly to train our 1-dimensional (1D) convolutional neural network (CNN) model.
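
    As an illustration, a minimal 1D CNN of the kind described might look like the following sketch (Python, Keras). The layer sizes, filter counts, and sampling rate (assumed 250 Hz, i.e., 500 samples per 2 s epoch) are illustrative only; the abstract does not report the actual architecture.

      import tensorflow as tf

      # Minimal 1D CNN for 3-class classification of 2 s ECG epochs.
      # All hyperparameters below are assumptions, not the paper's values.
      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(500, 1)),            # 2 s epoch, single lead
          tf.keras.layers.Conv1D(16, 5, activation="relu"),
          tf.keras.layers.MaxPooling1D(2),
          tf.keras.layers.Conv1D(32, 5, activation="relu"),
          tf.keras.layers.GlobalAveragePooling1D(),
          tf.keras.layers.Dense(3, activation="softmax"),   # ADHD / ADHD+CD / CD
      ])
      model.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])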

    RESULTS: The proposed model yielded 96.04% classification accuracy, 96.26% precision, 95.99% sensitivity, and a 96.11% F1-score. Gradient-weighted Class Activation Mapping (Grad-CAM) was also used to highlight the important ECG characteristics at the specific time points that most influenced the classification score.
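
    For context, Grad-CAM for a 1D CNN can be sketched as below (Python, TensorFlow): the gradients of the class score with respect to the last convolutional layer's feature maps are pooled over time and used to weight those maps. The model and layer name are placeholders, not the paper's implementation.

      import numpy as np
      import tensorflow as tf

      def grad_cam_1d(model, x, conv_layer_name, class_idx):
          """Per-time-step importance for one ECG epoch x of shape (T, 1)."""
          grad_model = tf.keras.Model(
              model.inputs,
              [model.get_layer(conv_layer_name).output, model.output])
          with tf.GradientTape() as tape:
              conv_out, preds = grad_model(x[np.newaxis, ...])
              score = preds[:, class_idx]
          grads = tape.gradient(score, conv_out)     # d(score)/d(feature maps)
          weights = tf.reduce_mean(grads, axis=1)    # pool gradients over time
          cam = tf.nn.relu(tf.reduce_sum(
              conv_out * weights[:, tf.newaxis, :], axis=-1))
          return (cam / (tf.reduce_max(cam) + 1e-8)).numpy().squeeze()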

    CONCLUSION: In addition to the strong performance of our proposed DL method, the Grad-CAM implementation offers vital temporal information that clinicians and other mental healthcare professionals can use to make informed medical judgments. We hope this pilot study will encourage larger-scale research with a larger biosignal dataset, allowing biosignal-based computer-aided diagnostic (CAD) tools to be implemented in healthcare and ambulatory settings, as ECG can be easily obtained via wearable devices such as smartwatches.

  2. Mak NL, Ng WH, Ooi EH, Lau EV, Pamidi N, Foo JJ, et al.
    Comput Methods Programs Biomed, 2024 Jan;243:107866.
    PMID: 37865059 DOI: 10.1016/j.cmpb.2023.107866
    BACKGROUND AND OBJECTIVES: Thermochemical ablation (TCA) is a cancer treatment that utilises the heat released from the neutralisation of acid and base to raise tissue temperature to levels sufficient to induce thermal coagulation. Computational studies have demonstrated that the coagulation volume produced by sequential injection is smaller than that produced by simultaneous injection. When the reagents are injected one after the other, the region of contact between acid and base is limited to a thin contact layer sandwiched between the distributions of acid and base. It is hypothesised that increasing the frequency of acid-base injections into the tissue by shortening the injection interval for each reagent can increase the effective area of contact between acid and base, thereby intensifying neutralisation and the exothermic heat released into the tissue.

    METHODS: To verify this hypothesis, a computational model was developed to simulate the thermochemical processes involved during TCA with sequential injection. Four major processes that take place during TCA were considered, namely the flow of acid and base, their neutralisation, the release of exothermic heat, and the formation of thermal damage inside the tissue. Equimolar acid and base at 7.5 M were injected into the tissue intermittently. Six injection intervals, namely 3, 6, 15, 20, 30, and 60 s, were investigated.
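
    The abstract does not list the governing equations, but a standard ingredient of thermal damage modelling in ablation studies is the Arrhenius damage integral, sketched below in Python; the kinetic coefficients shown are commonly cited liver-tissue values used here purely for illustration, not the paper's parameters.

      import numpy as np

      A_FREQ = 7.39e39   # frequency factor [1/s] (illustrative, not the paper's)
      E_A    = 2.577e5   # activation energy [J/mol] (illustrative)
      R_GAS  = 8.314     # universal gas constant [J/(mol K)]

      def arrhenius_damage(T_history_kelvin, dt):
          """Omega = integral of A*exp(-Ea/(R*T)) dt over the temperature
          history; Omega >= 1 is a common threshold for thermal coagulation."""
          return np.sum(A_FREQ * np.exp(-E_A / (R_GAS * T_history_kelvin)) * dt)

      # Example: tissue held at 60 degC (333.15 K) for 60 s, in 0.1 s steps
      T_hist = np.full(600, 333.15)
      print(arrhenius_damage(T_hist, 0.1))  # > 1, i.e. coagulated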

    RESULTS: Shortening the injection interval led to the enlargement of the coagulation volume. If one considers only the coagulation volume as the determining factor, then a 15 s injection interval was found to be optimal. Conversely, if one places priority on safety, then a 3 s injection interval resulted in the lowest amount of reagent residue inside the tissue after treatment. With a 3 s injection interval, the coagulation volume was found to be larger than that of simultaneous injection with the same treatment parameters. Moreover, the volume also surpassed that of radiofrequency ablation (RFA), a conventional thermal ablation technique commonly used for liver cancer treatment.

    CONCLUSION: The numerical results verified the hypothesis that shortening the injection interval leads to the formation of a larger thermal coagulation zone during TCA with sequential injection. More importantly, a 3 s injection interval was found to be optimal for both efficacy (large coagulation volume) and safety (least amount of reagent residue).

  3. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, et al.
    Comput Methods Programs Biomed, 2024 Jan;243:107880.
    PMID: 37924769 DOI: 10.1016/j.cmpb.2023.107880
    Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. This success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of the individual modalities used in isolation for different malignancies. However, manual image interpretation requires extensive disease-specific knowledge and is a time-consuming aspect of physicians' daily routines. Deep learning algorithms, like a practitioner during training, extract knowledge from images; this acquired knowledge supports the diagnosis process through symptom detection and image enhancement. Existing review papers on PET/CT imaging either include additional modalities or examine a broad range of AI applications, and a comprehensive investigation focused specifically on deep learning applied to PET/CT images has been lacking. This review aims to fill that gap by investigating the characteristics of the approaches used in papers that employed deep learning for PET/CT imaging. Within the review, we identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images. We also identified the best pre-processing algorithms and the most effective deep learning models reported for PET/CT, while highlighting current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and task-specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features, enhancing accuracy and efficiency in diagnosis. However, limitations arise from the scarcity of annotated datasets and from challenges in explainability and uncertainty quantification. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and for predicting patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.

  4. Zhang K, Ting HN, Choo YM
    Comput Methods Programs Biomed, 2024 Mar;245:108043.
    PMID: 38306944 DOI: 10.1016/j.cmpb.2024.108043
    BACKGROUND AND OBJECTIVE: Conflicts may arise when more than one classifier is used to perform prediction or classification, as recognition model errors lead to conflicting evidence. These conflicts can cause decision errors in baby cry recognition and further decrease its accuracy. Thus, the objective of this study is to propose a method that can effectively minimize the conflict among deep learning models and improve the accuracy of baby cry recognition.

    METHODS: An improved Dempster-Shafer evidence theory (DST) method based on Wasserstein distance and Deng entropy was proposed to reduce conflicts among the results by combining the credibility degree between pieces of evidence with the uncertainty degree of each piece of evidence. To validate the effectiveness of the proposed method, worked examples were analyzed and the method was applied to baby cry recognition. Whale Optimization Algorithm-optimized Variational Mode Decomposition (WOA-VMD) was used to optimally decompose the baby cry signals. The deep features of the decomposed components were extracted using the VGG16 model. Long Short-Term Memory (LSTM) models were used to classify the baby cry signals, and the improved DST decision method was used to fuse their decisions.
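
    For reference, classic Dempster's rule of combination for singleton hypotheses is sketched below in Python; the paper's improvement additionally weights each piece of evidence by a Wasserstein-distance credibility and a Deng-entropy uncertainty before combining, details of which are not given in the abstract.

      import numpy as np

      def dempster_combine(m1, m2):
          """Combine two basic probability assignments restricted to
          singleton hypotheses (each mass vector sums to 1)."""
          joint = np.outer(m1, m2)
          K = joint.sum() - np.trace(joint)   # conflict: mass on disjoint pairs
          if np.isclose(K, 1.0):
              raise ValueError("total conflict; evidence cannot be combined")
          return np.diag(joint) / (1.0 - K)

      # e.g. softmax outputs of two classifiers for three cry types
      print(dempster_combine(np.array([0.7, 0.2, 0.1]),
                             np.array([0.6, 0.3, 0.1])))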

    RESULTS: The proposed fusion method achieves an accuracy of 90.15% in classifying three types of baby cry. Improvements of between 2.90% and 4.98% were obtained over existing DST fusion methods. Recognition accuracy improved by between 5.79% and 11.53% compared with the latest methods used in baby cry recognition.

    CONCLUSION: The proposed method optimally decomposes baby cry signals, effectively reduces the conflict among the results of the deep learning models, and improves the accuracy of baby cry recognition.

  5. Ang CYS, Chiew YS, Wang X, Ooi EH, Cove ME, Chen Y, et al.
    Comput Methods Programs Biomed, 2024 Jul 11;255:108323.
    PMID: 39029417 DOI: 10.1016/j.cmpb.2024.108323
    BACKGROUND AND OBJECTIVE: Patient-ventilator asynchrony (PVA) is associated with poor clinical outcomes and remains under-monitored. Automated PVA detection would enable the complete monitoring that standard observational methods do not allow. While model-based and machine learning PVA approaches exist, they have variable performance and can miss specific PVA events. This study compares a model- and rule-based algorithm with a machine learning PVA method by retrospectively validating both methods on an independent patient cohort.

    METHODS: Hysteresis loop analysis (HLA), a rule-based method (RBM), and a tri-input convolutional neural network (TCNN) machine learning model are used to classify 7 different types of PVA: 1) flow asynchrony; 2) reverse triggering; 3) premature cycling; 4) double triggering; 5) delayed cycling; 6) ineffective efforts; and 7) auto triggering. Class activation mapping (CAM) heatmaps visualise the sections of the respiratory waveforms that the TCNN model uses for decision-making, improving the interpretability of its results. Both PVA classification methods were used to classify PVA incidence in an independent retrospective clinical cohort of 11 mechanically ventilated patients for validation and performance comparison.

    RESULTS: Self-validation with the training dataset shows overall better HLA performance (accuracy, sensitivity, specificity: 97.5%, 96.6%, 98.1%) compared to the TCNN model (accuracy, sensitivity, specificity: 89.5%, 98.3%, 83.9%). In this study, the TCNN model demonstrates higher sensitivity in detecting PVA, but HLA was better at identifying non-PVA breathing cycles due to its rule-based nature. While the overall asynchrony index (AI) identified by both classification methods is very similar, the intra-patient distribution of each PVA type varies between HLA and TCNN.
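
    The asynchrony index referred to here is conventionally defined as the percentage of asynchronous breaths among all breaths, as in the sketch below (Python); the counts are hypothetical:

      def asynchrony_index(n_asynchronous, n_total):
          """AI (%) = asynchronous breaths / total breaths x 100; AI > 10% is a
          commonly cited threshold for clinically significant asynchrony."""
          return 100.0 * n_asynchronous / n_total

      print(asynchrony_index(137, 1000))  # 13.7% (hypothetical counts)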

    CONCLUSION: The collective findings underscore the efficacy of both HLA and TCNN in PVA detection, indicating the potential for real-time continuous monitoring of PVA. While ML methods such as the TCNN demonstrate good PVA identification performance, it is essential to ensure optimal model architecture and diversity in training data before widespread uptake as standard care. Moving forward, further validation and adoption of rule-based methods such as HLA offers an effective approach to PVA detection while providing clear insight into the underlying patterns of PVA, better aligning with clinical needs for transparency, explainability, adaptability, and reliability of these emerging tools for clinical care.

  6. Ferdowsi M, Hasan MM, Habib W
    Comput Methods Programs Biomed, 2024 Sep;254:108289.
    PMID: 38905988 DOI: 10.1016/j.cmpb.2024.108289
    BACKGROUND AND OBJECTIVE: Cardiovascular disease (CD) is a major global health concern, affecting millions with symptoms like fatigue and chest discomfort. Timely identification is crucial due to its significant contribution to global mortality. In healthcare, artificial intelligence (AI) holds promise for advancing disease risk assessment and treatment outcome prediction. However, the evolution of machine learning (ML) raises concerns about data privacy and biases, especially in sensitive healthcare applications. The objective is to develop and implement a responsible AI model for CD prediction that prioritizes patient privacy and security while ensuring transparency, explainability, fairness, and ethical adherence in healthcare applications.

    METHODS: To predict CD while prioritizing patient privacy, our study employed data anonymization, which involved adding Laplace noise to sensitive features such as age and gender. The anonymized dataset underwent analysis within a differential privacy (DP) framework, which ensured confidentiality while insights were extracted. Logistic Regression (LR) was compared with Gaussian Naïve Bayes (GNB) and Random Forest (RF), and the methodology integrated feature selection, statistical analysis, and SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) for interpretability. This approach facilitates transparent and interpretable AI decision-making, aligning with responsible AI development principles. Overall, it combines privacy preservation, interpretability, and ethical considerations for accurate CD predictions.
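
    The Laplace mechanism underlying this kind of anonymization can be sketched as follows (Python); the sensitivity and privacy budget epsilon shown are placeholders, since the abstract does not report the values used.

      import numpy as np

      def laplace_mechanism(value, sensitivity, epsilon, rng):
          """Add Laplace(scale = sensitivity / epsilon) noise, giving
          epsilon-differential privacy for a query with that L1 sensitivity."""
          return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

      rng = np.random.default_rng(0)
      ages = np.array([54.0, 61.0, 47.0, 70.0])   # hypothetical ages
      noisy = np.array([laplace_mechanism(a, 1.0, 0.5, rng) for a in ages])
      print(noisy)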

    RESULTS: Our investigations of the DP framework with LR were promising, with an area under the curve (AUC) of 0.848 ± 0.03, an accuracy of 0.797 ± 0.02, precision of 0.789 ± 0.02, recall of 0.797 ± 0.02, and an F1 score of 0.787 ± 0.02, performance comparable to the non-private framework. The SHAP- and LIME-based results support the clinical findings, show a commitment to transparent and interpretable AI decision-making, and align with the principles of responsible AI development.

    CONCLUSIONS: Our study endorses a novel approach to predicting CD, combining data anonymization, privacy-preserving methods, the interpretability tools SHAP and LIME, and ethical considerations. This responsible AI framework ensures accurate predictions, privacy preservation, and user trust, underscoring the significance of comprehensive and transparent ML models in healthcare. This research therefore strengthens the ability to forecast CD, providing a vital lifeline to millions of CD patients globally and potentially preventing numerous fatalities.

  7. Zainol NM, Damanhuri NS, Othman NA, Chiew YS, Nor MBM, Muhammad Z, et al.
    Comput Methods Programs Biomed, 2022 Jun;220:106835.
    PMID: 35512627 DOI: 10.1016/j.cmpb.2022.106835
    BACKGROUND AND OBJECTIVE: Mechanical ventilation (MV) provides breathing support for acute respiratory distress syndrome (ARDS) patients in the intensive care unit, but is difficult to optimize. Too much or too little pressure or volume support can cause further ventilator-induced lung injury, increasing the length of MV, cost, and mortality. Patient-specific respiratory mechanics can help optimize MV settings. However, model-based estimation of respiratory mechanics is less accurate when patients exhibit unmodeled spontaneous breathing (SB) efforts on top of ventilator support. This study aims to estimate and quantify SB efforts by reconstructing the unaltered passive-mechanics airway pressure using a nonlinear autoregressive with exogenous inputs (NARX) model.

    METHODS: The NARX model is used to reconstruct the passive airway pressure that is obscured by the presence of spontaneous breathing effort in MV patients, and the incidence of SB effort is then estimated. The study uses a total of 10,000 breathing cycles collected from 10 ARDS patients at IIUM Hospital in Kuantan, Malaysia. Two different training-testing split ratios were evaluated: first a 60:40 ratio, corresponding to 600 breathing cycles for training and the remaining 400 for testing per patient, and then a 70:30 ratio.
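
    A minimal NARX-style reconstruction can be sketched as below (Python, scikit-learn); the lag orders, network size, and synthetic signals are assumptions for illustration, not the study's configuration, and the residual computed at the end corresponds to the "magnitude of effort" described in the results below.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def narx_features(p, q, na=2, nb=2):
          """Lagged regressors from past pressure p and past flow q
          (lag orders na, nb are assumed)."""
          m = max(na, nb)
          X = np.array([np.r_[p[k - na:k], q[k - nb:k]]
                        for k in range(m, len(p))])
          return X, p[m:]

      # Synthetic stand-in signals; the study's 10,000 breathing cycles
      # are not reproduced here.
      t = np.linspace(0, 10, 1000)
      q = np.sin(2 * np.pi * 0.25 * t)                    # flow-like waveform
      p = 5 + 8 * np.convolve(q, np.ones(5) / 5, "same")  # pressure-like signal

      X, y = narx_features(p, q)
      model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                           random_state=0).fit(X, y)
      print(np.mean(np.abs(y - model.predict(X))))  # mean residual error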

    RESULTS AND DISCUSSION: The mean residual error between the original airway pressure and the reconstructed airway pressure is denoted as the magnitude of effort. The median and interquartile range of the mean residual error for the two ratios are 0.0557 [0.0230-0.0874] and 0.0534 [0.0219-0.0870], respectively, across all patients. The results also show that Patient 2 had the highest percentage of SB incidence and Patient 10 the lowest, demonstrating that the NARX model performs well whether SB effort is frequent or largely absent.

    CONCLUSION: The model is able to produce the SB incidence rate based on a 10% threshold. Hence, the proposed NARX model is potentially useful for estimating and identifying patient-specific SB effort, which could further assist clinical decisions and optimize MV settings.
