Displaying publications 1 - 20 of 85 in total

  1. Zhang K, Ting HN, Choo YM
    Comput Methods Programs Biomed, 2024 Mar;245:108043.
    PMID: 38306944 DOI: 10.1016/j.cmpb.2024.108043
    BACKGROUND AND OBJECTIVE: Conflict may arise when more than one classifier is used to perform prediction or classification. Recognition model errors lead to conflicting evidence. These conflicts can cause decision errors in baby cry recognition and further decrease its accuracy. Thus, the objective of this study is to propose a method that effectively minimizes the conflict among deep learning models and improves the accuracy of baby cry recognition.

    METHODS: An improved Dempster-Shafer evidence theory (DST) based on Wasserstein distance and Deng entropy was proposed to reduce conflicts among the results by combining the credibility degree between evidence and the uncertainty degree of evidence. To validate the effectiveness of the proposed method, examples were analyzed and the method was applied to baby cry recognition. The Whale Optimization Algorithm-Variational Mode Decomposition (WOA-VMD) was used to optimally decompose the baby cry signals. The deep features of the decomposed components were extracted using the VGG16 model. Long Short-Term Memory (LSTM) models were used to classify baby cry signals. The improved DST decision method was then used to perform decision fusion.

    RESULTS: The proposed fusion method achieves an accuracy of 90.15% in classifying three types of baby cry. Improvements of between 2.90% and 4.98% were obtained over existing DST fusion methods. Recognition accuracy was improved by between 5.79% and 11.53% when compared to the latest methods used in baby cry recognition.

    CONCLUSION: The proposed method optimally decomposes baby cry signals, effectively reduces the conflict among the results of deep learning models and improves the accuracy of baby cry recognition.
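
    The fusion step in item 1 can be illustrated with a small, hedged sketch: Dempster's rule of combination together with Deng entropy as the uncertainty measure. The Wasserstein-distance-based credibility weighting of the paper is not reproduced; the exponential-entropy weighting, the three-class frame {hungry, pain, sleepy}, and the mass values below are illustrative assumptions only.

```python
# Minimal sketch (not the authors' implementation): uncertainty-weighted
# evidence averaging followed by Dempster's rule, with Deng entropy as the
# uncertainty measure. Focal elements are frozensets over the frame of
# discernment; the weighting scheme and mass values are illustrative only.
import math

def deng_entropy(m):
    """Deng entropy of a body of evidence {focal set: mass}."""
    return -sum(p * math.log2(p / (2 ** len(A) - 1)) for A, p in m.items() if p > 0)

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions."""
    combined, conflict = {}, 0.0
    for A, pa in m1.items():
        for B, pb in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + pa * pb
            else:
                conflict += pa * pb
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {A: p / (1.0 - conflict) for A, p in combined.items()}

def weighted_average_evidence(bodies):
    """Average the bodies of evidence, weighting each by exp(Deng entropy)
    (one plausible uncertainty weighting, not the paper's exact scheme)."""
    weights = [math.exp(deng_entropy(m)) for m in bodies]
    total = sum(weights)
    focal = set().union(*[m.keys() for m in bodies])
    return {A: sum(w / total * m.get(A, 0.0) for w, m in zip(weights, bodies))
            for A in focal}

# Example: three classifiers reporting masses over {hungry, pain, sleepy}
TH = frozenset({"hungry", "pain", "sleepy"})
m1 = {frozenset({"hungry"}): 0.7, frozenset({"pain"}): 0.2, TH: 0.1}
m2 = {frozenset({"hungry"}): 0.6, frozenset({"sleepy"}): 0.3, TH: 0.1}
m3 = {frozenset({"pain"}): 0.8, frozenset({"hungry"}): 0.1, TH: 0.1}

avg = weighted_average_evidence([m1, m2, m3])
fused = dempster_combine(dempster_combine(avg, avg), avg)  # combine n-1 times
best = max(fused, key=fused.get)
print("fused decision:", set(best), "mass:", round(fused[best], 3))
```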

  2. Mak NL, Ng WH, Ooi EH, Lau EV, Pamidi N, Foo JJ, et al.
    Comput Methods Programs Biomed, 2024 Jan;243:107866.
    PMID: 37865059 DOI: 10.1016/j.cmpb.2023.107866
    BACKGROUND AND OBJECTIVES: Thermochemical ablation (TCA) is a cancer treatment that utilises the heat released from the neutralisation of acid and base to raise tissue temperature to levels sufficient to induce thermal coagulation. Computational studies have demonstrated that the coagulation volume produced by sequential injection is smaller than that with simultaneous injection. When the reagents are injected sequentially, the region of contact between acid and base is limited to a thin contact layer sandwiched between the acid and base distributions. It is hypothesised that increasing the frequency of acid-base injections into the tissue by shortening the injection interval for each reagent can increase the effective area of contact between acid and base, thereby intensifying neutralisation and the exothermic heat released into the tissue.

    METHODS: To verify this hypothesis, a computational model was developed to simulate the thermochemical processes involved during TCA with sequential injection. Four major processes that take place during TCA were considered, i.e., the flow of acid and base, their neutralisation, the release of exothermic heat and the formation of thermal damage inside the tissue. Equimolar acid and base at 7.5 M were injected into the tissue intermittently. Six injection intervals, namely 3, 6, 15, 20, 30 and 60 s, were investigated.

    RESULTS: Shortening the injection interval led to the enlargement of the coagulation volume. If one considers only the coagulation volume as the determining factor, then a 15 s injection interval was found to be optimum. Conversely, if one places priority on safety, then a 3 s injection interval would result in the lowest amount of reagent residue inside the tissue after treatment. With a 3 s injection interval, the coagulation volume was found to be larger than that of simultaneous injection with the same treatment parameters. The volume also surpassed that of radiofrequency ablation (RFA), a conventional thermal ablation technique commonly used for liver cancer treatment.

    CONCLUSION: The numerical results verified the hypothesis that shortening the injection interval leads to the formation of a larger thermal coagulation zone during TCA with sequential injection. More importantly, a 3 s injection interval was found to be optimum for both efficacy (large coagulation volume) and safety (least amount of reagent residue).
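
    A full reproduction of the finite element model in item 2 is beyond this listing, but the thermal damage criterion such models typically rely on can be sketched. The snippet below evaluates the standard Arrhenius damage integral for an assumed temperature history; the frequency factor, activation energy, and the on/off heating profile are illustrative placeholders, not values from the paper.

```python
# Minimal sketch (illustrative only, not the authors' finite element model):
# the Arrhenius thermal damage integral Omega(t) = integral of
# A*exp(-Ea/(R*T(t))) dt, with coagulation conventionally taken as Omega >= 1.
# The frequency factor and activation energy are commonly quoted liver values
# and the on/off temperature history is an assumption.
import numpy as np

A_FREQ = 7.39e39          # 1/s, frequency factor (assumed)
EA = 2.577e5              # J/mol, activation energy (assumed)
R_GAS = 8.314             # J/(mol K)

def arrhenius_damage(t, temp_celsius):
    """Cumulative thermal damage Omega for a temperature history T(t)."""
    temp_kelvin = np.asarray(temp_celsius) + 273.15
    rate = A_FREQ * np.exp(-EA / (R_GAS * temp_kelvin))
    # trapezoidal accumulation of the damage rate
    return np.concatenate(([0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))))

# Hypothetical temperature history at one tissue point: baseline 37 degC with
# exothermic heating pulses alternating every 3 s over a 60 s treatment
t = np.linspace(0.0, 60.0, 6001)
temp = 37.0 + 25.0 * (np.sin(2 * np.pi * t / 6.0) > 0)
omega = arrhenius_damage(t, temp)
print(f"Omega(60 s) = {omega[-1]:.2f}, coagulated: {omega[-1] >= 1.0}")
```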

  3. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, et al.
    Comput Methods Programs Biomed, 2024 Jan;243:107880.
    PMID: 37924769 DOI: 10.1016/j.cmpb.2023.107880
    Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. The success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of the individual modalities when used in isolation for different malignancies. However, manual image interpretation requires extensive disease-specific knowledge, and it is a time-consuming aspect of physicians' daily routines. Deep learning algorithms, akin to a practitioner during training, extract knowledge from images to support the diagnosis process through symptom detection and image enhancement. Available review papers on PET/CT imaging have the drawback of either including additional modalities or examining various types of AI applications; a comprehensive investigation focused specifically on the use of AI, and deep learning in particular, on PET/CT images has been lacking. This review aims to fill that gap by investigating the characteristics of approaches used in papers that employed deep learning for PET/CT imaging. Within the review, we identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images. We also identified the best pre-processing algorithms and the most effective deep learning models reported for PET/CT while highlighting the current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features and enhancing accuracy and efficiency in diagnosis. However, limitations arise from the scarcity of annotated datasets and challenges in explainability and uncertainty. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and predicting patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.

  4. Teoh YX, Othmani A, Lai KW, Goh SL, Usman J
    Comput Methods Programs Biomed, 2023 Dec;242:107807.
    PMID: 37778138 DOI: 10.1016/j.cmpb.2023.107807
    BACKGROUND AND OBJECTIVE: Knee osteoarthritis (OA) is a debilitating musculoskeletal disorder that causes functional disability. Automatic knee OA diagnosis has great potential to enable timely and early intervention that may reverse the degenerative process of knee OA. Yet, it is a challenging task given the heterogeneity of the disorder. Most proposed techniques demonstrate a single OA diagnostic task largely based on the Kellgren-Lawrence (KL) standard, a composite score of only a few imaging features (i.e., osteophytes, joint space narrowing and subchondral bone changes), so only one key disease pattern is tackled. The KL standard fails to represent the disease patterns of individual OA features, particularly osteophytes, joint-space narrowing, and pain intensity, which play a fundamental role in OA manifestation. In this study, we aim to develop a multitask model using convolutional neural network (CNN) feature extractors and machine learning classifiers to detect nine important OA features: KL grade, knee osteophytes (both knees, medial fibular: OSFM, medial tibial: OSTM, lateral fibular: OSFL, and lateral tibial: OSTL), joint-space narrowing (medial: JSM, and lateral: JSL), and patient-reported pain intensity from plain radiography.

    METHODS: We proposed a new feature extraction method by replacing the fully-connected layer with a global average pooling (GAP) layer. A comparative analysis was conducted to compare the efficacy of 16 different convolutional neural network (CNN) feature extractors and three machine learning classifiers.

    RESULTS: Experimental results revealed the potential of CNN feature extractors in conducting multitask diagnosis. The optimal model consisted of the VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models, but also outperformed state-of-the-art methods with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in the prediction of seven OA features.

    CONCLUSIONS: The proposed model demonstrates pain prediction on plain radiographs, as well as eight OA-related bony features. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.
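
    The optimal pipeline reported in item 4 (a VGG16 feature extractor with a global average pooling head feeding a KNN classifier) can be sketched as follows. The radiographs and labels are random placeholders, and training one KNN per target is a simplification of the multitask setup.

```python
# Minimal sketch (assumes TensorFlow/Keras and scikit-learn; images and labels
# are random placeholders): a VGG16 backbone with a global average pooling
# head used as a fixed feature extractor, followed by a KNN classifier,
# mirroring the VGG16-GAP + KNN pipeline described above. One KNN per target
# is a simplification of the multitask setup; ImageNet weights are downloaded
# on first use.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# pooling="avg" replaces the fully-connected head with a GAP layer
backbone = VGG16(weights="imagenet", include_top=False,
                 pooling="avg", input_shape=(224, 224, 3))

X_img = np.random.rand(40, 224, 224, 3).astype("float32") * 255.0  # placeholder
y_kl = np.random.randint(0, 5, size=40)                            # KL grades 0-4

features = backbone.predict(preprocess_input(X_img), verbose=0)    # (40, 512)

X_tr, X_te, y_tr, y_te = train_test_split(features, y_kl, test_size=0.25,
                                          random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy (placeholder data):", knn.score(X_te, y_te))
```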

  5. Xu S, Deo RC, Soar J, Barua PD, Faust O, Homaira N, et al.
    Comput Methods Programs Biomed, 2023 Nov;241:107746.
    PMID: 37660550 DOI: 10.1016/j.cmpb.2023.107746
    BACKGROUND AND OBJECTIVE: Obstructive airway diseases, including asthma and Chronic Obstructive Pulmonary Disease (COPD), are two of the most common chronic respiratory health problems. Both of these conditions require health professional expertise in making a diagnosis. Hence, this process is time intensive for healthcare providers and the diagnostic quality is subject to intra- and inter-operator variability. In this study, we investigate the role of automated detection of obstructive airway diseases to reduce cost and improve diagnostic quality.

    METHODS: We investigated the existing body of evidence and applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to search records in the IEEE, Google Scholar, and PubMed databases. We identified 65 papers that were published from 2013 to 2022, covering 67 different studies. The review process was structured according to the medical data used for disease detection. We identified six main categories, namely air flow, genetic, imaging, signals, medical records, and miscellaneous. For each of these categories, we report both the disease detection methods and their performance.

    RESULTS: We found that medical imaging was used in 14 of the reviewed studies as data for automated obstructive airway disease detection. Genetics and physiological signals were used in 13 studies. Medical records and air flow were used in 9 and 7 studies, respectively. Most papers were published in 2020 and we found three times more work on Machine Learning (ML) when compared to Deep Learning (DL). Statistical analysis shows that DL techniques achieve higher Accuracy (ACC) when compared to ML. Convolutional Neural Network (CNN) is the most common DL classifier and Support Vector Machine (SVM) is the most widely used ML classifier. During our review, we discovered only two publicly available asthma and COPD datasets. Most studies used private clinical datasets, so data size and data composition are inconsistent.

    CONCLUSIONS: Our review results indicate that Artificial Intelligence (AI) can improve both the decision quality and efficiency of health professionals during COPD and asthma diagnosis. However, we found several limitations in this review, such as a lack of dataset consistency, limited dataset sizes, and insufficient exploration of remote monitoring. We appeal to society to accept and trust computer-aided diagnosis of obstructive airway diseases, and we encourage health professionals to work closely with AI scientists to promote automated detection in clinical practice and hospital settings.

  6. Loh HW, Ooi CP, Oh SL, Barua PD, Tan YR, Molinari F, et al.
    Comput Methods Programs Biomed, 2023 Nov;241:107775.
    PMID: 37651817 DOI: 10.1016/j.cmpb.2023.107775
    BACKGROUND AND OBJECTIVE: Attention Deficit Hyperactivity Disorder (ADHD) is a common neurodevelopmental disorder in children and adolescents that can lead to long-term challenges in life outcomes if left untreated. ADHD is also frequently associated with Conduct Disorder (CD), and multiple studies have found similarities in clinical signs and behavioral symptoms between the two disorders, making differentiation between ADHD, ADHD comorbid with CD (ADHD+CD), and CD a subjective diagnosis. Therefore, the goal of this pilot study is to create the first explainable deep learning (DL) model for objective ECG-based ADHD/CD diagnosis, as having an objective biomarker may improve diagnostic accuracy.

    METHODS: The dataset used in this study consists of ECG data collected from 45 ADHD, 62 ADHD+CD, and 16 CD patients at the Child Guidance Clinic in Singapore. The ECG data were segmented into 2 s epochs and used directly to train our 1-dimensional (1D) convolutional neural network (CNN) model.

    RESULTS: The proposed model yielded 96.04% classification accuracy, 96.26% precision, 95.99% sensitivity, and 96.11% F1-score. The Gradient-weighted class activation mapping (Grad-CAM) function was also used to highlight the important ECG characteristics at specific time points that most impact the classification score.

    CONCLUSION: In addition to achieving strong model performance with our suggested DL method, the Grad-CAM implementation also offers vital temporal information that clinicians and other mental healthcare professionals can use to make informed medical judgments. We hope that by conducting this pilot study, we will be able to encourage larger-scale research with a larger biosignal dataset, allowing biosignal-based computer-aided diagnostic (CAD) tools to be implemented in healthcare and ambulatory settings, as ECG can be easily obtained via wearable devices such as smartwatches.
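
    A hedged sketch of the kind of 1D CNN classifier described in item 6 is shown below. The layer configuration, sampling rate, and data are assumptions, not the authors' architecture; Grad-CAM could subsequently be applied to the layer named "last_conv" to highlight influential time points, as the study reports.

```python
# Minimal sketch (layer configuration, sampling rate and data are assumptions,
# not the authors' model): a 1-D CNN that classifies 2 s ECG epochs into three
# classes (ADHD, ADHD+CD, CD). Grad-CAM can later be attached to the layer
# named "last_conv" to highlight influential time points.
import numpy as np
from tensorflow.keras import layers, models

FS = 250                      # assumed sampling rate (Hz)
EPOCH_LEN = 2 * FS            # 2 s epochs -> 500 samples
NUM_CLASSES = 3               # ADHD, ADHD+CD, CD

model = models.Sequential([
    layers.Input(shape=(EPOCH_LEN, 1)),
    layers.Conv1D(16, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=3, activation="relu", name="last_conv"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder epochs and labels, just to show the training call
X = np.random.randn(256, EPOCH_LEN, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```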

  7. Ang CYS, Chiew YS, Wang X, Ooi EH, Nor MBM, Cove ME, et al.
    Comput Methods Programs Biomed, 2023 Oct;240:107728.
    PMID: 37531693 DOI: 10.1016/j.cmpb.2023.107728
    BACKGROUND AND OBJECTIVE: Healthcare datasets are plagued by issues of data scarcity and class imbalance. Clinically validated virtual patient (VP) models can provide accurate in-silico representations of real patients and thus a means for synthetic data generation in hospital critical care settings. This research presents a realistic, time-varying mechanically ventilated respiratory failure VP profile synthesised using a stochastic model.

    METHODS: A stochastic model was developed using respiratory elastance (Ers) data from two clinical cohorts and averaged over 30-minute time intervals. The stochastic model was used to generate future Ers data based on current Ers values with added normally distributed random noise. Self-validation of the VPs was performed via Monte Carlo simulation and retrospective Ers profile fitting. A stochastic VP cohort of temporal Ers evolution was synthesised and then compared to independent retrospective patient cohort data in a virtual trial across several measured patient responses, where similarity of profiles validates the realism of the stochastic model-generated VP profiles.

    RESULTS: A total of 120,000 3-hour VPs for pressure control (PC) and volume control (VC) ventilation modes were generated using stochastic simulation. Optimisation of the stochastic simulation process yielded an ideal noise percentage of 5-10% and 200,000 simulation iterations, allowing the simulation of a realistic and diverse set of Ers profiles. Results of self-validation show the retrospective Ers profiles were able to be recreated accurately, with a mean squared error of only 0.099 [0.009-0.790]% for the PC cohort and 0.051 [0.030-0.126]% for the VC cohort. A virtual trial demonstrated the ability of the stochastic VP cohort to capture Ers trends within and beyond the retrospective patient cohort, providing cohort-level validation.

    CONCLUSION: VPs capable of temporal evolution demonstrate feasibility for use in designing, developing, and optimising bedside MV guidance protocols through in-silico simulation and validation. Overall, the temporal VPs developed using stochastic simulation alleviate the need for lengthy, resource intensive, high cost clinical trials, while facilitating statistically robust virtual trials, ultimately leading to improved patient care and outcomes in mechanical ventilation.
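
    The stochastic virtual-patient generation in item 7 can be sketched, in simplified form, as a noisy step-wise update of respiratory elastance over 30-minute intervals. The update rule below is an assumption (the paper conditions next-step Ers on clinical cohort data), and the cohort size is reduced for speed.

```python
# Minimal sketch (a simplification of the stochastic model above): virtual-
# patient respiratory elastance (Ers) trajectories generated in 30 min steps
# over a 3 h horizon by perturbing the current value with normally distributed
# noise in the reported 5-10% range. The identity next-step mapping and the
# initial Ers range are assumptions; the paper conditions on clinical data and
# generates 120,000 VPs (1,000 here, for speed).
import numpy as np

rng = np.random.default_rng(42)

def simulate_ers(ers0, n_steps=6, noise_frac=0.075):
    """One virtual-patient Ers trajectory (cmH2O/L), one value per 30 min."""
    ers = [ers0]
    for _ in range(n_steps):
        nxt = ers[-1] * (1.0 + rng.normal(0.0, noise_frac))
        ers.append(max(nxt, 5.0))        # keep elastance positive/physiologic
    return np.array(ers)

cohort = np.array([simulate_ers(rng.uniform(15.0, 45.0)) for _ in range(1000)])
print("median Ers at each 30 min step:", np.median(cohort, axis=0).round(1))
```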

  8. Benyó B, Paláncz B, Szlávecz Á, Szabó B, Kovács K, Chase JG
    Comput Methods Programs Biomed, 2023 Oct;240:107633.
    PMID: 37343375 DOI: 10.1016/j.cmpb.2023.107633
    Model-based glycemic control (GC) protocols are used to treat stress-induced hyperglycaemia in intensive care units (ICUs). The STAR (Stochastic-TARgeted) glycemic control protocol - used in clinical practice in several ICUs in New Zealand, Hungary, Belgium, and Malaysia - is a model-based GC protocol using a patient-specific, model-based insulin sensitivity to describe the patient's actual state. Two neural network based methods are defined in this study to predict the patient's insulin sensitivity parameter: a classification deep neural network and a Mixture Density Network based method. Treatment data from three different patient cohorts are used to train the network models. The accuracy of the neural network predictions is compared with the current model-based predictions used to guide care. The prediction accuracy was found to be the same or better than the reference. The authors suggest that these methods may be a promising alternative in model-based clinical treatment for patient state prediction. Still, more research is needed to validate these findings, including in-silico simulations and clinical validation trials.

  9. Cimr D, Busovsky D, Fujita H, Studnicka F, Cimler R, Hayashi T
    Comput Methods Programs Biomed, 2023 Sep;239:107623.
    PMID: 37276760 DOI: 10.1016/j.cmpb.2023.107623
    BACKGROUND AND OBJECTIVES: Prediction of patient deterioration is essential in medical care, and its automation may reduce the risk of patient death. The precise monitoring of a patient's medical state requires devices placed on the body, which may cause discomfort. Our approach is based on the processing of long-term ballistocardiography data, which were measured using a sensory pad placed under the patient's mattress.

    METHODS: The investigated dataset was obtained via long-term measurements in retirement homes and intensive care units (ICU). Data were measured unobtrusively using a measuring pad equipped with piezoceramic sensors. The proposed approach focused on the processing methods of the measured ballistocardiographic signals, Cartan curvature (CC), and Euclidean arc length (EAL).

    RESULTS: For analysis, 218,979 normal and 216,259 aberrant 2-second samples were collected and classified using a convolutional neural network. Experiments using cross-validation with expert threshold and data length revealed the accuracy of the proposed method to be 86.51%.

    CONCLUSIONS: The proposed method provides a unique approach for the early detection of health concerns in an unobtrusive manner. In addition, the suitability of EAL over the CC was determined.
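
    Of the two signal descriptors named in item 9, the Euclidean arc length is straightforward to sketch. The snippet below segments a placeholder ballistocardiogram into 2-second samples and computes the arc length of each; the sampling rate is an assumption and the Cartan curvature computation is omitted.

```python
# Minimal sketch (sampling rate and signal are placeholders): Euclidean arc
# length (EAL) of 2 s ballistocardiogram samples, one of the two descriptors
# named above; the Cartan curvature computation is omitted.
import numpy as np

FS = 100                               # assumed sampling rate (Hz)
SEG_LEN = 2 * FS                       # 2-second samples

def euclidean_arc_length(segment, dt=1.0 / FS):
    """Arc length of the sampled curve (t, x(t))."""
    dx = np.diff(segment)
    return np.sum(np.sqrt(dt ** 2 + dx ** 2))

# Placeholder BCG recording split into non-overlapping 2 s samples
bcg = np.random.default_rng(0).standard_normal(60 * FS)    # one minute
segments = bcg[: len(bcg) // SEG_LEN * SEG_LEN].reshape(-1, SEG_LEN)
eal = np.array([euclidean_arc_length(s) for s in segments])
print("EAL per 2 s sample:", eal.round(2))
```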

  10. Othman NA, Azhar MAAS, Damanhuri NS, Mahadi IA, Abbas MH, Shamsuddin SA, et al.
    Comput Methods Programs Biomed, 2023 Jun;236:107566.
    PMID: 37186981 DOI: 10.1016/j.cmpb.2023.107566
    BACKGROUND AND OBJECTIVE: The identification of insulinaemic pharmacokinetic parameters using the least-squares criterion approach is easily influenced by outlying data due to its sensitivity. Furthermore, the least-squares criterion has a tendency to overfit and produce incorrect results. Hence, this research proposes an alternative approach using an artificial neural network (ANN) with two hidden layers to optimize the identification of insulinaemic pharmacokinetic parameters. The ANN is selected for its ability to avoid overfitting parameters and its faster speed in processing data.

    METHODS: 18 volunteer participants were recruited from the Canterbury and Otago regions of New Zealand to take part in a Dynamic Insulin Sensitivity and Secretion Test (DISST) clinical trial. A total of 46 DISST datasets were collected. However, due to ambiguity and inconsistency, 4 datasets had to be removed. Analysis was done using MATLAB 2020a.

    RESULTS AND DISCUSSION: Results show that, with the 42 gathered datasets, the ANN generates higher gains, ∅P = 20.73 [12.21, 28.57] mU·L·mmol-1·min-1 and ∅D = 60.42 [26.85, 131.38] mU·L·mmol-1, compared to the linear least squares method, ∅P = 19.67 [11.81, 28.02] mU·L·mmol-1·min-1 and ∅D = 46.21 [7.25, 116.71] mU·L·mmol-1. The average insulin sensitivity (SI) of the ANN is lower, at SI = 16 × 10-4 L·mU-1·min-1, than that of the linear least squares method, SI = 17 × 10-4 L·mU-1·min-1.

    CONCLUSION: Although the ANN analysis provided a lower SI value, the results were more dependable than those of the linear least squares model because the ANN approach yielded better model fitting accuracy than the linear least squares method, with a lower residual error of less than 5%. With the implementation of this ANN architecture, it is shown that the ANN is able to produce minimal error during the optimization process, particularly when dealing with outlying data. The findings may provide extra information to clinicians, allowing them to gain a better understanding of the heterogeneous aetiology of diabetes and of therapeutic intervention options.
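
    The comparison in item 10, a two-hidden-layer ANN versus linear least squares in the presence of outlying data, can be illustrated on synthetic data. The DISST pharmacokinetic model and the parameters ∅P, ∅D and SI are not reproduced; the inputs, outputs, and network size below are assumptions.

```python
# Minimal sketch (synthetic data; the DISST pharmacokinetic model and the
# parameters phi_P, phi_D and SI are not reproduced): a two-hidden-layer ANN
# regressor compared against a linear least-squares fit in the presence of a
# few outlying points, echoing the comparison described above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(42, 3))            # 42 datasets, 3 inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.3 * np.sin(X[:, 0]) + rng.normal(0.0, 0.2, size=42)
y[::10] += 3.0                                      # inject a few outliers

# Linear least-squares fit
A = np.c_[X, np.ones(len(X))]
w_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
y_ls = A @ w_ls

# ANN with two hidden layers
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
y_ann = ann.fit(X, y).predict(X)

print(f"least-squares MSE: {mean_squared_error(y, y_ls):.3f}")
print(f"two-layer ANN MSE: {mean_squared_error(y, y_ann):.3f}")
```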

  11. Ninomiya K, Arimura H, Tanaka K, Chan WY, Kabata Y, Mizuno S, et al.
    Comput Methods Programs Biomed, 2023 Jun;236:107544.
    PMID: 37148668 DOI: 10.1016/j.cmpb.2023.107544
    OBJECTIVES: To elucidate a novel radiogenomics approach using three-dimensional (3D) topologically invariant Betti numbers (BNs) for topological characterization of epidermal growth factor receptor (EGFR) Del19 and L858R mutation subtypes.

    METHODS: In total, 154 patients (wild-type EGFR, 72 patients; Del19 mutation, 45 patients; and L858R mutation, 37 patients) were retrospectively enrolled and randomly divided into 92 training and 62 test cases. Two support vector machine (SVM) models to distinguish between wild-type and mutant EGFR (mutation [M] classification) as well as between the Del19 and L858R subtypes (subtype [S] classification) were trained using 3DBN features. These features were computed from 3DBN maps by using histogram and texture analyses. The 3DBN maps were generated from computed tomography (CT) images based on the Čech complex constructed on sets of points in the images. These points were defined by the coordinates of voxels with CT values higher than several threshold values. The M classification model was built using image features and the demographic parameters of sex and smoking status. The SVM models were evaluated by determining their classification accuracies. The feasibility of the 3DBN model was compared with those of conventional radiomic models based on pseudo-3D BN (p3DBN), two-dimensional BN (2DBN), and CT and wavelet-decomposition (WD) images. Validation of the models was repeated 100 times with random sampling.

    RESULTS: The mean test accuracies for M classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.810, 0.733, 0.838, 0.782, and 0.799, respectively. The mean test accuracies for S classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.773, 0.694, 0.657, 0.581, and 0.696, respectively.

    CONCLUSION: 3DBN features, which showed a radiogenomic association with the characteristics of the EGFR Del19/L858R mutation subtypes, yielded higher accuracy for subtype classifications in comparison with conventional features.
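
    A simplified stand-in for the 3D Betti-number maps in item 11 is sketched below: the zeroth Betti number (connected-component count) of a synthetic volume binarised at several thresholds. The higher-order Betti numbers and the Čech-complex construction used in the paper are not reproduced.

```python
# Minimal sketch (synthetic volume; only Betti-0 is computed): the number of
# connected components of a 3D volume binarised at several CT-value
# thresholds, a simplified stand-in for the 3D Betti-number maps described
# above (B1, B2 and the Cech-complex construction are not reproduced).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
volume = ndimage.gaussian_filter(rng.normal(size=(48, 48, 48)), sigma=2.0)

betti0 = {}
for thr in np.linspace(-0.1, 0.2, 7):
    # label connected components of the thresholded voxel set
    _, n_components = ndimage.label(volume > thr)
    betti0[round(float(thr), 2)] = n_components

print("threshold -> Betti-0:", betti0)
```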

  12. Akhbar MFA
    Comput Methods Programs Biomed, 2023 Apr;231:107361.
    PMID: 36736133 DOI: 10.1016/j.cmpb.2023.107361
    BACKGROUND AND OBJECTIVE: Conventional surgical drill bits suffer from several drawbacks, including extreme heat generation, breakage, jam, and undesired breakthrough. Understanding the impacts of drill margin on bone damage can provide insights that lay the foundation for improvement in the existing surgical drill bit. However, research on drill margins in bone drilling is lacking. This work assesses the influences of margin height and width on thermomechanical damage in bone drilling.

    METHODS: Thermomechanical damage-maximum bone temperature, osteonecrosis diameter, osteonecrosis depth, maximum thrust force, and torque-were calculated using the finite element method under various margin heights (0.05-0.25 mm) and widths (0.02-0.26 mm). The simulation results were validated with experimental tests and previous research data.

    RESULTS: The effects of margin height in increasing the maximum bone temperature, osteonecrosis diameter, and osteonecrosis depth were at least 19.1%, 41.9%, and 59.6%, respectively. The thrust force and torque are highly sensitive to margin height. A higher margin height (0.21-0.25 mm) reduced the thrust force by 54.0% but increased drilling torque by 142.2%. The bone temperature, osteonecrosis diameter, and depth were 16.5%, 56.5%, and 81.4% lower, respectively, with increasing margin width. The minimum thrust force (11.1 N) and torque (41.9 Nmm) were produced with the highest margin width (0.26 mm). A margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm produced the highest sum of weightage.

    CONCLUSIONS: A surgical drill bit with a margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm can produce minimum thermomechanical damage in cortical bone drilling. The insights regarding the suitable ranges for margin height and width from this study could be adopted in future research devoted to optimizing the margin of the existing surgical drill bit.

  13. Cheong JK, Ooi EH, Chiew YS, Menichetti L, Armanetti P, Franchini MC, et al.
    Comput Methods Programs Biomed, 2023 Mar;230:107363.
    PMID: 36720181 DOI: 10.1016/j.cmpb.2023.107363
    BACKGROUND AND OBJECTIVES: Gold nanorod-assisted photothermal therapy (GNR-PTT) is a cancer treatment whereby GNRs incorporated into the tumour act as photo-absorbers to elevate the thermal destruction effect. In the case of the bladder, there are a few possible routes to target the tumour with GNRs, namely peri/intra-tumoural injection and intravesical instillation of GNRs. These two approaches lead to different GNR distributions inside the tumour and can affect the treatment outcome.

    METHODOLOGY: The present study investigates the effects of heterogeneous GNR distribution in a typical setup of GNR-PTT. Three cases were considered. Case 1 considered the GNRs at the tumour centre, while Case 2 represents a hypothetical scenario where GNRs are distributed at the tumour periphery; these two cases represent intratumoural accumulation with different degrees of GNR spread inside the tumour. Case 3 is achieved when GNRs target the exposed tumoural surface that is invading the bladder wall, when they are delivered by intravesical instillation.

    RESULTS: Results indicate that for a laser power of 0.6 W and a GNR volume fraction of 0.01%, Cases 2 and 3 were successful in achieving complete tumour eradication after 330 and 470 s of laser irradiation, respectively. Case 1 failed to produce complete tumour damage when the GNRs were concentrated at the tumour centre, but managed to produce complete tumour damage if the spread of GNRs was wider. Results from Case 2 also demonstrated a different heating profile from Case 1, suggesting that thermal ablation during GNR-PTT is dependent on the GNR distribution inside the tumour. Case 3 shows similar results to Case 2, whereby gradual but uniform heating is observed. Cases 2 and 3 show that uniformly heating the tumour can reduce damage to the surrounding tissues.

    CONCLUSIONS: The different GNR distributions associated with the different methods of introducing GNRs to the bladder during GNR-PTT affect the treatment outcome of bladder cancer in mice. Insufficient spreading during intratumoural injection of GNRs can render the treatment ineffective. The GNR distribution achieved through intravesical instillation presents some advantages over intratumoural injection and is worthy of further exploration.

  14. Cimr D, Fujita H, Tomaskova H, Cimler R, Selamat A
    Comput Methods Programs Biomed, 2023 Feb;229:107277.
    PMID: 36463672 DOI: 10.1016/j.cmpb.2022.107277
    BACKGROUND AND OBJECTIVES: Nowadays, automated computer-aided diagnosis (CAD) is an approach that plays an important role in the detection of health issues. Its main advantages should be early diagnosis, high accuracy, and low computational complexity without loss of model performance. One such system type is concerned with electroencephalogram (EEG) signals and seizure detection. We designed a CAD system approach for seizure detection that optimizes the complexity of the required solution while also being reusable for different problems.

    METHODS: The methodology is built on deep data analysis for normalization. In comparison to previous research, the system does not require a feature extraction process, which optimizes and reduces system complexity. Data classification is provided by a designed 8-layer deep convolutional neural network.

    RESULTS: Depending on the data used, we achieved accuracy, specificity, and sensitivity of 98%, 98%, and 98.5% on the short-term Bonn EEG dataset, and 96.99%, 96.89%, and 97.06% on the long-term CHB-MIT EEG dataset.

    CONCLUSIONS: Through this approach to detection, the system offers an optimized solution for seizure diagnosis. The proposed solution can be implemented in clinical or home environments for decision support.

  15. Khan DM, Yahya N, Kamel N, Faye I
    Comput Methods Programs Biomed, 2023 Jan;228:107242.
    PMID: 36423484 DOI: 10.1016/j.cmpb.2022.107242
    BACKGROUND AND OBJECTIVE: Brain connectivity plays a pivotal role in understanding the brain's information processing functions by providing various details, including the magnitude, direction, and temporal dynamics of inter-neuron connections. While connectivity may be classified as structural, functional or causal, a complete in-vivo directional analysis is guaranteed only by the latter, which is referred to as Effective Connectivity (EC). The two most widely used EC techniques are the Directed Transfer Function (DTF) and Partial Directed Coherence (PDC), which are based on multivariate autoregressive models. The drawbacks of these techniques include poor frequency resolution and the requirement for an experimental approach to determine signal normalization and thresholding techniques when identifying significant connectivities between multivariate sources.

    METHODS: In this study, the drawbacks of DTF and PDC are addressed by proposing a novel technique, termed Efficient Effective Connectivity (EEC), for the estimation of EC between multivariate sources using AR spectral estimation and the Granger causality principle. In EEC, a linear predictive filter with AR coefficients obtained via multivariate EEG is used for signal prediction. This leads to the estimation of full-length signals, which are then transformed into the frequency domain using the Burg spectral estimation method. Furthermore, the newly proposed normalization method addresses the effect on each source in EEC using the sum of maximum connectivity values over the entire frequency range. Lastly, the proposed dynamic thresholding works by subtracting the first moment of the causal effects of all the sources on one source from the individual connections present for that source.

    RESULTS: The proposed method is evaluated using synthetic and real resting-state EEG of 46 healthy controls. A 3D-Convolutional Neural Network is trained and tested using the PDC and EEC samples. The result indicates that compared to PDC, EEC improves the EEG eye-state classification accuracy, sensitivity and specificity by 5.57%, 3.15% and 8.74%, respectively.

    CONCLUSION: Correct identification of all connections in synthetic data and improved resting-state classification performance using EEC proved that EEC gives better estimation of directed causality and indicates that it can be used for reliable understanding of brain mechanisms. Conclusively, the proposed technique may open up new research dimensions for clinical diagnosis of mental disorders.
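
    The causality principle underlying EEC in item 15 can be illustrated with a pairwise Granger-causality test between two synthetic channels. This time-domain sketch is a stand-in only; the AR/Burg spectral estimation, the proposed normalisation, and the dynamic thresholding of the paper are not reproduced, and the channel names are placeholders.

```python
# Minimal sketch (synthetic channels; a time-domain stand-in, not the authors'
# EEC with Burg spectral estimation, normalisation, and dynamic thresholding):
# pairwise Granger causality between two channels, illustrating the causality
# principle the method builds on. grangercausalitytests prints its own summary
# and returns per-lag test statistics, from which the p-value is read off.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
n = 2000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):                      # y is driven by past values of x
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + rng.normal(scale=0.5)

channels = {"Fz": x, "Cz": y}
maxlag = 4
for src in channels:
    for dst in channels:
        if src == dst:
            continue
        # column order: [effect, cause]; the test asks whether the second
        # column Granger-causes the first
        data = np.column_stack([channels[dst], channels[src]])
        res = grangercausalitytests(data, maxlag=maxlag)
        p_value = res[maxlag][0]["ssr_ftest"][1]
        print(f"{src} -> {dst}: p = {p_value:.4g}")
```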

  16. Mak NL, Ooi EH, Lau EV, Ooi ET, Pamidi N, Foo JJ, et al.
    Comput Methods Programs Biomed, 2022 Dec;227:107195.
    PMID: 36323179 DOI: 10.1016/j.cmpb.2022.107195
    BACKGROUND AND OBJECTIVES: Thermochemical ablation (TCA) is a thermal ablation technique involving the injection of acid and base, either sequentially or simultaneously, into the target tissue. TCA remains at the conceptual stage with existing studies unable to provide recommendations on the optimum injection rate, and reagent concentration and volume. Limitations in current experimental methodology have prevented proper elucidation of the thermochemical processes inside the tissue during TCA. Nevertheless, the computational TCA framework developed recently by Mak et al. [Mak et al., Computers in Biology and Medicine, 2022, 145:105494] has opened new avenues in the development of TCA. Specifically, a recommended safe dosage is imperative in driving TCA research beyond the conceptual stage.

    METHODS: The aforesaid computational TCA framework for sequential injection was applied and adapted to simulate TCA with simultaneous injection of acid and base at equimolar concentration and equal volume. The developed framework, which describes the flow of acid and base, their neutralisation, the rise in tissue temperature and the formation of thermal damage, was solved numerically using the finite element method. The framework was used to investigate the effects of injection rate, reagent concentration, volume and type (weak/strong acid-base combination) on temperature rise and thermal coagulation formation.

    RESULTS: A higher injection rate resulted in higher temperature rise and larger thermal coagulation. Reagent concentration of 7500 mol/m3 was found to be optimum in producing considerable thermal coagulation without the risk of tissue overheating. Thermal coagulation volume was found to be consistently larger than the total volume of acid and base injected into the tissue, which is beneficial as it reduces the risk of chemical burn injury. Three multivariate second-order polynomials that express the targeted coagulation volume as functions of injection rate and reagent volume, for the weak-weak, weak-strong and strong-strong acid-base combinations were also derived based on the simulated data.

    CONCLUSIONS: A guideline for a safe and effective implementation of TCA with simultaneous injection of acid and base was recommended based on the numerical results of the computational model developed. The guideline correlates the coagulation volume with the reagent volume and injection rate, and may be used by clinicians in determining the safe dosage of reagents and optimum injection rate to achieve a desired thermal coagulation volume during TCA.
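
    The guideline correlation described in item 16, a multivariate second-order polynomial linking coagulation volume to injection rate and reagent volume, can be sketched as an ordinary least-squares surface fit. The data and fitted coefficients below are synthetic, not those reported in the paper.

```python
# Minimal sketch (synthetic data; the fitted coefficients are not those
# reported in the paper): least-squares fit of a multivariate second-order
# polynomial V = f(q, v) relating coagulation volume to injection rate q and
# reagent volume v, the form of guideline correlation described above. The
# variable ranges and the "true" surface are assumptions.
import numpy as np

rng = np.random.default_rng(3)
q = rng.uniform(0.1, 1.0, 60)          # injection rate, assumed range
v = rng.uniform(1.0, 10.0, 60)         # reagent volume, assumed range
vol = (2.0 + 3.5 * q + 0.8 * v + 1.2 * q * v - 0.03 * v ** 2
       + rng.normal(0.0, 0.2, 60))     # pretend simulation output

# Design matrix for a full quadratic in (q, v)
X = np.column_stack([np.ones_like(q), q, v, q ** 2, q * v, v ** 2])
coef, *_ = np.linalg.lstsq(X, vol, rcond=None)
print(dict(zip(["1", "q", "v", "q^2", "q*v", "v^2"], coef.round(3))))

# Predict coagulation volume for a candidate injection rate and reagent volume
q_new, v_new = 0.5, 6.0
features = np.array([1.0, q_new, v_new, q_new ** 2, q_new * v_new, v_new ** 2])
print(f"predicted volume at q={q_new}, v={v_new}: {features @ coef:.2f}")
```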

  17. Zainol NM, Damanhuri NS, Othman NA, Chiew YS, Nor MBM, Muhammad Z, et al.
    Comput Methods Programs Biomed, 2022 Jun;220:106835.
    PMID: 35512627 DOI: 10.1016/j.cmpb.2022.106835
    BACKGROUND AND OBJECTIVE: Mechanical ventilation (MV) provides breathing support for acute respiratory distress syndrome (ARDS) patients in the intensive care unit, but is difficult to optimize. Too much or too little pressure or volume support can cause further ventilator-induced lung injury, increasing the length of MV, cost and mortality. Patient-specific respiratory mechanics can help optimize MV settings. However, model-based estimation of respiratory mechanics is less accurate when patients exhibit un-modelled spontaneous breathing (SB) efforts on top of ventilator support. This study aims to estimate and quantify SB efforts by reconstructing the unaltered passive-mechanics airway pressure using a NARX model.

    METHODS: A non-linear autoregressive with exogenous input (NARX) model is used to reconstruct missing airway pressure due to the presence of spontaneous breathing effort in MV patients. Then, the incidence of SB effort in patients is estimated. The study uses a total of 10,000 breathing cycles collected from 10 ARDS patients at IIUM Hospital in Kuantan, Malaysia. Two different training-to-testing ratios were used. Firstly, an initial ratio of 60:40 was applied, corresponding to 600 breath cycles for training and the remaining 400 breath cycles for testing. The ratio was then varied to 70:30 for training and testing.

    RESULTS AND DISCUSSION: The mean residual error between the original airway pressure and the reconstructed airway pressure is denoted as the magnitude of effort. The median and interquartile range of the mean residual error for the two ratios are 0.0557 [0.0230 - 0.0874] and 0.0534 [0.0219 - 0.0870], respectively, across all patients. The results also show that Patient 2 has the highest percentage of SB incidence and Patient 10 the lowest, which demonstrates that the NARX model performs well both when the incidence of SB effort is high and when SB effort is largely absent.

    CONCLUSION: This model is able to produce the SB incidence rate based on a 10% threshold. Hence, the proposed NARX model is potentially useful for estimating and identifying patient-specific SB effort, which has the potential to further assist clinical decisions and optimize MV settings.
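
    A NARX-style reconstruction in the spirit of item 17 is sketched below: airway pressure is predicted from its own lagged values plus lagged flow (the exogenous input), with a 60:40 train/test split and the residual used as a proxy for effort magnitude. The waveforms, lag orders, and network size are assumptions, not the authors' implementation.

```python
# Minimal sketch (synthetic waveforms; lag orders, network size and the
# residual-based effort proxy are assumptions in the spirit of the study, not
# its implementation): a NARX-style model predicting airway pressure from its
# own lagged values plus lagged flow (the exogenous input), with a 60:40
# train/test split.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n = 3000
flow = np.sin(np.linspace(0, 60 * np.pi, n)) + 0.05 * rng.normal(size=n)
pressure = 5.0 + 8.0 * np.clip(flow, 0, None) + 0.1 * rng.normal(size=n)
pressure[2200:2300] -= 3.0 * np.hanning(100)   # pretend SB-effort deviation

def lagged_design(p, q, na=3, nb=3):
    """Rows of [p(t-na..t-1), q(t-nb..t-1)] used to predict p(t)."""
    rows, targets = [], []
    for t in range(max(na, nb), len(p)):
        rows.append(np.r_[p[t - na:t], q[t - nb:t]])
        targets.append(p[t])
    return np.array(rows), np.array(targets)

X, y = lagged_design(pressure, flow)
split = int(0.6 * len(X))                      # 60:40 split
narx = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
narx.fit(X[:split], y[:split])

# Mean residual between measured and reconstructed pressure as effort proxy
residual = np.abs(y[split:] - narx.predict(X[split:]))
print(f"mean residual on the test portion: {residual.mean():.3f}")
```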

  18. Abbasian Ardakani A, Bureau NJ, Ciaccio EJ, Acharya UR
    Comput Methods Programs Biomed, 2022 Mar;215:106609.
    PMID: 34990929 DOI: 10.1016/j.cmpb.2021.106609
    Radiomics is an emerging field that has opened new windows for precision medicine. It involves the extraction of a large number of quantitative features from medical images, which may be difficult to detect visually. Underlying tumor biology can change the physical properties of tissues, which affect the patterns of image pixels and radiomics features. The main advantage of radiomics is that it can characterize the whole tumor non-invasively, even after a single sampling from an image. Therefore, it can be linked to a "digital biopsy". Physicians need to know about radiomics features to determine how their values correlate with the appearance of lesions and diseases. Indeed, physicians need practical references that convey the basics and concepts of each radiomics feature without requiring knowledge of their sophisticated mathematical formulas. In this review, commonly used radiomics features are illustrated with practical examples to help physicians in their routine diagnostic procedures.

  19. Ang CYS, Chiew YS, Vu LH, Cove ME
    Comput Methods Programs Biomed, 2022 Mar;215:106601.
    PMID: 34973606 DOI: 10.1016/j.cmpb.2021.106601
    BACKGROUND: Spontaneous breathing (SB) effort during mechanical ventilation (MV) is an important metric of respiratory drive. However, SB effort varies due to a variety of factors, including evolving pathology and sedation levels. Therefore, assessment of SB efforts needs to be continuous and non-invasive. This is important to prevent both over- and under-assistance with MV. In this study, a machine learning model, Convolutional Autoencoder (CAE) is developed to quantify the magnitude of SB effort using only bedside MV airway pressure and flow waveform.

    METHOD: The CAE model was trained using 12,170,655 simulated SB and normal (NB) flow data. The paired SB and NB flow data were simulated using a Gaussian Effort Model (GEM) with 5 basis functions. When the CAE model is given an SB flow input, it predicts a corresponding NB flow. The magnitude of SB effort (SBEMag) is then quantified as the difference between the SB and NB flows. The CAE model was used to evaluate the SBEMag of 9 pressure control/support datasets. Results were validated using a mean squared error (MSE) fitting between clinical and training SB flows.

    RESULTS: The CAE model was able to produce NB flows from the clinical SB flows with the median SBEMag of the 9 datasets being 25.39% [IQR: 21.87-25.57%]. The absolute error in SBEMag using MSE validation yields a median of 4.77% [IQR: 3.77-8.56%] amongst the cohort. This shows the ability of the GEM to capture the intrinsic details present in SB flow waveforms. Analysis also shows both intra-patient and inter-patient variability in SBEMag.

    CONCLUSION: A Convolutional Autoencoder model was developed with simulated SB and NB flow data and is capable of quantifying the magnitude of patient spontaneous breathing effort. This provides potential application for real-time monitoring of patient respiratory drive for better management of patient-ventilator interaction.
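
    The convolutional autoencoder mapping in item 19 (SB flow in, NB flow out, with effort magnitude taken as the difference) can be sketched as below. The waveform length, architecture, training data, and the percentage-style effort metric are placeholders; the Gaussian Effort Model simulation is not reproduced.

```python
# Minimal sketch (waveform length, architecture and training data are
# placeholders; the Gaussian Effort Model simulation is not reproduced): a 1-D
# convolutional autoencoder trained to map spontaneous-breathing (SB) flow to
# the paired normal-breathing (NB) flow, with the SB effort magnitude taken as
# the difference between the two.
import numpy as np
from tensorflow.keras import layers, models

N = 128                                    # samples per breath (assumed)
t = np.linspace(0.0, 1.0, N, dtype="float32")
nb = np.sin(np.pi * t)[None, :, None].repeat(512, axis=0)         # NB flows
effort = 0.3 * np.exp(-((t - 0.4) ** 2) / 0.01)[None, :, None]    # effort shape
sb = nb - effort + 0.02 * np.random.default_rng(0).standard_normal((512, N, 1))

cae = models.Sequential([
    layers.Input(shape=(N, 1)),
    layers.Conv1D(16, 5, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(8, 5, padding="same", activation="relu"),
    layers.UpSampling1D(2),
    layers.Conv1D(1, 5, padding="same"),
])
cae.compile(optimizer="adam", loss="mse")
cae.fit(sb, nb, epochs=5, batch_size=32, verbose=0)                # SB -> NB

nb_pred = cae.predict(sb[:1], verbose=0)
sbe_mag = np.abs(sb[:1] - nb_pred).sum() / np.abs(sb[:1]).sum() * 100.0
print(f"illustrative SB effort magnitude: {sbe_mag:.1f}%")
```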

  20. Corda JV, Shenoy BS, Ahmad KA, Lewis L, K P, Khader SMA, et al.
    Comput Methods Programs Biomed, 2022 Feb;214:106538.
    PMID: 34848078 DOI: 10.1016/j.cmpb.2021.106538
    BACKGROUND AND OBJECTIVE: Neonates are preferential nasal breathers up to 3 months of age. The nasal anatomy of neonates and infants is still developing, whereas adult nasal cavities are fully grown, which implies that the study of airflow dynamics in neonates and infants is significant. In the present study, the nasal airways of a neonate, an infant and an adult are anatomically compared and their airflow patterns are investigated.

    METHODS: A Computational Fluid Dynamics (CFD) approach is used to simulate the airflow in a neonate, an infant and an adult under sedentary breathing conditions. The healthy CT scans are segmented using MIMICS 21.0 (Materialise, Ann Arbor, MI). The patient-specific 3D airway models are analyzed for low Reynolds number flow using ANSYS FLUENT 2020 R2. The applicability of the Grid Convergence Index (GCI) to the polyhedral mesh adopted in this work is also verified.

    RESULTS: This study shows that the inferior meatus of neonates accounted for only 15% of the total airflow. This was in contrast to the infants and adults, who experienced 49% and 31% of airflow at the inferior meatus region, respectively. The superior meatus experienced 25% of the total flow, which is more than normal for the neonate. The highest velocities of 1.8, 2.6 and 3.7 m/s were observed at the nasal valve region for the neonate, infant and adult, respectively. The anterior portion of the nasal cavity experienced the maximum wall shear stress, with average values of 0.48, 0.25 and 0.58 Pa for the neonate, infant and adult.

    CONCLUSIONS: The neonates have an underdeveloped nasal cavity which significantly affects their airway distribution. The absence of inferior meatus in the neonates has limited the flow through the inferior regions and resulted in uneven flow distribution.
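
    The mesh-independence check mentioned in item 20 rests on the Grid Convergence Index. A hedged sketch of the standard Richardson-extrapolation form is given below; the mesh refinement ratio and monitored peak-velocity values are placeholders, not those of the study.

```python
# Minimal sketch (refinement ratio and monitored values are placeholders, not
# those of the study): Grid Convergence Index (GCI) via Richardson
# extrapolation for three systematically refined meshes with a constant
# refinement ratio, the standard check behind the mesh-independence
# verification mentioned above.
import math

def gci(f_fine, f_med, f_coarse, r, fs=1.25):
    """Observed order p, extrapolated value, and GCI of the fine mesh."""
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r ** p - 1.0)
    gci_fine = fs * abs((f_med - f_fine) / f_fine) / (r ** p - 1.0)
    return p, f_exact, gci_fine

# Hypothetical peak-velocity values (m/s) on fine/medium/coarse polyhedral
# meshes refined by a constant ratio r = 1.5
p, f_exact, gci_fine = gci(f_fine=3.70, f_med=3.64, f_coarse=3.50, r=1.5)
print(f"observed order p = {p:.2f}")
print(f"Richardson-extrapolated value = {f_exact:.3f} m/s")
print(f"GCI (fine mesh) = {gci_fine * 100:.2f}%")
```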
