Displaying publications 1 - 20 of 85 in total

  1. Ang CYS, Chiew YS, Wang X, Ooi EH, Nor MBM, Cove ME, et al.
    Comput Methods Programs Biomed, 2023 Oct;240:107728.
    PMID: 37531693 DOI: 10.1016/j.cmpb.2023.107728
    BACKGROUND AND OBJECTIVE: Healthcare datasets are plagued by issues of data scarcity and class imbalance. Clinically validated virtual patient (VP) models can provide accurate in-silico representations of real patients and thus a means for synthetic data generation in hospital critical care settings. This research presents a realistic, time-varying mechanically ventilated respiratory failure VP profile synthesised using a stochastic model.

    METHODS: A stochastic model was developed using respiratory elastance (Ers) data from two clinical cohorts and averaged over 30-minute time intervals. The stochastic model was used to generate future Ers data based on current Ers values with added normally distributed random noise. Self-validation of the VPs was performed via Monte Carlo simulation and retrospective Ers profile fitting. A stochastic VP cohort of temporal Ers evolution was synthesised and then compared to an independent retrospective patient cohort data in a virtual trial across several measured patient responses, where similarity of profiles validates the realism of stochastic model generated VP profiles.

    RESULTS: A total of 120,000 3-hour VPs for pressure control (PC) and volume control (VC) ventilation modes were generated using stochastic simulation. Optimisation of the stochastic simulation process yields an ideal noise percentage of 5-10% and 200,000 simulation iterations, allowing the simulation of a realistic and diverse set of Ers profiles. Self-validation results show that the retrospective Ers profiles were recreated accurately, with a mean squared error of only 0.099 [0.009-0.790]% for the PC cohort and 0.051 [0.030-0.126]% for the VC cohort. A virtual trial demonstrates the ability of the stochastic VP cohort to capture Ers trends within and beyond the retrospective patient cohort, providing cohort-level validation.

    CONCLUSION: VPs capable of temporal evolution demonstrate feasibility for use in designing, developing, and optimising bedside MV guidance protocols through in-silico simulation and validation. Overall, the temporal VPs developed using stochastic simulation alleviate the need for lengthy, resource intensive, high cost clinical trials, while facilitating statistically robust virtual trials, ultimately leading to improved patient care and outcomes in mechanical ventilation.
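
    The stochastic generation step described in the METHODS above (future Ers from current Ers plus normally distributed noise, over 30-minute intervals) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the identity transition map, and the positivity floor are assumptions.

```python
import random

def generate_vp_ers(ers_initial, n_steps=6, noise_pct=0.05, transition=lambda e: e):
    """Generate a 3-hour virtual-patient Ers profile at 30-minute intervals.

    Each step maps the current elastance through a cohort-derived transition
    (identity here as a placeholder) and adds normally distributed noise
    whose standard deviation is `noise_pct` of the current value.
    """
    profile = [ers_initial]
    for _ in range(n_steps):
        current = profile[-1]
        nxt = transition(current) + random.gauss(0.0, noise_pct * current)
        profile.append(max(nxt, 0.1))  # keep elastance positive (assumed floor)
    return profile

random.seed(0)
vp = generate_vp_ers(25.0)  # cmH2O/L; 7 samples spanning 3 hours
```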

  2. Mirza IA, Abdulhameed M, Vieru D, Shafie S
    Comput Methods Programs Biomed, 2016 Dec;137:149-166.
    PMID: 28110721 DOI: 10.1016/j.cmpb.2016.09.014
    Therapies with magnetic/electromagnetic fields are employed to relieve pain or to accelerate the flow of blood particles, particularly during surgery. In this paper, a theoretical study of blood flow with particle suspension through a capillary was made using the electro-magneto-hydrodynamic approach. Analytical solutions for the non-dimensional blood velocity and non-dimensional particle velocity are obtained by means of the Laplace transform with respect to the time variable and the finite Hankel transform with respect to the radial coordinate. The study of heat transfer characteristics is based on the energy equation for two-phase thermal transport of blood and particle suspension with viscous dissipation, volumetric heat generation due to the Joule heating effect, and the electromagnetic couple effect. The solution of the nonlinear heat transfer problem is derived using the velocity field and the integral transform method. The influence of dimensionless system parameters, such as the electrokinetic width, the Hartmann number, the Prandtl number, the coefficient of heat generation due to Joule heating, and the Eckert number, on the velocity and temperature fields was studied using the Mathcad software. Results are presented as graphical illustrations.
  3. Ninomiya K, Arimura H, Tanaka K, Chan WY, Kabata Y, Mizuno S, et al.
    Comput Methods Programs Biomed, 2023 Jun;236:107544.
    PMID: 37148668 DOI: 10.1016/j.cmpb.2023.107544
    OBJECTIVES: To elucidate a novel radiogenomics approach using three-dimensional (3D) topologically invariant Betti numbers (BNs) for topological characterization of epidermal growth factor receptor (EGFR) Del19 and L858R mutation subtypes.

    METHODS: In total, 154 patients (wild-type EGFR, 72 patients; Del19 mutation, 45 patients; and L858R mutation, 37 patients) were retrospectively enrolled and randomly divided into 92 training and 62 test cases. Two support vector machine (SVM) models to distinguish between wild-type and mutant EGFR (mutation [M] classification) as well as between the Del19 and L858R subtypes (subtype [S] classification) were trained using 3DBN features. These features were computed from 3DBN maps by using histogram and texture analyses. The 3DBN maps were generated using computed tomography (CT) images based on the Čech complex constructed on sets of points in the images. These points were defined by the coordinates of voxels with CT values higher than several threshold values. The M classification model was built using image features and the demographic parameters of sex and smoking status. The SVM models were evaluated by determining their classification accuracies. The feasibility of the 3DBN model was compared with that of conventional radiomic models based on pseudo-3D BN (p3DBN), two-dimensional BN (2DBN), and CT and wavelet-decomposition (WD) images. Model validation was repeated 100 times with random sampling.

    RESULTS: The mean test accuracies for M classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.810, 0.733, 0.838, 0.782, and 0.799, respectively. The mean test accuracies for S classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.773, 0.694, 0.657, 0.581, and 0.696, respectively.

    CONCLUSION: 3DBN features, which showed a radiogenomic association with the characteristics of the EGFR Del19/L858R mutation subtypes, yielded higher accuracy for subtype classifications in comparison with conventional features.
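
    As a concrete illustration of the Betti-number maps above, the zeroth Betti number b0 (number of connected components) of a thresholded image can be computed by flood fill. This is a simplified 2-D analogue of the paper's 3-D construction from thresholded CT voxels, with 4-connectivity assumed:

```python
def betti0(image, threshold):
    """Count connected components (Betti number b0) of the set of pixels
    with value >= threshold, using iterative flood fill (4-connectivity)."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                count += 1          # found a new component; flood-fill it
                seen[r][c] = True
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```

    Repeating the count over several thresholds yields a curve of b0 versus threshold, the 2-D analogue of one channel of a BN map.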

  4. Akhbar MFA
    Comput Methods Programs Biomed, 2023 Apr;231:107361.
    PMID: 36736133 DOI: 10.1016/j.cmpb.2023.107361
    BACKGROUND AND OBJECTIVE: Conventional surgical drill bits suffer from several drawbacks, including extreme heat generation, breakage, jamming, and undesired breakthrough. Understanding the impact of the drill margin on bone damage can provide insights that lay the foundation for improving the existing surgical drill bit. However, research on drill margins in bone drilling is lacking. This work assesses the influence of margin height and width on thermomechanical damage in bone drilling.

    METHODS: Thermomechanical damage measures (maximum bone temperature, osteonecrosis diameter, osteonecrosis depth, maximum thrust force, and torque) were calculated using the finite element method under various margin heights (0.05-0.25 mm) and widths (0.02-0.26 mm). The simulation results were validated against experimental tests and previous research data.

    RESULTS: Margin height increased the maximum bone temperature, osteonecrosis diameter, and osteonecrosis depth by at least 19.1%, 41.9%, and 59.6%, respectively. The thrust force and torque are highly sensitive to margin height. A higher margin height (0.21-0.25 mm) reduced the thrust force by 54.0% but increased drilling torque by 142.2%. The bone temperature, osteonecrosis diameter, and depth were 16.5%, 56.5%, and 81.4% lower, respectively, with increasing margin width. The minimum thrust force (11.1 N) and torque (41.9 Nmm) were produced with the highest margin width (0.26 mm). A margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm produced the highest sum of weightage.

    CONCLUSIONS: A surgical drill bit with a margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm can produce minimum thermomechanical damage in cortical bone drilling. The insights regarding the suitable ranges for margin height and width from this study could be adopted in future research devoted to optimizing the margin of the existing surgical drill bit.

  5. Hussain M, Al-Haiqi A, Zaidan AA, Zaidan BB, Kiah ML, Anuar NB, et al.
    Comput Methods Programs Biomed, 2015 Dec;122(3):393-408.
    PMID: 26412009 DOI: 10.1016/j.cmpb.2015.08.015
    To survey researchers' efforts in response to the new and disruptive technology of smartphone medical apps, mapping the research landscape from the literature into a coherent taxonomy, and identifying the basic characteristics of this emerging field, namely: the motivations for using smartphone apps in medicine and healthcare, the open challenges that hinder their utility, and the recommendations in the literature for improving the acceptance and use of medical apps.
  6. Shindi O, Kanesan J, Kendall G, Ramanathan A
    Comput Methods Programs Biomed, 2020 Jun;189:105327.
    PMID: 31978808 DOI: 10.1016/j.cmpb.2020.105327
    BACKGROUND AND OBJECTIVES: In cancer therapy optimization, an optimal amount of drug is determined so as not only to reduce the tumor size but also to maintain the level of chemo toxicity in the patient's body. The increase in the number of objectives and constraints further burdens the optimization problem. The objective of the present work is to solve a Constrained Multi-Objective Optimization Problem (CMOOP) of cancer chemotherapy. This optimization yields an optimal drug schedule by minimizing the tumor size and the drug concentration while ensuring that the patient's health level during dosing remains within an acceptable range.

    METHODS: This paper presents two hybrid methodologies that combine optimal control theory with multi-objective swarm and evolutionary algorithms, and compares their performance with multi-objective swarm intelligence algorithms such as MOEAD, MODE, MOPSO and M-MOPSO. The hybrid and conventional methodologies are compared by addressing the CMOOP.

    RESULTS: The minimized tumor and drug concentration results obtained by the hybrid methodologies demonstrate that they are not only superior to pure swarm intelligence or evolutionary algorithm methodologies but also consume far less computational time. Further, the Second Order Sufficient Condition (SSC) is used to verify and validate the optimality of the solution to the constrained multi-objective problem.

    CONCLUSION: The proposed methodologies reduce chemo-medicine administration while maintaining effective tumor killing. This will help oncologists discover the optimum chemotherapy dose schedule that reduces the tumor cells while maintaining the patient's health at a safe level.

  7. Teoh YX, Othmani A, Lai KW, Goh SL, Usman J
    Comput Methods Programs Biomed, 2023 Dec;242:107807.
    PMID: 37778138 DOI: 10.1016/j.cmpb.2023.107807
    BACKGROUND AND OBJECTIVE: Knee osteoarthritis (OA) is a debilitating musculoskeletal disorder that causes functional disability. Automatic knee OA diagnosis has great potential to enable timely and early intervention that can potentially reverse the degenerative process of knee OA. Yet, it is a tedious task, given the heterogeneity of the disorder. Most of the proposed techniques demonstrated a single OA diagnostic task, widely based on the Kellgren-Lawrence (KL) standard, a composite score of only a few imaging features (i.e. osteophytes, joint space narrowing and subchondral bone changes); hence, only one key disease pattern was tackled. The KL standard fails to represent the disease patterns of individual OA features, particularly osteophytes, joint-space narrowing, and pain intensity, which play a fundamental role in OA manifestation. In this study, we aim to develop a multitask model using convolutional neural network (CNN) feature extractors and machine learning classifiers to detect nine important OA features: KL grade, knee osteophytes (both knees, medial fibular: OSFM, medial tibial: OSTM, lateral fibular: OSFL, and lateral tibial: OSTL), joint-space narrowing (medial: JSM, and lateral: JSL), and patient-reported pain intensity from plain radiography.

    METHODS: We proposed a new feature extraction method by replacing fully-connected layer with global average pooling (GAP) layer. A comparative analysis was conducted to compare the efficacy of 16 different convolutional neural network (CNN) feature extractors and three machine learning classifiers.

    RESULTS: Experimental results revealed the potential of CNN feature extractors for multitask diagnosis. The optimal model consisted of the VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models, but also outperformed state-of-the-art methods, with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in predicting seven OA features.

    CONCLUSIONS: The proposed model demonstrates pain prediction on plain radiographs, as well as eight OA-related bony features. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.
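
    The feature-extraction change described in the METHODS above, replacing the fully-connected layer with a global average pooling (GAP) layer, reduces each spatial feature map to a single scalar. A framework-free sketch of the pooling operation itself (the surrounding CNN is assumed, and feature maps are plain nested lists here):

```python
def global_average_pool(feature_maps):
    """Collapse each HxW feature map to its mean value, producing one
    scalar per channel -- the GAP operation that replaces the
    fully-connected layer in the described extractor."""
    return [sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
            for fmap in feature_maps]
```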

  8. Kho ASK, Foo JJ, Ooi ET, Ooi EH
    Comput Methods Programs Biomed, 2020 Feb;184:105289.
    PMID: 31891903 DOI: 10.1016/j.cmpb.2019.105289
    BACKGROUND AND OBJECTIVE: The majority of the studies on radiofrequency ablation (RFA) have focused on enlarging the size of the coagulation zone. An aspect that is crucial but often overlooked is the shape of the coagulation zone. The shape is crucial because the majority of tumours are irregularly-shaped. In this paper, the ability to manipulate the shape of the coagulation zone following saline-infused RFA by altering the location of saline infusion is explored.

    METHODS: A 3D model of the liver tissue was developed. Saline infusion was described using the dual porosity model, while RFA was described using the electrostatic and bioheat transfer equations. Three infusion locations were investigated, namely at the proximal end, the middle and the distal end of the electrode. Investigations were carried out numerically using the finite element method.

    RESULTS: Results indicated that greater thermal coagulation was found in the region of tissue occupied by the saline bolus. Infusion at the middle of the electrode led to the largest coagulation volume followed by infusion at the proximal and distal ends. It was also found that the ability to delay roll-off, as commonly associated with saline-infused RFA, was true only for the case when infusion is carried out at the middle. When infused at the proximal and distal ends, the occurrence of roll-off was advanced. This may be due to the rapid and more intense heating experienced by the tissue when infusion is carried out at the electrode ends where Joule heating is dominant.

    CONCLUSION: Altering the location of saline infusion can influence the shape of the coagulation zone following saline-infused RFA. The ability to 'shift' the coagulation zone to a desired location opens up great opportunities for the development of more precise saline-infused RFA treatment that targets specific regions within the tissue.

  9. Pang T, Wong JHD, Ng WL, Chan CS
    Comput Methods Programs Biomed, 2021 May;203:106018.
    PMID: 33714900 DOI: 10.1016/j.cmpb.2021.106018
    BACKGROUND AND OBJECTIVE: The capability of deep learning radiomics (DLR) to extract high-level medical imaging features has promoted the use of computer-aided diagnosis of breast masses detected on ultrasound. Recently, generative adversarial networks (GANs) have aided in tackling a general issue in DLR, i.e., obtaining a sufficient number of medical images. However, GAN methods require pairs of input and labeled images, which require an exhaustive, time-consuming human annotation process. The aim of this paper is to develop a radiomics model based on a semi-supervised GAN method to perform data augmentation in breast ultrasound images.

    METHODS: A total of 1447 ultrasound images, including 767 benign masses and 680 malignant masses were acquired from a tertiary hospital. A semi-supervised GAN model was developed to augment the breast ultrasound images. The synthesized images were subsequently used to classify breast masses using a convolutional neural network (CNN). The model was validated using a 5-fold cross-validation method.

    RESULTS: The proposed GAN architecture generated high-quality breast ultrasound images, verified by two experienced radiologists. The improved performance of semi-supervised learning increased the quality of the synthetic data produced in comparison to the baseline method. We achieved more accurate breast mass classification results (accuracy 90.41%, sensitivity 87.94%, specificity 85.86%) with our synthetic data augmentation compared to other state-of-the-art methods.

    CONCLUSION: The proposed radiomics model has demonstrated a promising potential to synthesize and classify breast masses on ultrasound in a semi-supervised manner.

  10. Goh CH, Tan LK, Lovell NH, Ng SC, Tan MP, Lim E
    Comput Methods Programs Biomed, 2020 Nov;196:105596.
    PMID: 32580054 DOI: 10.1016/j.cmpb.2020.105596
    BACKGROUND AND OBJECTIVES: Continuous monitoring of physiological parameters such as photoplethysmography (PPG) has attracted increased interest due to advances in wearable sensors. However, PPG recordings are susceptible to various artifacts, reducing the reliability of PPG-derived parameters such as oxygen saturation, heart rate, blood pressure and respiration. This paper proposes a one-dimensional convolutional neural network (1-D-CNN) to classify five-second PPG segments into clean or artifact-affected segments, avoiding data-dependent pulse segmentation techniques and heavy manual feature engineering.

    METHODS: Continuous raw PPG waveforms were blindly allocated into segments with an equal length (5s) without leveraging any pulse location information and were normalized with Z-score normalization methods. A 1-D-CNN was designed to automatically learn the intrinsic features of the PPG waveform, and perform the required classification. Several training hyperparameters (initial learning rate and gradient threshold) were varied to investigate the effect of these parameters on the performance of the network. Subsequently, this proposed network was trained and validated with 30 subjects, and then tested with eight subjects, with our local dataset. Moreover, two independent datasets downloaded from the PhysioNet MIMIC II database were used to evaluate the robustness of the proposed network.

    RESULTS: A 13-layer 1-D-CNN model was designed. Within our local study dataset evaluation, the proposed network achieved a testing accuracy of 94.9%. Classification of the two independent datasets also achieved satisfactory accuracies of 93.8% and 86.7%, respectively. Our model achieved performance comparable with most reported works, with the potential to show good generalization, as the proposed network was evaluated with multiple cohorts (overall accuracy of 94.5%).

    CONCLUSION: This paper demonstrated the feasibility and effectiveness of applying blind signal processing and deep learning techniques to PPG motion artifact detection, whereby manual feature thresholding was avoided and yet a high generalization ability was achieved.
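
    The pre-processing described in the METHODS above (blind allocation into equal 5-s segments with no pulse-location information, then Z-score normalization) can be sketched as below. The sampling rate and function names are illustrative assumptions:

```python
import math

def zscore_segments(ppg, fs=100, seg_seconds=5):
    """Blindly split a raw PPG waveform into equal 5-s segments and
    Z-score normalize each one; any incomplete tail is dropped."""
    seg_len = fs * seg_seconds
    segments = []
    for start in range(0, len(ppg) - seg_len + 1, seg_len):
        seg = ppg[start:start + seg_len]
        mean = sum(seg) / seg_len
        std = math.sqrt(sum((x - mean) ** 2 for x in seg) / seg_len) or 1.0
        segments.append([(x - mean) / std for x in seg])
    return segments

# 12 s of mock 1.2 Hz data at fs = 100 -> two full 5-s segments
ppg = [math.sin(2 * math.pi * 1.2 * t / 100) for t in range(1200)]
segs = zscore_segments(ppg)
```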

  11. Ang CYS, Chiew YS, Vu LH, Cove ME
    Comput Methods Programs Biomed, 2022 Mar;215:106601.
    PMID: 34973606 DOI: 10.1016/j.cmpb.2021.106601
    BACKGROUND: Spontaneous breathing (SB) effort during mechanical ventilation (MV) is an important metric of respiratory drive. However, SB effort varies due to a variety of factors, including evolving pathology and sedation levels. Therefore, assessment of SB efforts needs to be continuous and non-invasive. This is important to prevent both over- and under-assistance with MV. In this study, a machine learning model, Convolutional Autoencoder (CAE) is developed to quantify the magnitude of SB effort using only bedside MV airway pressure and flow waveform.

    METHOD: The CAE model was trained using 12,170,655 simulated SB and normal breathing (NB) flow data. The paired SB and NB flow data were simulated using a Gaussian Effort Model (GEM) with 5 basis functions. When the CAE model is given an SB flow input, it is capable of predicting a corresponding NB flow for that input. The magnitude of SB effort (SBEMag) is then quantified as the difference between the SB and NB flows. The CAE model was used to evaluate the SBEMag of 9 pressure control/support datasets. Results were validated using a mean squared error (MSE) fitting between clinical and training SB flows.

    RESULTS: The CAE model was able to produce NB flows from the clinical SB flows with the median SBEMag of the 9 datasets being 25.39% [IQR: 21.87-25.57%]. The absolute error in SBEMag using MSE validation yields a median of 4.77% [IQR: 3.77-8.56%] amongst the cohort. This shows the ability of the GEM to capture the intrinsic details present in SB flow waveforms. Analysis also shows both intra-patient and inter-patient variability in SBEMag.

    CONCLUSION: A Convolutional Autoencoder model was developed with simulated SB and NB flow data and is capable of quantifying the magnitude of patient spontaneous breathing effort. This provides potential application for real-time monitoring of patient respiratory drive for better management of patient-ventilator interaction.
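
    The quantification step above, SBEMag as the difference between the measured SB flow and the model-predicted NB flow, can be illustrated with a simple normalized absolute-difference metric. The exact normalization is an assumption for this sketch; the paper's metric may differ:

```python
def sbe_magnitude(sb_flow, nb_flow):
    """Spontaneous-breathing effort magnitude (percent): the summed
    absolute difference between SB and predicted NB flow samples,
    normalized by the NB flow magnitude (illustrative definition)."""
    diff = sum(abs(s - n) for s, n in zip(sb_flow, nb_flow))
    ref = sum(abs(n) for n in nb_flow) or 1.0
    return 100.0 * diff / ref
```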

  12. Lund LA, Omar Z, Khan I
    Comput Methods Programs Biomed, 2019 Dec;182:105044.
    PMID: 31491654 DOI: 10.1016/j.cmpb.2019.105044
    BACKGROUND AND OBJECTIVE: The last two and a half decades have witnessed a great surge in the use of convective fluids such as ethylene glycol, oil and water for the enhancement of heat transfer, due to their numerous applications in industrial segments including chemical production, microelectronics, power generation, transportation, and air-conditioning. For this purpose, different procedures were applied to upgrade the thermal conductivity of common fluids, but without success. Choi and Eastman introduced nanofluids in 1995, which have good thermal properties compared to common fluids. Since then, researchers, mathematicians, and scientists have tried to understand the principles of nanofluids and how to apply them in many different practical applications. In this work, the Buongiorno model has been considered for the nanofluid. One of the prime objectives is to obtain all possible multiple solutions of the model, because these solutions cannot be observed experimentally.

    METHODS: The governing equations of the fluid flow were transformed into ordinary differential equations. These equations were solved by two methods, namely the shooting method and the three-stage Lobatto IIIa formula.

    RESULTS: The effects of different parameters on the temperature, velocity and concentration profiles, skin friction coefficient, Sherwood number, and reduced Nusselt number were obtained and presented graphically. It was noticed, for the first time, that four solutions existed over definite ranges of the parameters for high suction over both surfaces. The stability analysis revealed that only the first solution is stable and physically reliable compared to the remaining solutions.

    CONCLUSION: The graphs also indicated that the fluid velocity decreases as the thermophoresis parameter increases, but the opposite behavior was observed for both the temperature and concentration profiles in the first solution. Furthermore, it was detected that the concentration profile declined at higher values of the Brownian motion parameter.
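
    The shooting method named in the METHODS converts a boundary-value problem into repeated initial-value integrations, adjusting the unknown initial slope until the far boundary condition is met. A minimal sketch on a toy linear problem (y'' = -y, y(0) = 0, y(pi/2) = 1), standing in for the paper's similarity equations:

```python
import math

def shoot(slope, n=200):
    """Integrate y'' = -y with y(0) = 0, y'(0) = slope by classical RK4
    over [0, pi/2] and return y at the far boundary."""
    h = (math.pi / 2) / n
    y, v = 0.0, slope
    for _ in range(n):
        # RK4 stages for the first-order system (y' = v, v' = -y)
        k1y, k1v = v, -y
        k2y, k2v = v + h / 2 * k1v, -(y + h / 2 * k1y)
        k3y, k3v = v + h / 2 * k2v, -(y + h / 2 * k2y)
        k4y, k4v = v + h * k3v, -(y + h * k3y)
        y, v = (y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y),
                v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))
    return y

def shooting_method(target=1.0, lo=0.0, hi=2.0, iters=60):
    """Bisect on the unknown initial slope until y(pi/2) = target
    (y(pi/2) is monotonically increasing in the slope here)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

slope = shooting_method()  # exact solution y = sin(x) has y'(0) = 1
```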

  13. Lee JWW, Chiew YS, Wang X, Tan CP, Mat Nor MB, Cove ME, et al.
    Comput Methods Programs Biomed, 2022 Feb;214:106577.
    PMID: 34936946 DOI: 10.1016/j.cmpb.2021.106577
    BACKGROUND AND OBJECTIVE: Mechanical ventilation is the primary form of care provided to respiratory failure patients. Limited guidelines and conflicting results from major clinical trials means selection of mechanical ventilation settings relies heavily on clinician experience and intuition. Determining optimal mechanical ventilation settings is therefore difficult, where non-optimal mechanical ventilation can be deleterious. To overcome these difficulties, this research proposes a model-based method to manage the wide range of possible mechanical ventilation settings, while also considering patient-specific conditions and responses.

    METHODS: This study shows the design and development of the "VENT" protocol, which integrates the single compartment linear lung model with clinical recommendations from landmark studies, to aid clinical decision-making in selecting mechanical ventilation settings. Using retrospective breath data from a cohort of 24 patients, 3,566 and 2,447 clinically implemented VC and PC settings were extracted respectively. Using this data, a VENT protocol application case study and clinical comparison is performed, and the prediction accuracy of the VENT protocol is validated against actual measured outcomes of pressure and volume.

    RESULTS: The study shows the VENT protocol's potential to narrow an overwhelming number of possible mechanical ventilation setting combinations by up to 99.9%. The comparison with retrospective clinical data showed that only 33% and 45% of clinician settings were approved by the VENT protocol. The unapproved settings were mainly due to exceeding clinically recommended limits. When utilising the single compartment model in the VENT protocol to forecast peak pressures and tidal volumes, median [IQR] prediction errors of 0.75 [0.31 - 1.83] cmH2O and 0.55 [0.19 - 1.20] mL/kg were obtained.

    CONCLUSIONS: Comparing the proposed protocol with retrospective clinically implemented settings shows the protocol can prevent harmful mechanical ventilation setting combinations for which clinicians would be otherwise unaware. The VENT protocol warrants a more detailed clinical study to validate its potential usefulness in a clinical setting.
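
    The narrowing of setting combinations reported above relies on forecasting outcomes with the single compartment linear lung model (peak pressure approximately E*Vt + R*Q + PEEP) and discarding candidate settings that breach recommended limits. The following is a hypothetical sketch of that screening idea, not the VENT protocol itself; the limit values, parameter values and names are illustrative assumptions:

```python
def vent_filter(settings, elastance, resistance, peep,
                p_max=30.0, vt_max_ml_per_kg=8.0, weight_kg=70.0):
    """Screen candidate volume-control settings (tidal volume [L],
    inspiratory flow [L/s]) with the single compartment linear lung
    model: predicted peak pressure = E*Vt + R*Q + PEEP. Settings that
    breach the pressure or tidal-volume limits are discarded."""
    approved = []
    for vt, flow in settings:
        p_peak = elastance * vt + resistance * flow + peep
        if p_peak <= p_max and 1000.0 * vt / weight_kg <= vt_max_ml_per_kg:
            approved.append((vt, flow, p_peak))
    return approved
```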

  14. Jamaludin UK, M Suhaimi F, Abdul Razak NN, Md Ralib A, Mat Nor MB, Pretty CG, et al.
    Comput Methods Programs Biomed, 2018 Aug;162:149-155.
    PMID: 29903481 DOI: 10.1016/j.cmpb.2018.03.001
    BACKGROUND AND OBJECTIVE: Blood glucose variability is common in healthcare and is not related to or influenced by diabetes mellitus. To minimise the risk of high blood glucose in critically ill patients, the Stochastic Targeted Blood Glucose Control Protocol is used in intensive care units at hospitals worldwide. This study therefore focuses on the performance of the stochastic modelling protocol in comparison to the current blood glucose management protocols in the Malaysian intensive care unit, and assesses the effectiveness of the Stochastic Targeted Blood Glucose Control Protocol when applied to a cohort of diabetic patients.

    METHODS: Retrospective data from 210 patients were obtained from a general hospital in Malaysia from May 2014 until June 2015, of whom 123 patients had comorbid diabetes mellitus. The performance of the two blood glucose control protocols was compared through virtual trial simulations, in which blood glucose was fitted with a physiological model, the mean simulation error was calculated, and several graphical comparisons were made using stochastic modelling.

    RESULTS: Stochastic Targeted Blood Glucose Control Protocol reduces hyperglycaemia by 16% in diabetic and 9% in nondiabetic cohorts. The protocol helps to control blood glucose level in the targeted range of 4.0-10.0 mmol/L for 71.8% in diabetic and 82.7% in nondiabetic cohorts, besides minimising the treatment hour up to 71 h for 123 diabetic patients and 39 h for 87 nondiabetic patients.

    CONCLUSION: It is concluded that the Stochastic Targeted Blood Glucose Control Protocol is better at reducing hyperglycaemia than the current blood glucose management protocol in the Malaysian intensive care unit. Hence, the current Malaysian intensive care unit protocols need to be modified to enhance their performance, especially in integrating insulin and nutrition interventions to decrease the incidence of hyperglycaemia. The underlying model of the Stochastic Targeted Blood Glucose Control Protocol must also be improved to adapt to the diabetic cohort.
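
    The cohort metric reported in the RESULTS above, the percentage of time blood glucose stays in the 4.0-10.0 mmol/L target band, reduces to a simple count over measurements when samples are evenly spaced (an assumption of this sketch, as are the names):

```python
def time_in_band(bg_mmol_per_l, lo=4.0, hi=10.0):
    """Percentage of blood-glucose measurements inside the target band,
    a proxy for percent time-in-band under evenly spaced sampling."""
    in_band = sum(lo <= g <= hi for g in bg_mmol_per_l)
    return 100.0 * in_band / len(bg_mmol_per_l)
```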

  15. Arunachalam GR, Chiew YS, Tan CP, Ralib AM, Nor MBM
    Comput Methods Programs Biomed, 2020 Jan;183:105103.
    PMID: 31606559 DOI: 10.1016/j.cmpb.2019.105103
    BACKGROUND AND OBJECTIVE: Mechanical ventilation therapy of respiratory failure patients can be guided by monitoring patient-specific respiratory mechanics. However, the patient's spontaneous breathing effort during controlled ventilation changes airway pressure waveform and thus affects the model-based identification of patient-specific respiratory mechanics parameters. This study develops a model to estimate respiratory mechanics in the presence of patient effort.

    METHODS: The Gaussian effort model (GEM) is a derivative of the single-compartment model with basis functions. The GEM uses a linear combination of basis functions to model the nonlinear pressure waveform of spontaneously breathing patients. The GEM estimates respiratory mechanics, such as elastance and resistance, along with the magnitudes of the basis functions, which account for patient inspiratory effort.

    RESULTS AND DISCUSSION: The GEM model was tested using both simulated data and retrospective observational clinical trial patient data. The GEM model fits the original airway pressure waveform better than any existing model when reverse triggering asynchrony is present. The fitting error of the GEM model was less than 10% for both simulated data and clinical trial patient data.

    CONCLUSION: GEM can capture the respiratory mechanics in the presence of patient effort in volume control ventilation mode and can also be used to assess patient-ventilator interaction. The model determines the basis function magnitudes, which can be used to simulate any patient effort pressure waveform for future studies. The parameter identification of the GEM model can be further improved by constraining the parameters within a physiologically plausible range during least-squares nonlinear regression.
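
    The GEM structure described above, a single-compartment pressure term plus a linear combination of Gaussian basis functions representing patient effort, can be sketched as a forward model. The shared basis width and the function signature are assumptions for illustration:

```python
import math

def gem_pressure(t, volume, flow, elastance, resistance,
                 amplitudes, centers, width):
    """Airway pressure under a Gaussian-effort-style model: the
    single-compartment term E*V + R*Q plus a linear combination of
    Gaussian basis functions whose amplitudes encode patient effort."""
    effort = sum(a * math.exp(-((t - c) / width) ** 2)
                 for a, c in zip(amplitudes, centers))
    return elastance * volume + resistance * flow + effort
```

    Fitting elastance, resistance and the amplitudes to a measured pressure waveform (e.g. by least squares) recovers both the respiratory mechanics and the effort magnitudes.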

  16. Boon KH, Khalil-Hani M, Malarvili MB, Sia CW
    Comput Methods Programs Biomed, 2016 Oct;134:187-96.
    PMID: 27480743 DOI: 10.1016/j.cmpb.2016.07.016
    This paper proposes a method that predicts the onset of paroxysmal atrial fibrillation (PAF), using heart rate variability (HRV) segments that are shorter than those applied in existing methods, while maintaining good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We investigate the effect of HRV features extracted from different lengths of HRV segments prior to PAF onset with the proposed PAF prediction method. The pre-processing stage of the predictor includes QRS detection, HRV quantification and ectopic beat correction. Time-domain, frequency-domain, non-linear and bispectrum features are then extracted from the quantified HRV. In the feature selection, the HRV feature set and classifier parameters are optimized simultaneously using an optimization procedure based on a genetic algorithm (GA). Both the full feature set and a statistically significant feature subset are optimized by the GA. For the statistically significant feature subset, the Mann-Whitney U test is used to filter out features that do not pass the statistical test at the 20% significance level. The final stage of the predictor is a classifier based on a support vector machine (SVM). A 10-fold cross-validation is applied in the performance evaluation, and the proposed method achieves 79.3% prediction accuracy using 15-minute HRV segments. This accuracy is comparable to that achieved by existing methods that use 30-minute HRV segments, most of which achieve accuracies of around 80%. More importantly, our method significantly outperforms those that apply segments shorter than 30 minutes.
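
    Two of the standard time-domain HRV features of the kind extracted above, SDNN (standard deviation of RR intervals) and RMSSD (root mean square of successive differences), can be computed directly from an RR-interval series. The population-variance convention used here is an assumption:

```python
import math

def time_domain_hrv(rr_ms):
    """Return (SDNN, RMSSD) in milliseconds for a series of RR intervals."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / n)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd
```
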
  17. Boon KH, Khalil-Hani M, Malarvili MB
    Comput Methods Programs Biomed, 2018 Jan;153:171-184.
    PMID: 29157449 DOI: 10.1016/j.cmpb.2017.10.012
    This paper presents a method able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods while achieving good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of PAF onset is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of pre-processing, HRV feature extraction, and a support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. Time-domain, frequency-domain, and non-linear HRV features are then extracted from the pre-processed data in the feature extraction stage, and these features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm simultaneously optimizes the parameters and settings of the various HRV feature extraction algorithms, selects the best feature subsets, and tunes the SVM parameters for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most previous works. This accuracy is achieved even with the HRV signal length reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, the sensitivity rate, which is considered more important than the other performance metrics in this paper, can be improved at the trade-off of lower specificity.
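The multi-objective selection underlying NSGA-style algorithms rests on Pareto dominance. A minimal sketch, with hypothetical candidates that each encode a feature bit-mask plus SVM hyperparameters and are scored on (sensitivity, specificity); this is not the paper's actual NSGA-III implementation, which adds reference-direction-based niching:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximisation)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated_front(population):
    """First front of non-dominated solutions, as in NSGA-style sorting."""
    return [p for p in population
            if not any(dominates(q["obj"], p["obj"]) for q in population if q is not p)]

# Hypothetical candidates: bit-mask selecting HRV features plus SVM
# hyperparameters (C, gamma), scored on (sensitivity, specificity).
population = [
    {"mask": 0b10110, "C": 1.0,  "gamma": 0.10, "obj": (0.90, 0.78)},
    {"mask": 0b01101, "C": 10.0, "gamma": 0.01, "obj": (0.84, 0.88)},
    {"mask": 0b11100, "C": 0.5,  "gamma": 0.05, "obj": (0.82, 0.75)},
]

front = nondominated_front(population)
print([c["obj"] for c in front])  # → [(0.9, 0.78), (0.84, 0.88)]
```

The third candidate is dominated (worse on both objectives than the first), which is how the trade-off between sensitivity and specificity mentioned in the abstract surfaces: the surviving front holds candidates that each win on one objective.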
  18. Othman NA, Azhar MAAS, Damanhuri NS, Mahadi IA, Abbas MH, Shamsuddin SA, et al.
    Comput Methods Programs Biomed, 2023 Jun;236:107566.
    PMID: 37186981 DOI: 10.1016/j.cmpb.2023.107566
    BACKGROUND AND OBJECTIVE: The identification of insulinaemic pharmacokinetic parameters using the least-squares criterion is easily influenced by outlying data due to its sensitivity. Furthermore, the least-squares criterion has a tendency to overfit and produce incorrect results. Hence, this research proposes an alternative approach using an artificial neural network (ANN) with two hidden layers to optimize the identification of insulinaemic pharmacokinetic parameters. The ANN is selected for its ability to avoid overfitting and its faster data-processing speed.

    METHODS: 18 volunteer participants were recruited from the Canterbury and Otago regions of New Zealand to take part in a Dynamic Insulin Sensitivity and Secretion Test (DISST) clinical trial. A total of 46 DISST datasets were collected; however, 4 had to be removed due to ambiguity and inconsistency. Analysis was done using MATLAB 2020a.

    RESULTS AND DISCUSSION: Results show that, with the 42 gathered datasets, the ANN generates higher gains, ∅P = 20.73 [12.21, 28.57] mU·L·mmol⁻¹·min⁻¹ and ∅D = 60.42 [26.85, 131.38] mU·L·mmol⁻¹, compared to the linear least-squares method, ∅P = 19.67 [11.81, 28.02] mU·L·mmol⁻¹·min⁻¹ and ∅D = 46.21 [7.25, 116.71] mU·L·mmol⁻¹. The average insulin sensitivity (SI) of the ANN is lower, SI = 16 × 10⁻⁴ L·mU⁻¹·min⁻¹, than that of the linear least squares, SI = 17 × 10⁻⁴ L·mU⁻¹·min⁻¹.

    CONCLUSION: Although the ANN analysis provided a lower SI value, the results were more dependable than those of the linear least-squares model because the ANN approach yielded better model-fitting accuracy, with a residual error of less than 5%. With this architecture, the ANN is able to produce minimal error during the optimization process, particularly when dealing with outlying data. The findings may provide extra information to clinicians, allowing them to gain better knowledge of the heterogeneous aetiology of diabetes and of therapeutic intervention options.
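The outlier sensitivity of the least-squares criterion that motivates this paper can be seen in a toy regression. These are synthetic numbers, not the DISST data, and a one-parameter fit rather than the paper's pharmacokinetic model:

```python
def fit_slope(xs, ys):
    """Least-squares slope through the origin: minimises sum of (y - b*x)^2."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1, 2, 3, 4, 5]
ys = [2.0, 4.1, 5.9, 8.2, 10.0]   # roughly y = 2x
ys_outlier = ys[:-1] + [30.0]     # one outlying measurement replaces the last point

print(fit_slope(xs, ys))          # ≈ 2.01
print(fit_slope(xs, ys_outlier))  # ≈ 3.83, dragged far off by a single outlier
```

A single corrupted point nearly doubles the estimated parameter, which is the behaviour the ANN-based identification is meant to dampen.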

  19. Abidemi A, Aziz NAB
    Comput Methods Programs Biomed, 2020 Nov;196:105585.
    PMID: 32554024 DOI: 10.1016/j.cmpb.2020.105585
    BACKGROUND: Dengue is a vector-borne viral disease endemic in Malaysia and presently a public health issue in the country. Hence, the use of a mathematical model to gain insights into the transmission dynamics and derive optimal control strategies for minimizing the spread of the disease is of great importance.

    METHODS: A model of dengue fever transmission dynamics involving eight mutually exclusive compartments, with personal protection, larvicide and adulticide control strategies, is presented. The control-induced basic reproduction number (R̃0) of the model is computed using the next-generation matrix method. A comparison theorem is used to analyse the global dynamics of the model. The model is fitted to data from the 2012 dengue outbreak in Johor, Malaysia, using the least-squares method. To optimally curtail dengue fever propagation, optimal control theory is applied to investigate the effect of several control strategies combining optimal personal protection, larvicide and adulticide controls on dengue fever dynamics. The resulting optimality system is simulated in MATLAB using a fourth-order Runge-Kutta scheme based on the forward-backward sweep method. In addition, cost-effectiveness analysis is performed to determine the most cost-effective of the various control strategies analysed.

    RESULTS: Analysis of the model with control parameters shows that the model has two disease-free equilibria, namely a trivial equilibrium and a biologically realistic disease-free equilibrium, and one endemic equilibrium point. It also reveals that the biologically realistic disease-free equilibrium is both locally and globally asymptotically stable whenever the inequality R̃0 < 1 holds. For the model with time-dependent control functions, the optimality levels of the three control functions required to optimally control dengue disease transmission are derived.

    CONCLUSION: We conclude that dengue fever transmission can be curtailed by adopting any of the several control strategies analysed in this study. Furthermore, a strategy combining personal protection and adulticide controls is found to be the most cost-effective control strategy.
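The numerical backbone of the forward-backward sweep mentioned in this abstract is the classical fourth-order Runge-Kutta scheme. A minimal sketch on a toy SIR-type model with a constant personal-protection control; parameters are illustrative and this is a three-compartment stand-in, not the paper's eight-compartment dengue model:

```python
def rk4_step(f, t, y, h):
    """One fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Illustrative parameters: transmission rate, recovery rate, and a constant
# personal-protection control u in [0, 1] that scales down transmission.
beta, gamma, u = 0.4, 0.1, 0.5

def sir(t, y):
    s, i, r = y
    new_inf = (1 - u) * beta * s * i
    return [-new_inf, new_inf - gamma * i, gamma * i]

y = [0.99, 0.01, 0.0]   # initial susceptible, infected, recovered fractions
h, steps = 0.1, 1000
for n in range(steps):
    y = rk4_step(sir, n * h, y, h)
print(y)  # the fractions still sum to 1 (the model conserves population)
```

In a full forward-backward sweep, the state system is integrated forward like this while the adjoint system is integrated backward in time, and the control is updated between sweeps until convergence.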
  20. Kiah ML, Haiqi A, Zaidan BB, Zaidan AA
    Comput Methods Programs Biomed, 2014 Nov;117(2):360-82.
    PMID: 25070757 DOI: 10.1016/j.cmpb.2014.07.002
    The use of open source software in health informatics is increasingly advocated by authors in the literature. Although there is no clear evidence of the superiority of the current open source applications in the healthcare field, the number of available open source applications online is growing and they are gaining greater prominence. This repertoire of open source options is of great value to any future planner interested in adopting an electronic medical/health record system, whether selecting an existing application or building a new one. The following questions arise. How do the available open source options compare to each other with respect to functionality, usability and security? Can an implementer of an open source application find sufficient support, both as a user and as a developer, and to what extent? Does the available literature provide adequate answers to such questions? This review attempts to shed some light on these aspects.