Displaying publications 1 - 20 of 85 in total

  1. Zhang K, Ting HN, Choo YM
    Comput Methods Programs Biomed, 2024 Mar;245:108043.
    PMID: 38306944 DOI: 10.1016/j.cmpb.2024.108043
    BACKGROUND AND OBJECTIVE: Conflict may arise when more than one classifier is used to perform prediction or classification, because recognition-model errors produce conflicting evidence. These conflicts can cause decision errors in baby cry recognition and further decrease its accuracy. The objective of this study is therefore to propose a method that effectively minimizes the conflict among deep learning models and improves the accuracy of baby cry recognition.

    METHODS: An improved Dempster-Shafer evidence theory (DST) based on Wasserstein distance and Deng entropy was proposed to reduce conflicts among results by combining the credibility degree between evidence and the uncertainty degree of evidence. To validate the effectiveness of the proposed method, examples were analyzed and the method was applied to baby cry recognition. Whale Optimization Algorithm-Variational Mode Decomposition (WOA-VMD) was used to optimally decompose the baby cry signals. The deep features of the decomposed components were extracted using the VGG16 model. Long Short-Term Memory (LSTM) models were used to classify baby cry signals. The improved DST decision method was then used to obtain the decision fusion.
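
    As a worked illustration of this fusion step, the sketch below implements a simplified variant: masses on singleton hypotheses only (the three cry classes), credibility from pairwise Wasserstein distances, Deng entropy as the uncertainty measure, and Murphy-style repeated combination. The rule mixing credibility and entropy is an assumption for illustration, not the paper's formula.

        # Sketch: credibility-weighted Dempster-Shafer fusion (illustrative variant)
        import numpy as np
        from scipy.stats import wasserstein_distance

        def deng_entropy(m, cardinality):
            # -sum m(A) * log2(m(A) / (2^|A| - 1)); reduces to Shannon entropy
            # when all focal elements are singletons (|A| = 1)
            m, c = np.asarray(m, float), np.asarray(cardinality, float)
            nz = m > 0
            return -np.sum(m[nz] * np.log2(m[nz] / (2.0 ** c[nz] - 1.0)))

        def dempster_combine(m1, m2):
            # Dempster's rule for masses on singleton hypotheses only
            products = m1 * m2
            agreement = products.sum()          # equals 1 - K, K = conflict
            if agreement == 0:
                raise ValueError("total conflict: combination undefined")
            return products / agreement

        def fuse(evidence):
            # evidence: (n_models, n_classes) softmax-like outputs
            evidence = np.asarray(evidence, float)
            n, c = evidence.shape
            support = np.arange(c)
            dist = np.array([[wasserstein_distance(support, support, a, b)
                              for b in evidence] for a in evidence])
            credibility = 1.0 / (1e-9 + dist.sum(axis=1))
            uncertainty = np.array([deng_entropy(m, np.ones(c)) for m in evidence])
            w = credibility * np.exp(uncertainty)   # illustrative mixing rule
            w /= w.sum()
            avg = w @ evidence                      # weighted-average evidence
            fused = avg
            for _ in range(n - 1):                  # combine n-1 times (Murphy-style)
                fused = dempster_combine(fused, avg)
            return fused

        print(fuse([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.6, 0.3, 0.1]]))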

    RESULTS: The proposed fusion method achieves an accuracy of 90.15% in classifying three types of baby cries. Improvements of between 2.90% and 4.98% were obtained over existing DST fusion methods, and recognition accuracy improved by between 5.79% and 11.53% compared with the latest methods used in baby cry recognition.

    CONCLUSION: The proposed method optimally decomposes baby cry signals, effectively reduces the conflict among the results of deep learning models, and improves the accuracy of baby cry recognition.

  2. Zainol NM, Damanhuri NS, Othman NA, Chiew YS, Nor MBM, Muhammad Z, et al.
    Comput Methods Programs Biomed, 2022 Jun;220:106835.
    PMID: 35512627 DOI: 10.1016/j.cmpb.2022.106835
    BACKGROUND AND OBJECTIVE: Mechanical ventilation (MV) provides breathing support for acute respiratory distress syndrome (ARDS) patients in the intensive care unit, but is difficult to optimize. Too much or too little pressure or volume support can cause further ventilator-induced lung injury, increasing the duration of MV, cost, and mortality. Patient-specific respiratory mechanics can help optimize MV settings. However, model-based estimation of respiratory mechanics is less accurate when patients exhibit un-modelled spontaneous breathing (SB) efforts on top of ventilator support. This study aims to estimate and quantify SB efforts by reconstructing the unaltered passive-mechanics airway pressure using a NARX model.

    METHODS: A nonlinear autoregressive with exogenous inputs (NARX) model is used to reconstruct the airway pressure that is missing due to the presence of spontaneous breathing effort in MV patients. The incidence of SB in patients is then estimated. The study uses a total of 10,000 breathing cycles collected from 10 ARDS patients at IIUM Hospital in Kuantan, Malaysia. Two training/validation splits are used: an initial 60:40 ratio, corresponding to 600 breath cycles per patient for training and the remaining 400 for testing, followed by a 70:30 ratio.
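
    A minimal sketch of the lag-based regression at the heart of a NARX reconstruction, on synthetic signals; the lag orders, the regressor, and the pressure/flow waveforms below are all illustrative assumptions, not the paper's configuration.

        # Sketch: NARX-style airway-pressure reconstruction from lagged samples
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def lagged_design(p, q, na=3, nb=3):
            # rows: [p(t-1..t-na), q(t-1..t-nb)], target: p(t)
            start = max(na, nb)
            cols = [p[start - i:len(p) - i] for i in range(1, na + 1)]
            cols += [q[start - i:len(q) - i] for i in range(1, nb + 1)]
            return np.column_stack(cols), p[start:]

        # stand-in passive breath: flow q and a pressure p that depends on it
        t = np.linspace(0, 3, 300)
        q = np.sin(2 * np.pi * t)
        p = 5 + 0.8 * np.cumsum(q) * (t[1] - t[0]) + 2.0 * q

        X, y = lagged_design(p, q)
        narx = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                            random_state=0).fit(X, y)

        # on a breath with spontaneous effort, the residual between measured
        # and reconstructed pressure serves as the magnitude of effort
        p_hat = narx.predict(X)
        print("mean residual:", np.mean(np.abs(y - p_hat)))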

    RESULTS AND DISCUSSION: The mean residual error between the original and reconstructed airway pressures is taken as the magnitude of effort. The median and interquartile range of the mean residual error for the two ratios are 0.0557 [0.0230 - 0.0874] and 0.0534 [0.0219 - 0.0870], respectively, across all patients. The results also show that Patient 2 has the highest percentage of SB incidence and Patient 10 the lowest, demonstrating that the NARX model performs well both when the incidence of SB effort is high and when SB effort is largely absent.

    CONCLUSION: The model is able to produce the SB incidence rate based on a 10% threshold. Hence, the proposed NARX model is potentially useful for estimating and identifying patient-specific SB effort, which could further assist clinical decisions and help optimize MV settings.

  3. Yildirim O, Baloglu UB, Tan RS, Ciaccio EJ, Acharya UR
    Comput Methods Programs Biomed, 2019 Jul;176:121-133.
    PMID: 31200900 DOI: 10.1016/j.cmpb.2019.05.004
    BACKGROUND AND OBJECTIVE: For diagnosis of arrhythmic heart problems, electrocardiogram (ECG) signals should be recorded and monitored. The long-term signal records obtained are analyzed by expert cardiologists. Devices such as the Holter monitor have limited hardware capabilities. For improved diagnostic capacity, it would be helpful to detect arrhythmic signals automatically. In this study, a novel approach is presented as a candidate solution for these issues.

    METHODS: A convolutional auto-encoder (CAE) based nonlinear compression structure is implemented to reduce the signal size of arrhythmic beats. Long short-term memory (LSTM) classifiers are employed to automatically recognize arrhythmias using ECG features, which are deeply coded with the CAE network.
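
    A minimal sketch of this compress-then-classify pipeline in Keras; the layer counts and sizes are illustrative assumptions, not the paper's architecture.

        # Sketch: 1-D convolutional auto-encoder + LSTM classifier on the code
        from tensorflow.keras import layers, models

        beat_len, n_classes = 256, 5                     # illustrative sizes

        inp = layers.Input(shape=(beat_len, 1))
        x = layers.Conv1D(16, 5, activation="relu", padding="same")(inp)
        x = layers.MaxPooling1D(4)(x)
        x = layers.Conv1D(8, 5, activation="relu", padding="same")(x)
        code = layers.MaxPooling1D(4)(x)                 # compressed: (16, 8)
        x = layers.UpSampling1D(4)(code)
        x = layers.Conv1D(16, 5, activation="relu", padding="same")(x)
        x = layers.UpSampling1D(4)(x)
        out = layers.Conv1D(1, 5, padding="same")(x)     # reconstruction

        cae = models.Model(inp, out)
        cae.compile(optimizer="adam", loss="mse")        # train: cae.fit(X, X, ...)

        encoder = models.Model(inp, code)                # deep coding network
        clf = models.Sequential([
            layers.Input(shape=(beat_len // 16, 8)),
            layers.LSTM(32),
            layers.Dense(n_classes, activation="softmax"),
        ])
        clf.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])  # train: clf.fit(encoder.predict(X), y)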

    RESULTS: Based upon the coded ECG signals, both the storage requirement and the classification time were considerably reduced. In experimental studies conducted with the MIT-BIH arrhythmia database, ECG signals were compressed with an average percentage root mean square difference (PRD) of 0.70%, and an accuracy of over 99.0% was observed.

    CONCLUSIONS: One of the significant contributions of this study is that the proposed approach can significantly reduce the time required when using LSTM networks for data analysis. Thus, a novel and effective approach was proposed for both ECG signal compression and high-performance automatic recognition, with very low computational cost.

  4. Xu S, Deo RC, Soar J, Barua PD, Faust O, Homaira N, et al.
    Comput Methods Programs Biomed, 2023 Nov;241:107746.
    PMID: 37660550 DOI: 10.1016/j.cmpb.2023.107746
    BACKGROUND AND OBJECTIVE: Obstructive airway diseases, including asthma and Chronic Obstructive Pulmonary Disease (COPD), are two of the most common chronic respiratory health problems. Both of these conditions require health professional expertise in making a diagnosis. Hence, this process is time intensive for healthcare providers, and the diagnostic quality is subject to intra- and inter-operator variability. In this study, we investigate the role of automated detection of obstructive airway diseases in reducing cost and improving diagnostic quality.

    METHODS: We investigated the existing body of evidence and applied the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to search records in the IEEE, Google Scholar, and PubMed databases. We identified 65 papers that were published from 2013 to 2022, covering 67 different studies. The review process was structured according to the medical data used for disease detection. We identified six main categories, namely air flow, genetics, imaging, signals, medical records, and miscellaneous. For each of these categories, we report both the disease detection methods and their performance.

    RESULTS: We found that medical imaging was used in 14 of the reviewed studies as data for automated obstructive airway disease detection. Genetics and physiological signals were used in 13 studies. Medical records and air flow were used in 9 and 7 studies, respectively. Most papers were published in 2020 and we found three times more work on Machine Learning (ML) when compared to Deep Learning (DL). Statistical analysis shows that DL techniques achieve higher Accuracy (ACC) when compared to ML. Convolutional Neural Network (CNN) is the most common DL classifier and Support Vector Machine (SVM) is the most widely used ML classifier. During our review, we discovered only two publicly available asthma and COPD datasets. Most studies used private clinical datasets, so data size and data composition are inconsistent.

    CONCLUSIONS: Our review results indicate that Artificial Intelligence (AI) can improve both the decision quality and the efficiency of health professionals during COPD and asthma diagnosis. However, we found several limitations in this review, such as a lack of dataset consistency, limited dataset sizes, and insufficient exploration of remote monitoring. We appeal to society to accept and trust computer-aided diagnosis of obstructive airway diseases, and we encourage health professionals to work closely with AI scientists to promote automated detection in clinical practice and hospital settings.

  5. Wan Zaki WMD, Mat Daud M, Abdani SR, Hussain A, Mutalib HA
    Comput Methods Programs Biomed, 2018 Feb;154:71-78.
    PMID: 29249348 DOI: 10.1016/j.cmpb.2017.10.026
    BACKGROUND AND OBJECTIVE: Pterygium is an ocular disease caused by fibrovascular tissue encroachment onto the corneal region. The tissue may cause vision blurring if it grows into the pupil region. In this study, we propose an automatic detection method to differentiate pterygium from non-pterygium (normal) cases on the basis of frontal eye photographed images, also known as anterior segment photographed images.

    METHODS: The pterygium screening system was tested on two normal eye databases (UBIRIS and MILES) and two pterygium databases (Australia Pterygium and Brazil Pterygium). This system comprises four modules: (i) a preprocessing module to enhance the pterygium tissue using HSV-Sigmoid; (ii) a segmentation module to differentiate the corneal region and the pterygium tissue; (iii) a feature extraction module to extract corneal features using circularity ratio, Haralick's circularity, eccentricity, and solidity; and (iv) a classification module to identify the presence or absence of pterygium. System performance was evaluated using a support vector machine (SVM) and an artificial neural network.
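
    The four shape descriptors named in module (iii) have standard definitions; the sketch below computes them from a binary corneal mask (the boundary-based form of Haralick's circularity is assumed, since the abstract does not specify it).

        # Sketch: corneal-region shape features from a binary segmentation mask
        import numpy as np
        from scipy.ndimage import binary_erosion
        from skimage import measure

        def shape_features(mask):
            props = measure.regionprops(mask.astype(int))[0]
            circularity_ratio = 4 * np.pi * props.area / props.perimeter ** 2
            # Haralick's circularity: mean centroid-to-boundary distance over
            # its standard deviation (large for near-circular regions)
            boundary = mask & ~binary_erosion(mask)
            ys, xs = np.nonzero(boundary)
            d = np.hypot(ys - props.centroid[0], xs - props.centroid[1])
            return {"circularity_ratio": circularity_ratio,
                    "haralick_circularity": d.mean() / d.std(),
                    "eccentricity": props.eccentricity,
                    "solidity": props.solidity}

        yy, xx = np.mgrid[:101, :101]
        disc = ((yy - 50) ** 2 + (xx - 50) ** 2) <= 40 ** 2   # toy "cornea"
        print(shape_features(disc))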

    RESULTS: The three-step frame differencing technique was introduced in the corneal segmentation module. The output image successfully covered the region of interest with an average accuracy of 0.9127. The performance of the proposed system using SVM provided the most promising results of 88.7%, 88.3%, and 95.6% for sensitivity, specificity, and area under the curve, respectively.

    CONCLUSION: A basic platform for computer-aided pterygium screening was successfully developed using the proposed modules. The proposed system can classify pterygium and non-pterygium cases reasonably well. In our future work, a standard grading system will be developed to identify the severity of pterygium cases. This system is expected to raise awareness of pterygium among communities in rural areas.

  6. Tey WK, Kuang YC, Ooi MP, Khoo JJ
    Comput Methods Programs Biomed, 2018 Mar;155:109-120.
    PMID: 29512490 DOI: 10.1016/j.cmpb.2017.12.004
    BACKGROUND AND OBJECTIVE: Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties of human judgement.

    METHODS: An automated quantification system for accurately measuring the amount of interstitial fibrosis in renal biopsy images is presented as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images. In particular, renal glomerulus identification is based on multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced by eliminating non-interstitial-fibrosis structures from the biopsy area. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnosis.
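
    A minimal sketch of multiscale texture features with an SVM in the spirit of this step, using grey-level co-occurrence statistics at several pixel distances; the feature set, patch size, and stand-in data are assumptions (function names follow recent scikit-image, where the older "grey" spellings were renamed).

        # Sketch: multiscale GLCM texture features + SVM (glomerulus-style step)
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC

        def texture_features(patch, distances=(1, 2, 4)):
            # co-occurrence statistics at several pixel distances ("multiscale")
            glcm = graycomatrix(patch, distances=distances,
                                angles=[0, np.pi / 2], levels=256,
                                symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

        rng = np.random.default_rng(0)
        smooth = [(128 + rng.integers(0, 8, (32, 32))).astype(np.uint8)
                  for _ in range(20)]                 # stand-in "not glomerulus"
        coarse = [rng.integers(0, 256, (32, 32)).astype(np.uint8)
                  for _ in range(20)]                 # stand-in "glomerulus"
        X = np.array([texture_features(p) for p in smooth + coarse])
        y = np.array([0] * 20 + [1] * 20)
        print(SVC(kernel="rbf").fit(X, y).score(X, y))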

    RESULTS: A 40-image ground truth dataset was manually prepared in consultation with an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists demonstrated an average error of 9 percentage points between the automated system's quantification and the pathologists' visual evaluation. Experiments investigating the variability among pathologists, involving samples from 70 kidney patients, also showed the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification.

    CONCLUSIONS: The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results. It has been shown that the correlation between different pathologists' estimation of interstitial fibrosis area has significantly improved, demonstrating the effectiveness of the quantification system as a diagnostic aide.

  7. Teoh YX, Othmani A, Lai KW, Goh SL, Usman J
    Comput Methods Programs Biomed, 2023 Dec;242:107807.
    PMID: 37778138 DOI: 10.1016/j.cmpb.2023.107807
    BACKGROUND AND OBJECTIVE: Knee osteoarthritis (OA) is a debilitating musculoskeletal disorder that causes functional disability. Automatic knee OA diagnosis has great potential to enable timely and early intervention that can potentially reverse the degenerative process of knee OA. Yet it is a tedious task, given the heterogeneity of the disorder. Most of the proposed techniques address a single OA diagnostic task based on the Kellgren-Lawrence (KL) standard, a composite score of only a few imaging features (i.e., osteophytes, joint space narrowing, and subchondral bone changes), so only one key disease pattern is tackled. The KL standard fails to represent the disease pattern of individual OA features, particularly osteophytes, joint-space narrowing, and pain intensity, which play a fundamental role in OA manifestation. In this study, we aim to develop a multitask model using convolutional neural network (CNN) feature extractors and machine learning classifiers to detect nine important OA features from plain radiography: KL grade; knee osteophytes (both knees; medial femoral: OSFM, medial tibial: OSTM, lateral femoral: OSFL, and lateral tibial: OSTL); joint-space narrowing (medial: JSM, and lateral: JSL); and patient-reported pain intensity.

    METHODS: We proposed a new feature extraction method that replaces the fully-connected layer with a global average pooling (GAP) layer. A comparative analysis was conducted to compare the efficacy of 16 different convolutional neural network (CNN) feature extractors and three machine learning classifiers.
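
    A minimal sketch of this extractor-plus-classifier pairing, assuming Keras's pooling="avg" option as the GAP head and a KNN with k = 5; the preprocessing and the value of k are assumptions.

        # Sketch: VGG16 backbone with a GAP head feeding a KNN classifier
        import numpy as np
        from tensorflow.keras.applications import VGG16
        from tensorflow.keras.applications.vgg16 import preprocess_input
        from sklearn.neighbors import KNeighborsClassifier

        # pooling="avg" ends the network with global average pooling instead
        # of the fully-connected layers
        extractor = VGG16(weights="imagenet", include_top=False,
                          pooling="avg", input_shape=(224, 224, 3))

        def features(images):
            return extractor.predict(preprocess_input(images), verbose=0)

        demo = (np.random.rand(2, 224, 224, 3) * 255).astype("float32")
        print(features(demo).shape)                    # (2, 512)

        # with real data: fit one KNN per OA target on the extracted features
        # knn = KNeighborsClassifier(n_neighbors=5).fit(features(X_tr), y_tr)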

    RESULTS: Experimental results revealed the potential of CNN feature extractors in conducting multitask diagnosis. The optimal model consisted of a VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models, it also outperformed state-of-the-art methods, with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in the prediction of seven OA features.

    CONCLUSIONS: The proposed model demonstrates pain prediction on plain radiographs, as well as eight OA-related bony features. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.

  8. Syed-Mohamad SM
    Comput Methods Programs Biomed, 2009 Jan;93(1):83-92.
    PMID: 18789553 DOI: 10.1016/j.cmpb.2008.07.011
    To develop and implement a collective web-based system to monitor child growth in order to study children with malnutrition.
  9. Sidibé D, Sankar S, Lemaître G, Rastgoo M, Massich J, Cheung CY, et al.
    Comput Methods Programs Biomed, 2017 Feb;139:109-117.
    PMID: 28187882 DOI: 10.1016/j.cmpb.2016.11.001
    This paper proposes a method for automatic classification of spectral domain OCT data for the identification of patients with retinal diseases such as Diabetic Macular Edema (DME). We address this issue as an anomaly detection problem and propose a method that not only allows the classification of the OCT volume, but also allows the identification of the individual diseased B-scans inside the volume. Our approach is based on modeling the appearance of normal OCT images with a Gaussian Mixture Model (GMM) and detecting abnormal OCT images as outliers. The classification of an OCT volume is based on the number of detected outliers. Experimental results with two different datasets show that the proposed method achieves a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, the experiments show that the proposed method achieves better classification performance than other recently published works.
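
    A minimal sketch of this outlier scheme with stand-in features; the feature extraction, the number of mixture components, and both thresholds below are illustrative assumptions.

        # Sketch: normal-appearance GMM with outlier-count volume classification
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        normal_feats = rng.normal(0, 1, (500, 16))    # stand-in B-scan features

        gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_feats)
        tau = np.percentile(gmm.score_samples(normal_feats), 5)  # likelihood cutoff

        def classify_volume(bscan_feats, max_outliers=3):
            outliers = gmm.score_samples(bscan_feats) < tau
            label = "DME" if outliers.sum() > max_outliers else "normal"
            return label, np.where(outliers)[0]       # diseased B-scan indices

        volume = np.vstack([rng.normal(0, 1, (20, 16)),
                            rng.normal(4, 1, (8, 16))])   # 8 abnormal scans
        print(classify_volume(volume))
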
  10. Shindi O, Kanesan J, Kendall G, Ramanathan A
    Comput Methods Programs Biomed, 2020 Jun;189:105327.
    PMID: 31978808 DOI: 10.1016/j.cmpb.2020.105327
    BACKGROUND AND OBJECTIVES: In cancer therapy optimization, an optimal amount of drug is determined not only to reduce the tumor size but also to maintain the level of chemo-toxicity in the patient's body. The increase in the number of objectives and constraints further burdens the optimization problem. The objective of the present work is to solve a Constrained Multi-Objective Optimization Problem (CMOOP) of cancer chemotherapy. This optimization yields an optimal drug schedule by minimizing the tumor size and the drug concentration while keeping the patient's health at an acceptable level during dosing.

    METHODS: This paper presents two hybrid methodologies that combine optimal control theory with multi-objective swarm and evolutionary algorithms, and compares their performance with that of multi-objective algorithms such as MOEAD, MODE, MOPSO, and M-MOPSO. The hybrid and conventional methodologies are compared by addressing the CMOOP.

    RESULTS: The minimized tumor and drug concentration results obtained by the hybrid methodologies demonstrate that they are not only superior to pure swarm intelligence or evolutionary algorithm methodologies but also consume far less computational time. Further, the Second-Order Sufficient Condition (SSC) is used to verify and validate the optimality condition of the constrained multi-objective problem.

    CONCLUSION: The proposed methodologies reduce chemo-medicine administration while maintaining effective tumor killing. This will help oncologists find the optimal chemotherapy dose schedule that reduces the tumor cells while maintaining the patient's health at a safe level.

  11. Saleh MD, Eswaran C
    Comput Methods Programs Biomed, 2012 Oct;108(1):186-96.
    PMID: 22551841 DOI: 10.1016/j.cmpb.2012.03.004
    Diabetic retinopathy (DR) has become a serious threat in our society, causing 45% of legal blindness in diabetic patients. Early detection as well as periodic screening of DR helps in reducing the progression of this disease and in preventing the subsequent loss of visual capability. This paper provides an automated diagnosis system for DR integrated with a user-friendly interface. The grading of the severity level of DR is based on detecting and analyzing the early clinical signs associated with the disease, such as microaneurysms (MAs) and hemorrhages (HAs). The system extracts some retinal features, such as the optic disc, fovea, and retinal tissue, for easier segmentation of dark spot lesions in the fundus images. This is followed by the classification of the correctly segmented spots into MAs and HAs. Based on the number and location of MAs and HAs, the system quantifies the severity level of DR. A database of 98 color images is used to evaluate the performance of the developed system. From the experimental results, it is found that the proposed system achieves 84.31% and 87.53% sensitivity for the detection of MAs and HAs respectively. In terms of specificity, the system achieves 93.63% and 95.08% for the detection of MAs and HAs respectively. The proposed system also achieves kappa coefficients of 68.98% and 74.91% for the detection of MAs and HAs respectively. Moreover, the system yields sensitivity and specificity values of 89.47% and 95.65% for the classification of DR versus normal.
  12. Redmond DP, Chiew YS, Major V, Chase JG
    Comput Methods Programs Biomed, 2019 Apr;171:67-79.
    PMID: 27697371 DOI: 10.1016/j.cmpb.2016.09.011
    Monitoring of respiratory mechanics is required for guiding patient-specific mechanical ventilation settings in critical care. Many models of respiratory mechanics perform poorly in the presence of variable patient effort. Typical modelling approaches either attempt to mitigate the effect of the patient effort on the airway pressure waveforms, or attempt to capture the size and shape of the patient effort. This work analyses a range of methods to identify respiratory mechanics in volume-controlled ventilation modes when there is patient effort. The models are compared using four datasets, each with a sample of 30 breaths before, and 2-3 minutes after, sedation has been administered. The sedation reduces patient efforts, but the underlying pulmonary mechanical properties are unlikely to change during this short time. Model-identified parameters from breathing cycles with patient effort are compared to breathing cycles without patient effort. All models have advantages and disadvantages, so model selection may be specific to the respiratory mechanics application. In general, however, the combined method of iterative interpolative pressure reconstruction and stacking of multiple consecutive breaths has the best performance across the datasets. The variability of identified elastance when there is patient effort is lowest with this method, and there is little systematic offset in identified mechanics when sedation is administered.
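
    For context, the sketch below identifies elastance and resistance by least squares under the standard single-compartment model Paw = E*V + R*Q + P0, a common baseline in this literature; the paper's pressure-reconstruction and breath-stacking steps are not reproduced here.

        # Sketch: single-compartment respiratory mechanics by least squares
        import numpy as np

        def identify_mechanics(paw, flow, dt):
            # least-squares fit of E, R, P0 from airway pressure and flow
            volume = np.cumsum(flow) * dt
            A = np.column_stack([volume, flow, np.ones_like(flow)])
            (E, R, P0), *_ = np.linalg.lstsq(A, paw, rcond=None)
            return E, R, P0

        # synthetic volume-controlled breath: square-wave flow, known E=25, R=10
        dt = 0.01
        flow = np.r_[np.full(100, 0.5), np.zeros(100)]      # L/s
        vol = np.cumsum(flow) * dt
        paw = 25 * vol + 10 * flow + 5
        print(identify_mechanics(paw, flow, dt))            # ~ (25, 10, 5)
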
  13. Pang T, Wong JHD, Ng WL, Chan CS
    Comput Methods Programs Biomed, 2021 May;203:106018.
    PMID: 33714900 DOI: 10.1016/j.cmpb.2021.106018
    BACKGROUND AND OBJECTIVE: The capability of deep learning radiomics (DLR) to extract high-level medical imaging features has promoted the use of computer-aided diagnosis of breast masses detected on ultrasound. Recently, generative adversarial networks (GANs) have helped tackle a general issue in DLR, i.e., obtaining a sufficient number of medical images. However, GAN methods require a pair of input and labeled images, which requires an exhaustive human annotation process that is very time-consuming. The aim of this paper is to develop a radiomics model based on a semi-supervised GAN method to perform data augmentation in breast ultrasound images.

    METHODS: A total of 1447 ultrasound images, including 767 benign masses and 680 malignant masses, were acquired from a tertiary hospital. A semi-supervised GAN model was developed to augment the breast ultrasound images. The synthesized images were subsequently used to classify breast masses using a convolutional neural network (CNN). The model was validated using a 5-fold cross-validation method.
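
    A minimal sketch of the semi-supervised GAN idea, model definitions only: the discriminator predicts the two real classes plus a "fake" class, so unlabeled real images still supervise the real-versus-fake split while labeled images also supervise benign/malignant. All architecture sizes are assumptions, and the adversarial training loop is omitted.

        # Sketch: semi-supervised GAN components for ultrasound patches
        from tensorflow.keras import layers, models

        latent_dim, n_classes = 100, 2       # benign vs. malignant

        generator = models.Sequential([
            layers.Input(shape=(latent_dim,)),
            layers.Dense(8 * 8 * 128), layers.LeakyReLU(0.2),
            layers.Reshape((8, 8, 128)),
            layers.Conv2DTranspose(64, 4, strides=2, padding="same"),
            layers.LeakyReLU(0.2),
            layers.Conv2DTranspose(32, 4, strides=2, padding="same"),
            layers.LeakyReLU(0.2),
            layers.Conv2DTranspose(1, 4, strides=2, padding="same",
                                   activation="tanh"),
        ])  # emits a 64x64 synthetic grayscale image from noise

        discriminator = models.Sequential([
            layers.Input(shape=(64, 64, 1)),
            layers.Conv2D(32, 4, strides=2, padding="same"), layers.LeakyReLU(0.2),
            layers.Conv2D(64, 4, strides=2, padding="same"), layers.LeakyReLU(0.2),
            layers.Flatten(),
            layers.Dense(n_classes + 1, activation="softmax"),  # + "fake" class
        ])
        discriminator.compile(optimizer="adam",
                              loss="sparse_categorical_crossentropy")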

    RESULTS: The proposed GAN architecture generated high-quality breast ultrasound images, verified by two experienced radiologists. The improved performance of semi-supervised learning increased the quality of the synthetic data produced in comparison to the baseline method. We achieved more accurate breast mass classification results (accuracy 90.41%, sensitivity 87.94%, specificity 85.86%) with our synthetic data augmentation compared to other state-of-the-art methods.

    CONCLUSION: The proposed radiomics model has demonstrated a promising potential to synthesize and classify breast masses on ultrasound in a semi-supervised manner.

  14. Palaniappan R, Sundaraj K, Sundaraj S
    Comput Methods Programs Biomed, 2017 Jul;145:67-72.
    PMID: 28552127 DOI: 10.1016/j.cmpb.2017.04.013
    BACKGROUND: The monitoring of the respiratory rate is vital in several medical conditions, including sleep apnea because patients with sleep apnea exhibit an irregular respiratory rate compared with controls. Therefore, monitoring the respiratory rate by detecting the different breath phases is crucial.

    OBJECTIVES: This study aimed to segment the breath cycles from pulmonary acoustic signals using the newly developed adaptive neuro-fuzzy inference system (ANFIS) based on breath phase detection and to subsequently evaluate the performance of the system.

    METHODS: The normalised averaged power spectral density for each segment was fuzzified, and a set of fuzzy rules was formulated. The ANFIS was developed to detect the breath phases and subsequently perform breath cycle segmentation. To evaluate the performance of the proposed method, the root mean square error (RMSE) and correlation coefficient values were calculated and analysed, and the proposed method was then validated using data collected at KIMS Hospital and the RALE standard dataset.
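
    As a toy illustration of the fuzzification-plus-rules idea (not the paper's trained ANFIS: the Gaussian memberships, the two rules, and the 0.5 cutoff below are all invented for the sketch):

        # Sketch: fuzzified segment power -> breath-phase label
        import numpy as np

        def gauss(x, c, s):
            return np.exp(-0.5 * ((x - c) / s) ** 2)

        def phase_of(psd_norm):
            # psd_norm: normalised averaged PSD of one segment, in [0, 1]
            low = gauss(psd_norm, 0.2, 0.15)
            high = gauss(psd_norm, 0.8, 0.15)
            # Sugeno-style rule base: low power -> pause, high power -> breath
            w = np.array([low, high])
            z = np.array([0.0, 1.0])          # consequents: 0 pause, 1 breath
            return float(w @ z / w.sum())     # weighted-average defuzzification

        segments = [0.1, 0.3, 0.9, 0.85, 0.2]
        labels = ["breath" if phase_of(s) > 0.5 else "pause" for s in segments]
        print(labels)  # boundaries between labels delimit the breath cycles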

    RESULTS: The analysis of the correlation coefficient of the neuro-fuzzy model, which was performed to evaluate its performance, revealed a correlation strength of r = 0.9925, and the RMSE for the neuro-fuzzy model was found to equal 0.0069.

    CONCLUSION: The proposed neuro-fuzzy model performs better than the fuzzy inference system (FIS) in detecting the breath phases and segmenting the breath cycles, and requires fewer rules than the FIS.

  15. Othman NA, Azhar MAAS, Damanhuri NS, Mahadi IA, Abbas MH, Shamsuddin SA, et al.
    Comput Methods Programs Biomed, 2023 Jun;236:107566.
    PMID: 37186981 DOI: 10.1016/j.cmpb.2023.107566
    BACKGROUND AND OBJECTIVE: The identification of insulinaemic pharmacokinetic parameters using the least-squares criterion approach is easily influenced by outlying data due to its sensitivity. Furthermore, the least-squares criterion has a tendency to overfit and produce incorrect results. Hence, this research proposes an alternative approach using an artificial neural network (ANN) with two hidden layers to optimize the identification of insulinaemic pharmacokinetic parameters. The ANN is selected for its ability to avoid overfitting parameters and for its faster speed in processing data.

    METHODS: Eighteen volunteer participants were recruited from the Canterbury and Otago regions of New Zealand to take part in a Dynamic Insulin Sensitivity and Secretion Test (DISST) clinical trial. A total of 46 DISST datasets were collected; however, 4 had to be removed due to ambiguity and inconsistency. Analysis was done using MATLAB 2020a.
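
    A minimal sketch of a two-hidden-layer ANN regressor in this role, on stand-in data; the layer sizes, features, and target below are illustrative assumptions, not the study's configuration.

        # Sketch: two-hidden-layer ANN for parameter identification
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(42, 6))        # 42 records, stand-in features
        y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=42)  # stand-in target

        ann = MLPRegressor(hidden_layer_sizes=(16, 8), activation="relu",
                           max_iter=5000, random_state=0).fit(X, y)
        residual = np.mean(np.abs(ann.predict(X) - y)) / np.abs(y).mean()
        print(f"relative residual error: {residual:.3%}")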

    RESULTS AND DISCUSSION: Results show that, with the 42 gathered datasets, the ANN generates higher gains, ∅P = 20.73 [12.21, 28.57] mU·L·mmol⁻¹·min⁻¹ and ∅D = 60.42 [26.85, 131.38] mU·L·mmol⁻¹, compared with the linear least-squares method, ∅P = 19.67 [11.81, 28.02] mU·L·mmol⁻¹·min⁻¹ and ∅D = 46.21 [7.25, 116.71] mU·L·mmol⁻¹. The average insulin sensitivity (SI) of the ANN is lower, at SI = 16 × 10⁻⁴ L·mU⁻¹·min⁻¹, than that of the linear least squares, SI = 17 × 10⁻⁴ L·mU⁻¹·min⁻¹.

    CONCLUSION: Although the ANN analysis provided a lower SI value, the results were more dependable than those of the linear least-squares model because the ANN approach yielded better model-fitting accuracy, with a residual error of less than 5%. With this architecture, the ANN produces minimal error during the optimization process, particularly when dealing with outlying data. The findings may provide extra information to clinicians, allowing them to gain a better understanding of the heterogeneous aetiology of diabetes and of therapeutic intervention options.

  16. Omam S, Babini MH, Sim S, Tee R, Nathan V, Namazi H
    Comput Methods Programs Biomed, 2020 Feb;184:105293.
    PMID: 31887618 DOI: 10.1016/j.cmpb.2019.105293
    BACKGROUND AND OBJECTIVE: The human body is covered with skin, which reacts to changes in the surroundings; for instance, when the surrounding temperature changes, human skin reacts accordingly. It is known that the activity of skin is regulated by the human brain. In this research, for the first time, we investigate the relation between the activities of human skin and brain through mathematical analysis of Galvanic Skin Response (GSR) and Electroencephalography (EEG) signals.

    METHOD: For this purpose, we employ fractal theory and analyze the variations of the fractal dimension of GSR and EEG signals while subjects are exposed to different olfactory stimuli in the form of pleasant odors.
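
    A minimal sketch of one common fractal-dimension estimator for 1-D signals, Higuchi's method; the abstract does not pin down the exact estimator, so treating GSR/EEG complexity as Higuchi FD here is an assumption.

        # Sketch: Higuchi fractal dimension of a 1-D signal
        import numpy as np

        def higuchi_fd(x, kmax=10):
            x = np.asarray(x, float)
            n = len(x)
            lengths = []
            for k in range(1, kmax + 1):
                lk = []
                for m in range(k):
                    idx = np.arange(m, n, k)       # subsampled curve
                    lm = (np.sum(np.abs(np.diff(x[idx])))
                          * (n - 1) / ((len(idx) - 1) * k))
                    lk.append(lm / k)
                lengths.append(np.mean(lk))
            k = np.arange(1, kmax + 1)
            # FD is the slope of log L(k) against log(1/k)
            slope, _ = np.polyfit(np.log(1.0 / k), np.log(lengths), 1)
            return slope

        rng = np.random.default_rng(0)
        print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 1000))))  # ~1 (smooth)
        print(higuchi_fd(rng.normal(size=1000)))                    # ~2 (noise)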

    RESULTS: Based on the obtained results, the complexity of the GSR signal changes with the complexity of the EEG signal across stimuli: as the molecular complexity of the olfactory stimulus increases, the complexity of the EEG and GSR signals increases. Statistical analysis showed a significant effect of stimulation on the variations in complexity of the GSR signal. In addition, based on effect size analysis, the fourth odor, which had the greatest molecular complexity, had the greatest effect on the variations in complexity of the EEG and GSR signals.

    CONCLUSION: It can therefore be said that human skin reaction changes with variations in the activity of the human brain. The analysis in this research can further be used to build a model between the activities of human skin and brain that would enable prediction of skin reaction to different stimuli.

  17. Ninomiya K, Arimura H, Tanaka K, Chan WY, Kabata Y, Mizuno S, et al.
    Comput Methods Programs Biomed, 2023 Jun;236:107544.
    PMID: 37148668 DOI: 10.1016/j.cmpb.2023.107544
    OBJECTIVES: To elucidate a novel radiogenomics approach using three-dimensional (3D) topologically invariant Betti numbers (BNs) for topological characterization of epidermal growth factor receptor (EGFR) Del19 and L858R mutation subtypes.

    METHODS: In total, 154 patients (wild-type EGFR, 72 patients; Del19 mutation, 45 patients; and L858R mutation, 37 patients) were retrospectively enrolled and randomly divided into 92 training and 62 test cases. Two support vector machine (SVM) models were trained using 3DBN features: one to distinguish between wild-type and mutant EGFR (mutation [M] classification) and one to distinguish between the Del19 and L858R subtypes (subtype [S] classification). These features were computed from 3DBN maps by using histogram and texture analyses. The 3DBN maps were generated from computed tomography (CT) images based on the Čech complex constructed on sets of points in the images, defined by the coordinates of voxels with CT values higher than several threshold values. The M classification model was built using image features and the demographic parameters of sex and smoking status. The SVM models were evaluated by determining their classification accuracies. The feasibility of the 3DBN model was compared with that of conventional radiomic models based on pseudo-3D BN (p3DBN), two-dimensional BN (2DBN), and CT and wavelet-decomposition (WD) images. Validation of the models was repeated with 100 rounds of random sampling.
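
    A minimal sketch of one ingredient of this pipeline: counting the zeroth Betti number (connected components) of a thresholded volume across a sweep of thresholds. Full Čech-complex construction and the higher Betti numbers B1/B2 would need a topological data analysis library (e.g. gudhi) and are not shown.

        # Sketch: B0 (connected-component count) over a sweep of CT thresholds
        import numpy as np
        from scipy import ndimage

        def betti0_curve(volume, thresholds):
            # volume: 3-D array of CT values; returns B0 at each threshold
            curve = []
            for t in thresholds:
                _, n_components = ndimage.label(volume > t)
                curve.append(n_components)
            return np.array(curve)

        rng = np.random.default_rng(0)
        vol = ndimage.gaussian_filter(rng.normal(size=(32, 32, 32)), sigma=2)
        print(betti0_curve(vol, np.linspace(vol.min(), vol.max(), 8)))
        # histogram/texture features of such maps feed the SVM classifiers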

    RESULTS: The mean test accuracies for M classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.810, 0.733, 0.838, 0.782, and 0.799, respectively. The mean test accuracies for S classification with 3DBN, p3DBN, 2DBN, CT, and WD images were 0.773, 0.694, 0.657, 0.581, and 0.696, respectively.

    CONCLUSION: 3DBN features, which showed a radiogenomic association with the characteristics of the EGFR Del19/L858R mutation subtypes, yielded higher accuracy for subtype classifications in comparison with conventional features.

  18. Mohd Faizal AS, Thevarajah TM, Khor SM, Chang SW
    Comput Methods Programs Biomed, 2021 Aug;207:106190.
    PMID: 34077865 DOI: 10.1016/j.cmpb.2021.106190
    Cardiovascular disease (CVD) is the leading cause of death worldwide and is a global health issue. Traditionally, statistical models are commonly used in the risk prediction and assessment of CVD. However, the adoption of artificial intelligence (AI) approaches is rapidly taking hold in the current era of technology for evaluating patient risks and predicting CVD outcomes. In this review, we outline various conventional risk scores and prediction models and compare them with the AI approach. The strengths and limitations of both conventional and AI approaches are discussed. In addition, biomarker discovery related to CVD is elucidated, as biomarkers can be used in risk stratification as well as early detection of the disease. Moreover, problems and challenges involved in current CVD studies are explored. Lastly, future prospects of CVD risk prediction and assessment using multi-modal, big-data integrative approaches are proposed.
  19. Mohammed KI, Zaidan AA, Zaidan BB, Albahri OS, Albahri AS, Alsalem MA, et al.
    Comput Methods Programs Biomed, 2020 Mar;185:105151.
    PMID: 31710981 DOI: 10.1016/j.cmpb.2019.105151
    CONTEXT: Telemedicine has been increasingly used in healthcare to provide services to patients remotely. However, prioritising patients with multiple chronic diseases (MCDs) in telemedicine environment is challenging because it includes decision-making (DM) with regard to the emergency degree of each chronic disease for every patient.

    OBJECTIVE: This paper proposes a novel technique for reorganisation of opinion order to interval levels (TROOIL) to prioritise the patients with MCDs in real-time remote health-monitoring system.

    METHODS: The proposed TROOIL technique comprises six steps for prioritisation of patients with MCDs: (1) conversion of actual data into intervals; (2) rule generation; (3) rule ordering; (4) expert rule validation; (5) data reorganisation; and (6) criteria weighting and ranking alternatives within each rule. The secondary dataset of 500 patients from the most relevant study in a remote prioritisation area was adopted. The dataset contains three diseases, namely, chronic heart disease, high blood pressure (BP) and low BP.

    RESULTS: The proposed TROOIL is an effective technique for prioritising patients with MCDs. In the objective validation, remarkable differences were recognised among the groups' scores, indicating identical ranking results. In the evaluation of issues within all scenarios, the proposed framework has an advantage of 22.95% over the benchmark framework.

    DISCUSSION: Patients with the most severe MCD were treated first on the basis of their highest priority levels. The treatment for patients with less severe cases was delayed more than that for other patients.

    CONCLUSIONS: The proposed TROOIL technique can deal with multiple DM problems in prioritisation of patients with MCDs.

  20. Mirza IA, Abdulhameed M, Vieru D, Shafie S
    Comput Methods Programs Biomed, 2016 Dec;137:149-166.
    PMID: 28110721 DOI: 10.1016/j.cmpb.2016.09.014
    Therapies with magnetic/electromagnetic fields are employed to relieve pain or to accelerate the flow of blood particles, particularly during surgery. In this paper, a theoretical study of blood flow with suspended particles through a capillary was made using the electro-magneto-hydrodynamic approach. Analytical solutions for the non-dimensional blood velocity and non-dimensional particle velocity are obtained by means of the Laplace transform with respect to the time variable and the finite Hankel transform with respect to the radial coordinate. The study of thermal transfer characteristics is based on the energy equation for two-phase thermal transport of blood and particle suspension with viscous dissipation, volumetric heat generation due to the Joule heating effect, and the electromagnetic couple effect. The solution of the nonlinear heat transfer problem is derived using the velocity field and the integral transform method. The influence of dimensionless system parameters such as the electrokinetic width, the Hartmann number, the Prandtl number, the coefficient of heat generation due to Joule heating, and the Eckert number on the velocity and temperature fields was studied using the Mathcad software. Results are presented by graphical illustrations.
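
    As a worked equation for the transform step above, the order-zero finite Hankel transform pair on 0 ≤ r ≤ R is written below in LaTeX; the order and the no-slip wall condition J_0(k_nR) = 0 are assumptions, since the abstract does not state them.

        \[
        \tilde{f}(k_n) = \int_0^R r\, f(r)\, J_0(k_n r)\, \mathrm{d}r,
        \qquad J_0(k_n R) = 0,
        \]
        \[
        f(r) = \frac{2}{R^2} \sum_{n=1}^{\infty}
               \frac{J_0(k_n r)}{\left[ J_1(k_n R) \right]^2}\, \tilde{f}(k_n).
        \]

    Under this pair, the radial Laplacian in the momentum equation maps to a factor of -k_n^2 on the transform, which is what reduces the coupled flow equations to ordinary problems in time that the Laplace transform can then handle.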