  1. Redmond DP, Chiew YS, Major V, Chase JG
    Comput Methods Programs Biomed, 2019 Apr;171:67-79.
    PMID: 27697371 DOI: 10.1016/j.cmpb.2016.09.011
    Monitoring of respiratory mechanics is required for guiding patient-specific mechanical ventilation settings in critical care. Many models of respiratory mechanics perform poorly in the presence of variable patient effort. Typical modelling approaches either attempt to mitigate the effect of the patient effort on the airway pressure waveforms, or attempt to capture the size and shape of the patient effort. This work analyses a range of methods to identify respiratory mechanics in volume-controlled ventilation modes when there is patient effort. The models are compared using four datasets, each with a sample of 30 breaths before, and 2-3 minutes after, sedation was administered. The sedation reduces patient effort, but the underlying pulmonary mechanical properties are unlikely to change during this short time. Model-identified parameters from breathing cycles with patient effort are compared to breathing cycles without patient effort. All models have advantages and disadvantages, so model selection may be specific to the respiratory mechanics application. However, in general, the combined method of iterative interpolative pressure reconstruction and stacking multiple consecutive breaths together has the best performance over the datasets. The variability of identified elastance when there is patient effort is lowest with this method, and there is little systematic offset in identified mechanics when sedation is administered.
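
    Identification methods of this kind commonly build on the single-compartment model of respiratory mechanics, which relates airway pressure to elastance and resistance. As a minimal sketch (not the paper's method), assuming synthetic flow and pressure arrays, elastance and resistance can be identified breath-by-breath with ordinary least squares:

        import numpy as np

        def identify_mechanics(p_aw, flow, volume):
            """Least-squares fit of the single-compartment model
            P_aw(t) = E*V(t) + R*Q(t) + P0 (E: elastance, R: resistance)."""
            A = np.column_stack([volume, flow, np.ones_like(volume)])
            (E, R, P0), *_ = np.linalg.lstsq(A, p_aw, rcond=None)
            return E, R, P0

        # hypothetical breath sampled at 50 Hz
        t = np.linspace(0, 2, 100)
        flow = np.where(t < 1, 0.5, -0.5)           # L/s, square-wave flow
        volume = np.cumsum(flow) * (t[1] - t[0])    # L, integrated flow
        p_aw = 20*volume + 10*flow + 5              # cmH2O, synthetic pressure
        print(identify_mechanics(p_aw, flow, volume))  # ~ (20, 10, 5)
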
  2. Shindi O, Kanesan J, Kendall G, Ramanathan A
    Comput Methods Programs Biomed, 2020 Jun;189:105327.
    PMID: 31978808 DOI: 10.1016/j.cmpb.2020.105327
    BACKGROUND AND OBJECTIVES: In cancer therapy optimization, an optimal amount of drug is determined not only to reduce the tumor size but also to keep chemo-toxicity in the patient's body at a tolerable level. The increase in the number of objectives and constraints further burdens the optimization problem. The objective of the present work is to solve a Constrained Multi-Objective Optimization Problem (CMOOP) of cancer chemotherapy. This optimization yields an optimal drug schedule that minimizes both the tumor size and the drug concentration while keeping the patient's health at an acceptable level during dosing.

    METHODS: This paper presents two hybrid methodologies that combine optimal control theory with multi-objective swarm and evolutionary algorithms, and compares their performance with multi-objective swarm-intelligence algorithms such as MOEAD, MODE, MOPSO and M-MOPSO. The hybrid and conventional methodologies are compared by addressing the CMOOP.

    RESULTS: The minimized tumor and drug concentration results obtained by the hybrid methodologies demonstrate that they are not only superior to pure swarm-intelligence or evolutionary-algorithm methodologies but also consume far less computational time. Further, the Second-Order Sufficient Condition (SSC) is used to verify and validate the optimality of the solution to the constrained multi-objective problem.

    CONCLUSION: The proposed methodologies reduce chemo-medicine administration while maintaining effective tumor killing. This will help oncologists find the optimum chemotherapy dose schedule that reduces the tumor cells while maintaining the patient's health at a safe level.
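    In multi-objective settings like this one, candidate dose schedules are compared by Pareto dominance: a schedule is kept only if no other schedule is at least as good in every objective and strictly better in at least one. A minimal sketch of that filter (illustrative only; the objective values below are hypothetical):

        import numpy as np

        def pareto_front(F):
            """Return indices of non-dominated rows of F (objectives to minimize)."""
            front = []
            for j in range(F.shape[0]):
                better_eq = np.all(F <= F[j], axis=1)
                strictly = np.any(F < F[j], axis=1)
                if not (better_eq & strictly).any():
                    front.append(j)
            return front

        # hypothetical (tumor size, drug concentration) pairs for five schedules
        F = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [2.5, 2.5], [1.0, 3.5]])
        print(pareto_front(F))  # [0, 1, 2] - the trade-off schedules
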

  3. Farayola MF, Shafie S, Mohd Siam F, Khan I
    Comput Methods Programs Biomed, 2020 Apr;187:105202.
    PMID: 31835107 DOI: 10.1016/j.cmpb.2019.105202
    BACKGROUND: This paper presents a numerical simulation of normal and cancer cell population dynamics during radiotherapy. The model used for the simulation was an improved cancer treatment model with radiotherapy, which simulated the population changes during a fractionated treatment process. The results gave the final populations of the cells, which provided the final volumes of the tumor and normal cells.

    METHOD: The improved model was obtained by integrating the previous cancer treatment model with the Caputo fractional derivative. In addition, the cell population decay due to radiation was accounted for by coupling the linear-quadratic model into the improved model. The simulation of the treatment process used numerical variables, numerical parameters, and radiation parameters. The numerical variables comprised the cell populations and the treatment time. The numerical parameters were the model factors, which included the proliferation rates of the cells, the competition coefficients of the cells, and the perturbation constant for the normal cells; they were obtained from the previous literature. The radiation parameters were clinical data based on the treatment procedure and, together with the numerical variables, were obtained from reported data of four cancer patients treated with radiotherapy. The four patients had tumor volumes of 28.4 cm3, 18.8 cm3, 30.6 cm3, and 12.6 cm3 and were treated with different treatment plans at a fractionated dose of 1.8 Gy each. The initial cell populations were obtained from the tumor volumes. The computer simulations were done in MATLAB.

    RESULTS: The final tumor volumes from the simulations were 5.67 cm3, 4.36 cm3, 5.74 cm3, and 6.15 cm3, while the normal cell volumes were 28.17 cm3, 18.68 cm3, 30.34 cm3, and 12.54 cm3. The powers of the derivatives were 0.16774, 0.16557, 0.16835, and 0.16. A variance-based sensitivity analysis was done to corroborate the model with the clinical data; it showed that the most sensitive factors were the power of the derivative and the cancer cell proliferation rate.

    CONCLUSION: The model provided information concerning the status of treatments and can also predict the outcomes of other treatment plans.
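
    The linear-quadratic model coupled into the simulation gives the surviving fraction of cells after a radiation dose d as exp(-(αd + βd²)). A worked example with hypothetical radiosensitivity values (the paper's fitted parameters are not reproduced here):

        import numpy as np

        alpha, beta = 0.35, 0.035   # hypothetical Gy^-1 and Gy^-2 radiosensitivities
        d, n_fractions = 1.8, 30    # 1.8 Gy per fraction, as in the reported plans

        sf_single = np.exp(-(alpha*d + beta*d**2))   # survival per fraction
        sf_total = sf_single**n_fractions            # survival after the full course
        print(f"per-fraction survival: {sf_single:.4f}, "
              f"after {n_fractions} fractions: {sf_total:.2e}")
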
  4. Farayola MF, Shafie S, Siam FM, Khan I
    Comput Methods Programs Biomed, 2020 May;188:105306.
    PMID: 31901851 DOI: 10.1016/j.cmpb.2019.105306
    BACKGROUND: This paper presents a mathematical model that simulates a radiotherapy cancer treatment process. The model takes into consideration two important radiobiological factors, which are repair and repopulation of cells. The model was used to simulate the fractionated treatment process of six patients. The results gave the population changes in the cells and the final volumes of the normal and cancer cells.

    METHOD: The model was formulated by integrating the Caputo fractional derivative with the previous cancer treatment model. Thereafter, the linear-quadratic model with repopulation was coupled into the model to account for the cell population decay due to radiation. The treatment process was then simulated with numerical variables, numerical parameters, and radiation parameters. The numerical parameters, which included the proliferation coefficients of the cells, the competition coefficients of the cells, and the perturbation constant of the normal cells, were obtained from previous literature. The radiation and numerical parameters were obtained from reported clinical data of six patients treated with radiotherapy. The patients had tumor volumes of 24.1 cm3, 17.4 cm3, 28.4 cm3, 18.8 cm3, 30.6 cm3, and 12.6 cm3, with fractionated doses of 2 Gy for the first two patients and 1.8 Gy for the other four. The initial tumor volumes were used to obtain the initial populations of cells, after which the treatment process was simulated in MATLAB. Subsequently, a global sensitivity analysis was done to corroborate the model with clinical data. Finally, 96 radiation protocols were simulated using the biologically effective dose formula. These protocols were used to obtain a regression equation connecting the value of the Caputo fractional derivative with the fractionated dose.

    RESULTS: The final tumor volumes from the simulations were 3.58 cm3, 8.61 cm3, 5.68 cm3, 4.36 cm3, 5.75 cm3, and 6.12 cm3, while those of the normal cells were 23.87 cm3, 17.29 cm3, 28.17 cm3, 18.68 cm3, 30.33 cm3, and 12.55 cm3. The sensitivity analysis showed that the most sensitive model factors were the value of the Caputo fractional derivative and the proliferation coefficient of the cancer cells. Lastly, the obtained regression equation accounted for 99.14% of the variation in the predictions.

    CONCLUSION: The model can simulate a cancer treatment process and predict the results of other radiation protocols.
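    The 96 simulated protocols were compared via the biologically effective dose (BED), whose standard formula for n fractions of dose d is BED = n·d·(1 + d/(α/β)). A small sketch (the α/β value and fraction count are assumptions, not the paper's):

        def bed(n_fractions: int, dose_per_fraction: float,
                alpha_beta: float = 10.0) -> float:
            """Biologically effective dose for n fractions of d Gy (alpha/beta in Gy)."""
            return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

        # the two reported dose levels, assuming 30 fractions and alpha/beta = 10 Gy
        print(bed(30, 2.0))   # 72.0 Gy
        print(bed(30, 1.8))   # 63.72 Gy
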

  5. Liew WS, Tang TB, Lin CH, Lu CK
    Comput Methods Programs Biomed, 2021 Jul;206:106114.
    PMID: 33984661 DOI: 10.1016/j.cmpb.2021.106114
    BACKGROUND AND OBJECTIVE: The increased incidence of colorectal cancer (CRC) and its mortality rate have attracted interest in the use of artificial intelligence (AI) based computer-aided diagnosis (CAD) tools to detect polyps at an early stage. Although these CAD tools have thus far achieved good accuracy in detecting polyps, there is still room for improvement (e.g. in sensitivity). Therefore, a new CAD tool is developed in this study to detect colonic polyps accurately.

    METHODS: In this paper, we propose a novel approach to distinguish colonic polyps by integrating several techniques, including a modified deep residual network, principal component analysis and AdaBoost ensemble learning. The powerful ResNet-50 deep residual network architecture was altered to reduce the computational time. To keep interference to a minimum, median filtering, image thresholding, contrast enhancement, and normalisation were applied to the endoscopic images used to train the classification model. Three publicly available datasets, i.e., Kvasir, ETIS-LaribPolypDB, and CVC-ClinicDB, were merged to train the model, which included images with and without polyps.

    RESULTS: The proposed approach, trained with a combination of three datasets, achieved a Matthews Correlation Coefficient (MCC) of 0.9819, with accuracy, sensitivity, precision, and specificity of 99.10%, 98.82%, 99.37%, and 99.38%, respectively.

    CONCLUSIONS: These results show that our method can automatically and consistently classify endoscopic images and could be used to develop effective computer-aided diagnostic tools for early CRC detection.
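    A minimal sketch of the feature-reduction and ensemble stages named above, using scikit-learn on placeholder feature vectors (the CNN feature extractor, feature dimensions and component count are hypothetical; the paper's exact pipeline may differ):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2048))      # placeholder ResNet-style feature vectors
        y = rng.integers(0, 2, size=200)      # polyp / no-polyp labels

        clf = make_pipeline(PCA(n_components=50), AdaBoostClassifier(n_estimators=100))
        clf.fit(X, y)
        print(clf.score(X, y))                # training accuracy on the toy data
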

  6. Goh CH, Tan LK, Lovell NH, Ng SC, Tan MP, Lim E
    Comput Methods Programs Biomed, 2020 Nov;196:105596.
    PMID: 32580054 DOI: 10.1016/j.cmpb.2020.105596
    BACKGROUND AND OBJECTIVES: Continuous monitoring of physiological parameters such as photoplethysmography (PPG) has attracted increased interest due to advances in wearable sensors. However, PPG recordings are susceptible to various artifacts, which reduce the reliability of PPG-derived parameters such as oxygen saturation, heart rate, blood pressure and respiration. This paper proposes a one-dimensional convolutional neural network (1-D-CNN) to classify five-second PPG segments into clean or artifact-affected segments, avoiding data-dependent pulse segmentation techniques and heavy manual feature engineering.

    METHODS: Continuous raw PPG waveforms were blindly allocated into segments of equal length (5 s) without leveraging any pulse location information, and were normalized with Z-score normalization. A 1-D-CNN was designed to automatically learn the intrinsic features of the PPG waveform and perform the required classification. Several training hyperparameters (initial learning rate and gradient threshold) were varied to investigate their effect on the performance of the network. The proposed network was then trained and validated with 30 subjects, and tested with eight subjects, from our local dataset. Moreover, two independent datasets downloaded from the PhysioNet MIMIC II database were used to evaluate the robustness of the proposed network.

    RESULTS: A 13-layer 1-D-CNN model was designed. On our local study dataset, the proposed network achieved a testing accuracy of 94.9%. Classification of the two independent datasets also achieved satisfactory accuracies of 93.8% and 86.7%, respectively. Our model achieved performance comparable with most reported works, with the potential for good generalization, as the proposed network was evaluated on multiple cohorts (overall accuracy of 94.5%).

    CONCLUSION: This paper demonstrated the feasibility and effectiveness of applying blind signal processing and deep learning techniques to PPG motion artifact detection, whereby manual feature thresholding was avoided and yet a high generalization ability was achieved.
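    The blind segmentation step is straightforward to reproduce: the raw waveform is cut into fixed five-second windows and each window is Z-score normalized independently. A sketch assuming a hypothetical sampling rate and a placeholder recording:

        import numpy as np

        fs = 100                                           # Hz, hypothetical sampling rate
        ppg = np.random.default_rng(1).normal(size=60*fs)  # placeholder 60 s recording

        win = 5 * fs                                # 5-second windows
        n = len(ppg) // win
        segments = ppg[:n*win].reshape(n, win)      # blind, pulse-agnostic segmentation
        mu = segments.mean(axis=1, keepdims=True)
        sd = segments.std(axis=1, keepdims=True)
        segments = (segments - mu) / sd             # per-segment Z-score normalization
        print(segments.shape)                       # (12, 500)
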

  7. Abidemi A, Aziz NAB
    Comput Methods Programs Biomed, 2020 Nov;196:105585.
    PMID: 32554024 DOI: 10.1016/j.cmpb.2020.105585
    BACKGROUND: Dengue is a vector-borne viral disease endemic in Malaysia and presently a public health issue in the country. Hence, the use of a mathematical model to gain insights into the transmission dynamics and derive optimal control strategies for minimizing the spread of the disease is of great importance.

    METHODS: A model involving eight mutually exclusive compartments, incorporating personal protection, larvicide and adulticide control strategies, is presented to describe dengue fever transmission dynamics. The control-induced basic reproduction number (R̃0) related to the model is computed using the next generation matrix method. A comparison theorem is used to analyse the global dynamics of the model. The model is fitted to data from the 2012 dengue outbreak in Johor, Malaysia, using the least-squares method. In a bid to optimally curtail dengue fever propagation, we apply optimal control theory to investigate the effect of several strategies combining optimal personal protection, larvicide and adulticide controls on dengue fever dynamics. The resulting optimality system is simulated in MATLAB using a fourth-order Runge-Kutta scheme based on the forward-backward sweep method. In addition, cost-effectiveness analysis is performed to determine the most cost-effective strategy among the various control strategies analysed.

    RESULTS: Analysis of the model with control parameters shows that the model has two disease-free equilibria, namely a trivial equilibrium and a biologically realistic disease-free equilibrium, and one endemic equilibrium point. It also reveals that the biologically realistic disease-free equilibrium is both locally and globally asymptotically stable whenever the inequality R̃0 < 1 holds. In the case of the model with time-dependent control functions, the optimality levels of the three control functions required to optimally control dengue disease transmission are derived.

    CONCLUSION: We conclude that dengue fever transmission can be curtailed by adopting any of the several control strategies analysed in this study. Furthermore, a strategy which combines personal protection and adulticide controls is found to be the most cost-effective.
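
    The forward-backward sweep method named above alternates a forward pass of the state equation with a backward pass of the adjoint equation, updating the control from the optimality condition until convergence. A minimal sketch on a scalar linear-quadratic toy problem (not the dengue model itself), using forward/backward Euler for brevity where the paper uses fourth-order Runge-Kutta:

        import numpy as np

        # toy problem: minimize J = int_0^T (x^2 + c*u^2) dt
        # subject to x' = a*x + b*u, x(0) = x0
        a, b, c, x0, T, N = 1.0, 1.0, 0.5, 1.0, 1.0, 1000
        dt = T / N

        u = np.zeros(N + 1)
        for _ in range(100):
            x = np.empty(N + 1); x[0] = x0
            for k in range(N):                        # forward sweep (state)
                x[k+1] = x[k] + dt * (a*x[k] + b*u[k])
            lam = np.zeros(N + 1)                     # adjoint, lam(T) = 0
            for k in range(N, 0, -1):                 # backward sweep (adjoint)
                lam[k-1] = lam[k] + dt * (2*x[k] + a*lam[k])
            u_new = -b * lam / (2*c)                  # optimality: dH/du = 0
            if np.max(np.abs(u_new - u)) < 1e-8:
                break
            u = 0.5*u + 0.5*u_new                     # relaxed control update
        print(f"converged control at t=0: {u[0]:.4f}")
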
  8. Alsaih K, Yusoff MZ, Tang TB, Faye I, Mériaudeau F
    Comput Methods Programs Biomed, 2020 Oct;195:105566.
    PMID: 32504911 DOI: 10.1016/j.cmpb.2020.105566
    BACKGROUND AND OBJECTIVES: In developed countries, older people are more likely to be diagnosed with retinal diseases. Leakage from retinal capillaries swells the retina and causes acute vision loss, a condition called age-related macular degeneration (AMD). The disease cannot be adequately diagnosed solely with fundus images, as depth information is not available. Variations in retinal volume assist in monitoring ophthalmological abnormalities. Therefore, high-fidelity AMD segmentation in the optical coherence tomography (OCT) imaging modality has attracted the attention of researchers as well as medical doctors. Many methods, encompassing machine learning approaches and convolutional neural network (CNN) strategies, have been proposed over the years for object detection and image segmentation.

    METHODS: In this paper, we analyze four widespread deep learning models designed for segmentation of three retinal fluids, outputting dense predictions on the RETOUCH challenge data. We aim to demonstrate how a patch-based approach can push the performance of each method. We also evaluate the methods on the OPTIMA challenge dataset to assess how well the networks generalize. The analysis is divided into two sections: the comparison between the four approaches, and the significance of patching the images.

    RESULTS: The performance of networks trained on the RETOUCH dataset is higher than human performance. The analysis further assessed the generalization of the best network by fine-tuning it, achieving a mean Dice similarity coefficient (DSC) of 0.85. Of the three types of fluid, intraretinal fluid (IRF) is the best recognized, with the highest DSC value of 0.922 achieved on the Spectralis dataset. Additionally, the highest average DSC score is 0.84, achieved by the PaDeeplabv3+ model on the Cirrus dataset.

    CONCLUSIONS: The proposed method segments the three fluids in the retina with high DSC values. Fine-tuning the networks trained on the RETOUCH dataset makes them perform better and train faster than training from scratch. Enriching the networks with a variety of input shapes by extracting patches helped segment the fluids better than using full images.
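    The Dice similarity coefficient used throughout these results measures the overlap between a predicted and a reference segmentation mask as 2|A∩B| / (|A| + |B|). A minimal implementation:

        import numpy as np

        def dice(pred: np.ndarray, ref: np.ndarray) -> float:
            """Dice similarity coefficient between two boolean masks."""
            pred, ref = pred.astype(bool), ref.astype(bool)
            denom = pred.sum() + ref.sum()
            if denom == 0:
                return 1.0  # both masks empty: define as perfect agreement
            return 2.0 * np.logical_and(pred, ref).sum() / denom

        # toy 2x3 masks differing in one pixel
        p = np.array([[1, 1, 0], [0, 0, 0]])
        r = np.array([[1, 0, 0], [0, 0, 0]])
        print(dice(p, r))  # 2*1/(2+1) = 0.667
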

  9. Faizal WM, Ghazali NNN, Khor CY, Badruddin IA, Zainon MZ, Yazid AA, et al.
    Comput Methods Programs Biomed, 2020 Nov;196:105627.
    PMID: 32629222 DOI: 10.1016/j.cmpb.2020.105627
    BACKGROUND AND OBJECTIVE: The human upper airway (HUA) has been widely investigated by many researchers covering various aspects, such as the effects of geometrical parameters on pressure, velocity and airflow characteristics. Clinically significant obstruction can develop anywhere throughout the upper airway, leading to asphyxia and death; this is where recognition and treatment are essential and lifesaving. The availability of advanced computing hardware and software and rapid developments in numerical methods have encouraged researchers to simulate the airflow characteristics and properties of the HUA across various patient conditions and ranges of geometry and operating conditions. Computational fluid dynamics (CFD) has emerged as an efficient alternative tool for understanding the airflow of the HUA and for preparing patients to undergo surgery. The main objective of this article is to review the literature that deals with the CFD approach and modeling in analyzing the HUA.

    METHODS: This review article discusses the experimental and computational methods used in the study of the HUA. The discussion includes the computational fluid dynamics approach and the modeling steps involved in investigating the flow characteristics of the HUA. From inception to May 2020, the PubMed, Embase, Scopus, Cochrane Library, BioMed Central, and Web of Science databases were searched to conduct a thorough investigation of the literature. No restrictions on publication language or study design were applied in the database searches. A total of 117 articles relevant to the topic under investigation were thoroughly and critically reviewed to give clear information about the subject. The article summarizes the review in terms of the methods used to study the HUA, the CFD approach in the HUA, and the application of CFD for predicting HUA obstruction, including the commercial CFD software used in this research area.

    RESULTS: This review found that the human upper airway has been well studied through the application of computational fluid dynamics, which has considerably enhanced the understanding of flow in the HUA. In addition, CFD has assisted in making strategic and reasonable decisions regarding the adoption of treatment methods in clinical settings. The literature suggests that most studies of HUA simulation have focused on aspects of fluid dynamics. However, there is a gap in the literature concerning the effects of fluid-structure interaction (FSI): the application of FSI to the HUA is still limited, and this could be a potential area for future research. Furthermore, the majority of researchers present their findings through airflow mechanics such as velocity, pressure, and shear stress, using the Navier-Stokes equations via CFD to visualize the actual mechanism of the airflow. Turbulent kinetic energy (TKE) is reported to demonstrate the real mechanism of the airflow, and key results such as wall shear stress (WSS) can be revealed via TKE and turbulent energy dissipation (TED), which can be suggestive of wall injury and tissue collapsibility in the HUA.

  10. Jamil DF, Saleem S, Roslan R, Al-Mubaddel FS, Rahimi-Gorji M, Issakhov A, et al.
    Comput Methods Programs Biomed, 2021 May;203:106044.
    PMID: 33756187 DOI: 10.1016/j.cmpb.2021.106044
    BACKGROUND AND OBJECTIVE: Arterial diseases can lead to several serious disorders of the cardiovascular system, such as atherosclerosis. These disorders are mainly caused by the presence of fatty deposits, cholesterol and lipoproteins inside blood vessels. This paper deals with the analysis of non-Newtonian magnetic blood flow in an inclined stenosed artery.

    METHODS: The Casson fluid model was used for the blood, which flows under the influence of a uniformly distributed magnetic field and an oscillating pressure gradient. The governing fractional differential equations were expressed using the Caputo-Fabrizio fractional derivative without singular kernel.

    RESULTS: The analytical solutions for the velocities of the non-Newtonian model were then calculated by means of Laplace and finite Hankel transforms and presented graphically. The results show that the velocity increases with the Reynolds number and the Casson parameter, and decreases as the Hartmann number increases.

    CONCLUSIONS: Blood was treated as a non-Newtonian Casson fluid, and the MHD blood flow was accelerated by the pressure gradient. These findings are beneficial for studying atherosclerosis therapy and for the diagnosis and therapeutic treatment of some medical problems.
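    For reference, the Caputo-Fabrizio fractional derivative of order 0 < α < 1 used here replaces the singular power-law kernel of the classical Caputo derivative with an exponential one; its standard definition, with normalization function M(α), is:

        {}^{CF}D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha}
            \int_0^t f'(s)\, \exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right) ds
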

  11. Pang T, Wong JHD, Ng WL, Chan CS
    Comput Methods Programs Biomed, 2021 May;203:106018.
    PMID: 33714900 DOI: 10.1016/j.cmpb.2021.106018
    BACKGROUND AND OBJECTIVE: The capability of deep learning radiomics (DLR) to extract high-level medical imaging features has promoted the use of computer-aided diagnosis of breast masses detected on ultrasound. Recently, generative adversarial networks (GANs) have helped tackle a general issue in DLR, i.e., obtaining a sufficient number of medical images. However, GAN methods require paired input and labeled images, which demands an exhaustive and very time-consuming human annotation process. The aim of this paper is to develop a radiomics model based on a semi-supervised GAN method to perform data augmentation in breast ultrasound images.

    METHODS: A total of 1447 ultrasound images, including 767 benign and 680 malignant masses, were acquired from a tertiary hospital. A semi-supervised GAN model was developed to augment the breast ultrasound images. The synthesized images were subsequently used to classify breast masses using a convolutional neural network (CNN). The model was validated using 5-fold cross-validation.

    RESULTS: The proposed GAN architecture generated high-quality breast ultrasound images, as verified by two experienced radiologists. The improved performance of semi-supervised learning increased the quality of the synthetic data compared to the baseline method. We achieved more accurate breast mass classification (accuracy 90.41%, sensitivity 87.94%, specificity 85.86%) with our synthetic data augmentation than with other state-of-the-art methods.

    CONCLUSION: The proposed radiomics model has demonstrated a promising potential to synthesize and classify breast masses on ultrasound in a semi-supervised manner.
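    The reported accuracy, sensitivity and specificity follow directly from the confusion matrix of the cross-validated predictions. A minimal sketch with placeholder labels (not the study's data):

        import numpy as np
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(2)
        y_true = rng.integers(0, 2, size=200)                         # 0 = benign, 1 = malignant
        y_pred = np.where(rng.random(200) < 0.9, y_true, 1 - y_true)  # noisy predictions

        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        print(f"accuracy:    {(tp + tn) / (tp + tn + fp + fn):.3f}")
        print(f"sensitivity: {tp / (tp + fn):.3f}")     # true-positive rate
        print(f"specificity: {tn / (tn + fp):.3f}")     # true-negative rate
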

  12. Mohd Faizal AS, Thevarajah TM, Khor SM, Chang SW
    Comput Methods Programs Biomed, 2021 Aug;207:106190.
    PMID: 34077865 DOI: 10.1016/j.cmpb.2021.106190
    Cardiovascular disease (CVD) is the leading cause of death worldwide and a global health issue. Traditionally, statistical models have been commonly used for CVD risk prediction and assessment. However, artificial intelligence (AI) approaches are rapidly taking hold in the current era of technology for evaluating patient risk and predicting CVD outcomes. In this review, we outline various conventional risk scores and prediction models and compare them with AI approaches, discussing the strengths and limitations of both. Biomarker discovery related to CVD is also elucidated, as biomarkers can be used for risk stratification as well as early detection of the disease. Moreover, problems and challenges in current CVD studies are explored. Lastly, future prospects for CVD risk prediction and assessment using multi-modal big-data integrative approaches are proposed.
  13. Abdul-Kadir NA, Mat Safri N, Othman MA
    Comput Methods Programs Biomed, 2016 Nov;136:143-50.
    PMID: 27686711 DOI: 10.1016/j.cmpb.2016.08.021
    BACKGROUND: Atrial fibrillation (AF) can cause the formation of blood clots in the heart. The clots may move to the brain and cause a stroke. Therefore, this study analyzed the ECG features of AF and normal sinus rhythm signals for AF recognition, with features extracted using a second-order dynamic system (SODS) concept.
    OBJECTIVE: To find the appropriate windowing length for feature extraction based on SODS and to determine a machine learning method that could provide higher accuracy in recognizing AF.
    METHOD: ECG features were extracted based on a dynamic system (DS) that uses a second-order differential equation to describe the short-term behavior of ECG signals in terms of the natural frequency (ω), damping coefficient (ξ), and forcing input (u). The extracted features were windowed into 2, 3, 4, 6, 8, and 10 second episodes to find the appropriate window size for AF signal processing. ANOVA and t-tests were used to determine the significant features. In addition, pattern recognition machine learning methods (an artificial neural network (ANN) and a support vector machine (SVM)) with k-fold cross-validation (k-CV) were used to develop the ECG recognition system.
    RESULTS: Significant differences (p 
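    A second-order dynamic system of the kind described models a short ECG window as x'' + 2ξωx' + ω²x = u; with numerically estimated derivatives, ω, ξ and a (here assumed constant) forcing input can be recovered by least squares. A rough sketch under those assumptions (the paper's exact formulation may differ):

        import numpy as np

        def sods_features(x, fs):
            """Fit x'' = -2*xi*w*x' - w^2*x + u (u assumed constant) by least squares."""
            dt = 1.0 / fs
            dx = np.gradient(x, dt)
            ddx = np.gradient(dx, dt)
            A = np.column_stack([dx, x, np.ones_like(x)])
            (a1, a2, u), *_ = np.linalg.lstsq(A, ddx, rcond=None)
            w = np.sqrt(max(-a2, 1e-12))      # natural frequency
            xi = -a1 / (2*w)                  # damping coefficient
            return w, xi, u

        # hypothetical damped oscillation standing in for a 2-second ECG window
        fs = 250
        t = np.arange(0, 2, 1/fs)
        x = np.exp(-2*t) * np.sin(2*np.pi*8*t)
        print(sods_features(x, fs))
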
  14. Boon KH, Khalil-Hani M, Malarvili MB, Sia CW
    Comput Methods Programs Biomed, 2016 Oct;134:187-96.
    PMID: 27480743 DOI: 10.1016/j.cmpb.2016.07.016
    This paper proposes a method that predicts the onset of paroxysmal atrial fibrillation (PAF) using heart rate variability (HRV) segments that are shorter than those applied in existing methods, while maintaining good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of PAF onset is clinically important because it increases the possibility of electrically stabilizing the heart and preventing the onset of atrial arrhythmias with different pacing techniques. We investigate the effect of HRV features extracted from different lengths of HRV segments prior to PAF onset with the proposed PAF prediction method. The pre-processing stage of the predictor includes QRS detection, HRV quantification and ectopic beat correction. Time-domain, frequency-domain, non-linear and bispectrum features are then extracted from the quantified HRV. In the feature selection, the HRV feature set and classifier parameters are optimized simultaneously using a procedure based on a genetic algorithm (GA). Both the full feature set and a statistically significant feature subset are optimized by the GA. For the statistically significant subset, the Mann-Whitney U test is used to filter out features that do not pass the statistical test at the 20% significance level. The final stage of the predictor is a classifier based on a support vector machine (SVM). With 10-fold cross-validation, the proposed method achieves 79.3% prediction accuracy using 15-minute HRV segments. This accuracy is comparable to that achieved by existing methods using 30-minute HRV segments, most of which achieve accuracies of around 80%. More importantly, our method significantly outperforms existing methods that use segments shorter than 30 minutes.
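    The statistical filtering step can be sketched with SciPy: each HRV feature is kept only if a two-sided Mann-Whitney U test between the PAF and non-PAF groups passes the 20% significance level (the feature matrix and labels below are placeholders):

        import numpy as np
        from scipy.stats import mannwhitneyu

        rng = np.random.default_rng(3)
        X = rng.normal(size=(60, 12))        # 60 HRV segments x 12 candidate features
        y = rng.integers(0, 2, size=60)      # 1 = precedes PAF onset, 0 = does not

        keep = [j for j in range(X.shape[1])
                if mannwhitneyu(X[y == 0, j], X[y == 1, j],
                                alternative="two-sided").pvalue < 0.20]
        print("features passing the 20% significance filter:", keep)
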
  15. Mirza IA, Abdulhameed M, Vieru D, Shafie S
    Comput Methods Programs Biomed, 2016 Dec;137:149-166.
    PMID: 28110721 DOI: 10.1016/j.cmpb.2016.09.014
    Therapies with magnetic/electromagnetic fields are employed to relieve pain or to accelerate the flow of blood particles, particularly during surgery. In this paper, a theoretical study of blood flow with suspended particles through a capillary was made using the electro-magneto-hydrodynamic approach. Analytical solutions for the non-dimensional blood velocity and the non-dimensional particle velocity are obtained by means of the Laplace transform with respect to the time variable and the finite Hankel transform with respect to the radial coordinate. The study of the thermal transfer characteristics is based on the energy equation for two-phase thermal transport of blood and suspended particles with viscous dissipation, volumetric heat generation due to the Joule heating effect, and the electromagnetic couple effect. The solution of the nonlinear heat transfer problem is derived using the velocity field and the integral transform method. The influence of dimensionless system parameters such as the electrokinetic width, the Hartmann number, the Prandtl number, the coefficient of heat generation due to Joule heating, and the Eckert number on the velocity and temperature fields was studied using the Mathcad software. Results are presented by graphical illustrations.
  16. Ahmadi H, Gholamzadeh M, Shahmoradi L, Nilashi M, Rashvand P
    Comput Methods Programs Biomed, 2018 Jul;161:145-172.
    PMID: 29852957 DOI: 10.1016/j.cmpb.2018.04.013
    BACKGROUND AND OBJECTIVE: Diagnosis, as the initial step of medical practice, is one of the most important parts of complicated clinical decision making and is usually accompanied by a degree of ambiguity and uncertainty. Since uncertainty is an inseparable part of medicine, fuzzy logic methods have been among the best methods for decreasing this ambiguity. Recently, a considerable literature has been published on fuzzy logic methods across a wide range of medical diagnosis applications. However, the few review articles in this context are almost ten years old. Hence, we conducted a systematic review to determine the contribution of fuzzy logic methods to disease diagnosis in different medical practices.

    METHODS: Eight scientific databases were selected, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was employed as the basis for conducting this systematic review and meta-analysis. In line with the main objective of this research, inclusion and exclusion criteria were defined to limit our investigation. To achieve a structured meta-analysis, all eligible articles were classified by author, publication year, journal or conference, applied fuzzy method, main research objective, problems and research gaps, tools used to model the fuzzy system, medical discipline, sample size, system inputs and outputs, findings and results, and finally the impact of the applied fuzzy methods on improving diagnosis. We then analyzed the results of these classifications to indicate the effect of fuzzy methods in decreasing the complexity of diagnosis.

    RESULTS: The results of this study confirmed the effectiveness of applying different fuzzy methods in the disease diagnosis process, presenting new insights for researchers about which diseases have received the most attention. This will help to identify the diagnostic aspects of medical disciplines that are being neglected.

    CONCLUSIONS: Overall, this systematic review provides an appropriate platform for further research by identifying the research needs in the domain of disease diagnosis.

  17. Faust O, Hagiwara Y, Hong TJ, Lih OS, Acharya UR
    Comput Methods Programs Biomed, 2018 Jul;161:1-13.
    PMID: 29852952 DOI: 10.1016/j.cmpb.2018.04.005
    BACKGROUND AND OBJECTIVE: We have cast the net into the ocean of knowledge to retrieve the latest scientific research on deep learning methods for physiological signals. We found 53 research papers on this topic, published from 01.01.2008 to 31.12.2017.

    METHODS: An initial bibliometric analysis shows that the reviewed papers focused on the Electromyogram (EMG), Electroencephalogram (EEG), Electrocardiogram (ECG), and Electrooculogram (EOG). These four categories were used to structure the subsequent content review.

    RESULTS: During the content review, we understood that deep learning performs better for big and varied datasets than classic analysis and machine classification methods. Deep learning algorithms try to develop the model by using all the available input.

    CONCLUSIONS: This review depicts the application of various deep learning algorithms used to date; in the future, deep learning will be applied to more healthcare areas to improve the quality of diagnosis.

  18. Adam M, Oh SL, Sudarshan VK, Koh JE, Hagiwara Y, Tan JH, et al.
    Comput Methods Programs Biomed, 2018 Jul;161:133-143.
    PMID: 29852956 DOI: 10.1016/j.cmpb.2018.04.018
    Cardiovascular diseases (CVDs) are the leading cause of death worldwide. The rising mortality rate can be reduced by early detection and treatment interventions. Clinically, the electrocardiogram (ECG) signal provides useful information about cardiac abnormalities and is hence employed as a diagnostic modality for the detection of various CVDs. However, only subtle changes in these time series indicate a particular disease, so it can be monotonous, time-consuming and stressful to inspect ECG beats manually. In order to overcome this limitation of manual ECG signal analysis, this paper uses a novel discrete wavelet transform (DWT) method combined with nonlinear features for automated characterization of CVDs. ECG signals of normal subjects and of dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM) and myocardial infarction (MI) patients are subjected to five levels of DWT. Four nonlinear features, namely fuzzy entropy, sample entropy, fractal dimension and signal energy, are extracted from the DWT coefficients. These features are fed to a sequential forward selection (SFS) technique and then ranked using the ReliefF method. Our proposed methodology achieved a maximum classification accuracy (acc) of 99.27%, sensitivity (sen) of 99.74%, and specificity (spec) of 98.08% with a K-nearest neighbor (kNN) classifier using 15 features ranked by the ReliefF method. Our proposed methodology can be used by clinical staff to make faster and more accurate diagnoses of CVDs; thus, the chances of survival can be significantly increased by early detection and treatment.
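    A rough sketch of the wavelet-feature stage under stated assumptions: a five-level DWT of each beat with PyWavelets, simple per-subband energy and entropy as stand-ins for the paper's nonlinear features, and a kNN classifier (the wavelet family, feature definitions and data are hypothetical):

        import numpy as np
        import pywt
        from sklearn.neighbors import KNeighborsClassifier

        def dwt_features(beat, wavelet="db4", level=5):
            """Energy and Shannon-entropy features from each DWT subband."""
            feats = []
            for c in pywt.wavedec(beat, wavelet, level=level):
                energy = np.sum(c**2)
                p = c**2 / (energy + 1e-12)
                entropy = -np.sum(p * np.log(p + 1e-12))
                feats.extend([energy, entropy])
            return np.array(feats)

        rng = np.random.default_rng(4)
        beats = rng.normal(size=(100, 360))          # placeholder ECG beats
        labels = rng.integers(0, 4, size=100)        # normal / DCM / HCM / MI
        X = np.array([dwt_features(b) for b in beats])
        knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
        print(knn.score(X, labels))
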
  19. Acharya UR, Oh SL, Hagiwara Y, Tan JH, Adeli H, Subha DP
    Comput Methods Programs Biomed, 2018 Jul;161:103-113.
    PMID: 29852953 DOI: 10.1016/j.cmpb.2018.04.012
    In recent years, advanced neurocomputing and machine learning techniques have been used for Electroencephalogram (EEG)-based diagnosis of various neurological disorders. In this paper, a novel computer model is presented for EEG-based screening of depression using a deep neural network machine learning approach known as a Convolutional Neural Network (CNN). The proposed technique does not require a semi-manually selected set of features to be fed into a classifier; it learns automatically and adaptively from the input EEG signals to differentiate EEGs obtained from depressed and normal subjects. The model was tested using EEGs obtained from 15 normal and 15 depressed subjects. The algorithm attained accuracies of 93.5% and 96.0% using EEG signals from the left and right hemisphere, respectively. It was discovered in this research that EEG signals from the right hemisphere are more distinctive in depression than those from the left hemisphere. This finding is consistent with recent research suggesting that depression is associated with a hyperactive right hemisphere. An exciting extension of this research would be the diagnosis of different stages and severities of depression and the development of a Depression Severity Index (DSI).
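    As a minimal illustration of the kind of convolutional network used for such EEG classification (the layer sizes, window length and channel count below are hypothetical, not the paper's architecture):

        import torch
        import torch.nn as nn

        model = nn.Sequential(                    # toy 1-D CNN for binary EEG screening
            nn.Conv1d(1, 8, kernel_size=7), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=7), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
            nn.LazyLinear(2),                     # depressive vs. normal logits
        )

        x = torch.randn(4, 1, 1024)               # batch of 4 single-channel EEG windows
        print(model(x).shape)                      # torch.Size([4, 2])
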
  20. Jamaludin UK, M Suhaimi F, Abdul Razak NN, Md Ralib A, Mat Nor MB, Pretty CG, et al.
    Comput Methods Programs Biomed, 2018 Aug;162:149-155.
    PMID: 29903481 DOI: 10.1016/j.cmpb.2018.03.001
    BACKGROUND AND OBJECTIVE: Blood glucose variability is common in healthcare and is not necessarily related to or influenced by diabetes mellitus. To minimise the risk of high blood glucose in critically ill patients, the Stochastic Targeted Blood Glucose Control Protocol is used in intensive care units at hospitals worldwide. This study focuses on the performance of the stochastic modelling protocol in comparison to current blood glucose management protocols in a Malaysian intensive care unit, and assesses the effectiveness of the Stochastic Targeted Blood Glucose Control Protocol when applied to a cohort of diabetic patients.

    METHODS: Retrospective data from 210 patients were obtained from a general hospital in Malaysia from May 2014 until June 2015; 123 of the patients had comorbid diabetes mellitus. The performance of the two protocols was compared through virtual trial simulations of blood glucose fitted with physiological modelling, calculation of the mean simulation error, and several graphical comparisons using stochastic modelling.

    RESULTS: The Stochastic Targeted Blood Glucose Control Protocol reduces hyperglycaemia by 16% in the diabetic and 9% in the nondiabetic cohort. The protocol keeps the blood glucose level in the target range of 4.0-10.0 mmol/L for 71.8% of the time in the diabetic and 82.7% in the nondiabetic cohort, besides minimising the treatment hours, up to 71 h for the 123 diabetic patients and 39 h for the 87 nondiabetic patients.

    CONCLUSION: It is concluded that the Stochastic Targeted Blood Glucose Control Protocol is better at reducing hyperglycaemia than the current blood glucose management protocol in the Malaysian intensive care unit. Hence, the current Malaysian intensive care unit protocols need to be modified to enhance their performance, especially in integrating insulin and nutrition interventions to decrease the incidence of hyperglycaemia. Improvement of the Stochastic Targeted Blood Glucose Control Protocol in terms of its uen model is also needed to adapt it to the diabetic cohort.
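    The time-in-band statistic reported above is simply the fraction of blood glucose readings falling inside the 4.0-10.0 mmol/L target range. A minimal sketch with placeholder readings:

        import numpy as np

        bg = np.array([3.8, 5.2, 6.9, 10.4, 8.1, 7.3, 9.8, 11.2, 6.0, 4.5])  # mmol/L
        in_band = (bg >= 4.0) & (bg <= 10.0)
        print(f"time in 4.0-10.0 mmol/L band: {100 * in_band.mean():.1f}%")  # 70.0%
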
