  1. Abbasian Ardakani A, Bureau NJ, Ciaccio EJ, Acharya UR
    Comput Methods Programs Biomed, 2022 Mar;215:106609.
    PMID: 34990929 DOI: 10.1016/j.cmpb.2021.106609
    Radiomics is an emerging field that has opened new windows for precision medicine. It involves the extraction of a large number of quantitative features from medical images, many of which are difficult to detect visually. Underlying tumor biology can change the physical properties of tissues, which in turn affect the patterns of image pixels and the radiomics features derived from them. The main advantage of radiomics is that it can characterize the whole tumor non-invasively, even from a single sampling of an image; it can therefore be regarded as a "digital biopsy". Physicians need to know how the values of radiomics features correlate with the appearance of lesions and diseases, and they need practical references that convey the basics and concepts of each feature without requiring its sophisticated mathematical formula. In this review, commonly used radiomics features are illustrated with practical examples to help physicians in their routine diagnostic procedures.
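
As a companion to the feature descriptions above, the following is a minimal sketch of first-order radiomics feature extraction in Python, assuming a 2D grayscale lesion region stored as a NumPy array; the synthetic patch and the bin count are illustrative assumptions, not values from the review.

```python
import numpy as np
from scipy import stats

def first_order_features(roi: np.ndarray, bins: int = 64) -> dict:
    """A few common first-order radiomics features for a region of interest."""
    voxels = roi.ravel().astype(float)
    hist, _ = np.histogram(voxels, bins=bins)
    p = hist / hist.sum()          # discrete intensity distribution
    p = p[p > 0]                   # drop empty bins before taking logs
    return {
        "mean": voxels.mean(),
        "variance": voxels.var(),
        "skewness": stats.skew(voxels),
        "kurtosis": stats.kurtosis(voxels),
        "entropy": -np.sum(p * np.log2(p)),  # Shannon entropy of intensities
    }

# Hypothetical lesion patch; real use would pass a segmented image region.
rng = np.random.default_rng(0)
lesion = rng.normal(100, 15, size=(32, 32))
print(first_order_features(lesion))
```
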
  2. Abdar M, Książek W, Acharya UR, Tan RS, Makarenkov V, Pławiak P
    Comput Methods Programs Biomed, 2019 Oct;179:104992.
    PMID: 31443858 DOI: 10.1016/j.cmpb.2019.104992
    BACKGROUND AND OBJECTIVE: Coronary artery disease (CAD) is one of the most common diseases worldwide. An early and accurate diagnosis of CAD allows timely administration of appropriate treatment and helps to reduce mortality. Herein, we describe an innovative machine learning methodology that enables accurate detection of CAD and apply it to data collected from Iranian patients.

    METHODS: We first tested ten traditional machine learning algorithms, and the three best-performing algorithms (three types of SVM) were then used in the rest of the study. To improve their performance, data preprocessing with normalization was carried out. Moreover, a genetic algorithm and particle swarm optimization, coupled with stratified 10-fold cross-validation, were used twice: to optimize the classifier parameters and to perform parallel selection of features.

    RESULTS: The presented approach enhanced the performance of all traditional machine learning algorithms used in this study. We also introduced a new optimization technique called the N2Genetic optimizer (a new genetic training method). Our experiments demonstrated that N2Genetic-nuSVM achieved an accuracy of 93.08% and an F1-score of 91.51% when predicting CAD outcomes among the patients in the well-known Z-Alizadeh Sani dataset. These results are competitive with the best results in the field.

    CONCLUSIONS: We showed that machine-learning techniques optimized by the proposed approach can lead to highly accurate models intended for both clinical and research use.
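
The N2Genetic optimizer itself is not described in enough detail in this listing to reproduce, so the sketch below shows only the evaluation scaffold the study relies on: a nu-SVM scored with stratified 10-fold cross-validation after normalization, with a small manual sweep over nu standing in for the GA/PSO search. The synthetic data is a placeholder for the Z-Alizadeh Sani dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVC

# Placeholder data standing in for the Z-Alizadeh Sani dataset.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

best = None
for nu in (0.2, 0.3, 0.4, 0.5):   # candidate values a GA/PSO would search over
    model = make_pipeline(StandardScaler(), NuSVC(nu=nu, kernel="rbf"))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean()
    if best is None or acc > best[1]:
        best = (nu, acc)

print(f"best nu = {best[0]}, 10-fold CV accuracy = {best[1]:.4f}")
```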

  3. Abdul-Kadir NA, Mat Safri N, Othman MA
    Comput Methods Programs Biomed, 2016 Nov;136:143-50.
    PMID: 27686711 DOI: 10.1016/j.cmpb.2016.08.021
    BACKGROUND: Atrial fibrillation (AF) can cause the formation of blood clots in the heart. The clots may travel to the brain and cause a stroke. This study therefore analyzed ECG features of AF and normal sinus rhythm signals, extracted using a second-order dynamic system (SODS) concept, for AF recognition.
    OBJECTIVE: To find the appropriate windowing length for feature extraction based on SODS and to determine a machine learning method that could provide higher accuracy in recognizing AF.
    METHOD: ECG features were extracted based on a dynamic system (DS) that uses a second-order differential equation to describe the short-term behavior of ECG signals in terms of the natural frequency (ω), damping coefficient (ξ), and forcing input (u). The extracted features were windowed into 2, 3, 4, 6, 8, and 10 second episodes to find the appropriate window size for AF signal processing. ANOVA and t-tests were used to determine the significant features. In addition, pattern recognition machine learning methods (an artificial neural network (ANN) and a support vector machine (SVM)) with k-fold cross-validation (k-CV) were used to develop the ECG recognition system.
    RESULTS: Significant differences (p 
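
A minimal sketch of the SODS feature extraction described above, assuming the standard second-order form x'' + 2ξωx' + ω²x = u with a constant forcing input per window; estimating the parameters by least-squares regression on numerical derivatives is an assumption, as the paper's exact fitting procedure is not given here.

```python
import numpy as np

def sods_features(x: np.ndarray, fs: float):
    """Estimate natural frequency (omega), damping (xi) and forcing input (u)."""
    dt = 1.0 / fs
    dx = np.gradient(x, dt)      # first derivative of the ECG segment
    ddx = np.gradient(dx, dt)    # second derivative
    # Model: ddx = -2*xi*omega*dx - omega**2*x + u  (linear in the unknowns)
    A = np.column_stack([dx, x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, ddx, rcond=None)
    omega = np.sqrt(max(-coef[1], 1e-12))  # omega**2 = -coef[1]
    xi = -coef[0] / (2 * omega)            # -2*xi*omega = coef[0]
    return omega, xi, coef[2]

# Hypothetical 4-second window (one of the window lengths tested above).
fs = 250.0
t = np.arange(0, 4, 1 / fs)
x = np.exp(-0.5 * t) * np.sin(2 * np.pi * 3 * t)  # damped oscillation test signal
print(sods_features(x, fs))
```
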
  4. Abidemi A, Aziz NAB
    Comput Methods Programs Biomed, 2020 Nov;196:105585.
    PMID: 32554024 DOI: 10.1016/j.cmpb.2020.105585
    BACKGROUND: Dengue is a vector-borne viral disease endemic in Malaysia and presently a public health issue in the country. Hence, the use of a mathematical model to gain insights into the transmission dynamics and to derive optimal control strategies for minimizing the spread of the disease is of great importance.

    METHODS: A model involving eight mutually exclusive compartments, with personal protection, larvicide and adulticide control strategies, is presented to describe dengue fever transmission dynamics. The control-induced basic reproduction number (R̃0) of the model is computed using the next generation matrix method. The comparison theorem is used to analyse the global dynamics of the model. The model is fitted to data from the 2012 dengue outbreak in Johor, Malaysia, using the least-squares method. In a bid to optimally curtail dengue fever propagation, we apply optimal control theory to investigate the effect of several strategies combining optimal personal protection, larvicide and adulticide controls on dengue fever dynamics. The resulting optimality system is simulated in MATLAB using a fourth-order Runge-Kutta scheme based on the forward-backward sweep method. In addition, cost-effectiveness analysis is performed to determine the most cost-effective strategy among the various control strategies analysed.

    RESULTS: Analysis of the model with control parameters shows that the model has two disease-free equilibria, namely a trivial equilibrium and a biologically realistic disease-free equilibrium, and one endemic equilibrium point. It also reveals that the biologically realistic disease-free equilibrium is both locally and globally asymptotically stable whenever the inequality R̃0 < 1 holds. In the case of the model with time-dependent control functions, the optimality levels of the three control functions required to optimally control dengue disease transmission are derived.

    CONCLUSION: We conclude that dengue fever transmission can be curtailed by adopting any of the several control strategies analysed in this study. Furthermore, a strategy combining personal protection and adulticide controls is found to be the most cost-effective.
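
The next generation matrix step referenced above can be illustrated compactly: the control-induced reproduction number is the spectral radius of F·V⁻¹, where F collects new-infection terms and V collects transition terms linearised at the disease-free equilibrium. The 2×2 host-vector matrices and all rate values below are hypothetical placeholders, not the paper's model coefficients.

```python
import numpy as np

def control_reproduction_number(F: np.ndarray, V: np.ndarray) -> float:
    """Spectral radius of the next generation matrix F V^-1."""
    ngm = F @ np.linalg.inv(V)
    return max(abs(np.linalg.eigvals(ngm)))

# Hypothetical host-vector linearisation: beta_hv, beta_vh transmission rates,
# gamma host recovery rate, mu_v vector mortality (all made-up values).
beta_hv, beta_vh, gamma, mu_v = 0.30, 0.25, 0.14, 0.10
F = np.array([[0.0, beta_hv],
              [beta_vh, 0.0]])
V = np.array([[gamma, 0.0],
              [0.0, mu_v]])
print(f"R~0 = {control_reproduction_number(F, V):.3f}")
```
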
  5. Acharya UR, Sree SV, Muthu Rama Krishnan M, Krishnananda N, Ranjan S, Umesh P, et al.
    Comput Methods Programs Biomed, 2013 Dec;112(3):624-32.
    PMID: 23958645 DOI: 10.1016/j.cmpb.2013.07.012
    Coronary Artery Disease (CAD), caused by the buildup of plaque inside the coronary arteries, has a high mortality rate. To detect this condition efficiently from echocardiography images, with less inter-observer variability and fewer visual interpretation errors, computer-based data mining techniques may be exploited. We develop and present one such technique in this paper for the classification of normal and CAD-affected cases. A multitude of grayscale features (fractal dimension, entropies based on higher order spectra, features based on image texture and local binary patterns, and wavelet-based features) were extracted from echocardiography images from a large database of 400 normal cases and 400 CAD patients. Only the features with good discriminating capability were selected using the t-test. Several combinations of the resulting significant features were used to evaluate many supervised classifiers to find the combination that gives good accuracy. We observed that the Gaussian Mixture Model (GMM) classifier, trained with a feature subset of nine significant features, presented the highest accuracy, sensitivity, specificity, and positive predictive value, each at 100%. We also developed a novel, highly discriminative HeartIndex, a single number calculated from the combination of the features, to objectively classify images from either of the two classes. Such an index allows for easier implementation of the technique for automated CAD detection on computers in hospitals and clinics.
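
A minimal sketch of the classification stage, assuming one Gaussian Mixture Model per class scored by log-likelihood and t-test-based feature selection; the synthetic feature vectors, component count, and significance threshold are assumptions, and the paper's actual grayscale features and HeartIndex formula are not reproduced.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X_normal = rng.normal(0.0, 1.0, size=(400, 12))  # stand-in feature vectors
X_cad = rng.normal(0.8, 1.2, size=(400, 12))

# t-test feature selection: keep features that differ significantly (p < 0.05).
_, p = ttest_ind(X_normal, X_cad, axis=0)
keep = p < 0.05

gmm_normal = GaussianMixture(n_components=2, random_state=0).fit(X_normal[:, keep])
gmm_cad = GaussianMixture(n_components=2, random_state=0).fit(X_cad[:, keep])

def classify(x: np.ndarray) -> str:
    """Assign the class whose mixture gives the sample the higher likelihood."""
    x = x[keep].reshape(1, -1)
    return "CAD" if gmm_cad.score(x) > gmm_normal.score(x) else "normal"

print(classify(X_cad[0]))
```
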
  6. Acharya UR, Faust O, Sree V, Swapna G, Martis RJ, Kadri NA, et al.
    Comput Methods Programs Biomed, 2014;113(1):55-68.
    PMID: 24119391 DOI: 10.1016/j.cmpb.2013.08.017
    Coronary artery disease (CAD) is a dangerous cardiac disease that may lead to sudden cardiac death. It is difficult to diagnose CAD by manual inspection of electrocardiogram (ECG) signals. To automate this detection task, in this study we extracted the heart rate (HR) from the ECG signals and used it as the base signal for further analysis. We then analyzed the HR signals of both normal and CAD subjects using (i) time domain, (ii) frequency domain and (iii) nonlinear techniques. The following nonlinear methods were used in this work: Poincaré plots, Recurrence Quantification Analysis (RQA) parameters, Shannon entropy, Approximate Entropy (ApEn), Sample Entropy (SampEn), Higher Order Spectra (HOS) methods, Detrended Fluctuation Analysis (DFA), Empirical Mode Decomposition (EMD), cumulants, and Correlation Dimension. As a result of the analysis, we present distinctive recurrence, Poincaré and HOS plots for normal and CAD subjects. We also observed significant variations in the ranges of these features between the normal and CAD classes, which are reported in this paper. We found that the RQA parameters were higher for CAD subjects, indicating a more regular rhythm: since the variability of CAD subjects' signals is lower, similar signal patterns repeat more frequently than in normal subjects. The entropy-based parameters ApEn and SampEn were lower for CAD subjects, indicating lower entropy (less variability due to impairment). Almost all HOS parameters showed higher values for the CAD group, indicating the presence of higher-frequency content in the CAD signals. Thus, our study provides deep insight into how such nonlinear features can be exploited to detect the presence of CAD effectively and reliably.
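
Sample Entropy (SampEn) is one of the nonlinear features listed above; a minimal NumPy sketch follows, using the common defaults m = 2 and r = 0.2×SD, which are assumptions rather than the paper's stated configuration.

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r_frac: float = 0.2) -> float:
    """SampEn = -ln(A/B), where B and A count template matches of length m and m+1."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def count_matches(length: int) -> int:
        # All length-`length` templates, compared with Chebyshev distance.
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return int((d <= r).sum()) - len(templates)  # exclude self-matches

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(500)  # synthetic RR-interval series
print(f"SampEn = {sample_entropy(rr):.3f}")
```
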
  7. Acharya UR, Oh SL, Hagiwara Y, Tan JH, Adeli H, Subha DP
    Comput Methods Programs Biomed, 2018 Jul;161:103-113.
    PMID: 29852953 DOI: 10.1016/j.cmpb.2018.04.012
    In recent years, advanced neurocomputing and machine learning techniques have been used for electroencephalogram (EEG)-based diagnosis of various neurological disorders. In this paper, a novel computer model is presented for EEG-based screening of depression using a deep neural network machine learning approach known as the Convolutional Neural Network (CNN). The proposed technique does not require a semi-manually selected set of features to be fed into a classifier; it learns automatically and adaptively from the input EEG signals to differentiate EEGs obtained from depressed and normal subjects. The model was tested using EEGs obtained from 15 normal and 15 depressed subjects. The algorithm attained accuracies of 93.5% and 96.0% using EEG signals from the left and right hemispheres, respectively. It was discovered in this research that EEG signals from the right hemisphere are more distinctive of depression than those from the left hemisphere. This discovery is consistent with recent findings that depression is associated with a hyperactive right hemisphere. An exciting extension of this research would be the diagnosis of different stages and severities of depression and the development of a Depression Severity Index (DSI).
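
A minimal 1D-CNN sketch for two-class EEG screening, written with Keras as an assumed framework; the window length and layer sizes are illustrative and do not reproduce the paper's published architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_samples = 2000  # assumed EEG window length, in samples

model = keras.Sequential([
    layers.Input(shape=(n_samples, 1)),
    layers.Conv1D(16, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # depressed vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in data: 64 EEG windows, half labelled "depressed".
X = np.random.randn(64, n_samples, 1).astype("float32")
y = np.repeat([0, 1], 32)
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
print(model.predict(X[:2], verbose=0).ravel())
```
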
  8. Acharya UR, Raghavendra U, Koh JEW, Meiburger KM, Ciaccio EJ, Hagiwara Y, et al.
    Comput Methods Programs Biomed, 2018 Nov;166:91-98.
    PMID: 30415722 DOI: 10.1016/j.cmpb.2018.10.006
    BACKGROUND AND OBJECTIVE: Liver fibrosis is a type of chronic liver injury characterized by an excessive deposition of extracellular matrix protein. Early detection of liver fibrosis may prevent further progression toward liver cirrhosis and hepatocellular carcinoma. In the past, the only way to assess liver fibrosis was through biopsy, but this examination is invasive, expensive, prone to sampling errors, and may cause complications such as bleeding. Ultrasound-based elastography is a promising tool for measuring tissue elasticity in real time; however, this technology requires an upgrade of the ultrasound system and software. In this study, a novel computer-aided diagnosis tool is proposed to automatically detect and classify the various stages of liver fibrosis from conventional B-mode ultrasound images.

    METHODS: The proposed method uses a 2D contourlet transform and a set of texture features efficiently extracted from the transformed image. A kernel discriminant analysis (KDA)-based feature reduction technique was combined with an analysis of variance (ANOVA)-based feature ranking technique, and the images were then classified into the various stages of liver fibrosis.

    RESULTS: Our 2D contourlet transform and texture feature analysis approach achieved a 91.46% accuracy using only four features input to the probabilistic neural network classifier, to classify the five stages of liver fibrosis. It also achieved a 92.16% sensitivity and 88.92% specificity for the same model. The evaluation was done on a database of 762 ultrasound images belonging to five different stages of liver fibrosis.

    CONCLUSIONS: The findings suggest that the proposed method can be useful to automatically detect and classify liver fibrosis, which would greatly assist clinicians in making an accurate diagnosis.
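
The contourlet transform has no standard Python implementation to lean on here, so the sketch below covers only the ANOVA-based feature ranking step described in the methods: F-scores rank candidate texture features, and a small subset (four, as in the paper) is kept for the classifier. The feature matrix is synthetic.

```python
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
features = rng.normal(size=(762, 20))  # 762 images x 20 candidate texture features
stages = rng.integers(0, 5, size=762)  # five fibrosis stages (placeholder labels)

F, p = f_classif(features, stages)     # ANOVA F-score per feature
ranking = np.argsort(F)[::-1]          # highest F-score first
top4 = ranking[:4]                     # the paper used only four features
print("top-ranked feature indices:", top4)
```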

  9. Acharya UR, Faust O, Ciaccio EJ, Koh JEW, Oh SL, Tan RS, et al.
    Comput Methods Programs Biomed, 2019 Jul;175:163-178.
    PMID: 31104705 DOI: 10.1016/j.cmpb.2019.04.018
    BACKGROUND AND OBJECTIVE: Complex fractionated atrial electrograms (CFAE) may contain information concerning the electrophysiological substrate of atrial fibrillation (AF); they are therefore of interest for guiding catheter ablation treatment of AF. Electrogram signals are shaped by activation events, which are dynamical in nature. This makes it difficult to establish the signal properties that can provide insight into the ablation site location. Nonlinear measures may capture additional information. To test this hypothesis, we used nonlinear measures to analyze CFAE.

    METHODS: CFAE from several atrial sites, recorded for a duration of 16 s, were acquired from 10 patients with persistent and 9 patients with paroxysmal AF. These signals were appraised using non-overlapping windows of 1-, 2- and 4-s durations. The resulting data sets were analyzed with Recurrence Plots (RP) and Recurrence Quantification Analysis (RQA). The data was also quantified via entropy measures.

    RESULTS: RQA exhibited unique plots for persistent versus paroxysmal AF. Similar patterns were observed to repeat throughout the RPs. Trends were consistent for signal segments of 1, 2 and 4 s in duration, suggesting that the underlying signal generation process is repetitive and that this repetitiveness can be detected even in 1-s sequences. The results also showed that most entropy metrics exhibited higher values (closer to equilibrium) for persistent AF data. It was also found that Determinism (DET), Trapping Time (TT), and Modified Multiscale Entropy (MMSE), extracted from signals acquired at the posterior atrial free wall, are highly discriminative of persistent versus paroxysmal AF data.

    CONCLUSIONS: Short data sequences are sufficient to provide information to discern persistent versus paroxysmal AF data with a significant difference, and can be useful to detect repeating patterns of atrial activation.
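
Determinism (DET), one of the discriminative RQA measures named above, can be sketched directly: build a recurrence plot from a thresholded distance matrix, then take the fraction of recurrence points lying on diagonal lines of at least l_min points. The 1D distance, the 10%-of-maximum threshold, and l_min = 2 are illustrative assumptions.

```python
import numpy as np

def determinism(x: np.ndarray, eps_frac: float = 0.1, l_min: int = 2) -> float:
    d = np.abs(x[:, None] - x[None, :])  # pairwise distance matrix (1D signal)
    rp = d <= eps_frac * d.max()         # recurrence plot
    n = len(x)
    diag_points = 0
    for k in range(-(n - 1), n):
        if k == 0:
            continue                     # skip the trivial main diagonal
        line = np.diagonal(rp, offset=k)
        run = 0
        for v in line:                   # count points in runs of >= l_min
            if v:
                run += 1
            else:
                if run >= l_min:
                    diag_points += run
                run = 0
        if run >= l_min:
            diag_points += run
    return diag_points / max(rp.sum() - n, 1)  # exclude main-diagonal points

rng = np.random.default_rng(0)
sig = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
print(f"DET = {determinism(sig):.3f}")
```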

  10. Adam M, Oh SL, Sudarshan VK, Koh JE, Hagiwara Y, Tan JH, et al.
    Comput Methods Programs Biomed, 2018 Jul;161:133-143.
    PMID: 29852956 DOI: 10.1016/j.cmpb.2018.04.018
    Cardiovascular diseases (CVDs) are the leading cause of death worldwide. The rising mortality rate can be reduced by early detection and treatment interventions. Clinically, the electrocardiogram (ECG) signal provides useful information about cardiac abnormalities and is hence employed as a diagnostic modality for the detection of various CVDs. However, the changes in these time series that indicate a particular disease are subtle, so inspecting ECG beats manually is monotonous, time-consuming and stressful. To overcome this limitation of manual ECG signal analysis, this paper uses a novel discrete wavelet transform (DWT) method combined with nonlinear features for automated characterization of CVDs. ECG signals of normal subjects and of subjects with dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM) and myocardial infarction (MI) are subjected to five levels of DWT. Relative wavelet versions of four nonlinear features, namely fuzzy entropy, sample entropy, fractal dimension and signal energy, are extracted from the DWT coefficients. These features are fed to a sequential forward selection (SFS) technique and then ranked using the ReliefF method. Our proposed methodology achieved a maximum classification accuracy (acc) of 99.27%, sensitivity (sen) of 99.74%, and specificity (spec) of 98.08% with the K-nearest neighbor (kNN) classifier using 15 features ranked by the ReliefF method. Our proposed methodology can be used by clinical staff to make faster and more accurate diagnoses of CVDs. Thus, the chances of survival can be significantly increased by early detection and treatment of CVDs.
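
A minimal sketch of the DWT feature pipeline described above: a five-level wavelet decomposition per ECG beat, a couple of summary features per sub-band, and a kNN classifier. The 'db4' wavelet, beat length, and features shown (energy and a wavelet entropy) are assumptions; the paper's relative wavelet nonlinear features and ReliefF ranking are not reproduced.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_features(beat: np.ndarray) -> np.ndarray:
    coeffs = pywt.wavedec(beat, "db4", level=5)    # five-level decomposition
    feats = []
    for c in coeffs:
        feats.append(np.sum(c ** 2))               # sub-band signal energy
        p = np.abs(c) / np.sum(np.abs(c))
        feats.append(-np.sum(p * np.log2(p + 1e-12)))  # sub-band entropy
    return np.array(feats)

rng = np.random.default_rng(0)
beats = rng.standard_normal((100, 256))            # synthetic ECG beats
labels = rng.integers(0, 4, 100)                   # normal, DCM, HCM, MI
X = np.array([dwt_features(b) for b in beats])
knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print(knn.predict(X[:3]))
```
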
  11. Ahmad M, Jung LT, Bhuiyan AA
    Comput Methods Programs Biomed, 2017 Oct;149:11-17.
    PMID: 28802326 DOI: 10.1016/j.cmpb.2017.06.021
    BACKGROUND AND OBJECTIVE: Digital signal processing techniques commonly employ fixed-length window filters to process signal contents. DNA signals differ in characteristics from common digital signals, since their contents are nucleotides. The nucleotides carry genetic code context and exhibit fuzzy behavior due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence signal noise. A biological-context-aware adaptive window filter is required to process DNA signals.

    METHODS: This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF), which computes the fuzzy membership strength of nucleotides in each sliding window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity through an unbalanced nucleotide distribution, producing a relatively high bias in nucleotide usage, this fundamental characteristic has been exploited in the FAWMF to suppress signal noise.

    RESULTS: Along with the adaptive response of the FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared with fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, of 40% to 125%, compared with other conventional window filters tested over more than 250 benchmarked and randomly chosen DNA datasets of different organisms.

    CONCLUSION: This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results, since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal contents. Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions compared with fixed-length conventional window filters.

  12. Ahmadi H, Gholamzadeh M, Shahmoradi L, Nilashi M, Rashvand P
    Comput Methods Programs Biomed, 2018 Jul;161:145-172.
    PMID: 29852957 DOI: 10.1016/j.cmpb.2018.04.013
    BACKGROUND AND OBJECTIVE: Diagnosis, as the initial step of medical practice, is one of the most important parts of complicated clinical decision making and is usually accompanied by a degree of ambiguity and uncertainty. Since uncertainty is an inseparable part of medicine, fuzzy logic methods have been used as one of the best ways to reduce this ambiguity. Recently, much literature has been published on fuzzy logic methods across a wide range of medical diagnosis applications; however, the few review articles in this context were published almost ten years ago. Hence, we conducted a systematic review to determine the contribution of fuzzy logic methods to disease diagnosis in different medical practices.

    METHODS: Eight scientific databases were selected, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was employed as the basis for conducting this systematic review and meta-analysis. In line with the main objective of this research, inclusion and exclusion criteria were defined to limit our investigation. To achieve a structured meta-analysis, all eligible articles were classified by authors, publication year, journal or conference, applied fuzzy methods, main objectives of the research, problems and research gaps, tools utilized to model the fuzzy system, medical disciplines, sample sizes, the inputs and outputs of the system, findings and results, and finally the impact of the applied fuzzy methods on improving diagnosis. We then analyzed the results of these classifications to indicate the effect of fuzzy methods in decreasing the complexity of diagnosis.

    RESULTS: The results of this study confirm the effectiveness of applying different fuzzy methods in the disease diagnosis process and present new insights for researchers about which diseases have received the most attention. This will help to identify the diagnostic aspects of medical disciplines that are being neglected.

    CONCLUSIONS: Overall, this systematic review provides an appropriate platform for further research by identifying the research needs in the domain of disease diagnosis.

  13. Akhbar MFA
    Comput Methods Programs Biomed, 2023 Apr;231:107361.
    PMID: 36736133 DOI: 10.1016/j.cmpb.2023.107361
    BACKGROUND AND OBJECTIVE: Conventional surgical drill bits suffer from several drawbacks, including extreme heat generation, breakage, jamming, and undesired breakthrough. Understanding the impact of the drill margin on bone damage can provide insights that lay the foundation for improving existing surgical drill bits. However, research on drill margins in bone drilling is lacking. This work assesses the influence of margin height and width on thermomechanical damage in bone drilling.

    METHODS: Thermomechanical damage metrics (maximum bone temperature, osteonecrosis diameter, osteonecrosis depth, maximum thrust force, and torque) were calculated using the finite element method under various margin heights (0.05-0.25 mm) and widths (0.02-0.26 mm). The simulation results were validated against experimental tests and previous research data.

    RESULTS: Increasing the margin height raised the maximum bone temperature, osteonecrosis diameter, and osteonecrosis depth by at least 19.1%, 41.9%, and 59.6%, respectively. The thrust force and torque are highly sensitive to margin height: a higher margin height (0.21-0.25 mm) reduced the thrust force by 54.0% but increased the drilling torque by 142.2%. With increasing margin width, the bone temperature, osteonecrosis diameter, and depth were 16.5%, 56.5%, and 81.4% lower, respectively. The minimum thrust force (11.1 N) and torque (41.9 Nmm) were produced with the highest margin width (0.26 mm). A margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm produced the highest sum of weightage.

    CONCLUSIONS: A surgical drill bit with a margin height of 0.05-0.13 mm and a margin width of 0.22-0.26 mm can produce minimum thermomechanical damage in cortical bone drilling. The insights regarding the suitable ranges for margin height and width from this study could be adopted in future research devoted to optimizing the margin of the existing surgical drill bit.

  14. Al-Qaysi ZT, Zaidan BB, Zaidan AA, Suzani MS
    Comput Methods Programs Biomed, 2018 Oct;164:221-237.
    PMID: 29958722 DOI: 10.1016/j.cmpb.2018.06.012
    CONTEXT: Intelligent wheelchair technology has recently been utilised to address several mobility problems. Techniques based on brain-computer interface (BCI) are currently used to develop electric wheelchairs. Using human brain control in wheelchairs for people with disability has elicited widespread attention due to its flexibility.

    OBJECTIVE: This study aims to determine the background of recent studies on wheelchair control based on BCI for disability and map the literature survey into a coherent taxonomy. The study intends to identify the most important aspects in this emerging field as an impetus for using BCI for disability in electric-powered wheelchair (EPW) control, which remains a challenge. The study also attempts to provide recommendations for solving other existing limitations and challenges.

    METHODS: We systematically searched all articles about EPW control based on BCI for disability in three popular databases: ScienceDirect, IEEE and Web of Science. These databases contain numerous articles that considerably influenced this field and cover most of the relevant theoretical and technical issues.

    RESULTS: We selected 100 articles on the basis of our inclusion and exclusion criteria. A large set of articles (55) discussed the development of real-time wheelchair control systems based on BCI signals for disability. Another set (25) focused on analysing BCI signals for wheelchair control. A third set (14) considered the simulation of wheelchair control based on BCI signals. Four articles designed a framework for BCI-based wheelchair control, and one article reviewed concerns regarding wheelchair control based on BCI for disability.

    DISCUSSION: Since 2007, researchers have pursued the possibility of using BCI for disability in EPW control through different approaches. Regardless of type, articles have focused on addressing limitations that impede the full efficiency of BCI for disability and recommended solutions for these limitations.

    CONCLUSIONS: Studies on wheelchair control based on BCI for disability considerably influence society due to the large number of people with disability. Therefore, we aim to provide researchers and developers with a clear understanding of this platform and highlight the challenges and gaps in the current and future studies.

  15. Alade IO, Bagudu A, Oyehan TA, Rahman MAA, Saleh TA, Olatunji SO
    Comput Methods Programs Biomed, 2018 Sep;163:135-142.
    PMID: 30119848 DOI: 10.1016/j.cmpb.2018.05.029
    BACKGROUND AND OBJECTIVES: The refractive index of hemoglobin plays an important role in hematology due to its strong correlation with the pathophysiology of various diseases. Measurement of the real part of the refractive index remains a challenge due to the strong absorption of hemoglobin, especially at the relevant high physiological concentrations. So far, only a few studies on direct measurement of the refractive index have been reported, and there is no firm agreement on the reported values for hemoglobin owing to measurement artifacts. In addition, performing the several experiments needed to obtain the refractive index of hemoglobin is time-consuming, laborious and expensive. In this work, we propose a rapid and accurate computational intelligence approach using Genetic Algorithm/Support Vector Regression models to estimate the real part of the refractive index for oxygenated and deoxygenated hemoglobin samples.

    METHODS: These models utilized experimental data of wavelengths and hemoglobin concentrations to build a highly accurate Genetic Algorithm/Support Vector Regression (GA-SVR) model.

    RESULTS: The developed methodology showed high accuracy, as indicated by low root mean square error values of 4.65 × 10⁻⁴ and 4.62 × 10⁻⁴ for oxygenated and deoxygenated hemoglobin, respectively. In addition, the models exhibited correlation coefficients (r) of 99.85% and 99.84% for oxygenated and deoxygenated hemoglobin, validating the strong agreement between the predicted and experimental results.

    CONCLUSIONS: Given the accuracy and relative simplicity of the proposed models, we envisage that they will serve as important references for future studies on the optical properties of blood.
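
The paper couples a genetic algorithm with Support Vector Regression (GA-SVR); in the sketch below the GA is replaced by a small exhaustive grid search for brevity, keeping the same inputs (wavelength and hemoglobin concentration) and target (the real part of the refractive index). All data and parameter ranges are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
wavelength = rng.uniform(400, 750, 200)        # nm (made-up range)
concentration = rng.uniform(80, 160, 200)      # g/L (made-up range)
X = np.column_stack([wavelength, concentration])
y = 1.335 + 1e-4 * concentration - 1e-5 * (wavelength - 500)  # toy target

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = GridSearchCV(
    pipe,
    {"svr__C": [1, 10, 100], "svr__gamma": [0.01, 0.1, 1.0]},
    scoring="neg_root_mean_squared_error",
    cv=5,
)
search.fit(X, y)
print("best params:", search.best_params_, "RMSE:", -search.best_score_)
```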

  16. Albahri OS, Al-Obaidi JR, Zaidan AA, Albahri AS, Zaidan BB, Salih MM, et al.
    Comput Methods Programs Biomed, 2020 Nov;196:105617.
    PMID: 32593060 DOI: 10.1016/j.cmpb.2020.105617
    CONTEXT: People who have recently recovered from deteriorating coronavirus disease 2019 (COVID-19) have antibodies to the coronavirus circulating in their blood, so transfusing these antibodies to deteriorating patients could theoretically help boost their immune systems. Biologically, two challenges need to be surmounted to allow convalescent plasma (CP) transfusion to rescue the most severe COVID-19 patients. First, convalescent subjects must meet plasma donor selection criteria and comply with national health requirements and known standard routine procedures. Second, multi-criteria decision-making (MCDM) problems must be considered in the selection of the most suitable CP and the prioritisation of patients with COVID-19.

    OBJECTIVE: This paper presents a rescue framework for the transfusion of the best CP to the most critical patients with COVID-19 on the basis of biological requirements by using machine learning and novel MCDM methods.

    METHOD: The proposed framework comprises two distinct and consecutive phases (testing and development). In the testing phase, ABO compatibility is assessed after classifying donors into the four blood types (A, B, AB and O) to indicate the suitability and safety of plasma for administration, thereby refining the tested-CP list repository. The development phase covers the patient and donor sides. On the patient side, prioritisation is performed using a contracted patient decision matrix constructed between 'serological/protein biomarkers and the ratio of the partial pressure of oxygen in arterial blood to fractional inspired oxygen' criteria and the patient list, based on a novel MCDM method known as the subjective and objective decision by opinion score method. The patients with the most urgent need are then classified into the four blood types and matched, on the donor side, with the tested-CP list from the testing phase. Thereafter, prioritisation of the tested-CP list is performed using the contracted CP decision matrix.

    RESULT: An intelligence-integrated concept is proposed to identify the most appropriate CP for the corresponding prioritised patients with COVID-19, helping doctors hasten treatment.

    DISCUSSION: The proposed framework offers benefits in providing effective care and in protecting patients and the medical sector from the extremely rapidly spreading COVID-19.
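
The ABO-compatibility step in the testing phase can be sketched as a simple lookup. For plasma, unlike red cells, type AB is the universal donor and type O recipients can accept plasma of any type; the table below encodes that standard rule, independent of the paper's MCDM machinery.

```python
# Standard ABO plasma-compatibility rule: recipient type -> acceptable donors.
COMPATIBLE_PLASMA_DONORS = {
    "O":  {"O", "A", "B", "AB"},
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},
}

def compatible(recipient_type: str, donor_type: str) -> bool:
    """Return True if the donor's convalescent plasma suits the recipient."""
    return donor_type in COMPATIBLE_PLASMA_DONORS[recipient_type]

print(compatible("A", "AB"))   # True: AB plasma is universally compatible
print(compatible("AB", "O"))   # False: AB recipients need AB plasma
```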

  17. Alsaih K, Yusoff MZ, Tang TB, Faye I, Mériaudeau F
    Comput Methods Programs Biomed, 2020 Oct;195:105566.
    PMID: 32504911 DOI: 10.1016/j.cmpb.2020.105566
    BACKGROUND AND OBJECTIVES: In developed countries, older people are more likely to be diagnosed with retinal diseases. Leakage from retinal capillaries swells the retina and causes acute vision loss; one such condition is age-related macular degeneration (AMD). The disease cannot be adequately diagnosed solely from fundus images, as depth information is not available. Variations in retinal volume assist in monitoring ophthalmological abnormalities. Therefore, high-fidelity AMD segmentation in the optical coherence tomography (OCT) imaging modality has attracted the attention of researchers as well as medical doctors. Over the years, many methods, encompassing machine learning approaches and convolutional neural network (CNN) strategies, have been proposed for object detection and image segmentation.

    METHODS: In this paper, we analyze four widespread deep learning models that output dense predictions, designed for the segmentation of three retinal fluids in the RETOUCH challenge data. We aim to demonstrate how a patch-based approach can push the performance of each method. We also evaluate the methods on the OPTIMA challenge dataset to assess how well the networks generalize. The analysis is divided into two parts: the comparison between the four approaches and the significance of patching the images.

    RESULTS: The performance of networks trained on the RETOUCH dataset is higher than human performance. The analysis further generalized the performance of the best network by fine-tuning it, achieving a mean Dice similarity coefficient (DSC) of 0.85. Of the three fluid types, intraretinal fluid (IRF) is recognized best, with the highest DSC value of 0.922 achieved on the Spectralis dataset. Additionally, the highest average DSC score, 0.84, is achieved by the PaDeeplabv3+ model on the Cirrus dataset.

    CONCLUSIONS: The proposed method segments the three retinal fluids with high DSC values. Fine-tuning networks trained on the RETOUCH dataset makes them perform better and train faster than training from scratch. Enriching the networks with a variety of input shapes by extracting patches helped to segment the fluids better than using full images.
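
Two of the building blocks above are easy to make concrete: the Dice similarity coefficient (DSC) used to score fluid segmentations, and the patch extraction that the paper reports improves performance. The patch size and the synthetic masks are illustrative assumptions.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def extract_patches(img: np.ndarray, size: int = 64):
    """Non-overlapping square patches from a 2D B-scan."""
    h, w = img.shape
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

mask_a = np.zeros((128, 128), bool); mask_a[30:80, 30:80] = True
mask_b = np.zeros((128, 128), bool); mask_b[40:90, 40:90] = True
print(f"DSC = {dice(mask_a, mask_b):.3f}")
print(f"patches: {len(extract_patches(np.zeros((496, 512))))}")
```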

  18. Alsalem MA, Zaidan AA, Zaidan BB, Hashim M, Madhloom HT, Azeez ND, et al.
    Comput Methods Programs Biomed, 2018 May;158:93-112.
    PMID: 29544792 DOI: 10.1016/j.cmpb.2018.02.005
    CONTEXT: Acute leukaemia diagnosis is a field requiring automated solutions, tools and methods to facilitate early detection and even prediction. Many studies have focused on the automatic detection and classification of acute leukaemia and its subtypes to enable highly accurate diagnosis.

    OBJECTIVE: This study aimed to review and analyse the literature on the detection and classification of acute leukaemia. To improve understanding of the field's various contextual aspects, the published studies were characterised in terms of their motivations, the open challenges confronting researchers, and the recommendations presented to enhance this vital research area.

    METHODS: We systematically searched all articles about the classification and detection of acute leukaemia, as well as their evaluation and benchmarking, in three main databases: ScienceDirect, Web of Science and IEEE Xplore, from 2007 to 2017. These indices were considered sufficiently extensive to encompass the literature of our field.

    RESULTS: Based on our inclusion and exclusion criteria, 89 articles were selected. Most studies (58/89) focused on methods or algorithms for acute leukaemia classification, a number of papers (22/89) covered systems developed for the detection or diagnosis of acute leukaemia, and a few papers (5/89) presented evaluation and comparative studies. The smallest portion (4/89) comprised reviews and surveys.

    DISCUSSION: Acute leukaemia diagnosis, which is a field requiring automated solutions, tools and methods, entails the ability to facilitate early detection or even prediction. Many studies have been performed on the automatic detection and classification of acute leukaemia and their subtypes to promote accurate diagnosis.

    CONCLUSIONS: Research areas on medical-image classification vary, but they are all equally vital. We expect this systematic review to help emphasise current research opportunities and thus extend and create additional research fields.

  19. Ang CYS, Chiew YS, Vu LH, Cove ME
    Comput Methods Programs Biomed, 2022 Mar;215:106601.
    PMID: 34973606 DOI: 10.1016/j.cmpb.2021.106601
    BACKGROUND: Spontaneous breathing (SB) effort during mechanical ventilation (MV) is an important metric of respiratory drive. However, SB effort varies due to a variety of factors, including evolving pathology and sedation levels; its assessment therefore needs to be continuous and non-invasive. This is important to prevent both over- and under-assistance with MV. In this study, a machine learning model, the Convolutional Autoencoder (CAE), is developed to quantify the magnitude of SB effort using only bedside MV airway pressure and flow waveforms.

    METHOD: The CAE model was trained using 12,170,655 simulated SB and normal breathing (NB) flow samples. The paired SB and NB flow data were simulated using a Gaussian Effort Model (GEM) with 5 basis functions. Given an SB flow input, the CAE model predicts the corresponding NB flow, and the magnitude of SB effort (SBEMag) is then quantified as the difference between the SB and NB flows. The CAE model was used to evaluate the SBEMag of 9 pressure control/support datasets. Results were validated using mean squared error (MSE) fitting between clinical and training SB flows.

    RESULTS: The CAE model was able to produce NB flows from the clinical SB flows, with a median SBEMag across the 9 datasets of 25.39% [IQR: 21.87-25.57%]. The absolute error in SBEMag under MSE validation had a median of 4.77% [IQR: 3.77-8.56%] across the cohort, showing the ability of the GEM to capture the intrinsic details present in SB flow waveforms. The analysis also shows both intra-patient and inter-patient variability in SBEMag.

    CONCLUSION: A Convolutional Autoencoder model was developed with simulated SB and NB flow data and is capable of quantifying the magnitude of patient spontaneous breathing effort. This offers potential for real-time monitoring of patient respiratory drive for better management of patient-ventilator interaction.
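
A minimal convolutional-autoencoder sketch in Keras (an assumed framework): it maps a spontaneous breathing (SB) flow waveform to a predicted no-effort (NB) flow, and SBEMag is then taken from the SB-NB difference. The window length, layer sizes, training data, and percentage normalisation are all illustrative assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n = 128  # samples per breath (assumed)

cae = keras.Sequential([
    layers.Input(shape=(n, 1)),
    layers.Conv1D(16, 5, padding="same", activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(8, 5, padding="same", activation="relu"),
    layers.UpSampling1D(2),
    layers.Conv1D(1, 5, padding="same"),  # reconstructed NB flow
])
cae.compile(optimizer="adam", loss="mse")

# Train on paired (SB, NB) flows; synthetic stand-ins for the GEM simulations.
sb = np.random.randn(256, n, 1).astype("float32")
nb = 0.8 * sb                             # toy "effort-free" target
cae.fit(sb, nb, epochs=2, batch_size=32, verbose=0)

nb_pred = cae.predict(sb[:1], verbose=0)
sbe_mag = 100 * np.abs(sb[:1] - nb_pred).sum() / np.abs(sb[:1]).sum()
print(f"SBEMag ~ {sbe_mag:.1f}%")
```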
