Displaying publications 1 - 20 of 267 in total

  1. Harun MA, Safari MJS, Gul E, Ab Ghani A
    Environ Sci Pollut Res Int, 2021 Oct;28(38):53097-53115.
    PMID: 34023993 DOI: 10.1007/s11356-021-14479-0
    The investigation of sediment transport in tropical rivers is essential for planning effective integrated river basin management to predict the changes in rivers. The characteristics of rivers and sediment in the tropical region are different compared to those of the rivers in Europe and the USA, where the median sediment size tends to be much finer. The origins of the rivers are mainly tropical forests. Due to the complexity of determining sediment transport, many sediment transport equations have been recommended in the literature. However, the accuracy of the prediction results remains low, particularly for tropical rivers. The majority of the existing equations were developed using multiple non-linear regression (MNLR). Machine learning has recently become the method of choice to increase model prediction accuracy in complex hydrological problems. Compared to the conventional MNLR method, machine learning algorithms are more advanced and can produce a useful prediction model. In this research, three machine learning models, namely evolutionary polynomial regression (EPR), multi-gene genetic programming (MGGP) and the M5 tree model (M5P), were implemented to model sediment transport for rivers in Malaysia. The formulated variables for the prediction model originated from the revised equations reported in the relevant literature for Malaysian rivers. Among the three machine learning models, in terms of different statistical measurement criteria, EPR gives the best prediction model, followed by MGGP and M5P. Machine learning is excellent at improving the prediction of high observed values but is less accurate for lower observed values. These results indicate that further study is needed to improve the machine learning models' accuracy in predicting sediment transport.
    Matched MeSH terms: Machine Learning
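    The EPR, MGGP and M5P implementations used above are specialised tools, but the overall workflow (fit competing regression models to hydraulic and sediment variables, then compare them with statistical criteria) can be illustrated with scikit-learn. The sketch below is an assumption-laden stand-in: synthetic data, a log-linear regression as the MNLR baseline, and a plain regression tree in place of M5P.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeRegressor          # rough stand-in for an M5-style tree
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import r2_score, mean_squared_error

      # Hypothetical predictors (e.g. flow velocity, depth, slope, median grain size) and sediment load
      rng = np.random.default_rng(0)
      X = rng.lognormal(size=(500, 4))
      y = 0.5 * X[:, 0] ** 1.2 * X[:, 2] ** 0.3 * np.exp(rng.normal(scale=0.1, size=500))

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # MNLR-style baseline: linear regression on log-transformed variables (a power-law model)
      mnlr = LinearRegression().fit(np.log(X_tr), np.log(y_tr))
      pred_mnlr = np.exp(mnlr.predict(np.log(X_te)))

      # Tree-based model as a simple machine learning alternative
      tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
      pred_tree = tree.predict(X_te)

      for name, pred in [("MNLR", pred_mnlr), ("regression tree", pred_tree)]:
          rmse = np.sqrt(mean_squared_error(y_te, pred))
          print(name, "R2 =", round(r2_score(y_te, pred), 3), "RMSE =", round(rmse, 3))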
  2. Hatmal MM, Alshaer W, Mahmoud IS, Al-Hatamleh MAI, Al-Ameer HJ, Abuyaman O, et al.
    PLoS One, 2021;16(10):e0257857.
    PMID: 34648514 DOI: 10.1371/journal.pone.0257857
    CD36 (cluster of differentiation 36) is a membrane protein involved in lipid metabolism and has been linked to pathological conditions associated with metabolic disorders, such as diabetes and dyslipidemia. A case-control study was conducted that included 177 patients with type-2 diabetes mellitus (T2DM) and 173 control subjects to study the involvement of the CD36 gene rs1761667 (G>A) and rs1527483 (C>T) polymorphisms in the pathogenesis of T2DM and dyslipidemia in the Jordanian population. Lipid profile, blood sugar, gender and age were measured and recorded, and genotyping analysis for both polymorphisms was performed. Following statistical analysis, 10 different neural network and machine learning (ML) tools were used to predict subjects with diabetes or dyslipidemia. Towards further understanding of the role of the CD36 protein and gene in T2DM and dyslipidemia, a protein-protein interaction network and a meta-analysis were carried out. For both polymorphisms, the genotypic frequencies were not significantly different between the two groups (p > 0.05). On the other hand, some ML tools, such as the multilayer perceptron, gave high prediction accuracy (≥ 0.75) and Cohen's kappa (κ) (≥ 0.5). Interestingly, for the K-star tool, the accuracy and Cohen's κ values were enhanced by including the genotyping results as inputs (0.73 and 0.46, respectively, compared to 0.67 and 0.34 without including them). This study confirmed, for the first time, that there is no association between CD36 polymorphisms and T2DM or dyslipidemia in the Jordanian population. Prediction of T2DM and dyslipidemia using such extensive ML tools and input data is a promising approach for developing diagnostic and prognostic prediction models for a wide spectrum of diseases, especially those based on large medical databases.
    Matched MeSH terms: Machine Learning
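    As a rough illustration of the comparison described above (the same classifier trained with and without the SNP genotypes as inputs, scored by accuracy and Cohen's kappa), here is a minimal scikit-learn sketch. All data, feature names and model settings are placeholders, not values from the study.
      import numpy as np
      from sklearn.model_selection import cross_val_predict
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.metrics import accuracy_score, cohen_kappa_score

      rng = np.random.default_rng(1)
      n = 350
      clinical = rng.normal(size=(n, 5))            # hypothetical lipid profile, blood sugar, age, ...
      genotypes = rng.integers(0, 3, size=(n, 2))   # hypothetical rs1761667 / rs1527483 minor-allele counts
      y = (clinical[:, 1] + 0.5 * clinical[:, 2] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

      def evaluate(features, label):
          mlp = make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
          pred = cross_val_predict(mlp, features, y, cv=5)
          print(label, "accuracy =", round(accuracy_score(y, pred), 2),
                "kappa =", round(cohen_kappa_score(y, pred), 2))

      evaluate(clinical, "clinical only:")
      evaluate(np.hstack([clinical, genotypes]), "clinical + genotypes:")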
  3. Muazu Musa R, P P Abdul Majeed A, Taha Z, Chang SW, Ab Nasir AF, Abdullah MR
    PLoS One, 2019;14(1):e0209638.
    PMID: 30605456 DOI: 10.1371/journal.pone.0209638
    k-nearest neighbour (k-NN) has been shown to be an effective learning algorithm for classification and prediction. However, the application of k-NN for prediction and classification in specific sports is still in its infancy. The present study classified and predicted high and low potential archers from a set of physical fitness variables trained on a variation of k-NN algorithms and logistic regression. 50 youth archers with a mean age and standard deviation of 17.0 ± 0.56 years, drawn from various archery programmes, completed a one-end archery shooting score test. Standard fitness measurements of handgrip, vertical jump, standing broad jump, static balance, upper muscle strength and core muscle strength were conducted. Multiple linear regression was utilised to ascertain the significant variables that affect the shooting score. The analysis demonstrated that core muscle strength and vertical jump were statistically significant. Hierarchical agglomerative cluster analysis (HACA) was used to cluster the archers based on the significant variables identified. k-NN model variations, i.e., fine, medium, coarse, cosine, cubic and weighted functions, as well as logistic regression, were trained based on the significant performance variables. The HACA clustered the archers into high potential archers (HPA) and low potential archers (LPA). The weighted k-NN outperformed all the tested models as it demonstrated reasonably good classification on the evaluated indicators, with an accuracy of 82.5 ± 4.75% for the prediction of the HPA and the LPA. Moreover, the performance of the classifiers was further investigated against fresh data, which also indicates the efficacy of the weighted k-NN model. These findings could be valuable to coaches and sports managers in recognising high potential archers from a combination of the selected few physical fitness performance indicators identified, which would subsequently save cost, time and energy in a talent identification programme.
    Matched MeSH terms: Machine Learning
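    The two-stage workflow above (unsupervised clustering to define the HPA/LPA groups, then a distance-weighted k-NN trained on those labels) maps directly onto scikit-learn. The sketch below assumes synthetic data for the two significant variables; it is not the study's dataset.
      import numpy as np
      from sklearn.cluster import AgglomerativeClustering
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      # Placeholder measurements of the two significant variables: core muscle strength and vertical jump
      X = np.vstack([rng.normal([60, 45], 5, size=(25, 2)),
                     rng.normal([45, 32], 5, size=(25, 2))])
      X_std = StandardScaler().fit_transform(X)

      # Step 1: hierarchical agglomerative cluster analysis into two groups (HPA vs LPA)
      groups = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X_std)

      # Step 2: distance-weighted k-NN classifier trained on the cluster labels
      knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
      scores = cross_val_score(knn, X_std, groups, cv=5)
      print("weighted k-NN accuracy: %.1f%% ± %.1f%%" % (100 * scores.mean(), 100 * scores.std()))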
  4. Kee OT, Harun H, Mustafa N, Abdul Murad NA, Chin SF, Jaafar R, et al.
    Cardiovasc Diabetol, 2023 Jan 19;22(1):13.
    PMID: 36658644 DOI: 10.1186/s12933-023-01741-7
    Prediction models have been the focus of studies since the last century in the diagnosis and prognosis of various diseases. With the advancement of computational technology, machine learning (ML) has become a widely used tool for developing prediction models. This review investigates the current development of prediction models for the risk of cardiovascular disease (CVD) among type 2 diabetes (T2DM) patients using machine learning. A systematic search on Scopus and Web of Science (WoS) was conducted to look for relevant articles based on the research question. The risk of bias (ROB) for all articles was assessed based on the Prediction model Risk of Bias Assessment Tool (PROBAST) statement. A neural network with 76.6% precision, 88.06% sensitivity, and an area under the curve (AUC) of 0.91 was found to be the most reliable algorithm for developing a prediction model for cardiovascular disease among type 2 diabetes patients. The overall concern of applicability of all included studies is low. While two out of 10 studies were shown to have high ROB, the ROB of the other studies is unknown due to a lack of information. Adherence to reporting standards was assessed using the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) standard, with an overall score of 53.75%. It is highly recommended that future model development adhere to the PROBAST and TRIPOD assessments to reduce the risk of bias and ensure applicability in clinical settings. A potential lipid peroxidation marker is also recommended for future cardiovascular disease prediction models to improve overall model applicability.
    Matched MeSH terms: Machine Learning
  5. Ngugi HN, Ezugwu AE, Akinyelu AA, Abualigah L
    Environ Monit Assess, 2024 Feb 24;196(3):302.
    PMID: 38401024 DOI: 10.1007/s10661-024-12454-z
    Digital image processing has witnessed a significant transformation, owing to the adoption of deep learning (DL) algorithms, which have proven to be vastly superior to conventional methods for crop detection. These DL algorithms have recently found successful applications across various domains, translating input data, such as images of afflicted plants, into valuable insights, like the identification of specific crop diseases. This innovation has spurred the development of cutting-edge techniques for early detection and diagnosis of crop diseases, leveraging tools such as convolutional neural networks (CNN), K-nearest neighbour (KNN), support vector machines (SVM), and artificial neural networks (ANN). This paper offers an all-encompassing exploration of the contemporary literature on methods for diagnosing, categorizing, and gauging the severity of crop diseases. The review examines the performance analysis of the latest machine learning (ML) and DL techniques outlined in these studies. It also scrutinizes the methodologies and datasets and outlines the prevalent recommendations and identified gaps within different research investigations. As a conclusion, the review offers insights into potential solutions and outlines the direction for future research in this field. The review underscores that while most studies have concentrated on traditional ML algorithms and CNN, there has been a noticeable dearth of focus on emerging DL algorithms like capsule neural networks and vision transformers. Furthermore, it sheds light on the fact that several datasets employed for training and evaluating DL models have been tailored to suit specific crop types, emphasizing the pressing need for a comprehensive and expansive image dataset encompassing a wider array of crop varieties. Moreover, the survey draws attention to the prevailing trend where the majority of research endeavours have concentrated on individual plant diseases, ML, or DL algorithms. In light of this, it advocates for the development of a unified framework that harnesses an ensemble of ML and DL algorithms to address the complexities of multiple plant diseases effectively.
    Matched MeSH terms: Machine Learning
  6. Kumar, Yogan Jaya, Naomie Salim, Ahmed Hamza Osman, Abuobieda, Albaraa
    MyJurnal
    Cross-document Structure Theory (CST) has recently been proposed to facilitate tasks related to multi-document analysis. Classifying and identifying the CST relationships between sentences across topically related documents has since proven necessary. However, there have not been sufficient studies in the literature on automatically identifying these CST relationships. In this study, a supervised machine learning technique, i.e. Support Vector Machines (SVMs), was applied to identify four types of CST relationships, namely “Identity”, “Overlap”, “Subsumption”, and “Description”, on datasets obtained from the CSTBank corpus. The performance of the SVM classification was measured using Precision, Recall and F-measure. In addition, the results obtained using SVMs were compared with those from the previous literature using a boosting classification algorithm. It was found that SVMs yielded better results in classifying the four CST relationships.
    Matched MeSH terms: Supervised Machine Learning
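    A minimal sketch of the classification setup described above — sentence pairs labelled with CST relations, text features, an SVM, and precision/recall/F-measure — is shown below. The toy sentence pairs, the TF-IDF features and the simple pair encoding are assumptions for illustration; a real experiment would load labelled pairs from the CSTBank corpus.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import classification_report

      pairs = [
          ("The quake struck at dawn.", "The quake struck at dawn.", "Identity"),
          ("The quake killed 20 people.", "The earthquake killed 20 and injured 50.", "Subsumption"),
          ("Rescue teams arrived Monday.", "Teams arrived Monday with supplies.", "Overlap"),
          ("The senator spoke to reporters.", "The senator, a former governor, spoke.", "Description"),
      ] * 25  # repeated only so the toy data can be split

      texts = [a + " [SEP] " + b for a, b, _ in pairs]
      labels = [lbl for _, _, lbl in pairs]
      X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.3,
                                                stratify=labels, random_state=0)

      svm = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
      svm.fit(X_tr, y_tr)
      print(classification_report(y_te, svm.predict(X_te)))  # precision, recall, F-measure per relation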
  7. Barua PD, Muhammad Gowdh NF, Rahmat K, Ramli N, Ng WL, Chan WY, et al.
    PMID: 34360343 DOI: 10.3390/ijerph18158052
    COVID-19 and pneumonia detection using medical images is a topic of immense interest in medical and healthcare research. Various advanced medical imaging and machine learning techniques have been presented to detect these respiratory disorders accurately. In this work, we have proposed a novel COVID-19 detection system using an exemplar and hybrid fused deep feature generator with X-ray images. The proposed Exemplar COVID-19FclNet9 comprises three basic steps: exemplar deep feature generation, iterative feature selection and classification. The novelty of this work is the feature extraction using three pre-trained convolutional neural networks (CNNs) in the presented feature extraction phase. The common aspects of these pre-trained CNNs are that they have three fully connected layers, and these networks are AlexNet, VGG16 and VGG19. The fully connected layer of these networks is used to generate deep features using an exemplar structure, and a nine-feature generation method is obtained. The loss values of these feature extractors are computed, and the best three extractors are selected. The features of the top three fully connected features are merged. An iterative selector is used to select the most informative features. The chosen features are classified using a support vector machine (SVM) classifier. The proposed COVID-19FclNet9 applied nine deep feature extraction methods by using three deep networks together. The most appropriate deep feature generation model selection and iterative feature selection have been employed to utilise their advantages together. By using these techniques, the image classification ability of the used three deep networks has been improved. The presented model is developed using four X-ray image corpora (DB1, DB2, DB3 and DB4) with two, three and four classes. The proposed Exemplar COVID-19FclNet9 achieved a classification accuracy of 97.60%, 89.96%, 98.84% and 99.64% using the SVM classifier with 10-fold cross-validation for four datasets, respectively. Our developed Exemplar COVID-19FclNet9 model has achieved high classification accuracy for all four databases and may be deployed for clinical application.
    Matched MeSH terms: Machine Learning
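    The core pattern above — deep features taken from the fully connected layers of pre-trained CNNs, followed by feature selection and an SVM — can be sketched with PyTorch/torchvision and scikit-learn. The snippet below uses VGG16 only, omits the exemplar patching, nine-feature generation and iterative selection steps, and treats the image list and labels as hypothetical inputs.
      import torch
      import torch.nn as nn
      from torchvision import models, transforms
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      # Pre-trained VGG16, truncated after its first fully connected layer -> 4096-dim deep features
      vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
      vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:2])
      vgg.eval()

      preprocess = transforms.Compose([
          transforms.Resize((224, 224)),
          transforms.ToTensor(),
          transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
      ])

      def deep_features(pil_images):
          """Return fc-layer features for a list of PIL chest X-ray images."""
          batch = torch.stack([preprocess(img.convert("RGB")) for img in pil_images])
          with torch.no_grad():
              return vgg(batch).numpy()

      # features = deep_features(xray_images)                        # hypothetical image list
      # print(cross_val_score(SVC(kernel="linear"), features, labels, cv=10).mean())  # 10-fold CV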
  8. Faust O, Hagiwara Y, Hong TJ, Lih OS, Acharya UR
    Comput Methods Programs Biomed, 2018 Jul;161:1-13.
    PMID: 29852952 DOI: 10.1016/j.cmpb.2018.04.005
    BACKGROUND AND OBJECTIVE: We have cast the net into the ocean of knowledge to retrieve the latest scientific research on deep learning methods for physiological signals. We found 53 research papers on this topic, published from 01.01.2008 to 31.12.2017.

    METHODS: An initial bibliometric analysis shows that the reviewed papers focused on Electromyogram(EMG), Electroencephalogram(EEG), Electrocardiogram(ECG), and Electrooculogram(EOG). These four categories were used to structure the subsequent content review.

    RESULTS: During the content review, we understood that deep learning performs better for big and varied datasets than classic analysis and machine classification methods. Deep learning algorithms try to develop the model by using all the available input.

    CONCLUSIONS: This review paper depicts the application of various deep learning algorithms used to date and anticipates that, in the future, deep learning will be applied to more healthcare areas to improve the quality of diagnosis.

    Matched MeSH terms: Machine Learning*
  9. Faust O, Shenfield A, Kareem M, San TR, Fujita H, Acharya UR
    Comput Biol Med, 2018 11 01;102:327-335.
    PMID: 30031535 DOI: 10.1016/j.compbiomed.2018.07.001
    Atrial Fibrillation (AF), either permanent or intermittent (paroxysmal AF), increases the risk of cardioembolic stroke. Accurate diagnosis of AF is obligatory for initiation of effective treatment to prevent stroke. Long-term cardiac monitoring improves the likelihood of diagnosing paroxysmal AF. We used a deep learning system to detect AF beats in Heart Rate (HR) signals. The data was partitioned with a sliding window of 100 beats. The resulting signal blocks were directly fed into a deep Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The system was validated and tested with data from the MIT-BIH Atrial Fibrillation Database. It achieved 98.51% accuracy with 10-fold cross-validation (20 subjects) and 99.77% with blindfold validation (3 subjects). The proposed system structure is straightforward, because there is no need for information reduction through feature extraction. All the complexity resides in the deep learning system, which gets the entire information from a signal block. This setup leads to robust performance for unknown data, as measured with the blindfold validation. The proposed Computer-Aided Diagnosis (CAD) system can be used for long-term monitoring of the human heart. To the best of our knowledge, the proposed system is the first to incorporate deep learning for AF beat detection.
    Matched MeSH terms: Machine Learning
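    The pipeline above (100-beat windows of the heart-rate signal fed directly to an LSTM, with no feature extraction) can be sketched in PyTorch as follows. The windowing rule, layer sizes and synthetic data are assumptions for illustration, not the authors' exact configuration.
      import numpy as np
      import torch
      import torch.nn as nn

      def beat_blocks(rr, beat_labels, width=100):
          """Partition a beat-interval series into fixed-width blocks of 100 beats."""
          n = len(rr) // width
          X = np.asarray(rr[: n * width], dtype=np.float32).reshape(n, width, 1)
          y = np.asarray(beat_labels[: n * width]).reshape(n, width).max(axis=1)  # block is AF if any beat is AF
          return torch.tensor(X), torch.tensor(y, dtype=torch.float32)

      class AFDetector(nn.Module):
          def __init__(self, hidden=64):
              super().__init__()
              self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, x):                      # x: (batch, 100, 1)
              _, (h, _) = self.lstm(x)
              return self.head(h[-1]).squeeze(-1)    # logit: AF vs non-AF

      X, y = beat_blocks(np.random.rand(10_000), np.random.randint(0, 2, 10_000))   # synthetic stand-in
      model = AFDetector()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.BCEWithLogitsLoss()
      for epoch in range(3):
          opt.zero_grad()
          loss = loss_fn(model(X), y)
          loss.backward()
          opt.step()
          print("epoch", epoch, "loss", float(loss))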
  10. Yıldırım Ö, Pławiak P, Tan RS, Acharya UR
    Comput Biol Med, 2018 11 01;102:411-420.
    PMID: 30245122 DOI: 10.1016/j.compbiomed.2018.09.009
    This article presents a new deep learning approach for cardiac arrhythmia (17 classes) detection based on long-duration electrocardiography (ECG) signal analysis. Cardiovascular disease prevention is one of the most important tasks of any health care system, as about 50 million people are at risk of heart disease in the world. Although automatic analysis of ECG signals is very popular, current methods are not satisfactory. The goal of our research was to design a new method based on deep learning to efficiently and quickly classify cardiac arrhythmias. The described research is based on 1000 ECG signal fragments from the MIT-BIH Arrhythmia database for one lead (MLII) from 45 persons. An approach based on the analysis of 10-s ECG signal fragments (not a single QRS complex) is applied (on average, 13 times fewer classifications/analyses). A complete end-to-end structure was designed instead of the hand-crafted feature extraction and selection used in traditional methods. Our main contribution is the design of a new 1D-Convolutional Neural Network model (1D-CNN). The proposed method is 1) efficient, 2) fast (real-time classification), 3) non-complex and 4) simple to use (combined feature extraction, selection, and classification in one stage). The deep 1D-CNN achieved an overall recognition accuracy for 17 cardiac arrhythmia disorders (classes) of 91.33% and a classification time per single sample of 0.015 s. Compared to the current research, our results are among the best results to date, and our solution can be implemented in mobile devices and cloud computing.
    Matched MeSH terms: Machine Learning
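    A minimal 1D-CNN of the kind described above (a raw 10-s single-lead fragment in, 17 class logits out, with no hand-crafted features) might look like the PyTorch sketch below. The layer sizes are illustrative and do not reproduce the authors' architecture; 3600 samples corresponds to 10 s at the MIT-BIH 360 Hz sampling rate.
      import torch
      import torch.nn as nn

      class ECG1DCNN(nn.Module):
          def __init__(self, n_classes=17):
              super().__init__()
              self.conv = nn.Sequential(
                  nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(), nn.MaxPool1d(2),
                  nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(), nn.MaxPool1d(2),
                  nn.Conv1d(32, 64, kernel_size=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
              )
              self.fc = nn.Linear(64, n_classes)

          def forward(self, x):                       # x: (batch, 1, 3600)
              return self.fc(self.conv(x).squeeze(-1))

      model = ECG1DCNN()
      fragments = torch.randn(8, 1, 3600)             # 8 synthetic 10-s MLII fragments
      print(model(fragments).shape)                   # torch.Size([8, 17])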
  11. Alizadehsani R, Abdar M, Roshanzamir M, Khosravi A, Kebria PM, Khozeimeh F, et al.
    Comput Biol Med, 2019 08;111:103346.
    PMID: 31288140 DOI: 10.1016/j.compbiomed.2019.103346
    Coronary artery disease (CAD) is the most common cardiovascular disease (CVD) and often leads to a heart attack. It annually causes millions of deaths and billions of dollars in financial losses worldwide. Angiography, which is invasive and risky, is the standard procedure for diagnosing CAD. Alternatively, machine learning (ML) techniques have been widely used in the literature as fast, affordable, and noninvasive approaches for CAD detection. The results that have been published on ML-based CAD diagnosis differ substantially in terms of the analyzed datasets, sample sizes, features, location of data collection, performance metrics, and applied ML techniques. Due to these fundamental differences, achievements in the literature cannot be generalized. This paper conducts a comprehensive and multifaceted review of all relevant studies that were published between 1992 and 2019 for ML-based CAD diagnosis. The impacts of various factors, such as dataset characteristics (geographical location, sample size, features, and the stenosis of each coronary artery) and applied ML techniques (feature selection, performance metrics, and method) are investigated in detail. Finally, the important challenges and shortcomings of ML-based CAD diagnosis are discussed.
    Matched MeSH terms: Machine Learning*
  12. Khare SK, Acharya UR
    Comput Biol Med, 2023 Mar;155:106676.
    PMID: 36827785 DOI: 10.1016/j.compbiomed.2023.106676
    BACKGROUND: Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that affects a person's sleep, mood, anxiety, and learning. Early diagnosis and timely medication can help individuals with ADHD perform daily tasks without difficulty. Electroencephalogram (EEG) signals can help neurologists detect ADHD by examining the changes occurring in them. EEG signals are complex, non-linear, and non-stationary. It is difficult to find the subtle differences between ADHD and healthy control EEG signals visually. Also, making decisions from existing machine learning (ML) models does not guarantee similar performance (they may be unreliable).

    METHOD: The paper explores a combination of variational mode decomposition (VMD), and Hilbert transform (HT) called VMD-HT to extract hidden information from EEG signals. Forty-one statistical parameters extracted from the absolute value of analytical mode functions (AMF) have been classified using the explainable boosted machine (EBM) model. The interpretability of the model is tested using statistical analysis and performance measurement. The importance of the features, channels and brain regions has been identified using the glass-box and black-box approach. The model's local and global explainability has been visualized using Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Partial Dependence Plot (PDP), and Morris sensitivity. To the best of our knowledge, this is the first work that explores the explainability of the model prediction in ADHD detection, particularly for children.

    RESULTS: Our results show that the explainable model has provided an accuracy of 99.81%, a sensitivity of 99.78%, a specificity of 99.84%, an F-1 measure of 99.83%, a precision of 99.87%, a false detection rate of 0.13%, and a Matthews correlation coefficient, negative predictive value, and critical success index of 99.61%, 99.73%, and 99.66%, respectively, in detecting ADHD automatically with ten-fold cross-validation. The model has provided an area under the curve of 100%, while detection rates of 99.87% and 99.73% have been obtained for ADHD and HC, respectively.

    CONCLUSIONS: The model shows that the interpretability and explainability of the frontal region is highest compared to the pre-frontal, central, parietal, occipital, and temporal regions. Our findings have provided important insight into the developed model, which is highly reliable, robust, interpretable, and explainable for clinicians to use to detect ADHD in children. Early and rapid ADHD diagnosis using robust explainable technologies may reduce the cost of treatment and lessen the number of patients undergoing lengthy diagnosis procedures.

    Matched MeSH terms: Machine Learning
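    A rough sketch of the main ingredients named above is given below: the Hilbert transform from SciPy for the analytic envelope, statistical features of that envelope, and an explainable boosted machine (EBM) from the interpret package, whose global explanation exposes feature importances (LIME, SHAP and PDP can be layered on top). The VMD step is only indicated in a comment because it requires a third-party implementation (e.g. the vmdpy package), and the data and feature set are placeholders.
      import numpy as np
      from scipy.signal import hilbert
      from interpret.glassbox import ExplainableBoostingClassifier

      def envelope_features(signal):
          """Statistical features of the absolute analytic signal of one mode.
          A fuller pipeline would first decompose each EEG channel with VMD and
          compute such features for every analytical mode function (AMF)."""
          env = np.abs(hilbert(signal))
          return [env.mean(), env.std(), env.min(), env.max(), np.median(env)]

      rng = np.random.default_rng(3)
      X = np.array([envelope_features(rng.normal(size=512)) for _ in range(200)])  # placeholder EEG segments
      y = rng.integers(0, 2, size=200)                                             # placeholder ADHD / control labels

      ebm = ExplainableBoostingClassifier()      # glass-box model with inspectable per-feature shape functions
      ebm.fit(X, y)
      global_explanation = ebm.explain_global()  # ranked feature contributions
      print("training accuracy:", ebm.score(X, y))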
  13. Kaplan E, Chan WY, Altinsoy HB, Baygin M, Barua PD, Chakraborty S, et al.
    J Digit Imaging, 2023 Dec;36(6):2441-2460.
    PMID: 37537514 DOI: 10.1007/s10278-023-00889-8
    Detecting neurological abnormalities such as brain tumors and Alzheimer's disease (AD) using magnetic resonance imaging (MRI) images is an important research topic in the literature. Numerous machine learning models have been used to detect brain abnormalities accurately. This study addresses the problem of detecting neurological abnormalities in MRI. The motivation behind this problem lies in the need for accurate and efficient methods to assist neurologists in the diagnosis of these disorders. In addition, many deep learning techniques have been applied to MRI to develop accurate brain abnormality detection models, but these networks have high time complexity. Hence, a novel hand-modeled feature-based learning network is presented to reduce the time complexity and obtain high classification performance. The model proposed in this work uses a new feature generation architecture named pyramid and fixed-size patch (PFP). The main aim of the proposed PFP structure is to attain high classification performance using essential feature extractors with both multilevel and local features. Furthermore, the PFP feature extractor generates low- and high-level features using a handcrafted extractor. To obtain the high discriminative feature extraction ability of the PFP, we have used histograms of oriented gradients (HOG); hence, it is named PFP-HOG. Furthermore, the iterative Chi2 (IChi2) is utilized to choose the clinically significant features. Finally, the k-nearest neighbors (kNN) classifier with tenfold cross-validation is used for automated classification. Four MRI neurological databases (AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged dataset) have been utilized to develop our model. The PFP-HOG and IChi2-based models attained accuracies of 100%, 94.98%, 98.19%, and 97.80% using the AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged brain MRI dataset, respectively. These findings not only provide an accurate and robust classification of various neurological disorders using MRI but also hold the potential to assist neurologists in validating manual MRI brain abnormality screening.
    Matched MeSH terms: Machine Learning
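    A much-simplified version of the handcrafted pipeline above — HOG descriptors, chi-squared feature selection, and a kNN classifier with tenfold cross-validation — is sketched below with scikit-image and scikit-learn. The pyramid/fixed-size-patch structure and the iterative Chi2 scheme are not reproduced, and the images and labels are placeholders.
      import numpy as np
      from skimage.feature import hog
      from sklearn.feature_selection import SelectKBest, chi2
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      images = rng.random((100, 64, 64))      # placeholder grey-scale MRI slices
      y = rng.integers(0, 2, size=100)        # placeholder normal / abnormal labels

      # HOG descriptors at a single scale (the paper computes them over a pyramid of fixed-size patches)
      X = np.array([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for img in images])

      # Chi-squared selection (HOG features are non-negative), then kNN with tenfold cross-validation
      model = make_pipeline(SelectKBest(chi2, k=200), KNeighborsClassifier(n_neighbors=3))
      print("10-fold accuracy:", cross_val_score(model, X, y, cv=10).mean())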
  14. Acharya UR, Hagiwara Y, Adeli H
    Epilepsy Behav, 2018 11;88:251-261.
    PMID: 30317059 DOI: 10.1016/j.yebeh.2018.09.030
    In the past two decades, significant advances have been made on automated electroencephalogram (EEG)-based diagnosis of epilepsy and seizure detection. A number of innovative algorithms have been introduced that can aid in epilepsy diagnosis with a high degree of accuracy. In recent years, the frontiers of computational epilepsy research have moved to seizure prediction, a more challenging problem. While antiepileptic medication can result in complete seizure freedom in many patients with epilepsy, up to one-third of patients living with epilepsy will have medically intractable epilepsy, where medications reduce seizure frequency but do not completely control seizures. If a seizure can be predicted prior to its clinical manifestation, then there is potential for abortive treatment to be given, either self-administered or via an implanted device administering medication or electrical stimulation. This will have a far-reaching impact on the treatment of epilepsy and patient's quality of life. This paper presents a state-of-the-art review of recent efforts and journal articles on seizure prediction. The technologies developed for epilepsy diagnosis and seizure detection are being adapted and extended for seizure prediction. The paper ends with some novel ideas for seizure prediction using the increasingly ubiquitous machine learning technology, particularly deep neural network machine learning.
    Matched MeSH terms: Machine Learning/trends*
  15. Bhat S, Acharya UR, Hagiwara Y, Dadmehr N, Adeli H
    Comput Biol Med, 2018 11 01;102:234-241.
    PMID: 30253869 DOI: 10.1016/j.compbiomed.2018.09.008
    Parkinson's disease (PD) is a neurodegenerative disease of the central nervous system caused by the loss of dopaminergic neurons. It is classified as a movement disorder, as patients with PD present with tremor, rigidity, postural changes, and a decrease in spontaneous movements. Comorbidities including anxiety, depression, fatigue, and sleep disorders are observed prior to the diagnosis of PD. Gene mutations, exposure to toxic substances, and aging are considered causative factors of PD, even though its genesis is unknown. This paper reviews PD etiologies, progression, and in particular measurable indicators of PD such as neuroimaging and electrophysiology modalities. In addition to gene therapy, neuroprotective, pharmacological, and neural transplantation treatments, researchers are actively aiming at identifying biological markers of PD with the goal of early diagnosis. Neuroimaging modalities used together with advanced machine learning techniques offer a promising path for early detection and intervention in PD patients.
    Matched MeSH terms: Machine Learning
  16. Acharya UR, Oh SL, Hagiwara Y, Tan JH, Adeli H
    Comput Biol Med, 2018 09 01;100:270-278.
    PMID: 28974302 DOI: 10.1016/j.compbiomed.2017.09.017
    An electroencephalogram (EEG) is a commonly used ancillary test to aid in the diagnosis of epilepsy. The EEG signal contains information about the electrical activity of the brain. Traditionally, neurologists employ direct visual inspection to identify epileptiform abnormalities. This technique can be time-consuming, is limited by technical artifact, provides variable results secondary to reader expertise level, and is limited in identifying abnormalities. Therefore, it is essential to develop a computer-aided diagnosis (CAD) system to automatically distinguish the class of these EEG signals using machine learning techniques. This is the first study to employ a convolutional neural network (CNN) for analysis of EEG signals. In this work, a 13-layer deep convolutional neural network (CNN) algorithm is implemented to detect normal, preictal, and seizure classes. The proposed technique achieved an accuracy, specificity, and sensitivity of 88.67%, 90.00% and 95.00%, respectively.
    Matched MeSH terms: Machine Learning
  17. He Q, Shahabi H, Shirzadi A, Li S, Chen W, Wang N, et al.
    Sci Total Environ, 2019 May 01;663:1-15.
    PMID: 30708212 DOI: 10.1016/j.scitotenv.2019.01.329
    Landslides are major hazards for human activities, often causing great damage to human lives and infrastructure. Therefore, the main aim of the present study is to evaluate and compare three machine learning algorithms (MLAs), including Naïve Bayes (NB), the radial basis function (RBF) Classifier, and the RBF Network, for landslide susceptibility mapping (LSM) in the Longhai area of China. A total of 14 landslide conditioning factors were obtained from various data sources; the frequency ratio (FR) and support vector machine (SVM) methods were then used for the correlation analysis and the selection of the most important factors for the modelling process, respectively. Subsequently, the resulting three models were validated and compared using statistical metrics including the area under the receiver operating characteristics (AUROC) curve and the Friedman and Wilcoxon signed-rank tests. The results indicated that the RBF Classifier model had the highest goodness-of-fit and performance based on the training and validation datasets, outperforming (AUROC = 0.881) the NB (AUROC = 0.872) and RBF Network (AUROC = 0.854) models. These results indicate that the RBF Classifier model is a promising method for the spatial prediction of landslides worldwide.
    Matched MeSH terms: Machine Learning
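    The model comparison described above can be approximated with scikit-learn, using Gaussian Naïve Bayes and an RBF-kernel SVM as a stand-in for the Weka-style RBF Classifier/Network, all scored by AUROC. The 14 conditioning factors and the landslide inventory are replaced by synthetic placeholders here.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(5)
      X = rng.normal(size=(600, 14))                                        # placeholder conditioning factors
      y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=600) > 0).astype(int)  # landslide / non-landslide cells

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      models = {
          "Naive Bayes": GaussianNB(),
          "RBF-kernel SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
      }
      for name, m in models.items():
          m.fit(X_tr, y_tr)
          print(name, "AUROC =", round(roc_auc_score(y_te, m.predict_proba(X_te)[:, 1]), 3))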
  18. Azareh A, Rahmati O, Rafiei-Sardooi E, Sankey JB, Lee S, Shahabi H, et al.
    Sci Total Environ, 2019 Mar 10;655:684-696.
    PMID: 30476849 DOI: 10.1016/j.scitotenv.2018.11.235
    Gully erosion susceptibility mapping is a fundamental tool for land-use planning aimed at mitigating land degradation. However, the capabilities of some state-of-the-art data-mining models for developing accurate maps of gully erosion susceptibility have not yet been fully investigated. This study assessed and compared the performance of two different types of data-mining models for accurately mapping gully erosion susceptibility at a regional scale in Chavar, Ilam, Iran. The two methods evaluated were: Certainty Factor (CF), a bivariate statistical model; and Maximum Entropy (ME), an advanced machine learning model. Several geographic and environmental factors that can contribute to gully erosion were considered as predictor variables of gully erosion susceptibility. Based on an existing differential GPS survey inventory of gully erosion, a total of 63 eroded gullies were spatially randomly split in a 70:30 ratio for use in model calibration and validation, respectively. Accuracy assessments completed with the receiver operating characteristic curve method showed that the ME-based regional gully susceptibility map has an area under the curve (AUC) value of 88.6% whereas the CF-based map has an AUC of 81.8%. According to jackknife tests that were used to investigate the relative importance of predictor variables, aspect, distance to river, lithology and land use are the most influential factors for the spatial distribution of gully erosion susceptibility in this region of Iran. The gully erosion susceptibility maps produced in this study could be useful tools for land managers and engineers tasked with road development, urbanization and other future development.
    Matched MeSH terms: Machine Learning
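    For binary presence/absence data, a maximum-entropy susceptibility model is closely related to logistic regression over the environmental predictors, so scikit-learn's LogisticRegression can serve as a rough stand-in for the ME model above (the Certainty Factor method and the real GIS layers are not reproduced). The 70:30 split and AUC evaluation mirror the study design; all data below are placeholders.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(6)
      # Placeholder predictors: aspect, distance to river, lithology, land use, slope, elevation
      X = rng.normal(size=(126, 6))                 # 63 gully points + 63 non-gully points
      y = np.r_[np.ones(63), np.zeros(63)].astype(int)
      X[y == 1, 1] -= 0.8                           # make "distance to river" informative in the toy data

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, stratify=y, random_state=0)
      maxent = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
      print("AUC:", round(roc_auc_score(y_te, maxent.predict_proba(X_te)[:, 1]), 3))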
  19. Veeraragavan S, Gopalai AA, Gouwanda D, Ahmad SA
    Front Physiol, 2020;11:587057.
    PMID: 33240106 DOI: 10.3389/fphys.2020.587057
    Gait analysis plays a key role in the diagnosis of Parkinson's Disease (PD), as patients generally exhibit abnormal gait patterns compared to healthy controls. Current diagnosis and severity assessment procedures entail manual visual examinations of motor tasks, speech, and handwriting, among numerous other tests, which can vary between clinicians based on their expertise and visual observation of gait tasks. Automating gait differentiation procedure can serve as a useful tool in early diagnosis and severity assessment of PD and limits the data collection to solely walking gait. In this research, a holistic, non-intrusive method is proposed to diagnose and assess PD severity in its early and moderate stages by using only Vertical Ground Reaction Force (VGRF). From the VGRF data, gait features are extracted and selected to use as training features for the Artificial Neural Network (ANN) model to diagnose PD using cross validation. If the diagnosis is positive, another ANN model will predict their Hoehn and Yahr (H&Y) score to assess their PD severity using the same VGRF data. PD Diagnosis is achieved with a high accuracy of 97.4% using simple network architecture. Additionally, the results indicate a better performance compared to other complex machine learning models that have been researched previously. Severity Assessment is also performed on the H&Y scale with 87.1% accuracy. The results of this study show that it is plausible to use only VGRF data in diagnosing and assessing early stage Parkinson's Disease, helping patients manage the symptoms earlier and giving them a better quality of life.
    Matched MeSH terms: Machine Learning
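    The two-stage design above (one network to diagnose PD from VGRF-derived gait features, a second to assign an H&Y severity score to those diagnosed) can be sketched with scikit-learn MLPs. Feature names, data and network sizes below are placeholders rather than the study's configuration.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      X = rng.normal(size=(300, 10))        # placeholder gait features (stance time, swing time, variability, ...)
      is_pd = (X[:, 0] + 0.7 * X[:, 4] + rng.normal(scale=0.8, size=300) > 0).astype(int)
      hy = np.where(is_pd == 1, rng.integers(1, 4, size=300), 0)   # placeholder H&Y stage for PD subjects

      def ann():
          return make_pipeline(StandardScaler(),
                               MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))

      # Stage 1: PD vs healthy control, with cross-validation
      print("diagnosis accuracy:", cross_val_score(ann(), X, is_pd, cv=5).mean())

      # Stage 2: H&Y severity, trained only on (toy) PD subjects using the same VGRF features
      mask = is_pd == 1
      print("severity accuracy:", cross_val_score(ann(), X[mask], hy[mask], cv=5).mean())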
  20. Boo KBW, El-Shafie A, Othman F, Khan MMH, Birima AH, Ahmed AN
    Water Res, 2024 Mar 15;252:121249.
    PMID: 38330715 DOI: 10.1016/j.watres.2024.121249
    Groundwater, the world's most abundant source of freshwater, is rapidly depleting in many regions due to a variety of factors. Accurate forecasting of groundwater level (GWL) is essential for effective management of this vital resource, but it remains a complex and challenging task. In recent years, there has been a notable increase in the use of machine learning (ML) techniques to model GWL, with many studies reporting exceptional results. In this paper, we present a comprehensive review of 142 relevant articles indexed by the Web of Science from 2017 to 2023, focusing on key ML models, including artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS), support vector regression (SVR), evolutionary computing (EC), deep learning (DL), ensemble learning (EN), and hybrid-modeling (HM). We also discussed key modeling concepts such as dataset size, data splitting, input variable selection, forecasting time-step, performance metrics (PM), study zones, and aquifers, highlighting best practices for optimal GWL forecasting with ML. This review provides valuable insights and recommendations for researchers and water management agencies working in the field of groundwater management and hydrology.
    Matched MeSH terms: Machine Learning