Displaying publications 21 - 40 of 421 in total

  1. Alzu'bi D, Abdullah M, Hmeidi I, AlAzab R, Gharaibeh M, El-Heis M, et al.
    J Healthc Eng, 2022;2022:3861161.
    PMID: 37323471 DOI: 10.1155/2022/3861161
    Kidney tumor (KT) is the seventh most common tumor in both men and women worldwide. Early detection of KT brings significant benefits: it reduces death rates, enables preventive measures, and improves the chances of overcoming the tumor. Compared to tedious and time-consuming traditional diagnosis, automatic deep learning (DL) detection algorithms can save diagnosis time, improve test accuracy, reduce costs, and reduce the radiologist's workload. In this paper, we present detection models for diagnosing the presence of KTs in computed tomography (CT) scans. Toward detecting and classifying KT, we propose 2D-CNN models: three concern KT detection, namely a 2D convolutional neural network with six layers (CNN-6), a ResNet50 with 50 layers, and a VGG16 with 16 layers, while the last model, a 2D convolutional neural network with four layers (CNN-4), performs KT classification. In addition, a novel dataset has been collected from the King Abdullah University Hospital (KAUH), consisting of 8,400 images of 120 adult patients who underwent CT scans for suspected kidney masses. The dataset was divided into 80% for training and 20% for testing. The accuracy results for the detection models CNN-6, ResNet50, and VGG16 reached 97%, 96%, and 60%, respectively, while the classification model CNN-4 reached 92% accuracy. Our models achieved promising results: they enhance the diagnosis of patient conditions with high accuracy, reduce the radiologist's workload, and provide a tool that can automatically assess the condition of the kidneys, reducing the risk of misdiagnosis. Furthermore, higher-quality healthcare service and early detection can change the disease's track and preserve the patient's life.
  2. Ngugi HN, Ezugwu AE, Akinyelu AA, Abualigah L
    Environ Monit Assess, 2024 Feb 24;196(3):302.
    PMID: 38401024 DOI: 10.1007/s10661-024-12454-z
    Digital image processing has witnessed a significant transformation owing to the adoption of deep learning (DL) algorithms, which have proven vastly superior to conventional methods for crop detection. These DL algorithms have recently found successful applications across various domains, translating input data, such as images of afflicted plants, into valuable insights, like the identification of specific crop diseases. This innovation has spurred the development of cutting-edge techniques for early detection and diagnosis of crop diseases, leveraging tools such as convolutional neural networks (CNN), K-nearest neighbour (KNN), support vector machines (SVM), and artificial neural networks (ANN). This paper offers an all-encompassing exploration of the contemporary literature on methods for diagnosing, categorizing, and gauging the severity of crop diseases. The review examines the performance of the latest machine learning (ML) and DL techniques outlined in these studies, scrutinizes their methodologies and datasets, and summarizes the prevalent recommendations and gaps identified across different research investigations. In conclusion, the review offers insights into potential solutions and outlines the direction for future research in this field. It underscores that while most studies have concentrated on traditional ML algorithms and CNN, there has been a noticeable dearth of focus on emerging DL algorithms such as capsule neural networks and vision transformers. Furthermore, it notes that several datasets employed for training and evaluating DL models have been tailored to specific crop types, emphasizing the pressing need for a comprehensive image dataset encompassing a wider array of crop varieties. Moreover, the survey draws attention to the prevailing trend whereby most research efforts have concentrated on individual plant diseases or on individual ML or DL algorithms. In light of this, it advocates for a unified framework that harnesses an ensemble of ML and DL algorithms to address the complexities of multiple plant diseases effectively.
  3. Barua PD, Muhammad Gowdh NF, Rahmat K, Ramli N, Ng WL, Chan WY, et al.
    PMID: 34360343 DOI: 10.3390/ijerph18158052
    COVID-19 and pneumonia detection using medical images is a topic of immense interest in medical and healthcare research. Various advanced medical imaging and machine learning techniques have been presented to detect these respiratory disorders accurately. In this work, we propose a novel COVID-19 detection system using an exemplar and hybrid fused deep feature generator with X-ray images. The proposed Exemplar COVID-19FclNet9 comprises three basic steps: exemplar deep feature generation, iterative feature selection, and classification. The novelty of this work is the feature extraction using three pre-trained convolutional neural networks (CNNs). These pre-trained CNNs, AlexNet, VGG16 and VGG19, have in common that each has three fully connected layers. The fully connected layers of these networks are used to generate deep features with an exemplar structure, yielding nine feature generation methods. The loss values of these feature extractors are computed, and the best three extractors are selected. The features from the top three fully connected layers are merged, and an iterative selector is used to retain the most informative features. The chosen features are classified using a support vector machine (SVM) classifier. The proposed COVID-19FclNet9 thus applies nine deep feature extraction methods using three deep networks together; the most appropriate deep feature generation models and iterative feature selection are employed to combine their advantages and improve the image classification ability of the three networks. The presented model is developed using four X-ray image corpora (DB1, DB2, DB3 and DB4) with two, three and four classes. The proposed Exemplar COVID-19FclNet9 achieved classification accuracies of 97.60%, 89.96%, 98.84% and 99.64% on the four datasets, respectively, using the SVM classifier with 10-fold cross-validation. Having achieved high classification accuracy on all four databases, the model may be deployed for clinical application.
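The "compute each extractor's loss, keep the best three, merge their features" step described in this abstract can be sketched in plain Python. The function name, list-based feature vectors, and loss values below are illustrative assumptions, not the authors' code:

```python
def select_and_merge(feature_sets, losses, k=3):
    """Keep the k feature sets whose extractors achieved the lowest
    loss, then concatenate them into one merged feature vector."""
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])[:k]
    merged = []
    for i in sorted(ranked):            # preserve original extractor order
        merged.extend(feature_sets[i])
    return merged

# Four hypothetical extractors; the three with the lowest loss survive.
features = select_and_merge([[1], [2], [3], [4]], [0.4, 0.1, 0.3, 0.2])
```

An iterative selector would then prune `merged` further before the SVM stage.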
  4. Hagiwara Y, Koh JEW, Tan JH, Bhandary SV, Laude A, Ciaccio EJ, et al.
    Comput Methods Programs Biomed, 2018 Oct;165:1-12.
    PMID: 30337064 DOI: 10.1016/j.cmpb.2018.07.012
    BACKGROUND AND OBJECTIVES: Glaucoma is an eye condition which leads to permanent blindness when the disease progresses to an advanced stage. It occurs due to inappropriate intraocular pressure within the eye, resulting in damage to the optic nerve. Glaucoma does not exhibit any symptoms in its nascent stage and thus, it is important to diagnose early to prevent blindness. Fundus photography is widely used by ophthalmologists to assist in diagnosis of glaucoma and is cost-effective.

    METHODS: The morphological features of the disc that is characteristic of glaucoma are clearly seen in the fundus images. However, manual inspection of the acquired fundus images may be prone to inter-observer variation. Therefore, a computer-aided detection (CAD) system is proposed to make an accurate, reliable and fast diagnosis of glaucoma based on the optic nerve features of fundus imaging. In this paper, we reviewed existing techniques to automatically diagnose glaucoma.

    RESULTS: The use of CAD is very effective in the diagnosis of glaucoma and can assist clinicians by significantly alleviating their workload. We also discuss the advantages of employing state-of-the-art techniques, including deep learning (DL), when developing the automated system. The DL methods are effective in glaucoma diagnosis.

    CONCLUSIONS: Novel DL algorithms with big data availability are required to develop a reliable CAD system. Such techniques can be employed to diagnose other eye diseases accurately.

  5. Ay B, Yildirim O, Talo M, Baloglu UB, Aydin G, Puthankattil SD, et al.
    J Med Syst, 2019 May 28;43(7):205.
    PMID: 31139932 DOI: 10.1007/s10916-019-1345-y
    Depression affects a large number of people across the world today and is considered a global problem. It is a mood disorder that can be detected using electroencephalogram (EEG) signals. Manual detection of depression by analyzing EEG signals requires a lot of experience and is tedious and time-consuming. Hence, a fully automated depression diagnosis system developed using EEG signals will help clinicians. We therefore propose a deep hybrid model built from convolutional neural network (CNN) and long short-term memory (LSTM) architectures to detect depression using EEG signals. In the deep model, temporal properties of the signals are learned with the CNN layers, and sequence learning is provided by the LSTM layers. In this work, we used EEG signals obtained from the left and right hemispheres of the brain. Our work provided classification accuracies of 99.12% and 97.66% for the right and left hemisphere EEG signals, respectively. Hence, we conclude that the developed CNN-LSTM model is accurate and fast in detecting depression from EEG signals. It can be employed in hospital psychiatry wards to detect depression accurately and thus aid psychiatrists.
  6. Yildirim O, Baloglu UB, Acharya UR
    PMID: 30791379 DOI: 10.3390/ijerph16040599
    Sleep disorders are a symptom of many neurological diseases and may significantly affect the quality of daily life. Traditional methods are time-consuming and involve the manual scoring of polysomnogram (PSG) signals obtained in a laboratory environment. However, the automated monitoring of sleep stages can help detect neurological disorders accurately as well. In this study, a flexible deep learning model is proposed using raw PSG signals. A one-dimensional convolutional neural network (1D-CNN) is developed using electroencephalogram (EEG) and electrooculogram (EOG) signals for the classification of sleep stages. The performance of the system is evaluated using two public databases (sleep-edf and sleep-edfx). The developed model yielded the highest accuracies of 98.06%, 94.64%, 92.36%, 91.22%, and 91.00% for two to six sleep classes, respectively, using the sleep-edf database. Further, the proposed model obtained the highest accuracies of 97.62%, 94.34%, 92.33%, 90.98%, and 89.54%, respectively, for the same two to six sleep classes using the sleep-edfx dataset. The developed deep learning model is ready for clinical usage and can be tested with big PSG data.
  7. Yildirim O, Baloglu UB, Tan RS, Ciaccio EJ, Acharya UR
    Comput Methods Programs Biomed, 2019 Jul;176:121-133.
    PMID: 31200900 DOI: 10.1016/j.cmpb.2019.05.004
    BACKGROUND AND OBJECTIVE: For diagnosis of arrhythmic heart problems, electrocardiogram (ECG) signals should be recorded and monitored. The long-term signal records obtained are analyzed by expert cardiologists. Devices such as the Holter monitor have limited hardware capabilities. For improved diagnostic capacity, it would be helpful to detect arrhythmic signals automatically. In this study, a novel approach is presented as a candidate solution for these issues.

    METHODS: A convolutional auto-encoder (CAE) based nonlinear compression structure is implemented to reduce the signal size of arrhythmic beats. Long short-term memory (LSTM) classifiers are employed to automatically recognize arrhythmias using ECG features that are deeply coded with the CAE network.

    RESULTS: Based upon the coded ECG signals, both the storage requirement and the classification time were considerably reduced. In experimental studies conducted with the MIT-BIH arrhythmia database, ECG signals were compressed with an average percentage root mean square difference (PRD) of 0.70%, and an accuracy of over 99.0% was observed.

    CONCLUSIONS: One of the significant contributions of this study is that the proposed approach can significantly reduce time duration when using LSTM networks for data analysis. Thus, a novel and effective approach was proposed for both ECG signal compression, and their high-performance automatic recognition, with very low computational cost.

    Matched MeSH terms: Neural Networks (Computer)*
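The PRD figure quoted in the results above is a standard compression-fidelity metric. A minimal sketch follows; PRD definitions vary slightly across papers (some subtract the signal mean), so this common un-normalized form is an assumption, not necessarily the study's exact variant:

```python
import math

def prd(original, reconstructed):
    """Percentage root-mean-square difference between a signal and its
    reconstruction; lower values mean higher reconstruction fidelity."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

# A perfect reconstruction gives 0% PRD.
print(prd([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 0.0
```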
  8. Tan JH, Hagiwara Y, Pang W, Lim I, Oh SL, Adam M, et al.
    Comput Biol Med, 2018 03 01;94:19-26.
    PMID: 29358103 DOI: 10.1016/j.compbiomed.2017.12.023
    Coronary artery disease (CAD) is the most common cause of heart disease globally, partly because it exhibits no symptoms in its initial phase until it progresses to an advanced stage. The electrocardiogram (ECG) is a widely accessible diagnostic tool for CAD that captures abnormal activity of the heart. However, it lacks diagnostic sensitivity: the ECG signal is very challenging to interpret visually due to its very low amplitude, so identification of abnormal ECG morphology by clinicians may be prone to error. It is therefore essential to develop software that can provide an automated and objective interpretation of the ECG signal. This paper proposes a long short-term memory (LSTM) network combined with a convolutional neural network (CNN) to automatically and accurately diagnose CAD from ECG signals. Our proposed deep learning model detects CAD ECG signals with a diagnostic accuracy of 99.85% under a blindfold strategy. The developed prototype model is ready to be tested on an appropriately large database before clinical usage.
    Matched MeSH terms: Neural Networks (Computer)*
  9. Khare SK, Bajaj V, Acharya UR
    Physiol Meas, 2023 Mar 08;44(3).
    PMID: 36787641 DOI: 10.1088/1361-6579/acbc06
    OBJECTIVE: Schizophrenia (SZ) is a severe chronic illness characterized by delusions, cognitive dysfunctions, and hallucinations that impact feelings, behaviour, and thinking. Timely detection and treatment of SZ are necessary to avoid long-term consequences. Electroencephalogram (EEG) signals are one form of biomarker that can reveal hidden changes in the brain during SZ. However, EEG signals are non-stationary in nature and have low amplitude, so extracting the hidden information from them is challenging.

    APPROACH: The time-frequency domain is crucial for the automatic detection of SZ. Therefore, this paper presents the SchizoNET model, combining the Margenau-Hill time-frequency distribution (MH-TFD) and a convolutional neural network (CNN). The instantaneous information of the EEG signals is captured in the time-frequency domain using the MH-TFD. The time-frequency amplitude is converted to two-dimensional plots and fed to the developed CNN model.

    RESULTS: The SchizoNET model is developed using three validation techniques (holdout, five-fold cross-validation, and ten-fold cross-validation) on three separate public SZ datasets (Dataset 1, 2, and 3). The proposed model achieved an accuracy of 97.4%, 99.74%, and 96.35% on Dataset 1 (adolescents: 45 SZ and 39 HC subjects), Dataset 2 (adults: 14 SZ and 14 HC subjects), and Dataset 3 (adults: 49 SZ and 32 HC subjects), respectively. We also evaluated six performance parameters and the area under the curve to assess the performance of our developed model.

    SIGNIFICANCE: SchizoNET is robust, effective, and accurate, performing better than state-of-the-art techniques. To the best of our knowledge, this is the first work to explore three publicly available EEG datasets for the automated detection of SZ. Our SchizoNET model can help neurologists detect SZ in various scenarios.
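The preprocessing idea (turn each EEG segment into a two-dimensional time-frequency magnitude map, then classify the map as an image) can be illustrated with a crude windowed-FFT spectrogram. This is only a stand-in for the Margenau-Hill distribution actually used in the paper, and the frame/hop sizes are arbitrary assumptions:

```python
import numpy as np

def tf_image(signal, frame=64, hop=32):
    """Stack magnitude spectra of overlapping frames into a 2-D
    time-frequency map that a CNN could consume as an image."""
    starts = range(0, len(signal) - frame + 1, hop)
    frames = np.array([signal[i:i + frame] for i in starts])
    return np.abs(np.fft.rfft(frames, axis=1))

# A sine with exactly 8 cycles per frame concentrates its energy in bin 8.
sig = np.sin(2 * np.pi * 8 * np.arange(256) / 64)
tf = tf_image(sig)
```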
  10. Horry MJ, Chakraborty S, Pradhan B, Paul M, Zhu J, Loh HW, et al.
    Sensors (Basel), 2023 Jul 21;23(14).
    PMID: 37514877 DOI: 10.3390/s23146585
    Screening programs for early lung cancer diagnosis are uncommon, primarily due to the challenge of reaching at-risk patients located in rural areas far from medical facilities. To overcome this obstacle, a comprehensive approach is needed that combines mobility, low cost, speed, accuracy, and privacy. One potential solution lies in combining the chest X-ray imaging modality with federated deep learning, ensuring that no single data source can bias the model adversely. This study presents a pre-processing pipeline designed to debias chest X-ray images, thereby enhancing internal classification and external generalization. The pipeline employs a pruning mechanism to train a deep learning model for nodule detection, utilizing the most informative images from a publicly available lung nodule X-ray dataset. Histogram equalization is used to remove systematic differences in image brightness and contrast. Model training is then performed using combinations of lung field segmentation, close cropping, and rib/bone suppression. The resulting deep learning models, generated through this pre-processing pipeline, demonstrate successful generalization on an independent lung nodule dataset. By eliminating confounding variables in chest X-ray images and suppressing signal noise from the bone structures, the proposed deep learning lung nodule detection algorithm achieves an external generalization accuracy of 89%. This approach paves the way for the development of a low-cost and accessible deep learning-based clinical system for lung cancer screening.
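Histogram equalization, used in this pipeline to remove systematic brightness and contrast differences between scans, can be sketched with NumPy. This is an illustrative implementation, not the authors' code:

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Remap pixel intensities through the normalized cumulative
    histogram so the output uses the intensity range more evenly."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                  # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]

# A two-level image is spread toward the full intensity range.
out = equalize_histogram(np.array([[0, 0], [255, 255]], dtype=np.uint8))
```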
  11. Nguyen Thi le T, Sarmiento ME, Calero R, Camacho F, Reyes F, Hossain MM, et al.
    Tuberculosis (Edinb), 2014 Sep;94(5):475-81.
    PMID: 25034135 DOI: 10.1016/j.tube.2014.06.004
    The most important targets for vaccine development are the proteins that are highly expressed by the microorganisms during infection in-vivo. A number of Mycobacterium tuberculosis (Mtb) proteins are also reported to be expressed in-vivo at different phases of infection. In the present study, we analyzed multiple published databases of gene expression profiles of Mtb in-vivo at different phases of infection in animals and humans, and selected 38 proteins that are highly expressed in the active, latent and reactivation phases. We predicted T- and B-cell epitopes from the selected proteins, using HLAPred for T-cell epitope prediction and BCEPred combined with ABCPred for B-cell epitope prediction. For each selected protein, regions containing both T- and B-cell epitopes were identified; these may be considered important candidates for vaccine design against tuberculosis.
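The final step described above, finding sequence regions covered by both a T-cell and a B-cell epitope prediction, is essentially interval intersection. A minimal sketch; the tuple-based (start, end) epitope representation is an assumption for illustration:

```python
def overlapping_regions(t_epitopes, b_epitopes):
    """Return (start, end) stretches covered by both a predicted
    T-cell epitope and a predicted B-cell epitope."""
    regions = []
    for ts, te in t_epitopes:
        for bs, be in b_epitopes:
            start, end = max(ts, bs), min(te, be)
            if start <= end:            # the two intervals intersect
                regions.append((start, end))
    return regions
```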
  12. Acharya UR, Hagiwara Y, Adeli H
    Epilepsy Behav, 2018 11;88:251-261.
    PMID: 30317059 DOI: 10.1016/j.yebeh.2018.09.030
    In the past two decades, significant advances have been made on automated electroencephalogram (EEG)-based diagnosis of epilepsy and seizure detection. A number of innovative algorithms have been introduced that can aid in epilepsy diagnosis with a high degree of accuracy. In recent years, the frontiers of computational epilepsy research have moved to seizure prediction, a more challenging problem. While antiepileptic medication can result in complete seizure freedom in many patients with epilepsy, up to one-third of patients living with epilepsy will have medically intractable epilepsy, where medications reduce seizure frequency but do not completely control seizures. If a seizure can be predicted prior to its clinical manifestation, then there is potential for abortive treatment to be given, either self-administered or via an implanted device administering medication or electrical stimulation. This will have a far-reaching impact on the treatment of epilepsy and patients' quality of life. This paper presents a state-of-the-art review of recent efforts and journal articles on seizure prediction. The technologies developed for epilepsy diagnosis and seizure detection are being adapted and extended for seizure prediction. The paper ends with some novel ideas for seizure prediction using the increasingly ubiquitous machine learning technology, particularly deep neural network machine learning.
    Matched MeSH terms: Neural Networks (Computer)*
  13. Acharya UR, Oh SL, Hagiwara Y, Tan JH, Adeli H
    Comput Biol Med, 2018 09 01;100:270-278.
    PMID: 28974302 DOI: 10.1016/j.compbiomed.2017.09.017
    An electroencephalogram (EEG) is a commonly used ancillary test to aid in the diagnosis of epilepsy. The EEG signal contains information about the electrical activity of the brain. Traditionally, neurologists employ direct visual inspection to identify epileptiform abnormalities. This technique can be time-consuming, is limited by technical artifact, provides variable results depending on reader expertise level, and can miss abnormalities. It is therefore essential to develop a computer-aided diagnosis (CAD) system to automatically distinguish the class of these EEG signals using machine learning techniques. This is the first study to employ a convolutional neural network (CNN) for analysis of EEG signals. In this work, a 13-layer deep CNN algorithm is implemented to detect normal, preictal, and seizure classes. The proposed technique achieved an accuracy, specificity, and sensitivity of 88.67%, 90.00% and 95.00%, respectively.
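Accuracy, specificity, and sensitivity, the three figures reported in this entry, all come straight from the confusion matrix. A small self-contained sketch (the label convention is illustrative):

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) computed from parallel label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
```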
  14. Ali SS, Moinuddin M, Raza K, Adil SH
    ScientificWorldJournal, 2014;2014:850189.
    PMID: 24987745 DOI: 10.1155/2014/850189
    Radial basis function neural networks (RBFNN) are used in a variety of applications such as pattern recognition, nonlinear identification, control and time series prediction. In this paper, the learning algorithm of radial basis function neural networks is analyzed in a feedback structure. The robustness of the learning algorithm is discussed in the presence of uncertainties that might be due to noisy perturbations at the input or to modeling mismatch. An intelligent adaptation rule is developed for the learning rate of the RBFNN, which gives faster convergence via an estimate of the error energy while guaranteeing l2-stability through an upper bound derived via the small gain theorem. Simulation results are presented to support our theoretical development.
    Matched MeSH terms: Neural Networks (Computer)*
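For context, the forward computation of such a network (Gaussian hidden units followed by a linear output layer) can be sketched as below. The paper's contribution is the adaptive learning-rate rule, which this sketch deliberately does not implement:

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Evaluate an RBF network: each hidden unit responds with a
    Gaussian of the distance between the input and its center."""
    hidden = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2.0 * s ** 2))
        for c, s in zip(centers, widths)
    ]
    return bias + sum(w * h for w, h in zip(weights, hidden))

# An input sitting exactly on the only center activates it fully.
y = rbf_forward([0.0], centers=[[0.0]], widths=[1.0], weights=[2.0])
```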
  15. Hidayat W, Shakaff AY, Ahmad MN, Adom AH
    Sensors (Basel), 2010;10(5):4675-85.
    PMID: 22399899 DOI: 10.3390/s100504675
    Presently, the quality assurance of agarwood oil is performed by sensory panels, which have significant drawbacks in terms of objectivity and repeatability. In this paper, it is shown how an electronic nose (e-nose) may be successfully utilised for the classification of agarwood oil. Hierarchical Cluster Analysis (HCA) and Principal Component Analysis (PCA) were used to classify different types of oil. The HCA produced a dendrogram showing the separation of the e-nose data into three different groups of oils. The PCA scatter plot revealed a distinct separation between the three groups. An Artificial Neural Network (ANN) was used for a better prediction of unknown samples.
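The PCA scatter plot mentioned above comes from projecting each sensor reading onto the leading principal components. A compact NumPy sketch via the singular value decomposition (illustrative, not the authors' code):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centred rows of X onto the top principal
    components; the returned scores are what a scatter plot shows."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Perfectly collinear samples collapse onto the first component.
scores = pca_scores(np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]))
```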
  16. Arora S, Sawaran Singh NS, Singh D, Rakesh Shrivastava R, Mathur T, Tiwari K, et al.
    Comput Intell Neurosci, 2022;2022:9755422.
    PMID: 36531923 DOI: 10.1155/2022/9755422
    In this study, the air quality index (AQI) of Indian cities of different tiers is predicted by using a vanilla recurrent neural network (RNN). The AQI measures the air quality of a region and is calculated from the concentrations of ground-level ozone, particle pollution, carbon monoxide, and sulphur dioxide in the air. The present air quality of an area thus depends on current weather conditions, vehicle traffic, climate conditions, industrialization, and anything else that increases air pollution; the AQI is therefore history-dependent. To capture this dependency, the memory property of fractional derivatives is exploited in this algorithm, and a fractional gradient descent algorithm involving Caputo's derivative is used in the backpropagation algorithm for training the RNN. Given the availability of large amounts of data and strong computation support, deep neural networks are capable of state-of-the-art results in time series prediction; however, in this study, the basic vanilla RNN was chosen to check the effectiveness of fractional derivatives. The prediction results for the AQI and the gases affecting it across different cities show that the proposed algorithm leads to higher accuracy. It has been observed that the results of the vanilla RNN with fractional derivatives are comparable to long short-term memory (LSTM).
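The training idea can be illustrated with a single weight update. The expression below is a commonly used first-order truncation of the Caputo fractional derivative of a smooth loss; the exact formulation in the paper may differ, so treat this as an assumption-laden sketch:

```python
import math

def caputo_fractional_step(w, grad, alpha, lr, c=0.0):
    """One fractional-gradient-descent update: scale the ordinary
    gradient by |w - c|^(1-alpha) / Gamma(2 - alpha). With alpha = 1
    this reduces to plain gradient descent."""
    frac_grad = grad * abs(w - c) ** (1.0 - alpha) / math.gamma(2.0 - alpha)
    return w - lr * frac_grad
```

With `alpha = 1` the factor equals 1, so the update matches the usual `w - lr * grad`.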
  17. Tham SY, Agatonovic-Kustrin S
    J Pharm Biomed Anal, 2002 May 15;28(3-4):581-90.
    PMID: 12008137
    A quantitative structure-retention relationship (QSRR) method was used to model the reversed-phase high-performance liquid chromatography (RP-HPLC) separation of 18 selected amino acids. Retention data for phenylthiocarbamyl (PTC) amino acid derivatives were obtained using gradient elution on an ODS column, with a mobile phase of varying acetonitrile and acetate buffer composition containing 0.5 ml/l of triethylamine (TEA). The molecular structure of each amino acid was encoded with 36 calculated molecular descriptors. The correlation between the molecular descriptors and the retention times of the compounds in the calibration set was established using a genetic neural network method: a genetic algorithm (GA) was used to select important molecular descriptors, and a supervised artificial neural network (ANN) was used to correlate mobile phase composition and the selected descriptors with the experimentally derived retention times. Retention time values were used as the network's output, and the calculated molecular descriptors and mobile phase composition as the inputs. The best model, with five input descriptors, was chosen, and the significance of the selected descriptors for amino acid separation was examined. The results confirmed the dominant role of the organic modifier in such chromatographic systems, in addition to the lipophilicity (log P) and the molecular size and shape (topological indices) of the investigated solutes.
  18. Hasan MK, Ghazal TM, Alkhalifah A, Abu Bakar KA, Omidvar A, Nafi NS, et al.
    Front Public Health, 2021;9:737149.
    PMID: 34712639 DOI: 10.3389/fpubh.2021.737149
    Augmented reality, the "internet of reality", has been considered a breakthrough and a critical transformation, with an emphasis on data mining that is dismantling some of the assumptions held among its stakeholders. In this work, we study the pillars of these technologies connected to web usage as the Internet of Things (IoT) system's healthcare infrastructure. We used several data mining techniques to evaluate the online advertisement data set, which can be categorized as high-dimensional with 1,553 attributes, and as an imbalanced data set that automatically simulates an IoT discrimination problem. The proposed methodology applies Fisher linear discriminant analysis (FLDA) and quadratic discriminant analysis (QDA) within random projection (RP) filters, and compares their runtime and accuracy with support vector machine (SVM), K-nearest neighbor (KNN), and multilayer perceptron (MLP) models in IoT-based systems. Finally, the impact of the number of projections was experimentally evaluated, and the sensitivity of both FLDA and QDA with regard to precision and runtime was found to be challenging. The modeling results show not only improved accuracy but also runtime improvements: when compared with SVM, KNN, and MLP, runtime with QDA and FLDA shortens by 20 times on our chosen data set simulated for a healthcare framework. The RP filtering in the preprocessing stage of attribute selection, which underpins the model's runtime, is a standpoint in the IoT industry. Index Terms: Data Mining, Random Projection, Fisher Linear Discriminant Analysis, Online Advertisement Dataset, Quadratic Discriminant Analysis, Feature Selection, Internet of Things.
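Random projection, the dimensionality-reduction filter applied before FLDA/QDA, can be sketched in a few lines of NumPy. The Gaussian projection matrix and 1/sqrt(k) scaling follow the usual Johnson-Lindenstrauss recipe and are an illustrative choice, not necessarily the paper's exact filter:

```python
import numpy as np

def random_project(X, k, seed=0):
    """Map d-dimensional rows of X to k dimensions with a random
    Gaussian matrix; pairwise distances are roughly preserved."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ R

# 100 attributes squeezed down to 20 per sample.
Z = random_project(np.ones((5, 100)), k=20)
```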
  19. Tran HNT, Thomas JJ, Ahamed Hassain Malim NH
    PeerJ, 2022;10:e13163.
    PMID: 35578674 DOI: 10.7717/peerj.13163
    The exploration of drug-target interactions (DTI) is an essential stage in the drug development pipeline. Thanks to the assistance of computational models, notably in the deep learning approach, scientists have been able to shorten the time spent on this stage. Widely practiced deep learning algorithms such as convolutional neural networks and recurrent neural networks are commonly employed in DTI prediction projects. However, they can hardly utilize the natural graph structure of molecular inputs. For that reason, a graph neural network (GNN) is an applicable choice for learning the chemical and structural characteristics of molecules when it represents molecular compounds as graphs and learns the compound features from those graphs. In an effort to construct an advanced deep learning-based model for DTI prediction, we propose Deep Neural Computation (DeepNC), which is a framework utilizing three GNN algorithms: Generalized Aggregation Networks (GENConv), Graph Convolutional Networks (GCNConv), and Hypergraph Convolution-Hypergraph Attention (HypergraphConv). In short, our framework learns the features of drugs and targets by the layers of GNN and 1-D convolution network, respectively. Then, representations of the drugs and targets are fed into fully-connected layers to predict the binding affinity values. The models of DeepNC were evaluated on two benchmark datasets (Davis, Kiba) and one independently proposed dataset (Allergy) to confirm that they are suitable for predicting the binding affinity of drugs and targets. Moreover, compared to the results of baseline methods that worked on the same problem, DeepNC proves to improve the performance in terms of mean square error and concordance index.
    Matched MeSH terms: Neural Networks (Computer)*
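One of the three GNN layers named above, GCNConv, follows the standard graph convolution: add self-loops, symmetrically normalize the adjacency matrix, propagate features, then apply a nonlinearity. A minimal NumPy sketch of that propagation rule (not the DeepNC implementation):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: relu(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Two connected nodes with identity features and identity weights.
out = gcn_layer(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(2), np.eye(2))
```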
  20. Nhu VH, Shirzadi A, Shahabi H, Singh SK, Al-Ansari N, Clague JJ, et al.
    PMID: 32316191 DOI: 10.3390/ijerph17082749
    Shallow landslides damage buildings and other infrastructure, disrupt agricultural practices, and can cause social upheaval and loss of life. As a result, many scientists study the phenomenon, and some have focused on producing landslide susceptibility maps that land-use managers can use to reduce injury and damage. This paper contributes to this effort by comparing the power and effectiveness of five benchmark machine learning algorithms (Logistic Model Tree, Logistic Regression, Naïve Bayes Tree, Artificial Neural Network, and Support Vector Machine) in creating a reliable shallow landslide susceptibility map for Bijar City in Kurdistan province, Iran. Twenty conditioning factors were applied to 111 shallow landslides and tested using the One-R attribute evaluation (ORAE) technique for the modeling and validation processes. The performance of the models was assessed by statistical indexes including sensitivity, specificity, accuracy, mean absolute error (MAE), root mean square error (RMSE), and area under the receiver operating characteristic curve (AUC). Results indicate that all five machine learning models performed well for shallow landslide susceptibility assessment, but the Logistic Model Tree model (AUC = 0.932) had the highest goodness-of-fit and prediction accuracy, followed by the Logistic Regression (AUC = 0.932), Naïve Bayes Tree (AUC = 0.864), Artificial Neural Network (AUC = 0.860), and Support Vector Machine (AUC = 0.834) models. We therefore recommend the Logistic Model Tree model for shallow landslide mapping programs in semi-arid regions to help decision makers, planners, land-use managers, and government agencies mitigate the hazard and risk.
    Matched MeSH terms: Neural Networks (Computer)*
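AUC, the headline metric in the comparison above, can be computed directly from scores via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen positive is scored above a randomly chosen negative, with ties counting half. A small sketch with hypothetical labels:

```python
def auc_score(labels, scores):
    """Area under the ROC curve: fraction of positive/negative pairs
    where the positive gets the higher score (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```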