DESIGN: Artificial intelligence (neural network) study.
METHODS: We assessed 1400 OCT scans of patients with neovascular AMD. Fifteen physical features for each eligible OCT, as well as patient age, were used as input data and corresponding recorded visual acuity as the target data to train, validate, and test a supervised neural network. We then applied this network to model the impact on acuity of defined OCT changes in subretinal fluid, subretinal hyperreflective material, and loss of external limiting membrane (ELM) integrity.
RESULTS: A total of 1210 eligible OCT scans were analyzed, resulting in 1210 data points, each 16-dimensional. A feed-forward neural network with 1 hidden layer of 10 neurons was trained to predict acuity and demonstrated a root mean square error of 8.2 letters for predicted compared with actual visual acuity and a mean regression coefficient of 0.85. A virtual model using this network demonstrated the relationship of visual acuity to specific, programmed changes in OCT characteristics. When the ELM is intact, there is a shallow decline in acuity with increasing subretinal fluid but a much steeper decline with an equivalent increase in subretinal hyperreflective material. When the ELM is not intact, all visual acuities are reduced. Increasing subretinal hyperreflective material or subretinal fluid in this circumstance reduces vision further still, but with a smaller gradient than when the ELM is intact.
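The architecture described above (16 input features per scan, one hidden layer of 10 neurons, visual acuity in letters as the regression target) can be sketched as follows. This is a minimal illustration only: the synthetic data, the scikit-learn `MLPRegressor`, and all hyperparameters other than the hidden-layer size are assumptions, not the authors' implementation.

```python
# Sketch of the described network: 16 inputs (15 OCT features + age),
# one hidden layer of 10 neurons, visual acuity (letters) as target.
# Synthetic data and scikit-learn are illustrative assumptions only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1210, 16))                 # 1210 scans x 16 features
y = 60 + X @ rng.normal(size=16) + rng.normal(scale=2.0, size=1210)  # letters

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)

pred = net.predict(X_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))  # letters, cf. 8.2 in the abstract
r = np.corrcoef(pred, y_te)[0, 1]            # regression coefficient
```

The held-out RMSE (in letters) and the predicted-versus-actual correlation are the two summary statistics the abstract reports.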
CONCLUSIONS: The supervised machine learning neural network developed is able to generate an estimated visual acuity value from OCT images in a population of patients with AMD. These findings should be of clinical and research interest in macular degeneration, for example in estimating visual prognosis or highlighting the importance of developing treatments targeting more visually destructive pathologies.
RESULTS: At present, the classifier used has achieved an accuracy of 100% based on skull views. Classification and identification by region and sex have also attained accuracies of 72.5%, 87.5%, and 80.0% for the dorsal, lateral, and jaw views, respectively. These results show that the shape characteristic features used are substantial, because they can differentiate the specimens by region and sex at accuracies of up to 80% and above. Finally, an application was developed that can be used by the scientific community.
CONCLUSIONS: This automated system demonstrates the practicability of using computer-assisted systems to provide an interesting alternative approach for quick and easy identification of unknown species.
RESULTS: Different production media were tested for lipase production by a newly isolated thermophilic Geobacillus sp. strain ARM (DSM 21496 = NCIMB 41583). The maximum production was obtained in the presence of peptone and yeast extract as organic nitrogen sources, olive oil as carbon source and lipase production inducer, sodium and calcium as metal ions, and gum arabic as emulsifier and lipase production inducer. The best models for optimization of culture parameters were achieved by a multilayer full feedforward incremental back propagation network and a modified response surface model using backward elimination, where the optimum condition was: growth temperature (52.3 °C), medium volume (50 ml), inoculum size (1%), agitation rate (static condition), incubation period (24 h), and initial pH (5.8). The experimental lipase activity was 0.47 U ml⁻¹ at the optimum condition (a 4.7-fold increase), which compared well with the maximum values predicted by ANN (0.47 U ml⁻¹) and RSM (0.476 U ml⁻¹), whereas R² and AAD were determined as 0.989 and 0.059% for ANN, and 0.95 and 0.078% for RSM, respectively.
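The two goodness-of-fit measures reported above, R² and AAD (absolute average deviation, in percent), can be computed from predicted versus experimental activities as sketched below. The activity values in this snippet are hypothetical placeholders, not the study's data.

```python
# Sketch of the two fit metrics reported: R^2 and AAD (absolute average
# deviation, %) for predicted vs. experimental lipase activities.
# The activity values below are illustrative, not the study's data.
import numpy as np

y_exp  = np.array([0.10, 0.22, 0.31, 0.40, 0.47])  # U ml^-1, hypothetical
y_pred = np.array([0.11, 0.21, 0.30, 0.41, 0.47])  # model predictions

ss_res = np.sum((y_exp - y_pred) ** 2)             # residual sum of squares
ss_tot = np.sum((y_exp - y_exp.mean()) ** 2)       # total sum of squares
r2 = 1 - ss_res / ss_tot

aad = 100 * np.mean(np.abs(y_pred - y_exp) / y_exp)  # percent deviation
```

A model with R² near 1 and AAD near 0 fits the experimental data well, which is how the abstract compares the ANN and RSM models.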
CONCLUSION: Lipase production is the result of a synergistic combination of interactions among effective parameters. These parameters are in equilibrium, and a change in one parameter can be compensated by changes in other parameters to give the same results. Though both the RSM and ANN models provided good-quality predictions in this study, the ANN showed a clear superiority over RSM in both data fitting and estimation capability. On the other hand, ANN has the disadvantage of requiring a large amount of training data in comparison with RSM. This problem was solved by using statistical experimental design to reduce the number of experiments.
METHODS: A large hospital-based breast cancer dataset retrieved from the University Malaya Medical Centre, Kuala Lumpur, Malaysia (n = 8066) with diagnosis information between 1993 and 2016 was used in this study. The dataset contained 23 predictor variables and one dependent variable, which referred to the survival status of the patients (alive or dead). In determining the significant prognostic factors of breast cancer survival rate, prediction models were built using decision tree, random forest, neural networks, extreme boost, logistic regression, and support vector machine. Next, the dataset was clustered based on the receptor status of breast cancer patients identified via immunohistochemistry to perform advanced modelling using random forest. Subsequently, the important variables were ranked via variable selection methods in random forest. Finally, decision trees were built and validation was performed using survival analysis.
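The variable-ranking step described above (fitting a random forest on the predictors and ranking them by importance) can be sketched as follows. The synthetic data, the scikit-learn `RandomForestClassifier`, and the impurity-based importance measure are illustrative assumptions, not the study's exact pipeline.

```python
# Sketch of the variable-ranking step: fit a random forest on the
# 23 predictors of survival status and rank them by importance.
# Synthetic data and impurity-based importance are assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 23))                 # stand-in for 23 predictors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # survival status, synthetic

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]  # most important first
```

Mapping the top-ranked indices back to the predictor names yields the ordered list of prognostic factors, analogous to the important variables reported in the results.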
RESULTS: In terms of both model accuracy and calibration measure, all algorithms produced close outcomes, with the lowest obtained from decision tree (accuracy = 79.8%) and the highest from random forest (accuracy = 82.7%). The important variables identified in this study were cancer stage classification, tumour size, number of total axillary lymph nodes removed, number of positive lymph nodes, types of primary treatment, and methods of diagnosis.
CONCLUSION: Interestingly, the various machine learning algorithms used in this study yielded comparable accuracies; hence, these methods could be used as alternative predictive tools in breast cancer survival studies, particularly in the Asian region. The important prognostic factors influencing the survival rate of breast cancer identified in this study, which were validated by survival curves, are useful and could be translated into decision support tools in the medical domain.
RESULT: We applied Naive Bayes, Logistic Regression, KNN, J48, Random Forest, SVM, and Deep Neural Network algorithms to an ASD screening dataset and compared the classifiers on significant parameters: sensitivity, specificity, accuracy, receiver operating characteristic, area under the curve, and runtime in predicting ASD occurrences. We also found that most previous studies focused on classifying health-related datasets while ignoring missing values, which may significantly affect the classification result and, in turn, the lives of the patients. Thus, we addressed the missing values by implementing an imputation method in which they are replaced with the mean of the available records in the dataset.
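The mean-imputation step described above replaces each missing entry with the mean of the observed values in that column. A minimal sketch, using a toy matrix and scikit-learn's `SimpleImputer` as an assumed implementation:

```python
# Sketch of mean imputation: each NaN is replaced by the column mean
# of the observed records. The toy matrix is an illustrative assumption.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0,    np.nan],
              [7.0, 8.0,    9.0]])

X_filled = SimpleImputer(strategy="mean").fit_transform(X)
# observed column means: column 1 -> 6.5, column 2 -> 6.0
```

After imputation the dataset contains no missing values, so any of the compared classifiers can be trained on it directly.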
CONCLUSION: We found that J48 produced promising results compared with the other classifiers when tested in both circumstances, with and without missing values. Our findings also suggest that SVM does not necessarily perform well on small and simple datasets. The outcome is hoped to assist health practitioners in making accurate diagnoses of ASD in patients.