Displaying publications 41 - 60 of 932 in total

  1. Babini MH, Kulish VV, Namazi H
    J Med Internet Res, 2020 06 01;22(6):e17945.
    PMID: 32478661 DOI: 10.2196/17945
    BACKGROUND: Education and learning are the most important goals of all universities. For this purpose, lecturers use various tools to grab the attention of students and improve their learning ability. Virtual reality refers to the subjective sensory experience of being immersed in a computer-mediated world, and has recently been implemented in learning environments.

    OBJECTIVE: The aim of this study was to analyze the effect of a virtual reality condition on students' learning ability and physiological state.

    METHODS: Students were shown 6 sets of videos (3 videos in a two-dimensional condition and 3 videos in a three-dimensional condition), and their learning ability was analyzed based on a subsequent questionnaire. In addition, we analyzed the reaction of the brain and facial muscles of the students during both the two-dimensional and three-dimensional viewing conditions and used fractal theory to investigate their attention to the videos.

    RESULTS: The learning ability of students was increased in the three-dimensional condition compared to that in the two-dimensional condition. In addition, analysis of physiological signals showed that students paid more attention to the three-dimensional videos.

    CONCLUSIONS: The virtual reality (three-dimensional) condition enhanced students' learning ability more than the two-dimensional condition. The analytical approach of this study can be extended to evaluate other physiological signals of subjects in a virtual reality condition.

    Matched MeSH terms: Learning/physiology*
  2. Rajhans V, Mohammed CA, Ve RS, Prabhu A
    Educ Health (Abingdon), 2021 7 3;34(1):22-28.
    PMID: 34213440 DOI: 10.4103/efh.EfH_69_20
    Background: Current trends in health professions education are aligned to meet the needs of the millennial learner. The aim of this study was to identify learners' perceptions of an ongoing journal club (JC) activity in the optometry curriculum and evaluate the utility and efficiency of this method in promoting student learning.

    Methods: A qualitative approach with a phenomenological research design was adopted. The perceptions of undergraduate and postgraduate optometry students about JCs were captured using focus group discussions. A narrative thematic analysis was done using the verbatim transcripts and moderator's notes. Results are reported using "consolidated criteria for reporting qualitative research" guidelines.

    Results: A total of 33 optometry students participated in the study. Data analysis revealed three major themes related to (i) The ongoing practice of JC, (ii) student perceptions of JC and its relevance in facilitating student learning, and (iii) suggestions for modification of JC for achieving optimal educational outcomes.

    Discussion: Student feedback indicates that an instructional redesigning of JC is necessary, considering the characteristics and expectations of the current generation of learners and the rapid strides made in the field of educational technology. The recommendations provided are likely to resurrect an age-old approach that still has educational relevance if blended with collaborative learning formats and appropriate technology.

    Matched MeSH terms: Learning*
  3. Tan JK, Nazar FH, Makpol S, Teoh SL
    Molecules, 2022 Oct 30;27(21).
    PMID: 36364200 DOI: 10.3390/molecules27217374
    Learning and memory are essential to organism survival and are conserved across various species, especially vertebrates. Cognitive studies involving learning and memory require using appropriate model organisms to translate relevant findings to humans. Zebrafish are becoming increasingly popular as one of the animal models for neurodegenerative diseases due to their low maintenance cost, prolific nature and amenability to genetic manipulation. More importantly, zebrafish exhibit a repertoire of neurobehaviors comparable to humans. In this review, we discuss the forms of learning and memory abilities in zebrafish and the tests used to evaluate the neurobehaviors in this species. In addition, the pharmacological studies that used zebrafish as models to screen for the effects of neuroprotective and neurotoxic compounds on cognitive performance will be summarized here. Lastly, we discuss the challenges and perspectives in establishing zebrafish as a robust model for cognitive research involving learning and memory. Zebrafish are becoming an indispensable model in learning and memory research for screening neuroprotective agents against cognitive impairment.
    Matched MeSH terms: Learning*
  4. Chua SL, Foo LK, Guesgen HW, Marsland S
    Sensors (Basel), 2022 Nov 03;22(21).
    PMID: 36366154 DOI: 10.3390/s22218458
    Sensor-based human activity recognition has been extensively studied. Systems learn from a set of training samples to classify actions into a pre-defined set of ground truth activities. However, human behaviours vary over time, and so a recognition system should ideally be able to continuously learn and adapt, while retaining the knowledge of previously learned activities, and without failing to highlight novel, and therefore potentially risky, behaviours. In this paper, we propose a method based on compression that can incrementally learn new behaviours, while retaining prior knowledge. Evaluation was conducted on three publicly available smart home datasets.
    Matched MeSH terms: Machine Learning*
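
    The compression idea in the entry above can be illustrated with a toy sketch: known behaviours are treated as a reference corpus, and a new sensor-event sequence is scored by how much extra compressed length it adds. This is only a rough, hypothetical illustration in Python; the event strings and scoring rule are invented, not taken from the paper.

```python
# Hypothetical compression-based novelty scoring for activity sequences.
# Sensor events are simplified to whitespace-separated strings.
import zlib

def compressed_size(text: str) -> int:
    """Length in bytes of the zlib-compressed text."""
    return len(zlib.compress(text.encode("utf-8")))

def novelty_score(known_behaviours, candidate):
    """How poorly the candidate compresses against previously learned behaviours.

    Values near 0 mean the candidate is well explained by prior knowledge;
    larger values suggest a novel (potentially risky) behaviour.
    """
    base = "\n".join(known_behaviours)
    joint = compressed_size(base + "\n" + candidate)
    alone = compressed_size(base)
    return (joint - alone) / max(1, compressed_size(candidate))

# Toy usage: two known routines and one unfamiliar event sequence.
known = ["kettle_on kettle_off cup_cupboard", "door_open hall_motion door_close"]
print(novelty_score(known, "kettle_on kettle_off cup_cupboard"))    # low: already known
print(novelty_score(known, "window_open smoke_sensor window_open")) # higher: novel behaviour
```
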
  5. Hentabli H, Bengherbia B, Saeed F, Salim N, Nafea I, Toubal A, et al.
    Int J Mol Sci, 2022 Oct 30;23(21).
    PMID: 36362018 DOI: 10.3390/ijms232113230
    Determining and modeling the possible behaviour and actions of molecules requires investigating the basic structural features and physicochemical properties that determine their behaviour during chemical, physical, biological, and environmental processes. Computational approaches such as machine learning methods are alternatives for predicting the physicochemical properties of molecules based on their structures. However, the limited accuracy and high error rates of such predictions restrict their use. In this paper, a novel technique based on a deep learning convolutional neural network (CNN) for predicting the bioactivity of chemical compounds is proposed and developed. The molecules are represented in the new matrix format Mol2mat, a molecular matrix representation adapted from the well-known 2D-fingerprint descriptors. To evaluate the performance of the proposed methods, a series of experiments were conducted using two standard datasets, namely the MDL Drug Data Report (MDDR) and Sutherland datasets, comprising 10 homogeneous and 14 heterogeneous activity classes. After analysing the eight fingerprints, all probable combinations of the five best descriptors were investigated. The results showed that a combination of three fingerprints, ECFP4, EPFP4, and ECFC4, together with a CNN activity prediction process, achieved the highest performance of 98% AUC when compared to the state-of-the-art ML algorithms NaiveB, LSVM, and RBFN.
    Matched MeSH terms: Machine Learning*
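
    As a rough illustration of the approach above, the following PyTorch sketch classifies a matrix-style molecular representation with a small CNN. The matrix size, network depth, and number of activity classes are assumptions for the example, not the authors' Mol2mat/CNN configuration.

```python
# Minimal CNN over a 2D molecular-matrix representation (sizes are illustrative).
import torch
import torch.nn as nn

class FingerprintCNN(nn.Module):
    def __init__(self, n_classes: int = 10, side: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (side // 4) * (side // 4), n_classes)

    def forward(self, x):
        # x: (batch, 1, side, side) matrix built from fingerprint bits
        return self.classifier(self.features(x).flatten(1))

model = FingerprintCNN()
dummy = torch.rand(4, 1, 32, 32)   # 4 hypothetical molecules
print(model(dummy).shape)          # torch.Size([4, 10]) activity-class logits
```
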
  6. Hussain S, Mustafa MW, Al-Shqeerat KHA, Saeed F, Al-Rimy BAS
    Sensors (Basel), 2021 Dec 17;21(24).
    PMID: 34960516 DOI: 10.3390/s21248423
    This study presents a novel feature-engineered-natural gradient descent ensemble-boosting (NGBoost) machine-learning framework for detecting fraud in power consumption data. The proposed framework was sequentially executed in three stages: data pre-processing, feature engineering, and model evaluation. It utilized the random forest algorithm-based imputation technique initially to impute the missing data entries in the acquired smart meter dataset. In the second phase, the majority weighted minority oversampling technique (MWMOTE) algorithm was used to avoid an unequal distribution of data samples among different classes. The time-series feature-extraction library and whale optimization algorithm were utilized to extract and select the most relevant features from the kWh reading of consumers. Once the most relevant features were acquired, the model training and testing process was initiated by using the NGBoost algorithm to classify the consumers into two distinct categories ("Healthy" and "Theft"). Finally, each input feature's impact (positive or negative) in predicting the target variable was recognized with the tree SHAP additive-explanations algorithm. The proposed framework achieved an accuracy of 93%, recall of 91%, and precision of 95%, which was greater than all the competing models, and thus validated its efficacy and significance in the studied field of research.
    Matched MeSH terms: Machine Learning*
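
    The staged pipeline described above (impute, rebalance, boost, explain) can be sketched with common substitutes: median imputation in place of random forest imputation, SMOTE in place of MWMOTE, and scikit-learn gradient boosting in place of NGBoost. The data and feature meanings below are placeholders.

```python
# Simplified stand-in for the impute -> rebalance -> boosted-classification pipeline
# separating "Healthy" (0) from "Theft" (1) consumers.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                              # e.g. weekly kWh statistics per consumer
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 1.0).astype(int)  # imbalanced "Theft" class
X[rng.random(X.shape) < 0.05] = np.nan                     # simulate missing smart-meter entries

X = SimpleImputer(strategy="median").fit_transform(X)      # stand-in for RF-based imputation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # stand-in for MWMOTE

clf = HistGradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)  # stand-in for NGBoost
pred = clf.predict(X_te)
print("precision", precision_score(y_te, pred), "recall", recall_score(y_te, pred))
```
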
  7. Muthaiyah S, Phang K, Sembakutti S
    F1000Res, 2021;10:892.
    PMID: 35035890 DOI: 10.12688/f1000research.72880.1
    Background: Changing trends in the use of technology have become a compelling force to be reckoned with for the accounting and finance profession. The curriculum offered in higher learning institutions must be quickly revamped so that students who complete a bachelor's degree are digitally competent upon graduation. With US$55.3 billion invested in FinTech in 2019 alone and more than 72% of accounting jobs being automated, graduates must be trained in digital skills to be future-proof. Accounting and finance graduates must be made competent in skills related to digital content such as blockchain technology, information assets, and autonomous peer-to-peer systems, to name a few. Methods: We used a three-phase approach: 1) careful mapping of digital topics taught within the course structure offered at these institutions; 2) review of current best practices and digital learning tools for digital inclusion, ascertained from the literature; and 3) interviews with 80 experts in a think-tank group on antecedents, awareness, and problems in relation to digital inclusion within the curriculum, to validate our research objective. Results: Eleven key tools for inclusion in the curriculum were discussed with experts and then mapped to the current curricula offered at institutions. We discovered that less than 5% of these were being taught. In total, 78% of experts agreed that digital content is inevitable, 90% agreed that digital inclusion based on the tools that were discussed will yield great benefits for students, and 75% agreed that giving digital exposure to students must be standard practice. Conclusions: The response from experts confirms that digital inclusion is imperative, but instructors themselves lacked the know-how of emerging technologies. Only the curricula of institutions with approved bachelor's programs were included in this research. In our future work we hope to include all institutions and professional bodies as well.
    Matched MeSH terms: Learning*
  8. Adnan MSG, Siam ZS, Kabir I, Kabir Z, Ahmed MR, Hassan QK, et al.
    J Environ Manage, 2023 Jan 15;326(Pt B):116813.
    PMID: 36435143 DOI: 10.1016/j.jenvman.2022.116813
    Globally, many studies on machine learning (ML)-based flood susceptibility modeling have been carried out in recent years. While the majority of those models produce reasonably accurate flood predictions, the outcomes are subject to uncertainty, since flood susceptibility models (FSMs) may produce varying spatial predictions. However, there have not been many attempts to address these uncertainties, because identifying spatial agreement in flood projections is a complex process. This study presents a framework for reducing spatial disagreement among four standalone and hybridized ML-based FSMs: random forest (RF), k-nearest neighbor (KNN), multilayer perceptron (MLP), and hybridized genetic algorithm-Gaussian radial basis function-support vector regression (GA-RBF-SVR). In addition, an optimized model was developed by combining the outcomes of those four models. The southwest coastal region of Bangladesh was selected as the case area. A comparable percentage of flood potential area (approximately 60% of the total land area) was produced by all ML-based models. Despite achieving high prediction accuracy, spatial discrepancy in the model outcomes was observed, with pixel-wise correlation coefficients across different models ranging from 0.62 to 0.91. The optimized model exhibited high prediction accuracy and improved spatial agreement by reducing the number of classification errors. The framework presented in this study might aid in the formulation of risk-based development plans and the enhancement of current early warning systems.
    Matched MeSH terms: Machine Learning*
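
    A minimal sketch of the agreement analysis described above: compute a pixel-wise correlation between two susceptibility maps and form a naive averaged ensemble. The rasters below are synthetic; the study itself combines four models over georeferenced data.

```python
# Pixel-wise agreement between two hypothetical flood susceptibility maps.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.random((100, 100))                                   # latent susceptibility (synthetic)
map_rf  = np.clip(truth + 0.10 * rng.normal(size=truth.shape), 0, 1)
map_knn = np.clip(truth + 0.20 * rng.normal(size=truth.shape), 0, 1)

# Pixel-wise Pearson correlation between the two model outputs.
r = np.corrcoef(map_rf.ravel(), map_knn.ravel())[0, 1]
print(f"pixel-wise correlation: {r:.2f}")

# A naive "optimized" map: average the members, then threshold into flood-prone pixels.
ensemble = (map_rf + map_knn) / 2
flood_prone_share = (ensemble > 0.5).mean()
print(f"share of area classed as flood-prone: {flood_prone_share:.0%}")
```
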
  9. Wang C, Omar Dev RD, Soh KG, Mohd Nasirudddin NJ, Yuan Y, Ji X
    Front Public Health, 2023;11:1073423.
    PMID: 36969628 DOI: 10.3389/fpubh.2023.1073423
    This review aims to provide a detailed overview of the current status and development trends of blended learning in physical education by reviewing journal articles from the Web of Science (WOS) database. Several dimensions of blended learning were examined, including research trends, participants, online learning tools, theoretical frameworks, evaluation methods, application domains, research topics, and challenges. Following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), a total of 22 journal articles were included in the current review. The findings reveal that the number of blended learning articles in physical education has increased since 2018, indicating that the incorporation of online learning tools into physical education courses has grown in popularity. Most of the reviewed articles focus on undergraduates, suggesting that future attention should be placed on K-12 students, teachers, and educational institutions. Only a few articles applied a theoretical framework, and the assessment methods were relatively homogeneous, consisting mostly of questionnaires. This review also finds that most of the studies centre on dynamic physical education. In terms of research topics, most journal articles focus on perceptions, learning outcomes, satisfaction, and motivation, which are preliminary aspects of blended learning research. Although the benefits of blended learning are evident, this review identifies five challenges of blended learning: instructional design challenges, technological literacy and competency challenges, self-regulation challenges, alienation and isolation challenges, and belief challenges. Finally, a number of recommendations for future research are presented.
    Matched MeSH terms: Learning*
  10. Zehra S, Faseeha U, Syed HJ, Samad F, Ibrahim AO, Abulfaraj AW, et al.
    Sensors (Basel), 2023 Jun 05;23(11).
    PMID: 37300067 DOI: 10.3390/s23115340
    Network function virtualization (NFV) is a rapidly growing technology that enables the virtualization of traditional network hardware components, offering benefits such as cost reduction, increased flexibility, and efficient resource utilization. Moreover, NFV plays a crucial role in sensor and IoT networks by ensuring optimal resource usage and effective network management. However, adopting NFV in these networks also brings security challenges that must be promptly and effectively addressed. This survey paper focuses on exploring the security challenges associated with NFV. It proposes the utilization of anomaly detection techniques as a means to mitigate the potential risks of cyber attacks. The research evaluates the strengths and weaknesses of various machine learning-based algorithms for detecting network-based anomalies in NFV networks. By providing insights into the most efficient algorithm for timely and effective anomaly detection in NFV networks, this study aims to assist network administrators and security professionals in enhancing the security of NFV deployments, thus safeguarding the integrity and performance of sensors and IoT systems.
    Matched MeSH terms: Machine Learning*
  11. Wang X, Yu L, Wang Z
    J Environ Public Health, 2022;2022:9602876.
    PMID: 36200091 DOI: 10.1155/2022/9602876
    Blended learning has become a dominant teaching approach as colleges and universities evolve. A well-designed learning environment can reflect college and university teaching quality, improve undergraduates' literacy, and support talent training. This paper introduces a data mining method for undergraduate comprehensive literacy education, discovers association rules in the evaluation data, and presents a theory- and technology-driven undergraduate comprehensive literacy evaluation model together with a BP neural network model for a blended learning environment, which supports the evaluation of students' comprehensive literacy and the construction of a good learning environment. The results demonstrate that the classification prediction accuracy for undergraduates is consistent across data mining runs, mostly reaching 99.58 percent. For both training and test samples, the prediction of undergraduates' comprehensive literacy is acceptable, which illustrates the validity of the data mining algorithm model and its strong practical value for developing a better learning environment.
    Matched MeSH terms: Learning*
  12. Asim Shahid M, Alam MM, Mohd Su'ud M
    PLoS One, 2023;18(4):e0284209.
    PMID: 37053173 DOI: 10.1371/journal.pone.0284209
    The benefits and opportunities offered by cloud computing make it one of the fastest-growing technologies in the computer industry, and addressing its difficulties and issues makes more users likely to accept and use the technology. The proposed research compares machine learning (ML) algorithms, namely Naïve Bayes (NB), Library Support Vector Machine (LibSVM), Multinomial Logistic Regression (MLR), Sequential Minimal Optimization (SMO), K-Nearest Neighbor (KNN), and Random Forest (RF), to determine which classifier gives better accuracy and lower fault-prediction error. In this research, for the secondary data results, the NB classifier gives the highest accuracy and lowest fault-prediction error on (CPU-Mem Mono) in terms of 80/20 (77.01%), 70/30 (76.05%), and 5-fold cross-validation (74.88%), and on (CPU-Mem Multi) in terms of 80/20 (89.72%), 70/30 (90.28%), and 5-fold cross-validation (92.83%). Furthermore, on (HDD Mono) the SMO classifier gives the highest accuracy and lowest fault-prediction error in terms of 80/20 (87.72%), 70/30 (89.41%), and 5-fold cross-validation (88.38%), and on (HDD-Multi) in terms of 80/20 (93.64%), 70/30 (90.91%), and 5-fold cross-validation (88.20%). For the primary data results, the RF classifier gives the highest accuracy and lowest fault-prediction error in terms of 80/20 (97.14%), 70/30 (96.19%), and 5-fold cross-validation (95.85%), but its algorithm complexity (0.17 seconds) is not good. In terms of 80/20 (95.71%), 70/30 (95.71%), and 5-fold cross-validation (95.71%), SMO has the second-highest accuracy and lowest fault-prediction error, but its algorithm complexity is good (0.3 seconds). The difference in accuracy and fault prediction between RF and SMO is only 0.13%, and the difference in time complexity is 14 seconds. We therefore decided to modify SMO. Finally, a Modified Sequential Minimal Optimization (MSMO) algorithm is proposed to achieve the highest accuracy and lowest fault-prediction error in terms of 80/20 (96.42%), 70/30 (96.42%), and 5-fold cross-validation (96.50%).
    Matched MeSH terms: Machine Learning*
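
    The evaluation protocol above (hold-out splits plus cross-validation across several classifiers) can be sketched with scikit-learn; SVC stands in for the SMO/LibSVM implementations and the dataset is a synthetic placeholder.

```python
# Compare several classifiers with an 80/20 hold-out split and 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "NB": GaussianNB(),
    "SMO (SVC stand-in)": SVC(),
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    holdout = model.fit(X_tr, y_tr).score(X_te, y_te)   # 80/20 hold-out accuracy
    cv = cross_val_score(model, X, y, cv=5).mean()       # 5-fold CV accuracy
    print(f"{name:20s} 80/20={holdout:.3f}  5-fold CV={cv:.3f}")
```
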
  13. Menon S, Anand D, Kavita, Verma S, Kaur M, Jhanjhi NZ, et al.
    Sensors (Basel), 2023 Jul 04;23(13).
    PMID: 37447981 DOI: 10.3390/s23136132
    With the increasing growth rate of smart home devices and their interconnectivity via the Internet of Things (IoT), security threats to the communication network have become a concern. This paper proposes a learning engine for a smart home communication network that utilizes blockchain-based secure communication and a cloud-based data evaluation layer to segregate and rank data on the basis of three broad categories of transactions (T), namely Smart T, Mod T, and Avoid T. The learning engine utilizes a neural network for the training and classification of the categories, which helps the blockchain layer improve its decision-making process. The contributions of this paper include the application of a secure blockchain layer for user authentication and the generation of a ledger for the communication network; the utilization of the cloud-based data evaluation layer; the enhancement of an SI-based algorithm for training; and the utilization of a neural engine for the precise training and classification of categories. The proposed algorithm outperformed the Fused Real-Time Sequential Deep Extreme Learning Machine (RTS-DELM) system, the data fusion technique, and artificial intelligence Internet of Things technology in terms of computation complexity, false authentication rate, and qualitative parameters, with a lower average computation complexity; in addition, it ensures a secure, efficient smart home communication network to enhance the lifestyle of human beings.
    Matched MeSH terms: Machine Learning; Learning
  14. Lim JY, Lim KM, Lee CP, Tan YX
    Neural Netw, 2023 Aug;165:19-30.
    PMID: 37263089 DOI: 10.1016/j.neunet.2023.05.037
    Few-shot learning aims to train a model with a limited number of base class samples to classify novel class samples. However, attaining generalization with a limited number of samples is not a trivial task. This paper proposes a novel few-shot learning approach named Self-supervised Contrastive Learning (SCL) that enriches the model representation with multiple self-supervision objectives. Given the base class samples, the model is trained with the base class loss. Subsequently, contrastive-based self-supervision is introduced to minimize the distance between each training sample and its augmented variants to improve sample discrimination. To recognize distant samples, rotation-based self-supervision is proposed to enable the model to learn to recognize the rotation degree of the samples for better sample diversity. A multitask environment is introduced where each training sample is assigned two class labels: a base class label and a rotation class label. Complex augmentation is put forth to help the model learn a deeper understanding of the object; the image structure of the training samples is augmented independently of the base class information. The proposed SCL is trained to minimize the base class loss, contrastive distance loss, and rotation class loss simultaneously to learn generic features and improve novel class performance. With the multiple self-supervision objectives, the proposed SCL outperforms state-of-the-art few-shot approaches on few-shot image classification benchmark datasets.
    Matched MeSH terms: Learning*
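
    A schematic sketch of the multi-objective training step described above: a base-class loss, a rotation-prediction loss, and a contrastive loss pulling each sample toward its augmented view are summed and backpropagated. Network sizes, augmentations, and loss weights below are placeholders, not the authors' SCL configuration.

```python
# One multi-objective training step combining base, rotation, and contrastive losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU())
base_head = nn.Linear(128, 5)    # base-class classifier (5 classes assumed)
rot_head = nn.Linear(128, 4)     # rotation classifier: 0/90/180/270 degrees

x = torch.rand(8, 1, 32, 32)                    # a toy batch of images
y_base = torch.randint(0, 5, (8,))
k = torch.randint(0, 4, (1,)).item()            # one rotation per batch, for brevity
x_rot = torch.rot90(x, k, dims=(2, 3))
y_rot = torch.full((8,), k)

z, z_aug, z_rot = encoder(x), encoder(torch.flip(x, dims=(3,))), encoder(x_rot)

loss_base = F.cross_entropy(base_head(z), y_base)
loss_rot = F.cross_entropy(rot_head(z_rot), y_rot)
loss_contrast = (1 - F.cosine_similarity(z, z_aug)).mean()   # pull augmented view close

loss = loss_base + loss_rot + loss_contrast                  # equal weights assumed
loss.backward()
print(float(loss))
```
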
  15. Ong SQ, Isawasan P, Ngesom AMM, Shahar H, Lasim AM, Nair G
    Sci Rep, 2023 Nov 05;13(1):19129.
    PMID: 37926755 DOI: 10.1038/s41598-023-46342-2
    Machine learning (ML) algorithms are receiving a lot of attention in the development of predictive models for monitoring dengue transmission rates. Previous work has focused only on specific weather variables and algorithms, and there is still a need for a model that uses more variables and algorithms with higher performance. In this study, we use vector indices and meteorological data as predictors to develop ML models. We trained and validated seven ML algorithms, including an ensemble ML method, and compared their performance using the receiver operating characteristic (ROC) curve with the area under the curve (AUC), accuracy, and F1 score. Our results show that ensemble ML methods such as XGBoost, AdaBoost, and Random Forest perform better than logistic regression, Naïve Bayes, decision tree, and support vector machine (SVM), with XGBoost having the highest AUC, accuracy, and F1 score. Analysis of variable importance showed that the container index was the least important. By removing this variable, the ML models improved their performance by at least 6% in AUC and F1 score. Our results provide a framework for future studies on the use of predictive models in the development of an early warning system.
    Matched MeSH terms: Machine Learning*
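
    A hedged sketch of the comparison loop above: train several classifiers on vector-index and weather predictors, compare AUC and F1, then drop one predictor (the container index in the study) and re-evaluate. The data and predictor meanings are synthetic placeholders.

```python
# Compare a few classifiers by AUC/F1, then repeat with one predictor removed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 6))          # e.g. ovitrap index, container index, rain, temperature, ...
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(size=800) > 0).astype(int)  # outbreak proxy label

def evaluate(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [("LogReg", LogisticRegression(max_iter=1000)),
                        ("AdaBoost", AdaBoostClassifier(random_state=0)),
                        ("RandomForest", RandomForestClassifier(random_state=0))]:
        model.fit(X_tr, y_tr)
        prob = model.predict_proba(X_te)[:, 1]
        print(f"{name:12s} AUC={roc_auc_score(y_te, prob):.3f} "
              f"F1={f1_score(y_te, (prob > 0.5).astype(int)):.3f}")

evaluate(X, y)                          # all predictors
evaluate(np.delete(X, 1, axis=1), y)    # predictors with the least important one removed
```
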
  16. AlThuwaynee OF, Kim SW, Najemaden MA, Aydda A, Balogun AL, Fayyadh MM, et al.
    Environ Sci Pollut Res Int, 2021 Aug;28(32):43544-43566.
    PMID: 33834339 DOI: 10.1007/s11356-021-13255-4
    This study investigates uncertainty in machine learning that can occur when there is significant variance in the prediction importance levels of the independent variables, especially when the ROC fails to reflect the unbalanced effect of prediction variables. A variable drop-off loop function, based on the concept of early termination for reduction of model capacity, regularization, and generalization control, was tested. A susceptibility index for airborne particulate matter of less than 10 μm diameter (PM10) was modeled using monthly maximum values, and spectral bands and indices from Landsat 8 imagery and OpenStreetMap data were used to prepare a range of independent variables. Probability and classification index maps were prepared using extreme gradient boosting (XGBoost) and random forest (RF) algorithms. These were assessed against utility criteria such as a confusion matrix of overall accuracy, quantity of variables, processing delay, degree of overfitting, importance distribution, and the area under the receiver operating characteristic curve (ROC).
    Matched MeSH terms: Machine Learning*
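
    The variable drop-off loop can be sketched as follows: repeatedly retrain a random forest, remove the least important predictor, and terminate early once validation accuracy degrades. The stopping rule and data are assumptions for illustration, not the paper's exact function.

```python
# Illustrative variable drop-off loop with an early-termination rule.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=15, n_informative=5, random_state=0)
cols = list(range(X.shape[1]))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

best_acc = 0.0
while len(cols) > 1:
    rf = RandomForestClassifier(random_state=0).fit(X_tr[:, cols], y_tr)
    acc = rf.score(X_te[:, cols], y_te)
    print(f"{len(cols):2d} variables -> accuracy {acc:.3f}")
    if acc + 0.02 < best_acc:            # early termination once performance drops
        break
    best_acc = max(best_acc, acc)
    cols.pop(int(np.argmin(rf.feature_importances_)))   # drop the least important variable
```
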
  17. Su G, Jiang P
    Bioresour Technol, 2024 May;399:130519.
    PMID: 38437964 DOI: 10.1016/j.biortech.2024.130519
    This study developed six machine learning models to predict the biochar properties from the dry torrefaction of lignocellulosic biomass by using biomass characteristics and torrefaction conditions as input variables. After optimization, gradient boosting machines were the optimal model, with the highest coefficient of determination ranging from 0.89 to 0.94. Torrefaction conditions exhibited a higher relative contribution to the yield and higher heating value (HHV) of biochar than biomass characteristics. Temperature was the dominant contributor to the elemental and proximate composition and the yield and HHV of biochar. Feature importance and SHapley Additive exPlanations revealed the effect of each influential factor on the target variables and the interactions between these factors in torrefaction. Software that can accurately predict the element, yield, and HHV of biochar was developed. These findings provide a comprehensive understanding of the key factors and their interactions influencing the torrefaction process and biochar properties.
    Matched MeSH terms: Machine Learning*
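
    A small sketch of the modelling idea above: gradient boosting regression mapping biomass characteristics and torrefaction conditions to biochar HHV, with impurity-based feature importances as a simpler stand-in for SHAP values. The feature list and data are fabricated placeholders.

```python
# Gradient boosting regression of biochar HHV from illustrative input variables.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "carbon_pct": rng.uniform(40, 55, 300),          # biomass characteristic
    "volatile_matter_pct": rng.uniform(60, 85, 300), # biomass characteristic
    "temperature_C": rng.uniform(200, 300, 300),     # torrefaction condition
    "residence_time_min": rng.uniform(10, 60, 300),  # torrefaction condition
})
hhv = 0.4 * X["carbon_pct"] + 0.03 * X["temperature_C"] + rng.normal(0, 0.5, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, hhv, random_state=0)
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2:", round(gbm.score(X_te, y_te), 3))
print(dict(zip(X.columns, gbm.feature_importances_.round(3))))   # rough importance stand-in for SHAP
```
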
  18. Zhou S, Hudin NS
    PLoS One, 2024;19(4):e0299087.
    PMID: 38635519 DOI: 10.1371/journal.pone.0299087
    In recent years, the global e-commerce landscape has witnessed rapid growth, with sales reaching a new peak in the past year and expected to rise further in the coming years. Amid this e-commerce boom, accurately predicting user purchase behavior has become crucial for commercial success. We introduce a novel framework integrating three innovative approaches to enhance the prediction model's effectiveness. First, we integrate an event-based timestamp encoding within a time-series attention model, effectively capturing the dynamic and temporal aspects of user behavior. This aspect is often neglected in traditional user purchase prediction methods, leading to suboptimal accuracy. Second, we incorporate Graph Neural Networks (GNNs) to analyze user behavior. By modeling users and their actions as nodes and edges within a graph structure, we capture complex relationships and patterns in user behavior more effectively than current models, offering a nuanced and comprehensive analysis. Lastly, our framework transcends traditional learning strategies by implementing advanced meta-learning techniques. This enables the model to autonomously adjust learning parameters, including the learning rate, in response to new and evolving data environments, thereby significantly enhancing its adaptability and learning efficiency. Through extensive experiments on diverse real-world e-commerce datasets, our model demonstrates superior performance, particularly in accuracy and adaptability in large-scale data scenarios. This study not only overcomes the existing challenges in analyzing e-commerce user behavior but also sets a foundation for future exploration in this dynamic field. We believe our contributions provide significant insights and tools for e-commerce platforms to better understand and cater to their users, ultimately driving sales and improving user experiences.
    Matched MeSH terms: Learning*
  19. Salam A, Mohamad N, Siraj HH, Kamarudin MA, Yaman MN, Bujang SM
    Natl Med J India, 2014 Nov-Dec;27(6):350.
    PMID: 26133346
    Matched MeSH terms: Learning*
  20. Teo BG, Dhillon SK
    BMC Bioinformatics, 2019 Dec 24;20(Suppl 19):658.
    PMID: 31870297 DOI: 10.1186/s12859-019-3210-x
    BACKGROUND: Studying the structural and functional morphology of small organisms such as monogeneans is difficult due to the lack of visualization in three dimensions. One possible way to resolve this visualization issue is to create digital 3D models, which may aid researchers in studying the morphology and function of the monogenean. However, the development of 3D models is a tedious procedure, as one has to repeat an entire complicated modelling process for every new target 3D shape using comprehensive 3D modelling software. This study was designed to develop an alternative 3D modelling approach to build 3D models of monogenean anchors, which can be used to understand these morphological structures in three dimensions. This alternative 3D modelling approach aims to avoid repeating the tedious modelling procedure for every single target 3D model from scratch.

    RESULT: An automated 3D modeling pipeline empowered by an Artificial Neural Network (ANN) was developed. This automated 3D modelling pipeline enables automated deformation of a generic 3D model of monogenean anchor into another target 3D anchor. The 3D modelling pipeline empowered by ANN has managed to automate the generation of the 8 target 3D models (representing 8 species: Dactylogyrus primaries, Pellucidhaptor merus, Dactylogyrus falcatus, Dactylogyrus vastator, Dactylogyrus pterocleidus, Dactylogyrus falciunguis, Chauhanellus auriculatum and Chauhanellus caelatus) of monogenean anchor from the respective 2D illustrations input without repeating the tedious modelling procedure.

    CONCLUSIONS: Despite some constraints and limitations, the automated 3D modelling pipeline developed in this study has demonstrated a working application of a machine learning approach to 3D modelling. This study has not only developed an automated 3D modelling pipeline but also demonstrated a cross-disciplinary research design that integrates machine learning into a specific domain of study, such as 3D modelling of biological structures.

    Matched MeSH terms: Machine Learning*
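
    A highly simplified sketch of the idea above: a small neural network learns a mapping from 2D illustration landmarks to per-vertex offsets that deform a generic 3D template into a target anchor shape. The landmark counts, mesh size, and training data are invented for illustration and do not reflect the paper's actual pipeline.

```python
# Toy ANN mapping flattened 2D landmark coordinates to per-vertex 3D offsets.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_landmarks, n_vertices = 20, 200
X = rng.normal(size=(60, n_landmarks * 2))            # flattened 2D landmark coordinates
W = rng.normal(size=(n_landmarks * 2, n_vertices * 3)) * 0.05
Y = X @ W                                             # synthetic per-vertex (x, y, z) offsets

ann = MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000, random_state=0).fit(X, Y)
template = rng.normal(size=(n_vertices, 3))           # generic anchor mesh vertices
offsets = ann.predict(X[:1]).reshape(n_vertices, 3)
deformed = template + offsets                         # deformed target anchor vertices
print(deformed.shape)
```
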