  1. Rashid M, Sulaiman N, P P Abdul Majeed A, Musa RM, Ab Nasir AF, Bari BS, et al.
    Front Neurorobot, 2020;14:25.
    PMID: 32581758 DOI: 10.3389/fnbot.2020.00025
    Brain-Computer Interface (BCI), in essence, aims at controlling different assistive devices through the utilization of brain waves. It is worth noting that the application of BCI is not limited to medical applications, and hence, the research in this field has gained due attention. Moreover, the significant number of related publications over the past two decades further indicates the consistent improvements and breakthroughs that have been made in this particular field. Nonetheless, it is also worth mentioning that with these improvements, new challenges are constantly discovered. This article provides a comprehensive review of the state-of-the-art of a complete BCI system. First, a brief overview of electroencephalogram (EEG)-based BCI systems is given. Secondly, a considerable number of popular BCI applications are reviewed in terms of electrophysiological control signals, feature extraction, classification algorithms, and performance evaluation metrics. Finally, the challenges to the recent BCI systems are discussed, and possible solutions to mitigate the issues are recommended.
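    The pipeline this review surveys (control-signal acquisition, feature extraction, classification, and performance evaluation) maps onto a fairly standard software skeleton. Below is a minimal sketch of one such EEG pipeline on synthetic data, using band-power features and an LDA classifier; the sampling rate, frequency bands, array shapes, and classifier choice are illustrative assumptions, not recommendations drawn from the review.

        # Minimal sketch of a common EEG-based BCI pipeline: band-power feature
        # extraction followed by a linear classifier. Data shapes, bands, and the
        # LDA classifier are illustrative assumptions, not the review's method.
        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        fs = 250                                          # sampling rate in Hz (assumed)
        epochs = rng.standard_normal((120, 8, 2 * fs))    # trials x channels x samples (synthetic)
        labels = rng.integers(0, 2, size=120)             # binary task labels (synthetic)

        bands = {"mu": (8, 12), "beta": (13, 30)}         # classic sensorimotor bands

        def band_power_features(x, fs, bands):
            """Average PSD power per channel and band -> one feature vector per trial."""
            freqs, psd = welch(x, fs=fs, nperseg=fs, axis=-1)
            feats = []
            for lo, hi in bands.values():
                mask = (freqs >= lo) & (freqs <= hi)
                feats.append(psd[..., mask].mean(axis=-1))    # trials x channels
            return np.concatenate(feats, axis=-1)             # trials x (channels * n_bands)

        X = band_power_features(epochs, fs, bands)
        clf = LinearDiscriminantAnalysis()
        print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())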
  2. Liu H, Liu Y, Zhang R, Wu X
    Front Neurorobot, 2021;15:675827.
    PMID: 34393749 DOI: 10.3389/fnbot.2021.675827
    The analysis of student behavior in class plays a key role in teaching and educational reform, helping universities find effective ways to improve students' learning efficiency and innovation ability; it is also an effective way to cultivate innovative talent. Traditional behavior recognition methods have many disadvantages, such as poor robustness and low efficiency. Approaching the problem from a heterogeneous view perception perspective, we therefore propose a 3-D multiscale residual dense network for analyzing student behavior recognition in class. First, the proposed method adopts 3-D multiscale residual dense blocks as the basic module of the network; this module extracts hierarchical features of student behavior through densely connected convolutional layers. Second, local dense features of student behavior are learned adaptively. Third, a residual connection module is used to improve training efficiency. Finally, experimental results show that the proposed algorithm has good robustness and transfer learning ability compared with state-of-the-art behavior recognition algorithms, and it can effectively handle multiple video behavior recognition tasks. The design of an intelligent human behavior recognition algorithm has great practical significance for analyzing students' learning and teaching in class.
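    As a rough illustration of the kind of building block described above, the sketch below implements a generic 3-D residual dense block in PyTorch: densely connected 3-D convolutions followed by local feature fusion and a residual connection. The layer count, channel sizes, and single-scale design are assumptions for illustration; the paper's exact multiscale architecture may differ.

        # Hedged sketch of a 3-D residual dense block: densely connected 3-D conv
        # layers, local feature fusion, and a residual connection. Layer counts and
        # channel sizes are illustrative assumptions.
        import torch
        import torch.nn as nn

        class ResidualDenseBlock3D(nn.Module):
            def __init__(self, channels=32, growth=16, n_layers=3):
                super().__init__()
                self.layers = nn.ModuleList()
                in_ch = channels
                for _ in range(n_layers):
                    self.layers.append(nn.Sequential(
                        nn.Conv3d(in_ch, growth, kernel_size=3, padding=1),
                        nn.ReLU(inplace=True),
                    ))
                    in_ch += growth                           # dense connectivity: concatenated features
                self.fuse = nn.Conv3d(in_ch, channels, kernel_size=1)   # local feature fusion

            def forward(self, x):
                feats = [x]
                for layer in self.layers:
                    feats.append(layer(torch.cat(feats, dim=1)))
                return x + self.fuse(torch.cat(feats, dim=1))           # residual connection

        # Example: a batch of 2 clips, 32 channels, 8 frames of 56x56 feature maps.
        block = ResidualDenseBlock3D()
        out = block(torch.randn(2, 32, 8, 56, 56))
        print(out.shape)    # torch.Size([2, 32, 8, 56, 56])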
  3. Dzulkifli MA, Hamzaid NA, Davis GM, Hasnan N
    Front Neurorobot, 2018;12:50.
    PMID: 30147650 DOI: 10.3389/fnbot.2018.00050
    This study sought to design and deploy a torque monitoring system using an artificial neural network (ANN) with mechanomyography (MMG) for situations where muscle torque cannot be independently quantified. The MMG signals from the quadriceps were used to derive knee torque during prolonged functional electrical stimulation (FES)-assisted isometric knee extensions and during standing in spinal cord injured (SCI) individuals. Three individuals with motor-complete SCI performed FES-evoked isometric quadriceps contractions on a Biodex dynamometer at a 30° knee angle and a fixed stimulation current, until the torque had declined to the minimum required for ANN model development. Two ANN models were developed from different inputs: root mean square (RMS) of the MMG, and RMS combined with zero crossing (RMS-ZC), both derived from the MMG. The performance of the ANNs was evaluated by comparing model-predicted torque against the actual torque derived from the dynamometer. MMG data from five other individuals with SCI who performed FES-evoked standing to fatigue-failure were used to validate the RMS and RMS-ZC ANN models: RMS and RMS-ZC of the MMG obtained from the FES standing experiments were provided as inputs to the developed models to calculate the predicted torque during FES-evoked standing. The average correlations between the predicted knee extension torque and the actual torque were 0.87 ± 0.11 for RMS and 0.84 ± 0.13 for RMS-ZC, and the average accuracies were 79 ± 14% for RMS and 86 ± 11% for RMS-ZC. Both models revealed significant trends of torque decrease, suggesting a critical point around a 50% torque drop where significant changes were observed in the RMS and RMS-ZC patterns. Based on these findings, both the RMS and RMS-ZC ANN models performed similarly well in predicting FES-evoked knee extension torques in this population; however, interference was observed in the RMS-ZC values around the time of knee buckling. The developed ANN models could be used to estimate muscle torque in real time, thereby providing safer automated FES control of standing in persons with motor-complete SCI.
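    The feature pipeline described here, windowed RMS and zero-crossing counts computed from the MMG signal and fed to a small neural network that regresses torque, can be sketched as follows. The window length, network size, and the synthetic signals below are assumptions for illustration, not the study's actual parameters.

        # Minimal sketch: windowed RMS and zero-crossing (ZC) features from an MMG
        # signal feeding a small neural network that regresses torque. Window length,
        # network size, and the synthetic data are assumptions.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def rms(window):
            return np.sqrt(np.mean(window ** 2))

        def zero_crossings(window):
            return np.sum(np.diff(np.signbit(window)) != 0)

        def mmg_features(signal, fs=1000, win_s=1.0):
            """Split the MMG signal into windows and return [RMS, ZC] per window."""
            n = int(fs * win_s)
            windows = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
            return np.array([[rms(w), zero_crossings(w)] for w in windows])

        # Synthetic stand-in data: a decaying "MMG" signal and a decaying torque trace.
        rng = np.random.default_rng(1)
        mmg = rng.standard_normal(60_000) * np.linspace(1.0, 0.3, 60_000)
        X = mmg_features(mmg)                        # 60 windows x 2 features (RMS, ZC)
        torque = np.linspace(40.0, 15.0, len(X))     # pretend dynamometer torque per window

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        model.fit(X, torque)
        print("Predicted torque (first 3 windows):", model.predict(X[:3]))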
  4. Lim JZ, Mountstephens J, Teo J
    Front Neurorobot, 2021;15:796895.
    PMID: 35177973 DOI: 10.3389/fnbot.2021.796895
    CONTEXT: Eye tracking is a technology to measure and determine the eye movements and eye positions of an individual. The eye data can be collected and recorded using an eye tracker. Eye-tracking data offer unprecedented insights into human actions and environments, digitizing how people communicate with computers, and providing novel opportunities to conduct passive biometric-based classification such as emotion prediction. The objective of this article is to review what specific machine learning features can be obtained from eye-tracking data for the classification task.

    METHODS: We performed a systematic literature review (SLR) covering eye-tracking studies in classification published from 2016 to the present. In the search process, we used four independent electronic databases: IEEE Xplore, the ACM Digital Library, ScienceDirect, and Google Scholar. The selection process was performed using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) search strategy, and we followed the processes indicated in PRISMA to choose the relevant articles.

    RESULTS: Out of the 420 articles returned by our initial search query, 37 were identified for the qualitative synthesis, having been deemed directly relevant to our research question based on our methodology.

    CONCLUSION: The features that could be extracted from eye-tracking data included pupil size, saccade, fixations, velocity, blink, pupil position, electrooculogram (EOG), and gaze point. Fixation was the most commonly used feature among the studies found.
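    As a concrete illustration of turning raw eye-tracking samples into classification features of the kind listed above (saccades, gaze velocity, pupil size, dispersion), a minimal sketch follows. The sampling rate, velocity threshold, and the simple velocity-based saccade rule are assumed values, not taken from any of the reviewed studies.

        # Illustrative sketch of extracting a small feature vector from raw gaze and
        # pupil samples. Thresholds and sampling rate are assumptions.
        import numpy as np

        def gaze_features(x, y, pupil, fs=60, sacc_vel_thresh=30.0):
            """Return: saccade count, mean gaze velocity, mean/std pupil size, dispersion."""
            vx, vy = np.gradient(x) * fs, np.gradient(y) * fs    # deg/s, assuming x,y in degrees
            speed = np.hypot(vx, vy)
            saccade_samples = speed > sacc_vel_thresh
            saccade_count = int(np.sum(np.diff(saccade_samples.astype(int)) == 1))  # saccade onsets
            return np.array([
                saccade_count,
                speed.mean(),
                np.nanmean(pupil),
                np.nanstd(pupil),
                x.std() + y.std(),           # crude gaze-dispersion proxy
            ])

        # Synthetic 10-second recording at 60 Hz.
        rng = np.random.default_rng(2)
        x = np.cumsum(rng.normal(0, 0.05, 600))
        y = np.cumsum(rng.normal(0, 0.05, 600))
        pupil = 3.0 + 0.2 * rng.standard_normal(600)
        print(gaze_features(x, y, pupil))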

  5. Guo B, Liu H, Niu L
    Front Neurorobot, 2023;17:1265936.
    PMID: 38111712 DOI: 10.3389/fnbot.2023.1265936
    Health monitoring is a critical aspect of personalized healthcare, enabling early detection and intervention for various medical conditions. The emergence of cloud-based robot-assisted systems has opened new possibilities for efficient and remote health monitoring. In this paper, we present a Transformer-based multi-modal fusion approach for health monitoring, focusing on the effects of cognitive workload, assessment of cognitive workload in human-machine collaboration, and acceptability in human-machine interactions. Additionally, we investigate biomechanical strain measurement and evaluation, utilizing wearable devices to assess biomechanical risks in working environments. Furthermore, we study muscle fatigue assessment during collaborative tasks and propose methods for improving safe physical interaction with cobots. Our approach integrates multi-modal data, including visual, audio, and sensor-based inputs, enabling a holistic assessment of an individual's health status. The core of our method lies in leveraging the powerful Transformer model, known for its ability to capture complex relationships in sequential data. Through effective fusion and representation learning, our approach extracts meaningful features for accurate health monitoring. Experimental results on diverse datasets demonstrate the superiority of our Transformer-based multi-modal fusion approach, outperforming existing methods in capturing intricate patterns and predicting health conditions. The significance of our research lies in revolutionizing remote health monitoring, providing more accurate and personalized healthcare services.
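    The fusion architecture described here can be sketched, very roughly, as per-modality projections into a shared token space, a Transformer encoder over the concatenated tokens, and a prediction head. The dimensions, modality names, pooling scheme, and classification head below are assumptions; the abstract does not specify the model at this level of detail.

        # Hedged sketch of a Transformer-based multi-modal fusion head: per-modality
        # projections, a Transformer encoder over concatenated tokens, and a
        # prediction head. All sizes and modality names are illustrative assumptions.
        import torch
        import torch.nn as nn

        class MultiModalFusion(nn.Module):
            def __init__(self, dims=None, d_model=128, n_classes=3):
                super().__init__()
                dims = dims or {"visual": 512, "audio": 128, "sensor": 32}
                self.proj = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in dims.items()})
                layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=2)
                self.head = nn.Linear(d_model, n_classes)

            def forward(self, inputs):
                # inputs: dict of modality -> (batch, seq_len_m, dim_m) tensors
                tokens = torch.cat([self.proj[m](x) for m, x in inputs.items()], dim=1)
                fused = self.encoder(tokens)           # cross-modal attention over all tokens
                return self.head(fused.mean(dim=1))    # mean-pool tokens, then predict

        model = MultiModalFusion()
        batch = {
            "visual": torch.randn(4, 16, 512),    # e.g. 16 frame embeddings
            "audio": torch.randn(4, 10, 128),     # e.g. 10 audio-segment embeddings
            "sensor": torch.randn(4, 50, 32),     # e.g. 50 wearable-sensor windows
        }
        print(model(batch).shape)    # torch.Size([4, 3])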