Displaying all 13 publications

  1. Ahmed MA, Zaidan BB, Zaidan AA, Salih MM, Lakulu MMB
    Sensors (Basel), 2018 Jul 09;18(7).
    PMID: 29987266 DOI: 10.3390/s18072208
    Loss of the ability to speak or hear exerts psychological and social impacts on the affected persons due to the lack of proper communication. Multiple, systematic scholarly interventions that vary by context have been implemented to overcome disability-related difficulties. Sign language recognition (SLR) systems based on sensory gloves are significant innovations that aim to capture data on the shape and movement of the human hand. Work on this technology, however, remains restricted and dispersed, so the available trends and gaps should be mapped to provide valuable insight into the technological landscape. Thus, a review was conducted to create a coherent taxonomy that organises the latest research into four main categories: development, framework, other hand gesture recognition, and reviews and surveys. We then analyse the characteristics of glove-based SLR devices, develop a roadmap of the technology's evolution, and discuss its limitations. This will help researchers understand the current options and gaps in this area, contributing to this line of research.
    Matched MeSH terms: Gestures
  2. Shaari AR, Mohd Jani MN, Mohamed Yunus AS
    MyJurnal
    The wheelchair has been an important assistive device, and demand is ever rising because of the growing physically handicapped and elderly populations. Recent developments in robotics and artificial intelligence open broad scope for developing more advanced, intelligent wheelchairs that overcome the limitations of existing traditional ones. This paper presents a prototype smart wheelchair implemented in hardware and driven by simple hand gestures: an accelerometer mounted on a hand glove senses the tilt angle of the user's hand movements and transmits a control signal to a receiver mounted on the wheelchair, which interprets the movement the user requires. The wheelchair control unit is developed by integrating an ATMEGA328 microcontroller with an Arduino UNO. The wheelchair is designed to allow people to move safely and to accomplish important daily tasks reliably. (A minimal sketch of this tilt-to-command logic follows this entry.)
    Matched MeSH terms: Gestures
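
Entry 2 publishes no code, and its control unit is ATMEGA328/Arduino firmware; purely as an illustration of the tilt-to-command idea it describes, here is a minimal Python sketch. The threshold value, axis conventions, and command names are assumptions made for illustration, not taken from the paper.

```python
# Minimal sketch (not the authors' firmware): mapping glove accelerometer
# tilt angles to discrete wheelchair motion commands. Threshold and axis
# conventions are illustrative assumptions.

TILT_THRESHOLD_DEG = 20.0  # assumed dead zone so small hand tremors are ignored

def tilt_to_command(pitch_deg: float, roll_deg: float) -> str:
    """Translate hand tilt (degrees) into a discrete wheelchair command."""
    if pitch_deg > TILT_THRESHOLD_DEG:
        return "FORWARD"
    if pitch_deg < -TILT_THRESHOLD_DEG:
        return "REVERSE"
    if roll_deg > TILT_THRESHOLD_DEG:
        return "RIGHT"
    if roll_deg < -TILT_THRESHOLD_DEG:
        return "LEFT"
    return "STOP"  # hand held level: no motion

if __name__ == "__main__":
    print(tilt_to_command(30.0, 5.0))    # hand tilted forward -> FORWARD
    print(tilt_to_command(-2.0, -25.0))  # hand tilted left -> LEFT
```
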
  3. Wirza R, Nazir S, Khan HU, García-Magariño I, Amin R
    J Healthc Eng, 2020;2020:8835544.
    PMID: 32963749 DOI: 10.1155/2020/8835544
    The medical system is being transformed by the growing use of medical information systems, electronic records, and smart, wearable, and handheld devices. The central nervous system controls the activities of the mind and the human body, and rapid medical and computational progress in this field enables practitioners and researchers to extract and visualise insight from these systems. Augmented reality incorporates virtual and real objects, running interactively in real time in a real environment, and its role in the central nervous system has become a thought-provoking topic. Gesture-interaction-based augmented reality in the central nervous system has enormous potential for reducing the cost of care, refining its quality, and cutting waste and error. To smooth this process, a comprehensive report of the available state-of-the-art work would help doctors and practitioners use it in decision making. This study summarises the published material on gesture-interaction-based augmented reality approaches in the central nervous system. It follows a systematic literature review protocol, which systematically collects, analyses, and derives facts from the collected papers. The data cover material published over 10 years; 78 papers were selected based on predefined inclusion, exclusion, and quality criteria. The study identifies work on augmented reality in the nervous system, its applications and techniques, and gesture interaction approaches in the nervous system. The findings show a steady year-on-year rise in articles and numerous studies relating augmented reality and gesture interaction approaches to different systems of the human body, particularly the nervous system. By organising and summarising the existing published work on augmented reality, this research will help practitioners and researchers survey most of the existing studies on augmented-reality-based gesture interaction approaches for the nervous system, which can eventually support the learning of complex anatomy.
    Matched MeSH terms: Gestures*
  4. Hamedi M, Salleh ShH, Astaraki M, Noor AM
    Biomed Eng Online, 2013;12:73.
    PMID: 23866903 DOI: 10.1186/1475-925X-12-73
    Recently, the recognition of different facial gestures from facial neuromuscular activity has been proposed for human-machine interfacing applications. Facial electromyogram (EMG) analysis is a complicated field in biomedical signal processing where accuracy and low computational cost are significant concerns. In this paper, a very fast versatile elliptic basis function neural network (VEBFNN) was proposed to classify different facial gestures. The effectiveness of different facial EMG time-domain features was also explored to identify the most discriminating ones. (An illustrative sketch of such time-domain features follows this entry.)
    Matched MeSH terms: Gestures*
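
Entry 4 compares facial EMG time-domain features without listing them here. The sketch below computes four classic time-domain features of the kind such studies evaluate (mean absolute value, root mean square, waveform length, zero crossings); the window size and threshold are illustrative assumptions, and the paper's exact feature set is not reproduced.

```python
# Illustrative sketch of common EMG time-domain features; not the paper's
# exact feature set or parameters.
import numpy as np

def emg_time_domain_features(window: np.ndarray, zc_threshold: float = 0.01) -> dict:
    """Compute classic time-domain features over one EMG analysis window."""
    mav = np.mean(np.abs(window))         # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))   # root mean square
    wl = np.sum(np.abs(np.diff(window)))  # waveform length
    # zero crossings, with a small amplitude threshold to suppress noise
    sign_changes = np.diff(np.sign(window)) != 0
    big_enough = np.abs(np.diff(window)) > zc_threshold
    zc = int(np.sum(sign_changes & big_enough))
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_emg = rng.normal(scale=0.1, size=256)  # one synthetic 256-sample window
    print(emg_time_domain_features(fake_emg))
```
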
  5. Adlina, S., Narimah, A.H.H., Hakimi, Z.A., N Adilah, H., N Syuhada, Y.
    MyJurnal
    Employee satisfaction surveys can provide the information needed to improve levels of productivity, job satisfaction, and loyalty. Management can identify the factors behind job issues and provide solutions to improve the working environment. A cross-sectional descriptive study on employee satisfaction among a health care district office's staff was conducted in Perak in March-April 2006. A total of 19 staff were randomly selected and interviewed during data collection. Almost all understood the objectives of the administration unit (94%) and were satisfied with the management's leadership style (78%-100%). The majority agreed that their relationships with their immediate superior and within the group were harmonious and professional (89%), and they preferred an open problem-solving method for handling conflict (72%). The most common incentives awarded by the administration to express gratitude to staff were certificates (56%); bonuses and medals (33%); and informal gestures (28%). The majority (83%) were also satisfied with the method used to disseminate information in their units. The majority agreed that the working environment in the administration unit was conducive (72%), that their ideas were given equal consideration during decision making (89%), and that training opportunities were given to them equally by the management (72%). This study revealed that employee satisfaction was determined by several factors, such as the management's leadership style; the opportunity to contribute skills and ideas; rewards and incentives; and a conducive working environment.
    Matched MeSH terms: Gestures
  6. Tan CK, Lim KM, Chang RKY, Lee CP, Alqahtani A
    Sensors (Basel), 2023 Jun 14;23(12).
    PMID: 37420722 DOI: 10.3390/s23125555
    Hand gesture recognition (HGR) is a crucial area of research that enhances communication by overcoming language barriers and facilitating human-computer interaction. Although previous works in HGR have employed deep neural networks, they fail to encode the orientation and position of the hand in the image. To address this issue, this paper proposes HGR-ViT, a Vision Transformer (ViT) model with an attention mechanism for hand gesture recognition. A hand gesture image is first split into fixed-size patches, which are projected into patch embeddings; positional embeddings are added to these embeddings to form learnable vectors that capture the positional information of the hand patches. The resulting sequence of vectors then serves as the input to a standard Transformer encoder to obtain the hand gesture representation, and a multilayer perceptron head on the encoder output classifies the hand gesture into the correct class. The proposed HGR-ViT obtains accuracies of 99.98%, 99.36%, and 99.85% on the American Sign Language (ASL) dataset, the ASL with Digits dataset, and the National University of Singapore (NUS) hand gesture dataset, respectively. (A minimal PyTorch-style sketch of this pipeline follows this entry.)
    Matched MeSH terms: Gestures*
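
The HGR-ViT abstract walks through the standard ViT pipeline: split the image into patches, project them to embeddings, add positional embeddings, run a Transformer encoder, and classify with an MLP head. The PyTorch sketch below shows that pipeline in miniature; all dimensions, depths, and the 26-class output are illustrative assumptions, not HGR-ViT's actual configuration.

```python
# Minimal ViT-style classifier sketch (illustrative; not HGR-ViT itself).
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=128,
                 depth=4, heads=4, num_classes=26):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # split the image into fixed-size patches and linearly project each one
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        # learnable positional embeddings carry each patch's location
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)  # classification head

    def forward(self, images):              # images: (B, 3, H, W)
        x = self.patch_embed(images)         # (B, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)     # (B, num_patches, dim)
        x = x + self.pos_embed               # add positional information
        x = self.encoder(x)                  # contextualised patch tokens
        return self.head(x.mean(dim=1))      # pool tokens, predict the class

if __name__ == "__main__":
    logits = TinyViT()(torch.randn(2, 3, 224, 224))
    print(logits.shape)                      # torch.Size([2, 26])
```
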
  7. Khoh WH, Pang YH, Yap HY
    F1000Res, 2022;11:283.
    PMID: 37600220 DOI: 10.12688/f1000research.74134.2
    Background: With the advances in current technology, hand gesture recognition has gained considerable attention. It has been extended to recognize more distinctive movements, such as a signature, in human-computer interaction (HCI) which enables the computer to identify a person in a non-contact acquisition environment. This application is known as in-air hand gesture signature recognition. To our knowledge, there are no publicly accessible databases and no detailed descriptions of the acquisitional protocol in this domain. Methods: This paper aims to demonstrate the procedure for collecting the in-air hand gesture signature's database. This database is disseminated as a reference database in the relevant field for evaluation purposes. The database is constructed from the signatures of 100 volunteer participants, who contributed their signatures in two different sessions. Each session provided 10 genuine samples enrolled using a Microsoft Kinect sensor camera to generate a genuine dataset. In addition, a forgery dataset was also collected by imitating the genuine samples. For evaluation, each sample was preprocessed with hand localization and predictive hand segmentation algorithms to extract the hand region. Then, several vector-based features were extracted. Results: In this work, classification performance analysis and system robustness analysis were carried out. In the classification analysis, a multiclass Support Vector Machine (SVM) was employed to classify the samples and 97.43% accuracy was achieved; while the system robustness analysis demonstrated low error rates of 2.41% and 5.07% in random forgery and skilled forgery attacks, respectively. Conclusions: These findings indicate that hand gesture signature is not only feasible for human classification, but its properties are also robust against forgery attacks.
    Matched MeSH terms: Gestures*
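
Entry 7's classification stage is a multiclass SVM over vector-based features extracted from segmented hand regions. The sketch below wires up only that stage with scikit-learn, substituting random placeholders for the paper's hand localisation and feature extraction; the kernel, feature dimension, and split are illustrative choices, so the printed accuracy is meaningless here.

```python
# Sketch of the multiclass-SVM classification stage only; feature extraction
# is replaced by random placeholders, so results are not meaningful.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 64))       # placeholder feature vectors
y = rng.integers(0, 100, size=1000)   # 100 signers, one class per signer

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

clf = SVC(kernel="rbf")               # scikit-learn handles multiclass internally
clf.fit(X_train, y_train)
print(f"accuracy: {clf.score(X_test, y_test):.4f}")
```
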
  8. Norzehan Sakamat, Siti Nabilah Sabri, Norizan Mat Diah
    Scientific Research Journal, 2017;14(2):35-48.
    MyJurnal
    Storytelling is an interactive social art that uses words and gestures to reveal the elements and images of a story while engaging the listener's imagination. A multimedia-based digital storytelling learning approach provides an interesting, interactive, engaging, and multisensory learning experience for children, who explore new experiences and scenarios as new stories are told. This study concentrates on determining the best combination of elements for designing effective digital storytelling applications specifically for dyslexic children, who have a common learning difficulty that can cause problems with reading, writing, spelling, and comprehension. These applications are designed with the objective of helping to improve dyslexic children's ability in reading and comprehension. Four elements were derived from an extensive literature study: multimedia components, a multisensory instructional approach, emotional design, and games design. The relationships among the elements were determined and described in detail, as they will contribute to the design and development of the application in further work. The strength of this study is that it models the combination of technology, psychology, and instructional approach as supporting components for developing an effective digital storytelling learning application for dyslexic children.
    Matched MeSH terms: Gestures
  9. Ahmed M. Mharib, Mohammad Hamiruce Marhaban, Abdul Rahman Ramli
    MyJurnal
    Skin detection has gained popularity and importance in the computer vision community. It is an essential step for important vision tasks such as the detection, tracking, and recognition of faces, hand segmentation for gesture analysis, person identification, video surveillance, and the filtering of objectionable web images. All these applications rest on the assumption that the regions of human skin have already been located. Numerous techniques for skin colour modelling and recognition have been proposed in the recent past. The aim of this paper is to compile the published pixel-based skin colour detection techniques, describe their key concepts, and summarise their advantages, disadvantages, and characteristic features.
    Matched MeSH terms: Gestures
  10. Alshammari RFN, Abd Rahman AH, Arshad H, Albahri OS
    Sensors (Basel), 2023 Dec 05;23(24).
    PMID: 38139465 DOI: 10.3390/s23249619
    Existing methods for scoring student presentations predominantly rely on computer-based implementations and do not incorporate a robotic multi-classification model. This limitation can result in potential misclassification issues as these approaches lack active feature learning capabilities due to fixed camera positions. Moreover, these scoring methods often solely focus on facial expressions and neglect other crucial factors, such as eye contact, hand gestures and body movements, thereby leading to potential biases or inaccuracies in scoring. To address these limitations, this study introduces Robotics-based Presentation Skill Scoring (RPSS), which employs a multi-model analysis. RPSS captures and analyses four key presentation parameters in real time, namely facial expressions, eye contact, hand gestures and body movements, and applies the fuzzy Delphi method for criteria selection and the analytic hierarchy process for weighting, thereby enabling decision makers or managers to assign varying weights to each criterion based on its relative importance. RPSS identifies five academic facial expressions and evaluates eye contact to achieve a comprehensive assessment and enhance its scoring accuracy. Specific sub-models are employed for each presentation parameter, namely EfficientNet for facial emotions, DeepEC for eye contact and an integrated Kalman and heuristic approach for hand and body movements. The scores are determined based on predefined rules. RPSS is implemented on a robot, and the results highlight its practical applicability. Each sub-model is rigorously evaluated offline and compared against benchmarks for selection. Real-world evaluations are also conducted by incorporating a novel active learning approach to improve performance by leveraging the robot's mobility. In a comparative evaluation with human tutors, RPSS achieves a remarkable average agreement of 99%, showcasing its effectiveness in assessing students' presentation skills.
    Matched MeSH terms: Gestures
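
RPSS weights its four presentation parameters with the analytic hierarchy process (AHP): a reciprocal pairwise comparison matrix is reduced to priority weights via its principal eigenvector. The sketch below illustrates that computation; the comparison values are invented for illustration and are not the weights RPSS uses.

```python
# Illustrative AHP weighting step: pairwise comparison matrix -> priority
# weights via the principal eigenvector. Comparison values are made up.
import numpy as np

criteria = ["facial expressions", "eye contact", "hand gestures", "body movements"]

# A[i, j] = how much more important criterion i is than criterion j
# (reciprocal matrix: A[j, i] = 1 / A[i, j])
A = np.array([
    [1.0, 2.0, 3.0, 3.0],
    [1/2, 1.0, 2.0, 2.0],
    [1/3, 1/2, 1.0, 1.0],
    [1/3, 1/2, 1.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()   # normalise so the weights sum to 1

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```
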
  11. Ahmed M. Mharib, Mohammad Hamiruce Marhaban, Abdul Rahman Ramli
    MyJurnal
    Skin colour is an important visual cue for face detection, face recognition, hand segmentation for gesture analysis, and the filtering of objectionable images. In this paper, an adaptive skin colour detection model is proposed, based on two bivariate normal distribution models of the skin chromatic subspace and on image segmentation using an automatic, adaptive multi-thresholding technique. Experimental results on images presenting a wide range of variations in lighting condition and background demonstrate the efficiency of the proposed skin-segmentation algorithm. (A rough single-Gaussian illustration follows this entry.)
    Matched MeSH terms: Gestures
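
Entry 11 models skin colour with bivariate normal distributions over the chromatic subspace. As a rough illustration, the sketch below scores pixels in normalised rg-chromaticity space against a single bivariate Gaussian and thresholds the Mahalanobis distance; the mean, covariance, and cutoff are invented values, and the paper itself uses two such models with adaptive multi-thresholding rather than this fixed threshold.

```python
# Rough sketch: single bivariate-Gaussian skin model in rg-chromaticity space.
# MEAN, COV, and THRESHOLD are illustrative assumptions, not fitted values.
import numpy as np

MEAN = np.array([0.42, 0.31])            # assumed skin (r, g) mean
COV = np.array([[0.004, -0.001],
                [-0.001, 0.002]])        # assumed skin (r, g) covariance
COV_INV = np.linalg.inv(COV)
THRESHOLD = 9.0                          # assumed Mahalanobis-distance cutoff

def skin_mask(image_rgb: np.ndarray) -> np.ndarray:
    """Return a boolean mask of skin-like pixels for an (H, W, 3) RGB image."""
    rgb = image_rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-8       # avoid division by zero
    r = rgb[..., 0] / total              # normalised chromaticity removes
    g = rgb[..., 1] / total              # brightness, leaving colour only
    d = np.stack([r - MEAN[0], g - MEAN[1]], axis=-1)
    # squared Mahalanobis distance of each pixel to the skin model
    m2 = np.einsum("...i,ij,...j->...", d, COV_INV, d)
    return m2 < THRESHOLD

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
    print(skin_mask(img))
```
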
  12. Joginder Singh S, Loo ZL
    Disabil Rehabil Assist Technol, 2023 Nov;18(8):1281-1289.
    PMID: 37017363 DOI: 10.1080/17483107.2023.2196305
    PURPOSE: Augmentative and alternative communication (AAC) systems are often introduced to children with disabilities who demonstrate complex communication needs. As attending school is an essential part of these children's lives, it is important that they use their AAC system to communicate in the classroom. This study aimed to describe the nature of the use of AAC by students with developmental disabilities in the classroom.

    MATERIALS AND METHOD: This study was conducted in Malaysia. Six students were observed twice each in their classroom and their classroom interactions were video recorded. The video recordings were transcribed and coded for the presence of a communication event, the student's mode of communication and communication function, the communication partner involved, and access to the AAC system.

    RESULTS: Contrary to past studies, most students in this study spontaneously initiated interaction almost as many times as they responded. They primarily communicated with gestures and verbalizations/vocalizations despite having been introduced to an AAC system. When students communicated using their AAC system, they mainly interacted with the teachers, and for the function of either behavioral regulation or joint attention. It was found that for 39% of communicative events, the student's aided AAC system was not within arm's reach.

    CONCLUSION: The findings highlight the need for efforts to encourage students with complex communication needs to use AAC more frequently in their classroom to be able to communicate more effectively and for a wider range of communicative functions. Speech-language pathologists can work closely with teachers to provide the necessary support to these students.

    Matched MeSH terms: Gestures
  13. Wan Hassan WN, Abu Kassim NL, Jhawar A, Shurkri NM, Kamarul Baharin NA, Chan CS
    Am J Orthod Dentofacial Orthop, 2016 Apr;149(4):567-78.
    PMID: 27021461 DOI: 10.1016/j.ajodo.2015.10.018
    In this article, we present an evaluation of user acceptance of our innovative hand-gesture-based touchless sterile system for interaction with and control of a set of 3-dimensional digitized orthodontic study models using the Kinect motion-capture sensor (Microsoft, Redmond, Wash).
    Matched MeSH terms: Gestures