Displaying publications 21 - 40 of 126 in total

  1. Yap KS, Lim CP, Abidin IZ
    IEEE Trans Neural Netw, 2008 Sep;19(9):1641-6.
    PMID: 18779094 DOI: 10.1109/TNN.2008.2000992
    In this brief, a new neural network model called generalized adaptive resonance theory (GART) is introduced. GART is a hybrid model that comprises a modified Gaussian adaptive resonance theory (MGA) and the generalized regression neural network (GRNN). It is an enhanced version of the GRNN, which preserves the online learning properties of adaptive resonance theory (ART). A series of empirical studies to assess the effectiveness of GART in classification, regression, and time series prediction tasks is conducted. The results demonstrate that GART produces good performance compared with other methods, including the online sequential extreme learning machine (OSELM) and sequential learning radial basis function (RBF) neural network models.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  2. Tabatabaey-Mashadi N, Sudirman R, Khalid PI, Lange-Küttner C
    Percept Mot Skills, 2015 Jun;120(3):865-94.
    PMID: 26029964
    Sequential strategies of digitized tablet drawings by 6-7-yr.-old children (N = 203) of average and below-average handwriting ability were analyzed. A Beery Visual Motor Integration (BVMI) and a Bender-Gestalt (BG) pattern, each composed of two tangential shapes, were predefined into area sectors for automatic analysis and adaptive mapping of the drawings. Girls more often began on the left side and used more strokes than boys. The below-average handwriting group showed more directional diversity and idiosyncratic strategies.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  3. Maktabdar Oghaz M, Maarof MA, Zainal A, Rohani MF, Yaghoubyan SH
    PLoS One, 2015;10(8):e0134828.
    PMID: 26267377 DOI: 10.1371/journal.pone.0134828
    Color is one of the most prominent features of an image and is used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite substantial research effort in this area, choosing a color space that performs well for skin and face classification while handling illumination variations, diverse camera characteristics, and the diversity of skin tones has remained an open issue. This research proposes a new three-dimensional hybrid color space, termed SKN, which employs a Genetic Algorithm heuristic and Principal Component Analysis (PCA) to find the optimal representation of human skin color across over seventeen existing color spaces. The Genetic Algorithm searches for the color component combination that maximizes skin detection accuracy, while PCA projects the optimal Genetic Algorithm solution into a lower-dimensional space. Pixel-wise skin detection was used to evaluate the proposed color space. Four classifiers, namely Random Forest, Naïve Bayes, Support Vector Machine, and Multilayer Perceptron, were employed to generate the human skin color predictive model. Compared with existing color spaces, the proposed space shows superior pixel-wise skin detection accuracy: with the Random Forest classifier, SKN obtained an average F-score and True Positive Rate of 0.953 and a False Positive Rate of 0.0482. The results also indicate that, among the classifiers used in this study, Random Forest is the most suitable for pixel-wise skin detection applications.
    Matched MeSH terms: Pattern Recognition, Automated*
  4. Abbasi A, Woo CS, Ibrahim RW, Islam S
    PLoS One, 2015;10(4):e0123427.
    PMID: 25884854 DOI: 10.1371/journal.pone.0123427
    Digital image watermarking is an important technique for the authentication of multimedia content and copyright protection. Conventional digital image watermarking techniques are often vulnerable to geometric distortions such as Rotation, Scaling, and Translation (RST). These distortions desynchronize the watermark information embedded in an image and thus disable watermark detection. To solve this problem, we propose an RST-invariant domain watermarking technique based on fractional calculus. We construct a domain using the Heaviside function of order alpha (HFOA), which models the signal as a polynomial for watermark embedding. The watermark is embedded in all the coefficients of the image. We also construct a fractional variance formula using a fractional Gaussian field, and a cross-correlation method based on the fractional Gaussian field is used for watermark detection. Furthermore, the proposed method enables blind watermark detection, in which the original image is not required during detection, making it more practical than non-blind watermarking techniques. Experimental results confirmed that the proposed technique has a high level of robustness.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  5. Vijayasarveswari V, Andrew AM, Jusoh M, Sabapathy T, Raof RAA, Yasin MNM, et al.
    PLoS One, 2020;15(8):e0229367.
    PMID: 32790672 DOI: 10.1371/journal.pone.0229367
    Breast cancer is the most common cancer among women and one of the main causes of death for women worldwide. To attain optimal medical treatment for breast cancer, early detection is crucial. This paper proposes a multi-stage feature selection method that extracts statistically significant features for breast cancer size detection using proposed data normalization techniques. Ultra-wideband (UWB) signals, controlled using a microcontroller, are transmitted via an antenna from one end of the breast phantom and received on the other end. These ultra-wideband analogue signals are represented in both the time and frequency domains. The preprocessed digital data is passed to the proposed multi-stage feature selection algorithm, which has four selection stages: data normalization, feature extraction, data dimensionality reduction, and feature fusion. The output data is fused together to form the proposed datasets, namely the 8-HybridFeature, 9-HybridFeature, and 10-HybridFeature datasets. The classification performance of these datasets is tested using the Support Vector Machine, Probabilistic Neural Network, and Naïve Bayes classifiers for breast cancer size classification. The research findings indicate that the 8-HybridFeature dataset performs better than the other two datasets. On the 8-HybridFeature dataset, the Naïve Bayes classifier (91.98%) outperformed the Support Vector Machine (90.44%) and Probabilistic Neural Network (80.05%) classifiers in terms of classification accuracy. The finalized method is tested and visualized in a MATLAB-based 2D and 3D environment.
    Matched MeSH terms: Pattern Recognition, Automated/methods
  6. Acharya UR, Bhat S, Koh JEW, Bhandary SV, Adeli H
    Comput Biol Med, 2017 Sep 01;88:72-83.
    PMID: 28700902 DOI: 10.1016/j.compbiomed.2017.06.022
    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for cost-effective and accurate glaucoma screening. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images, followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The convolution process produces textons, the basic microstructures of typical images. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used to classify images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma risk index (GRI) is also formulated to obtain a reliable and effective system.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  7. Agbolade O, Nazri A, Yaakob R, Ghani AA, Cheah YK
    BMC Bioinformatics, 2019 Dec 02;20(1):619.
    PMID: 31791234 DOI: 10.1186/s12859-019-3153-2
    BACKGROUND: Facial expression in Homo sapiens plays a remarkable role in social communication. The identification of expressions by human beings is relatively easy and accurate; however, achieving the same result in 3D by machine remains a challenge in computer vision. This is due to the current challenges facing 3D facial data acquisition, such as a lack of homology and the complex mathematical analysis required for facial point digitization. This study proposes facial expression recognition in humans using Multi-points Warping for 3D facial landmarks, building a template mesh as a reference object. This template mesh is then applied to each target mesh in the Stirling/ESRC and Bosphorus datasets. The semi-landmarks are allowed to slide along tangents to the curves and surfaces until the bending energy between a template and a target form is minimal, and localization error is assessed using Procrustes ANOVA. With Principal Component Analysis (PCA) used for feature selection, classification is done using Linear Discriminant Analysis (LDA).

    RESULT: The localization error is validated on the two datasets with superior performance over state-of-the-art methods, and variation in expression is visualized using Principal Components (PCs). The deformations show various expression regions in the faces. The results indicate that the Sad expression has the lowest recognition accuracy on both datasets. The classifier achieved a recognition accuracy of 99.58% and 99.32% on Stirling/ESRC and Bosphorus, respectively.

    CONCLUSION: The results demonstrate that the method is robust and in agreement with the state-of-the-art results.

    Matched MeSH terms: Pattern Recognition, Automated*
  8. Mostafa SA, Mustapha A, Mohammed MA, Ahmad MS, Mahmoud MA
    Int J Med Inform, 2018 04;112:173-184.
    PMID: 29500017 DOI: 10.1016/j.ijmedinf.2018.02.001
    Autonomous agents are being widely used in many systems, such as ambient assisted-living systems, to perform tasks on behalf of humans. However, these systems usually operate in complex environments that entail uncertain, highly dynamic, or irregular workload. In such environments, autonomous agents tend to make decisions that lead to undesirable outcomes. In this paper, we propose a fuzzy-logic-based adjustable autonomy (FLAA) model to manage the autonomy of multi-agent systems that are operating in complex environments. This model aims to facilitate the autonomy management of agents and help them make competent autonomous decisions. The FLAA model employs fuzzy logic to quantitatively measure and distribute autonomy among several agents based on their performance. We implement and test this model in the Automated Elderly Movements Monitoring (AEMM-Care) system, which uses agents to monitor the daily movement activities of elderly users and perform fall detection and prevention tasks in a complex environment. The test results show that the FLAA model improves the accuracy and performance of these agents in detecting and preventing falls.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  9. Gandhamal A, Talbar S, Gajre S, Hani AF, Kumar D
    Comput Biol Med, 2017 04 01;83:120-133.
    PMID: 28279861 DOI: 10.1016/j.compbiomed.2017.03.001
    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in tissue classification. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation locally to medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation that results in a curve similar to a sigmoid function through a pixel-to-pixel mapping. This curve increases the difference between the minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs, and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
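The gray-level S-curve described in the abstract above can be sketched as a logistic remapping over a local window. This is a minimal illustration only: the logistic form, the `gain` parameter, and the window handling are assumptions for the sketch, not the authors' exact formulation.

```python
import math

def s_curve(pixel, lo, hi, gain=8.0):
    """Remap a gray level through a logistic S-curve anchored at the
    local window's min/max, steepening the mid-range of the mapping."""
    if hi == lo:
        return pixel
    t = (pixel - lo) / (hi - lo)          # normalise into [0, 1]
    s = 1.0 / (1.0 + math.exp(-gain * (t - 0.5)))
    return round(lo + s * (hi - lo))

window = [90, 100, 110, 120, 130]         # a flat, low-contrast patch
lo, hi = min(window), max(window)
enhanced = [s_curve(p, lo, hi) for p in window]
# mid-gray is preserved while adjacent mid-range values are pushed
# further apart, increasing the local gradient between tissues
```

Here the spread between the second and fourth samples grows relative to the original while the window midpoint stays fixed, which is the edge-strengthening effect the abstract describes.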
  10. Suriani NS, Hussain A, Zulkifley MA
    Sensors (Basel), 2013 Aug 05;13(8):9966-98.
    PMID: 23921828 DOI: 10.3390/s130809966
    Event recognition is one of the most active research areas in video surveillance fields. Advancement in event recognition systems mainly aims to provide convenience, safety and an efficient lifestyle for humanity. A precise, accurate and robust approach is necessary to enable event recognition systems to respond to sudden changes in various uncontrolled environments, such as the case of an emergency, physical threat and a fire or bomb alert. The performance of sudden event recognition systems depends heavily on the accuracy of low level processing, like detection, recognition, tracking and machine learning algorithms. This survey aims to detect and characterize a sudden event, which is a subset of an abnormal event in several video surveillance applications. This paper discusses the following in detail: (1) the importance of a sudden event over a general anomalous event; (2) frameworks used in sudden event recognition; (3) the requirements and comparative studies of a sudden event recognition system and (4) various decision-making approaches for sudden event recognition. The advantages and drawbacks of using 3D images from multiple cameras for real-time application are also discussed. The paper concludes with suggestions for future research directions in sudden event recognition.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  11. Al-Saiagh W, Tiun S, Al-Saffar A, Awang S, Al-Khaleefa AS
    PLoS One, 2018;13(12):e0208695.
    PMID: 30571777 DOI: 10.1371/journal.pone.0208695
    Word sense disambiguation (WSD) is the process of identifying an appropriate sense for an ambiguous word. Given the complexity of human languages, in which a single word can yield different meanings, WSD has been utilized in several domains of interest such as search engines and machine translation. The literature shows a vast number of techniques used for WSD. Recently, researchers have focused on the use of meta-heuristic approaches to identify the solutions that best reflect the correct sense. However, the application of meta-heuristic approaches remains limited and thus requires efficient exploration and exploitation of the problem space. Hence, the current study proposes a hybrid meta-heuristic method that combines particle swarm optimization (PSO) and simulated annealing to find the globally best meaning of a given text. Different semantic measures are utilized in this model as objective functions for the proposed hybrid PSO; these measures consist of the JCN and extended Lesk methods, which are combined effectively in this work. The proposed method is tested on three benchmark datasets (SemCor 3.0, SensEval-2, and SensEval-3). Results show that the proposed method has superior performance in comparison with state-of-the-art approaches.
    Matched MeSH terms: Pattern Recognition, Automated
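The particle swarm component of the hybrid above can be illustrated with a bare-bones 1-D PSO minimising a stand-in objective. The inertia and acceleration constants here are common textbook defaults, not the authors' hybrid PSO-simulated-annealing settings; in WSD the objective would instead score a candidate sense assignment with a semantic measure such as JCN or extended Lesk.

```python
import random
random.seed(1)

def pso(objective, n_particles=12, iters=60, lo=-5.0, hi=5.0):
    """Bare-bones 1-D particle swarm: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend both."""
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    gbest = min(pos, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (0.6 * vel[i]                       # inertia
                      + 1.5 * r1 * (pbest[i] - pos[i])   # cognitive pull
                      + 1.5 * r2 * (gbest - pos[i]))     # social pull
            pos[i] += vel[i]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
                if objective(pos[i]) < objective(gbest):
                    gbest = pos[i]
    return gbest

# stand-in convex objective with its minimum at x = 2
best = pso(lambda x: (x - 2.0) ** 2)
```

The simulated-annealing half of the authors' hybrid would additionally accept occasional uphill moves to escape local optima, which a plain quadratic does not exercise.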
  12. Moghaddasi Z, Jalab HA, Md Noor R, Aghabozorgi S
    ScientificWorldJournal, 2014;2014:606570.
    PMID: 25295304 DOI: 10.1155/2014/606570
    Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools, and image splicing is one of the most prevalent techniques. Digital images have lost their trustworthiness, and researchers have exerted considerable effort to regain it, focusing mostly on detection algorithms. However, most of the proposed algorithms are incapable of handling the high dimensionality and redundancy of the extracted features, and they are limited by high computational time. This study focuses on improving one of the image splicing detection algorithms, the run length run number algorithm (RLRN), by applying two dimension reduction methods, namely principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images.
    Matched MeSH terms: Pattern Recognition, Automated/methods*; Pattern Recognition, Automated/trends
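As a sketch of the dimension-reduction step above, the linear-PCA case reduces to an eigen-decomposition of the feature covariance. This toy version projects two correlated features onto their principal axis; the real pipeline works on high-dimensional RLRN features, and kernel PCA would first map them through a nonlinear kernel. The sample feature vectors are invented for illustration.

```python
def pca_1d(points):
    """Project 2-D feature vectors onto their principal axis, reducing
    each pair of numbers to a single centred score."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # entries of the 2x2 covariance matrix
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + ((tr / 2) ** 2 - det) ** 0.5
    # corresponding unit eigenvector
    vx, vy = sxy, lam - sxx
    norm = (vx * vx + vy * vy) ** 0.5
    vx, vy = vx / norm, vy / norm
    # centred projection onto the principal axis
    return [(p[0] - mx) * vx + (p[1] - my) * vy for p in points]

feats = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9)]
scores = pca_1d(feats)
```

Because the two features are strongly correlated, one principal score preserves nearly all their variance, which is the redundancy-removal effect the abstract relies on before classification.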
  13. Ibitoye MO, Hamzaid NA, Zuniga JM, Hasnan N, Wahab AK
    Sensors (Basel), 2014;14(12):22940-70.
    PMID: 25479326 DOI: 10.3390/s141222940
    The research conducted over the last three decades has collectively demonstrated that skeletal muscle performance can be alternatively assessed by mechanomyographic (MMG) signal parameters. Indices of muscle performance, including but not limited to force, power, work, and endurance, and the related physiological processes underlying muscle activity during contraction, have been evaluated in light of the signal features. As a non-stationary signal that reflects several distinctive patterns of muscle actions, the evidence in the literature supports the reliability of MMG in the analysis of muscles under voluntary and stimulus-evoked contractions. An appraisal of standard practice, including the measurement theories of the methods used to extract parameters from the signal, is vital to the application of the signal in experimental and clinical practice, especially in areas where electromyograms are contraindicated or have limited application. As we highlight the underpinning technical guidelines and the domains where each method is well suited, the limitations of the methods are also presented to position the state of the art in MMG parameter extraction, thus providing a theoretical framework for improving current practices and widening the opportunity for new insights and discoveries. Since the signal modality has not been widely deployed, due partly to the limited information extractable from the signals compared with other classical techniques used to assess muscle performance, this survey is particularly relevant to the projected future of MMG applications in musculoskeletal assessment and real-time detection of muscle activity.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  14. Noor NM, Than JC, Rijal OM, Kassim RM, Yunus A, Zeki AA, et al.
    J Med Syst, 2015 Mar;39(3):22.
    PMID: 25666926 DOI: 10.1007/s10916-015-0214-6
    Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing a Computer Aided Diagnosis (CAD) system that will help radiologists improve diagnostic accuracy and thereby reduce manual interpretation. The proposed automatic segmentation uses initial thresholding and morphology-based segmentation coupled with a feedback step that detects large deviations and applies a corrective segmentation. This feedback is analogous to a control system: it allows detection of abnormal or severe lung disease and feeds back into an online segmentation, improving the overall performance of the system. The feedback system is built on a texture paradigm. In this study, we examined 48 male and 48 female patients, comprising 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by comparing the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). Segmentation performance for the left lung was 96.52% for the Jaccard Index, 98.21% for Dice Similarity, 0.61 mm for the Polyline Distance Metric (PDM), -1.15% for Relative Area Error, and 4.09% for Area Overlap Error. For the right lung it was 97.24% for the Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error, and 3.53% for Area Overlap Error. Overall, the segmentation has a similarity of 98.4%. The proposed segmentation is an accurate and fully automated system.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  15. Imran M, Hashim R, Noor Elaiza AK, Irtaza A
    ScientificWorldJournal, 2014;2014:752090.
    PMID: 25121136 DOI: 10.1155/2014/752090
    One of the major challenges for CBIR is to bridge the gap between low-level features and high-level semantics according to the needs of the user. To overcome this gap, relevance feedback (RF) coupled with a support vector machine (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM-based RF is often poor. To improve RF performance, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM-based RF with particle swarm optimization (PSO). The aims of this technique are to enhance the performance of SVM-based RF and to minimize user interaction with the system by reducing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The experiments showed that PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that the PSO-SVM-RF technique achieves a high accuracy rate within a small number of iterations.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  16. Tan CS, Ting WS, Mohamad MS, Chan WH, Deris S, Shah ZA
    Biomed Res Int, 2014;2014:213656.
    PMID: 25250315 DOI: 10.1155/2014/213656
    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  17. Rassem TH, Khoo BE
    ScientificWorldJournal, 2014;2014:373254.
    PMID: 24977193 DOI: 10.1155/2014/373254
    Although the two texture descriptors, the Completed Local Binary Pattern (CLBP) and the Completed Local Binary Count (CLBC), have achieved remarkable accuracy for rotation-invariant texture classification, they inherit some Local Binary Pattern (LBP) drawbacks. The LBP is sensitive to noise, and different patterns of LBP may be classified into the same class, which reduces its discriminating property. Although the Local Ternary Pattern (LTP) was proposed to be more robust to noise than LBP, LBP's weaknesses may appear in LTP as well. In this paper, a novel completed modeling of the Local Ternary Pattern operator is proposed to overcome both LBP drawbacks, and an associated Completed Local Ternary Pattern (CLTP) scheme is developed for rotation-invariant texture classification. Experimental results on four different texture databases show that the proposed CLTP achieves impressive classification accuracy compared with the CLBP and CLBC descriptors.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
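The ternary thresholding that distinguishes LTP from LBP can be sketched on a single 3x3 patch. The split into "upper" and "lower" binary codes below follows the standard LTP formulation; the threshold `t` and the sample patch values are chosen arbitrarily for illustration.

```python
def ltp_codes(patch, t=5):
    """Split a 3x3 patch into the two LBP-style codes used by the
    Local Ternary Pattern: neighbours more than `t` above the centre
    set bits in the upper code, those more than `t` below it set bits
    in the lower code; values within the +/-t band set neither."""
    c = patch[1][1]
    # 8 neighbours, clockwise from top-left
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    upper = lower = 0
    for i, p in enumerate(nbrs):
        if p >= c + t:
            upper |= 1 << i
        elif p <= c - t:
            lower |= 1 << i
    return upper, lower

patch = [[120, 100,  90],
         [101, 100,  99],
         [ 80, 100, 130]]
up, lo = ltp_codes(patch)
```

The tolerance band around the centre value is what makes LTP less sensitive to small noise fluctuations than LBP, where any neighbour at or above the centre flips a bit.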
  18. Malik AS, Humayun J, Kamel N, Yap FB
    Skin Res Technol, 2014 Aug;20(3):322-31.
    PMID: 24329769 DOI: 10.1111/srt.12122
    BACKGROUND: More than 99% of acne patients suffer from acne vulgaris. When diagnosing the severity of acne vulgaris lesions, dermatologists have observed inter-rater and intra-rater variability in diagnosis results. This is because, during assessment, identifying and counting lesion types is a tedious job for dermatologists. To make the assessment objective and easier for dermatologists, an automated system based on image processing methods is proposed in this study.
    OBJECTIVES: There are two main objectives: (i) to develop an algorithm for the enhancement of various acne vulgaris lesions; and (ii) to develop a method for the segmentation of enhanced acne vulgaris lesions.
    METHODS: For the first objective, an algorithm is developed based on the theory of high dynamic range (HDR) images. The proposed algorithm uses local rank transform to generate the HDR images from a single acne image followed by the log transformation. Then, segmentation is performed by clustering the pixels based on Mahalanobis distance of each pixel from spectral models of acne vulgaris lesions.
    RESULTS: Two metrics are used to evaluate the enhancement of acne vulgaris lesions: the contrast improvement factor (CIF) and image contrast normalization (ICN). The proposed algorithm is compared with two other methods and shows better results than both based on CIF and ICN. In addition, sensitivity and specificity are calculated for the segmentation results. The proposed segmentation method shows higher sensitivity and specificity than the other methods.
    CONCLUSION: This article specifically discusses the contrast enhancement and segmentation for automated diagnosis system of acne vulgaris lesions. The results are promising that can be used for further classification of acne vulgaris lesions for final grading of the lesions.
    KEYWORDS: acne grading; acne lesions; acne vulgaris; enhancement; segmentation
    Matched MeSH terms: Pattern Recognition, Automated/methods*
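The Mahalanobis-distance step of the segmentation above can be sketched for a two-feature pixel scored against a lesion model; a pixel is assigned to the lesion class when its distance falls below a chosen threshold. The feature choice, mean vector, and covariance values below are illustrative assumptions, not values from the paper.

```python
def mahalanobis2(x, mean, cov):
    """Mahalanobis distance of a 2-feature pixel from a class model
    (mean vector and 2x2 covariance), inverting the covariance directly."""
    (a, b), (c_, d) = cov
    det = a * d - b * c_
    inv = ((d / det, -b / det), (-c_ / det, a / det))
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) \
        + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return q ** 0.5

lesion_mean = (150.0, 60.0)             # e.g. mean (R, B) of lesion pixels
lesion_cov  = ((25.0, 0.0), (0.0, 16.0))
pixel_in  = (152.0, 62.0)               # close to the lesion model
pixel_out = (100.0, 120.0)              # far from it
d_in  = mahalanobis2(pixel_in,  lesion_mean, lesion_cov)
d_out = mahalanobis2(pixel_out, lesion_mean, lesion_cov)
```

Unlike plain Euclidean distance, the covariance weighting makes the score scale-aware, so features with larger natural spread in the lesion class contribute proportionally less.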
  19. Zainal-Mokhtar K, Mohamad-Saleh J
    Sensors (Basel), 2013;13(9):11385-406.
    PMID: 24064598 DOI: 10.3390/s130911385
    This paper presents novel research on the development of a generic intelligent oil fraction sensor based on Electrical Capacitance Tomography (ECT) data. An Artificial Neural Network (ANN) has been employed as the intelligent system to sense and estimate oil fractions from the cross-sections of two-component flows comprising oil and gas in a pipeline. Previous works focused only on estimating the oil fraction in the pipeline based on fixed ECT sensor parameters. With a fixed-design ECT sensor, an oil fraction neural sensor can be trained to deal with ECT data based only on those particular sensor parameters; hence, the neural sensor is not generic. This work focuses on developing a generic neural oil fraction sensor by training a Multi-Layer Perceptron (MLP) ANN with various ECT sensor parameters. On average, the proposed oil fraction neural sensor has been shown to give a mean absolute error of 3.05% for various ECT sensor sizes.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  20. Chowdhury RH, Reaz MB, Ali MA, Bakar AA, Chellappan K, Chang TG
    Sensors (Basel), 2013;13(9):12431-66.
    PMID: 24048337 DOI: 10.3390/s130912431
    Electromyography (EMG) signals are becoming increasingly important in many applications, including clinical/biomedical uses, prosthesis and rehabilitation devices, human-machine interaction, and more. However, noisy EMG signals are the major hurdle to be overcome in order to achieve improved performance in these applications. Detection, processing, and classification analysis of EMG is highly desirable because it allows a more standardized and precise evaluation of neurophysiological, rehabilitative, and assistive technological findings. This paper reviews two prominent areas: first, pre-processing methods for eliminating possible artifacts through appropriate preparation at the time of recording EMG signals; and second, the different methods for processing and classifying EMG signals. The study then compares the numerous methods of analyzing EMG signals in terms of their performance. The crux of this paper is to review the most recent developments and research studies related to these issues.
    Matched MeSH terms: Pattern Recognition, Automated/methods*