Affiliations 

  • 1 Department of Computer Science, Faculty of Computer Science & IT, Universiti Putra Malaysia, Serdang, Selangor, Malaysia. lokoprof@yahoo.com
  • 2 Department of Computer Science, Faculty of Computer Science & IT, Universiti Putra Malaysia, Serdang, Selangor, Malaysia. azree@upm.edu.my
  • 3 Department of Computer Science, Faculty of Computer Science & IT, Universiti Putra Malaysia, Serdang, Selangor, Malaysia.
  • 4 Department of Software Engineering, Faculty of Computer Science & IT, Universiti Putra Malaysia, Serdang, Selangor, Malaysia.
  • 5 Department of Biomedical Science, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, Serdang, Selangor, Malaysia.
BMC Bioinformatics, 2019 Dec 02;20(1):619.
PMID: 31791234 DOI: 10.1186/s12859-019-3153-2

Abstract

BACKGROUND: Facial expression in Homo sapiens plays a remarkable role in social communication. Humans identify such expressions with relative ease and accuracy, but achieving the same result by machine in 3D remains a challenge in computer vision, owing to the current difficulties of 3D facial data acquisition, such as the lack of homology and the complex mathematical analysis required for facial point digitization. This study proposes facial expression recognition in humans using Multi-points Warping for 3D facial landmarking, building a template mesh as a reference object that is then applied to each target mesh in the Stirling/ESRC and Bosphorus datasets. The semi-landmarks are allowed to slide along tangents to the curves and surfaces until the bending energy between the template and a target form is minimal, and the localization error is assessed using Procrustes ANOVA. Principal Component Analysis (PCA) is used for feature selection, and classification is performed with Linear Discriminant Analysis (LDA).
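
The pipeline described above reduces to two computable steps: sliding semi-landmarks minimize the thin-plate-spline bending energy between the template and each target, and the resulting landmark configurations are reduced with PCA and classified with LDA. The following Python sketch illustrates both steps under stated assumptions; it is not the authors' code, and the kernel sign, array shapes, and placeholder data are illustrative only.

```python
# A minimal sketch, not the authors' implementation, of the two computable
# steps the abstract names: (1) the thin-plate-spline (TPS) bending energy
# that the sliding semi-landmarks minimize, and (2) PCA feature selection
# followed by LDA classification. Array shapes, the kernel sign, and the
# placeholder data are all assumptions made for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline


def bending_energy(template: np.ndarray, target: np.ndarray) -> float:
    """TPS bending energy of the deformation template -> target.

    template, target: (n, 3) arrays of corresponding landmarks.
    """
    n = template.shape[0]
    # 3D TPS kernel; sign conventions vary in the literature, and
    # U(r) = -r makes the resulting quadratic form non-negative.
    K = -np.linalg.norm(template[:, None, :] - template[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), template])   # affine part, (n, 4)
    L = np.zeros((n + 4, n + 4))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    # Bending-energy matrix: the upper-left n x n block of L^{-1}.
    Be = np.linalg.inv(L)[:n, :n]
    # Sum the quadratic form over the x, y, z coordinates of the target.
    return float(np.trace(target.T @ Be @ target))


rng = np.random.default_rng(0)

# Energy is ~0 for a purely affine change and grows with local deformation.
tmpl = rng.normal(size=(20, 3))
targ = tmpl + 0.01 * rng.normal(size=(20, 3))
print(f"bending energy: {bending_energy(tmpl, targ):.6f}")

# Classification stage: landmark configurations flattened to vectors,
# PCA retaining 95% of the variance, then LDA (placeholder random data).
X = rng.normal(size=(120, 500 * 3))              # 120 faces, 500 landmarks
y = rng.integers(0, 7, size=120)                 # e.g. 7 expression labels
clf = make_pipeline(PCA(n_components=0.95), LinearDiscriminantAnalysis())
print(f"CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```

In the sliding-semilandmark literature, each energy-minimizing displacement is followed by re-projecting the semi-landmarks back onto the surface; only the energy term itself is sketched here.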

RESULT: The localization error is validated on the two datasets, with superior performance over state-of-the-art methods, and the variation in expression is visualized using Principal Components (PCs). The deformations show the various expression regions of the faces. The results indicate that the Sad expression has the lowest recognition accuracy on both datasets. The classifier achieved recognition accuracies of 99.58% and 99.32% on Stirling/ESRC and Bosphorus, respectively.
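
The PC-based visualization of expression variation mentioned above is commonly done, in geometric morphometrics, by deforming the mean landmark configuration a few standard deviations along a chosen principal component; the regions that move most mark the expression-specific deformations. The sketch below follows that general recipe under assumed shapes and names, and is not the authors' code.

```python
# A hedged sketch of visualizing shape variation along principal components:
# deform the mean configuration by +/- k standard deviations along one PC.
# Shapes and names are assumptions, not the authors' code.
import numpy as np
from sklearn.decomposition import PCA


def pc_extremes(X: np.ndarray, pc: int = 0, k: float = 3.0):
    """X: (n_faces, n_landmarks * 3) aligned landmark vectors.
    Returns the mean shape deformed by -k and +k standard deviations
    along principal component `pc`, each as an (n_landmarks, 3) array."""
    pca = PCA().fit(X)
    step = k * np.sqrt(pca.explained_variance_[pc]) * pca.components_[pc]
    return ((pca.mean_ - step).reshape(-1, 3),
            (pca.mean_ + step).reshape(-1, 3))
```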

CONCLUSION: The results demonstrate that the method is robust and in agreement with state-of-the-art results.
