RESULT: The localization error is evaluated on two datasets, showing superior performance over state-of-the-art methods, and variation in expression is visualized using Principal Components (PCs). The deformations highlight the facial regions involved in each expression. The results indicate that the Sad expression has the lowest recognition accuracy on both datasets. The classifier achieved recognition accuracies of 99.58% and 99.32% on Stirling/ESRC and Bosphorus, respectively.
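The PC-based visualization of expression variation described above could be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the landmark counts, the synthetic face data, and the use of scikit-learn's PCA are all assumptions introduced here.

```python
# Hypothetical sketch: visualising expression variation with PCA,
# assuming each face is a flattened vector of 3D landmark coordinates.
# The data below are synthetic stand-ins, not the paper's datasets.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_faces, n_landmarks = 200, 68          # assumed sizes for illustration
faces = rng.normal(size=(n_faces, n_landmarks * 3))

pca = PCA(n_components=5)
scores = pca.fit_transform(faces)       # per-face PC coordinates

# Deforming the mean face along a PC shows which facial regions that
# component moves; plotting pc1_plus vs. the mean face visualises the
# deformation associated with the first mode of expression variation.
mean_face = pca.mean_
pc1_plus = mean_face + 2 * np.sqrt(pca.explained_variance_[0]) * pca.components_[0]
```

In practice the deformed landmark vectors would be reshaped back to (n_landmarks, 3) and rendered on the face mesh to reveal the expression regions.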
CONCLUSION: The results demonstrate that the method is robust and consistent with the state of the art.
RESULTS: The study included 24 participants and examined three muscles (m. orbicularis oris, m. zygomaticus major, and m. mentalis) during five different facial expressions. Prior to statistical analysis, features were extracted from the acquired electromyographs. Finally, classification was performed using logistic regression, a random forest classifier, and linear discriminant analysis. A statistically significant difference in muscle activity amplitudes was demonstrated between muscles, enabling the tracking of individual muscle activity for diagnostic and therapeutic purposes. Additionally, other time-domain and frequency-domain features were analyzed and also showed statistical significance in differentiating between muscles. Examples of pattern recognition suggested promising avenues for further research and development.
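The classification step described above (three classifiers distinguishing muscles from extracted sEMG features) could be sketched as follows. This is a minimal illustration, not the study's pipeline: the feature values are synthetic stand-ins, and the split sizes and model settings are assumptions.

```python
# Hypothetical sketch of the classifier comparison: logistic regression,
# random forest, and linear discriminant analysis distinguishing three
# muscles from sEMG-style features. All data here are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for time/frequency-domain features (e.g. RMS amplitude,
# median frequency); labels identify which of 3 muscles was recorded.
n_per_class, n_features, n_muscles = 80, 6, 3
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(n_per_class, n_features))
               for m in range(n_muscles)])
y = np.repeat(np.arange(n_muscles), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
}
accuracies = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
              for name, m in models.items()}
```

On real sEMG data the features would come from windowed electromyograph recordings, and a per-participant split would be needed to avoid leakage across repeated measurements from the same subject.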
CONCLUSION: Surface electromyography is a useful method for assessing the function of facial expression muscles and can contribute significantly to the diagnosis and treatment of oral motor function disorders. The results of this study show potential for further research and development in this field.