Displaying publications 1 - 20 of 25 in total

  1. Wang Y, See J, Phan RC, Oh YH
    PLoS One, 2015;10(5):e0124674.
    PMID: 25993498 DOI: 10.1371/journal.pone.0124674
    Micro-expression recognition is still in the preliminary stage, owing much to the numerous difficulties faced in the development of datasets. Since micro-expression is an important affective clue for clinical diagnosis and deceit analysis, much effort has gone into the creation of these datasets for research purposes. There are currently two publicly available spontaneous micro-expression datasets, SMIC and CASME II, both with baseline results released using the widely used dynamic texture descriptor LBP-TOP for feature extraction. Although LBP-TOP is popular and widely used, it is still not compact enough. In this paper, we draw further inspiration from the concept of LBP-TOP, which considers three orthogonal planes, by proposing two efficient approaches for feature extraction. The compact robust form described by the proposed LBP-Six Intersection Points (SIP) and a super-compact LBP-Three Mean Orthogonal Planes (MOP) not only preserve the essential patterns, but also reduce the redundancy that affects the discriminability of the encoded features. Through a comprehensive set of experiments, we demonstrate the strengths of our approaches in terms of recognition accuracy and efficiency.
    Matched MeSH terms: Facial Expression*
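The three-orthogonal-planes idea behind LBP-TOP (entry 1) can be illustrated with a minimal sketch: compute a basic 8-neighbour LBP code map on one XY, one XT, and one YT slice of a video volume and concatenate the histograms. The synthetic volume, function names, and the single-slice-per-plane simplification are ours, not the paper's (the full descriptor pools over every slice and uses interpolated circular neighbourhoods):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D array."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:],   img[2:, 1:-1],
                  img[2:, :-2],  img[1:-1, :-2]]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit   # set one bit per neighbour
    return codes

def lbp_top_histogram(volume):
    """Concatenate LBP histograms from one XY, XT and YT plane of a T*H*W volume."""
    t, h, w = volume.shape
    planes = [volume[t // 2],          # XY plane (one spatial slice)
              volume[:, h // 2, :],    # XT plane
              volume[:, :, w // 2]]    # YT plane
    hists = [np.bincount(lbp_image(p).ravel(), minlength=256) for p in planes]
    return np.concatenate(hists)       # 3 * 256 = 768-dim descriptor

rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(10, 16, 16)).astype(np.int32)
feat = lbp_top_histogram(video)
```

A production descriptor would also use uniform-pattern binning to shrink each histogram; this sketch only shows where the three histograms come from.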
  2. Yasmin S, Pathan RK, Biswas M, Khandaker MU, Faruque MRI
    Sensors (Basel), 2020 Sep 21;20(18).
    PMID: 32967087 DOI: 10.3390/s20185391
    Compelling facial expression recognition (FER) processes have been utilized in very successful fields like computer vision, robotics, artificial intelligence, and dynamic texture recognition. However, FER's critical problem with the traditional local binary pattern (LBP) is the loss of neighboring pixels related to different scales, which can affect the texture of facial images. To overcome such limitations, this study describes a new extended LBP method to extract feature vectors from facial expression images. The proposed method is based on the bitwise AND operation of two rotational kernels applied on LBP(8,1) and LBP(8,2) and utilizes two accessible datasets. Firstly, the facial parts are detected and the essential components of a face are observed, such as the eyes, nose, and lips. The portion of the face is then cropped to reduce the dimensions, and an unsharp masking kernel is applied to sharpen the image. The filtered images then go through the feature extraction method and are passed to the classification stage. Four machine learning classifiers were used to verify the proposed method. This study shows that the proposed multi-scale featured local binary pattern (MSFLBP), together with a Support Vector Machine (SVM), outperformed recent LBP-based state-of-the-art approaches, resulting in an accuracy of 99.12% for the Extended Cohn-Kanade (CK+) dataset and 89.08% for the Karolinska Directed Emotional Faces (KDEF) dataset.
    Matched MeSH terms: Facial Expression*
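The core of the method in entry 2, a bitwise AND of LBP codes computed at two radii, can be sketched as follows. The axis-aligned radius-2 sampling and all function names are our simplification (the paper applies rotational kernels, which we do not reproduce):

```python
import numpy as np

def lbp_codes(img, r):
    """8-neighbour LBP with neighbours taken at (axis-aligned) radius r."""
    c = img[r:-r, r:-r]
    offs = [(-r, -r), (-r, 0), (-r, r), (0, r), (r, r), (r, 0), (r, -r), (0, -r)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        n = img[r + dy:img.shape[0] - r + dy, r + dx:img.shape[1] - r + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def msflbp(img):
    """AND the LBP(8,1) and LBP(8,2) codes on their common support, then histogram."""
    c1 = lbp_codes(img, 1)[1:-1, 1:-1]   # crop radius-1 map to the radius-2 interior
    c2 = lbp_codes(img, 2)
    combined = c1 & c2                   # keep only bits set at both scales
    return np.bincount(combined.ravel(), minlength=256)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32)).astype(np.int32)
hist = msflbp(img)
```

The AND suppresses patterns that are unstable across scales, which is the multi-scale robustness the abstract describes.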
  3. Irwantoro K, Nimsha Nilakshi Lennon N, Mareschal I, Miflah Hussain Ismail A
    Q J Exp Psychol (Hove), 2023 Feb;76(2):450-459.
    PMID: 35360991 DOI: 10.1177/17470218221094296
    The influence of context on facial expression classification is most often investigated using simple cues in static faces portraying basic expressions with a fixed emotional intensity. We examined (1) whether a perceptually rich, dynamic audiovisual context, presented in the form of movie clips (to achieve closer resemblance to real life), affected the subsequent classification of dynamic basic (happy) and non-basic (sarcastic) facial expressions and (2) whether people's susceptibility to contextual cues was related to their ability to classify facial expressions viewed in isolation. Participants classified facial expressions (gradually progressing from neutral to happy/sarcastic in increasing intensity) that followed movie clips. Classification was relatively more accurate and faster when the preceding context predicted the upcoming expression, compared with when the context did not. Speeded classifications suggested that predictive contexts reduced the emotional intensity required to be accurately classified. More importantly, we show for the first time that participants' accuracy in classifying expressions without an informative context correlated with the magnitude of the contextual effects experienced by them: poor classifiers of isolated expressions were more susceptible to a predictive context. Our findings support the emerging view that contextual cues and individual differences must be considered when explaining mechanisms underlying facial expression classification.
    Matched MeSH terms: Facial Expression*
  4. Al Qudah M, Mohamed A, Lutfi S
    Sensors (Basel), 2023 Mar 27;23(7).
    PMID: 37050571 DOI: 10.3390/s23073513
    Several studies have been conducted using both visual and thermal facial images to identify human affective states. Despite the advantages of thermal facial images in recognizing spontaneous human affects, few studies have focused on facial occlusion challenges in thermal images, particularly eyeglasses and facial hair occlusion. As a result, three classification models are proposed in this paper to address the problem of thermal occlusion in facial images, with six basic spontaneous emotions being classified. The first proposed model in this paper is based on six main facial regions, including the forehead, tip of the nose, cheeks, mouth, and chin. The second model deconstructs the six main facial regions into multiple subregions to investigate the efficacy of subregions in recognizing the human affective state. The third proposed model in this paper uses selected facial subregions free of eyeglasses and facial hair (beard, mustache). Nine statistical features on apex and onset thermal images are implemented. Furthermore, four feature selection techniques with two classification algorithms are proposed for further investigation. According to the comparative analysis presented in this paper, the results obtained from the three proposed modalities were promising and comparable to those of other studies.
    Matched MeSH terms: Facial Expression*
  5. Alkawaz MH, Basori AH, Mohamad D, Mohamed F
    ScientificWorldJournal, 2014;2014:367013.
    PMID: 25136663 DOI: 10.1155/2014/367013
    Generating extreme appearances, such as sweating when scared, tears when crying, and blushing in anger or happiness, is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and skin color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, and the fluid properties with sweating and tear initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tear simulations for all extreme expressions. The proposed method contributes to the facial animation and game industries as well as computer graphics.
    Matched MeSH terms: Facial Expression*
  6. Nagarajan R, Hariharan M, Satiyan M
    J Med Syst, 2012 Aug;36(4):2225-34.
    PMID: 21465183 DOI: 10.1007/s10916-011-9690-5
    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5, and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN), and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
    Matched MeSH terms: Facial Expression*
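The feature in entry 6, the standard deviation of first-level wavelet coefficients, is easy to sketch for the simplest family (db1, i.e., the Haar wavelet). The hand-rolled transform and the synthetic signal are our illustration; the study evaluates many wavelet orders, which in practice would be done with a library such as PyWavelets:

```python
import numpy as np

def haar_level1_std(signal):
    """One level of the Haar (db1) DWT; return std of approximation and detail coefficients."""
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                      # pad to even length by repeating the last sample
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half-band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half-band
    return np.std(approx), np.std(detail)

# Synthetic signal: a slow component plus a small fast component.
sig = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * np.cos(np.linspace(0, 40 * np.pi, 64))
a_std, d_std = haar_level1_std(sig)
feature_vector = np.array([a_std, d_std])
```

The slow component dominates the approximation band, so its standard deviation is the larger of the two; per wavelet order, such statistics form the classifier's feature vector.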
  7. Taylor D, Hartmann D, Dezecache G, Te Wong S, Davila-Ross M
    Sci Rep, 2019 03 21;9(1):4961.
    PMID: 30899046 DOI: 10.1038/s41598-019-39932-6
    Facial mimicry is a central feature of human social interactions. Although it has been evidenced in other mammals, no study has yet shown that this phenomenon can reach the level of precision seen in humans and gorillas. Here, we studied the facial complexity of group-housed sun bears, a typically solitary species, with special focus on testing for exact facial mimicry. Our results provided evidence that the bears have the ability to mimic the expressions of their conspecifics and that they do so by matching the exact facial variants they interact with. In addition, the data showed the bears produced the open-mouth faces predominantly when they received the recipient's attention, suggesting a degree of social sensitivity. Our finding questions the relationship between communicative complexity and social complexity, and suggests the possibility that the capacity for complex facial communication is phylogenetically more widespread than previously thought.
    Matched MeSH terms: Facial Expression*
  8. Panliang M, Madaan S, Babikir Ali SA, J G, Khatibi A, Alsoud AR, et al.
    Sci Rep, 2025 Feb 07;15(1):4665.
    PMID: 39920157 DOI: 10.1038/s41598-025-85206-9
    Facial expression recognition (FER) has advanced applications in various disciplines, including computer vision, the Internet of Things, and artificial intelligence, supporting diverse domains such as medical escort services, learning analysis, fatigue detection, and human-computer interaction. The accuracy of these systems is of utmost concern and depends on effective feature selection, which directly impacts their ability to accurately detect facial expressions across various poses. This research proposes a new hybrid approach called QIFABC (Hybrid Quantum-Inspired Firefly and Artificial Bee Colony Algorithm), which combines the Quantum-Inspired Firefly Algorithm (QIFA) with the Artificial Bee Colony (ABC) method to enhance feature selection for a multi-pose facial expression recognition system. The proposed algorithm uses the attributes of both the QIFA and ABC algorithms to enhance search-space exploration, thereby improving the robustness of features in FER. The firefly agents initially move toward the brightest firefly until it is identified, after which the search transitions to the ABC algorithm, targeting positions with the highest nectar quality. To evaluate the efficacy of the proposed QIFABC algorithm, feature selection is also conducted using the QIFA, FA, and ABC algorithms alone. The selected features are used to classify facial expressions with a deep neural network model, ResNet-50. The presented FER system has been tested on multi-pose facial expression benchmark datasets, including RaF (Radboud Faces) and KDEF (Karolinska Directed Emotional Faces). Experimental results show that the proposed QIFABC with ResNet-50 achieves accuracies of 98.93%, 94.11%, and 91.79% for front, diagonal, and profile poses on the RaF dataset, respectively, and 98.47%, 93.88%, and 91.58% on the KDEF dataset.
    Matched MeSH terms: Facial Expression*
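The two-phase hand-off described in entry 8 (firefly-style attraction toward the brightest agent, then ABC-style local refinement) can be caricatured as a binary feature-selection loop. Everything here, the toy fitness, the bit-copy "attraction", and the single-bit "bee" moves, is our own schematic, not the authors' QIFABC:

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, n_agents = 12, 6
relevant = np.zeros(n_features, dtype=bool)
relevant[:4] = True                      # toy ground truth: the first 4 features matter

def fitness(mask):
    """Toy objective: reward selecting relevant features, penalise mask size."""
    return (mask & relevant).sum() - 0.2 * mask.sum()

# Phase 1 (firefly-like): every agent drifts toward the brightest one.
agents = rng.random((n_agents, n_features)) < 0.5
for _ in range(20):
    best = agents[np.argmax([fitness(a) for a in agents])]
    copy_from_best = rng.random(agents.shape) < 0.5      # attraction: copy ~half of best's bits
    agents = np.where(copy_from_best, best, agents)

# Phase 2 (ABC-like): each "bee" flips one bit and keeps the change only if fitness improves.
for _ in range(50):
    for i in range(n_agents):
        trial = agents[i].copy()
        j = rng.integers(n_features)
        trial[j] = ~trial[j]
        if fitness(trial) > fitness(agents[i]):
            agents[i] = trial

best_mask = agents[np.argmax([fitness(a) for a in agents])]
```

In the paper the fitness would instead be driven by downstream classification accuracy (ResNet-50 on the selected features) rather than a known ground-truth mask.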
  9. Fong FTK, Thong C, Nelson NL
    J Exp Child Psychol, 2025 May;253:106205.
    PMID: 39978308 DOI: 10.1016/j.jecp.2025.106205
    We adapted a previous protocol to assess children's ability to spontaneously associate a novel cause with a novel emotional expression. An experimenter opened a series of boxes and generated an expression based on what was inside (the cause of the emotion). Participants (4- to 9-year-olds; N = 72) guessed what the experimenter saw from four possible objects linked to four expressions: stickers (happy), a broken balloon (sad), a spider (scared), and a novel object, pax (novel puffed cheeks expression). Children were then invited to open a series of boxes and generate expressions for the experimenter. Results suggest that children used a process of elimination to associate the novel pax object with the puffed cheeks expression. Some children also re-produced the puffed cheeks expression in a later task. As a final trial, when children were asked how people would feel when seeing the pax object, younger children tended to use positive labels and older children used negative labels. These results show that children are able to quickly associate novel facial expressions with precipitating events as early as 4 years of age, comparable to their performance in linking familiar expressions and objects.
    Matched MeSH terms: Facial Expression*
  10. Sheppard E, Pillai D, Wong GT, Ropar D, Mitchell P
    J Autism Dev Disord, 2016 Apr;46(4):1247-54.
    PMID: 26603886 DOI: 10.1007/s10803-015-2662-8
    How well can neurotypical adults interpret mental states in people with ASD? Targets' (ASD and neurotypical) reactions to four events were video-recorded and then shown to neurotypical participants whose task was to identify which event the target had experienced. In study 1, participants were more successful for neurotypical than ASD targets. In study 2, participants rated ASD targets as equally expressive as neurotypical targets for three of the events, while in study 3 participants gave different verbal descriptions of the reactions of ASD and neurotypical targets. It thus seems people with ASD react differently, but not less expressively, to events. Because neurotypicals are ineffective in interpreting the behaviour of those with ASD, this could contribute to the social difficulties in ASD.
    Matched MeSH terms: Facial Expression*
  11. Agbolade O, Nazri A, Yaakob R, Ghani AA, Cheah YK
    BMC Bioinformatics, 2019 Dec 02;20(1):619.
    PMID: 31791234 DOI: 10.1186/s12859-019-3153-2
    BACKGROUND: Expression in H. sapiens plays a remarkable role when it comes to social communication. The identification of this expression by human beings is relatively easy and accurate. However, achieving the same result in 3D by machine remains a challenge in computer vision. This is due to the current challenges facing facial data acquisition in 3D, such as lack of homology and the complex mathematical analysis required for facial point digitization. This study proposes facial expression recognition in humans with the application of Multi-points Warping for 3D facial landmarks by building a template mesh as a reference object. This template mesh is then applied to each target mesh in the Stirling/ESRC and Bosphorus datasets. The semi-landmarks are allowed to slide along tangents to the curves and surfaces until the bending energy between a template and a target form is minimal, and localization error is assessed using Procrustes ANOVA. With Principal Component Analysis (PCA) used for feature selection, classification is done using Linear Discriminant Analysis (LDA).

    RESULT: The localization error is validated on the two datasets with superior performance over the state-of-the-art methods, and variation in the expression is visualized using Principal Components (PCs). The deformations show various expression regions in the faces. The results indicate that the Sad expression has the lowest recognition accuracy on both datasets. The classifier achieved a recognition accuracy of 99.58% and 99.32% on Stirling/ESRC and Bosphorus, respectively.

    CONCLUSION: The results demonstrate that the method is robust and in agreement with the state-of-the-art results.

    Matched MeSH terms: Facial Expression*
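The classification pipeline in entry 11, PCA for dimensionality reduction followed by LDA, is a standard combination; a sketch on synthetic "landmark" vectors (our stand-in for the real 3D meshes, with made-up class separations) might look like:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# Synthetic data: 3 "expression" classes as shifted Gaussian clouds in 60 dimensions.
n_per_class, n_dims, centers = 40, 60, [0.0, 1.5, 3.0]
X = np.vstack([c + rng.normal(0.0, 1.0, (n_per_class, n_dims)) for c in centers])
y = np.repeat([0, 1, 2], n_per_class)

# PCA compresses the landmark space before LDA does the classification.
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X, y)
accuracy = clf.score(X, y)
```

On real meshes the input vectors would be the slid semi-landmark coordinates after Procrustes alignment, and accuracy would be estimated with held-out data rather than on the training set as here.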
  12. Maruthapillai V, Murugappan M
    PLoS One, 2016;11(2):e0149003.
    PMID: 26859884 DOI: 10.1371/journal.pone.0149003
    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (the distance from each marker to the center of the face) and change in marker distance (the change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
    Matched MeSH terms: Facial Expression*
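The feature extraction in entry 12 (marker distances, their change over time, and three statistics) is straightforward to sketch in NumPy. The simulated marker tracks and the neutral-first-frame assumption are ours:

```python
import numpy as np

rng = np.random.default_rng(4)
n_frames = 30
face_center = np.array([0.0, 0.0])
# Simulated tracks of 8 virtual markers over a short sequence: (frame, marker, xy).
rest = rng.normal(0.0, 1.0, (8, 2))
tracks = rest + 0.05 * rng.normal(0.0, 1.0, (n_frames, 8, 2))   # small per-frame jitter

# Feature 1: distance of each marker to the face centre, per frame.
dist = np.linalg.norm(tracks - face_center, axis=2)             # shape (n_frames, 8)
# Feature 2: change in distance relative to the first (assumed neutral) frame.
delta = dist - dist[0]

def stats(x):
    """Mean, variance and root mean square along the time axis, per marker."""
    return np.concatenate([x.mean(0), x.var(0), np.sqrt((x ** 2).mean(0))])

features = np.concatenate([stats(dist), stats(delta)])          # 2 * 3 * 8 = 48 values
```

In the study the marker positions would come from optical-flow tracking of webcam frames rather than simulation, and the 48-value vector would feed the kNN or probabilistic-neural-network classifier.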
  13. Adamov L, Petrović B, Milić L, Štrbac V, Kojić S, Joseph K, et al.
    Biomed Eng Online, 2025 Feb 12;24(1):17.
    PMID: 39939995 DOI: 10.1186/s12938-025-01350-3
    BACKGROUND: Facial expression muscles serve a fundamental role in the orofacial system, significantly influencing the overall health and well-being of an individual. They are essential for performing basic functions such as speech, chewing, and swallowing. The purpose of this study was to determine whether surface electromyography could be used to evaluate the health, function, or dysfunction of three facial muscles by measuring their electrical activity in healthy people. An additional aim was to ascertain whether pattern recognition and artificial intelligence could be used to distinguish tasks that differ from one another.

    RESULTS: The study included 24 participants and examined three muscles (m. Orbicularis Oris, m. Zygomaticus Major, and m. Mentalis) during five different facial expressions. Prior to thorough statistical analysis, features were extracted from the acquired electromyographs. Finally, classification was done using logistic regression, a random forest classifier, and linear discriminant analysis. A statistically significant difference in muscle activity amplitudes was demonstrated between muscles, enabling the tracking of individual muscle activity for diagnostic and therapeutic purposes. Additionally, other time-domain and frequency-domain features were analyzed, also showing statistical significance in differentiating between muscles. Examples of pattern recognition showed promising avenues for further research and development.

    CONCLUSION: Surface electromyography is a useful method for assessing the function of facial expression muscles, significantly contributing to the diagnosis and treatment of oral motor function disorders. Results of this study show potential for further research and development in this field of research.

    Matched MeSH terms: Facial Expression*
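Feature extraction from an electromyograph, as in entry 13, typically mixes time-domain and frequency-domain quantities. Here is a sketch on a crude surrogate signal; the burst model, sampling rate, and the specific feature set are our assumptions, not the study's exact choices:

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# Crude EMG surrogate: noise amplitude-modulated by an 80 Hz carrier.
emg = rng.normal(0.0, 0.3, t.size) * np.sin(2 * np.pi * 80 * t)

# Common time-domain features.
mav = np.mean(np.abs(emg))                    # mean absolute value
zc = np.sum(np.diff(np.sign(emg)) != 0)       # zero-crossing count
wl = np.sum(np.abs(np.diff(emg)))             # waveform length

# A common frequency-domain feature: mean frequency of the power spectrum.
spectrum = np.abs(np.fft.rfft(emg)) ** 2
freqs = np.fft.rfftfreq(emg.size, 1 / fs)
mean_freq = np.sum(freqs * spectrum) / np.sum(spectrum)

feature_vector = np.array([mav, zc, wl, mean_freq])
```

Vectors like this, computed per muscle and per expression, are what the study's logistic-regression, random-forest, and LDA classifiers would consume.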
  14. Oh YH, See J, Le Ngo AC, Phan RC, Baskaran VM
    Front Psychol, 2018;9:1128.
    PMID: 30042706 DOI: 10.3389/fpsyg.2018.01128
    Over the last few years, automatic facial micro-expression analysis has garnered increasing attention from experts across different disciplines because of its potential applications in various fields such as clinical diagnosis, forensic investigation, and security systems. Advances in computer algorithms and video acquisition technology have rendered machine analysis of facial micro-expressions possible today, in contrast to decades ago, when it was primarily the domain of psychiatrists and analysis was largely manual. Indeed, although the study of facial micro-expressions is a well-established field in psychology, it is still relatively new from the computational perspective, with many interesting problems. In this survey, we present a comprehensive review of state-of-the-art databases and methods for micro-expression spotting and recognition. Individual stages involved in the automation of these tasks are also described and reviewed at length. In addition, we deliberate on the challenges and future directions in this growing field of automatic facial micro-expression analysis.
    Matched MeSH terms: Facial Expression
  15. Lim JZ, Mountstephens J, Teo J
    Sensors (Basel), 2020 Apr 22;20(8).
    PMID: 32331327 DOI: 10.3390/s20082384
    The ability to detect users' emotions for the purpose of emotion engineering is currently one of the main endeavors of machine learning in affective computing. Among the more common approaches to emotion detection are methods that rely on electroencephalography (EEG), facial image processing and speech inflections. Although eye-tracking is fast in becoming one of the most commonly used sensor modalities in affective computing, it is still a relatively new approach for emotion detection, especially when it is used exclusively. In this survey paper, we present a review on emotion recognition using eye-tracking technology, including a brief introductory background on emotion modeling, eye-tracking devices and approaches, emotion stimulation methods, the emotional-relevant features extractable from eye-tracking data, and most importantly, a categorical summary and taxonomy of the current literature which relates to emotion recognition using eye-tracking. This review concludes with a discussion on the current open research problems and prospective future research directions that will be beneficial for expanding the body of knowledge in emotion detection using eye-tracking as the primary sensor modality.
    Matched MeSH terms: Facial Expression
  16. Molavi M, Yunus J, Utama NP
    Psychol Res Behav Manag, 2016;9:105-14.
    PMID: 27307772 DOI: 10.2147/PRBM.S100495
    Fasting can influence psychological and mental states. In the current study, the effect of periodic fasting on emotion processing through gazed facial expressions, a realistic multisource of social information, was investigated for the first time. The dynamic cue-target task was applied via behavioral and event-related potential measurements for 40 participants to reveal the temporal and spatial brain activities before, during, and after fasting periods. Fasting had several significant effects. The amplitude of the N1 component decreased over the centroparietal scalp during fasting. Furthermore, reaction time during the fasting period decreased. Self-measured deficit arousal as well as mood increased during the fasting period. There was a significant contralateral alteration of P1 over the occipital area for happy facial expression stimuli. The significant effect of gazed expression and its interaction with the emotional stimuli was indicated by the amplitude of N1. Furthermore, the findings confirmed the validity effect, a congruency between gaze and target position, as indicated by increased P3 amplitude over the centroparietal area and slower reaction times in the incongruent (invalid) condition compared with the valid condition. Results of this study show that attention to facial expression stimuli, a communicative social signal, was affected by fasting. Also, fasting improved the mood of practitioners. Moreover, findings from the behavioral and event-related potential data analyses indicated that the neural dynamics of facial emotion are processed faster than those of gaze, as the participants tended to react faster and to rely on the type of facial emotion rather than gaze direction while doing the task. For happy facial expression stimuli, right-hemisphere activation exceeded that of the left hemisphere, consistent with the emotional lateralization account rather than the valence account of emotional processing.
    Matched MeSH terms: Facial Expression
  17. Jones BC, DeBruine LM, Flake JK, Liuzza MT, Antfolk J, Arinze NC, et al.
    Nat Hum Behav, 2021 01;5(1):159-169.
    PMID: 33398150 DOI: 10.1038/s41562-020-01007-2
    Over the past 10 years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence-dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 5 November 2018. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.7611443.v1 .
    Matched MeSH terms: Facial Expression
  18. Ganesan I, Thomas T
    Med J Malaysia, 2011 Dec;66(5):507-9.
    PMID: 22390114 MyJurnal
    The Ochoa syndrome is the association of a non-neurogenic neurogenic bladder with abnormal facial muscle expression. Patients are at risk for renal failure due to obstructive uropathy. We report a family of three siblings, with an emphasis on the abnormalities in facial expression. Careful examination shows an unusual co-contraction of the orbicularis oculi and orbicularis oris muscles only when full facial expressions are exhibited, across a range of emotional or voluntary situations. This suggests a peripheral disorder in facial muscle control. Two thirds of patients have anal sphincter abnormalities. Aberrant organisation of the facial motor and urinary-anal sphincter nuclei may explain these symptoms.
    Matched MeSH terms: Facial Expression*
  19. Hamedi M, Salleh ShH, Tan TS, Ismail K, Ali J, Dee-Uam C, et al.
    Int J Nanomedicine, 2011;6:3461-72.
    PMID: 22267930 DOI: 10.2147/IJN.S26619
    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands and can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs were recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. Finally, all combinations are trained and classified by a Fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group are chosen. An average accuracy >90% for the chosen combinations proved their ability to be used as command controllers.
    Matched MeSH terms: Facial Expression*
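The pipeline of entry 19 reduces each band-pass-filtered EMG channel to windowed root-mean-square values before classification; a sketch of the RMS stage on simulated multi-channel data (the window length, channel count, and activity step are our choices, and the band-pass filter is omitted):

```python
import numpy as np

rng = np.random.default_rng(6)
fs, win = 1000, 200                         # assumed Hz, samples per analysis window
n_channels = 4
signal = rng.normal(0.0, 1.0, (n_channels, 2000))
signal[:, 1000:] *= 3.0                     # stronger "gesture" activity in the second half

def windowed_rms(x, win):
    """Root-mean-square of non-overlapping windows along the last axis."""
    n = (x.shape[-1] // win) * win          # drop any trailing partial window
    frames = x[..., :n].reshape(*x.shape[:-1], -1, win)
    return np.sqrt((frames ** 2).mean(-1))

rms = windowed_rms(signal, win)             # shape: (4 channels, 10 windows)
```

Each column of `rms` is one feature vector per time window; in the study these vectors are what the Fuzzy c-means classifier groups into gesture commands.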
  20. Teoh Y, Wallis E, Stephen ID, Mitchell P
    Cognition, 2017 02;159:48-60.
    PMID: 27886521 DOI: 10.1016/j.cognition.2016.11.003
    Past research tells us that individuals can infer information about a target's emotional state and intentions from their facial expressions (Frith & Frith, 2012), a process known as mentalising. This extends to inferring the events that caused the facial reaction (e.g. Pillai, Sheppard, & Mitchell, 2012; Pillai et al., 2014), an ability known as retrodictive mindreading. Here, we enter new territory by investigating whether or not people (perceivers) can guess a target's social context by observing their response to stimuli. In Experiment 1, perceivers viewed targets' responses and were able to determine whether these targets were alone or observed by another person. In Experiment 2, another group of perceivers, without any knowledge of the social context or what the targets were watching, judged whether targets were hiding or exaggerating their facial expressions; and their judgments discriminated between conditions in which targets were observed and alone. Experiment 3 established that another group of perceivers' judgments of social context were associated with estimations of target expressivity to some degree. In Experiments 1 and 2, the eye movements of perceivers also varied between conditions in which targets were observed and alone. Perceivers were thus able to infer a target's social context from their visible response. The results demonstrate an ability to use other minds as a window onto a social context that could not be seen directly.
    Matched MeSH terms: Facial Expression*