The influence of context on facial expression classification is most often investigated using simple cues in static faces portraying basic expressions at a fixed emotional intensity. We examined (1) whether a perceptually rich, dynamic audiovisual context, presented in the form of movie clips (to achieve closer resemblance to real life), affected the subsequent classification of dynamic basic (happy) and non-basic (sarcastic) facial expressions, and (2) whether people's susceptibility to contextual cues was related to their ability to classify facial expressions viewed in isolation. Participants classified facial expressions that followed movie clips; each expression progressed gradually from neutral to happy or sarcastic with increasing intensity. Classification was more accurate and faster when the preceding context predicted the upcoming expression than when it did not. Speeded classifications suggested that predictive contexts reduced the emotional intensity required for accurate classification. More importantly, we show for the first time that participants' accuracy in classifying expressions without an informative context correlated with the magnitude of the contextual effects they experienced: poor classifiers of isolated expressions were more susceptible to a predictive context. Our findings support the emerging view that contextual cues and individual differences must be considered when explaining the mechanisms underlying facial expression classification.
Ambiguous images are widely recognized as a valuable tool for probing human perception. Perceptual biases that arise when people make judgements about ambiguous images reveal their expectations about the environment. While perceptual biases in early visual processing have been well established, their existence in higher-level vision has been explored only for faces, which may be processed differently from other objects. Here we developed a new, highly versatile method of creating ambiguous hybrid images comprising two component objects belonging to distinct categories. We used these hybrids to measure perceptual biases in object classification and found that images of man-made (manufactured) objects dominated those of naturally occurring (non-man-made) objects in hybrids. This dominance generalized to a broad range of object categories, persisted when the horizontal and vertical elements that dominate man-made objects were removed, and increased with the real-world size of the manufactured object. Our findings show for the first time that people have a perceptual bias to see man-made objects, and they suggest that our urban-living participants' extended exposure to manufactured environments has changed the way they see the world.
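The abstract does not describe the authors' new hybrid-image method. For readers unfamiliar with the general concept, the sketch below illustrates the classic spatial-frequency approach to hybrid images (Oliva, Torralba and Schyns, 2006), in which the low spatial frequencies of one image are blended with the high spatial frequencies of another; the function name, placeholder inputs and sigma value are assumptions for demonstration only, not the authors' procedure.

```python
# Illustrative sketch only: NOT the authors' new method. Classic hybrid
# images blend the low spatial frequencies of one image with the high
# spatial frequencies of another (Oliva, Torralba & Schyns, 2006).
import numpy as np
from scipy.ndimage import gaussian_filter

def classic_hybrid(img_a: np.ndarray, img_b: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Combine low spatial frequencies of img_a with high frequencies of img_b.

    img_a, img_b: 2-D grayscale arrays of the same shape, values in [0, 1].
    sigma: Gaussian blur radius (pixels) setting the frequency cutoff (assumed value).
    """
    low = gaussian_filter(img_a, sigma)            # low-pass component of A
    high = img_b - gaussian_filter(img_b, sigma)   # high-pass component of B
    return np.clip(low + high, 0.0, 1.0)

# Hypothetical usage with placeholder images standing in for a natural
# object and a manufactured object of matched size.
rng = np.random.default_rng(0)
natural = rng.random((256, 256))
manufactured = rng.random((256, 256))
hybrid = classic_hybrid(natural, manufactured)
```

Viewed up close, such a hybrid is dominated by the high-frequency component; from a distance, the low-frequency component takes over. The paper's method differs in that its hybrids remain ambiguous between two object categories, which is what allows perceptual biases in classification to be measured.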