Modulation of Facial Expression Perception by Body Context
The present study tested the emotion seed hypothesis, which has not previously been fully tested. The hypothesis states that facial expression perception is modulated by context on the basis of perceptual similarities shared between facial expressions: the more visually similar one facial expression (e.g., fearful) is to another (e.g., surprised), the more likely the two are to be confused, especially when each appears in the other's emotionally congruent context. Only specific emotional contexts should therefore enhance the confusability of a given facial expression. Faces expressing the six basic emotions and neutral expressions were combined with bodily expressions of these emotions in a facial expression categorization task. Results demonstrate that perception of a facial expression is influenced by the bodily expression with which it is combined. Only a few of the predictions of the emotion seed hypothesis were confirmed, and unpredicted modulations of facial expression perception occurred, such as facial expressions being categorized as context-incongruent expressions. Given these findings, it is proposed that facial expression perception is influenced by both categorical and underlying dimensional attributes (i.e., intensity and valence).