Emotion recognition

Brain regions involved in ‘mentalizing’ process vocal emotions differently in youth with epilepsy

Previous research has found that youth with epilepsy are at risk for poorer social and relational outcomes. Although this is not true of everyone with epilepsy, many children and adolescents with this neurodevelopmental disorder report having a hard time making and maintaining friendships. Perhaps related to this, youth with epilepsy often also struggle to interpret others’ emotions: they tend to be less accurate than youth without epilepsy on emotion recognition tasks, where people are asked to identify the intended emotion in facial or vocal expressions. Why might this be? Some researchers have suggested that the brains of youth with epilepsy may respond differently to emotional faces, compared to youth without epilepsy. Could something similar be happening with emotional voices?

To answer this question, the current study recruited youth who had been diagnosed with intractable epilepsy (meaning they still experienced seizures despite taking medication to prevent them), as well as a comparison group of youth without epilepsy. Participants listened to recordings of emotional voices (e.g., angry or fearful voices) while in an MRI scanner. After each recording, they were asked to indicate what emotion they thought was being expressed. We examined how accurate they were at identifying the intended emotion in each recording, and how their brains responded to the different types of voices.

We found that youth with epilepsy were less accurate than youth without epilepsy on this vocal emotion recognition task—especially at younger ages. In addition, we found six regions of the brain that responded differently to the emotional voices in youth with vs. without epilepsy. Activation patterns in these areas (including the right temporo-parietal junction, the right hippocampus, and the right medial prefrontal cortex) could be used to predict whether any given participant had been diagnosed with epilepsy. Interestingly, many of these six regions are often implicated in ‘mentalizing’ tasks, in which participants are asked to make judgments about others’ emotions, thoughts, and beliefs. Our findings suggest that, in youth with epilepsy, these brain areas may respond differently when interpreting others’ emotions from their tone of voice. We don’t yet know whether these different patterns of activation are related to emotion recognition accuracy, or to social difficulties; they could simply reflect an alternative “strategy” when processing vocal emotional cues. Although more research is needed to determine this, our findings contribute to our understanding of how neurodiverse brains process social and emotional information.
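To give a flavour of what "predicting diagnosis from activation patterns" means in practice, here is a minimal, hypothetical sketch in Python. The data are simulated and the logistic-regression classifier is an illustrative assumption, not the study's actual analysis pipeline.

```python
# Hypothetical sketch: decoding diagnostic group from region-of-interest (ROI)
# activation patterns. All numbers are simulated; the classifier choice is an
# illustrative assumption, not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated activation estimates (e.g., beta weights) for 40 participants in
# 6 ROIs (e.g., right temporo-parietal junction, right hippocampus, right
# medial prefrontal cortex).
n_participants, n_rois = 40, 6
activation = rng.normal(size=(n_participants, n_rois))
has_epilepsy = rng.integers(0, 2, size=n_participants)  # 1 = epilepsy group

# Cross-validated classification accuracy: consistently above-chance accuracy
# would suggest that group membership is decodable from activation patterns.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, activation, has_epilepsy, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```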

Higher emotional intensity doesn't always make vocal cues easier to recognize: it depends on the emotion type


We know that someone’s tone of voice can be a good indicator of what they are feeling in that moment. Someone who is speaking quickly, loudly, and with a high-pitched voice is likely to be feeling angry—and many of us would be able to infer this from their vocal cues. But what if these cues were more subtle? Would we still be able to understand that they were upset based on their tone of voice?

The current study examined whether listeners’ ability to identify a speaker’s emotional state depended on the level of ‘emotional intensity’ with which an emotion was conveyed. Intuitively, it makes sense that high-intensity expressions would be better recognized than low-intensity expressions, but less is known about how fine-grained variations in vocal cues influence listeners’ accuracy of interpretation.

To answer our questions, we created a series of recordings that varied from neutral expressions to full-intensity expressions of different emotions, in 10% increments. We used actors’ neutral and full-intensity expressions from a previous study as end points, and blended them together to create recordings that were, for example, 90% neutral and 10% angry, 80% neutral and 20% angry, and so on, up to 10% neutral and 90% angry. We presented these recordings to listeners in increasing order of intensity, and asked them to tell us what emotion was being conveyed in each. From this, we obtained an estimate of the slope of listeners’ accuracy across increasing levels of emotional intensity, for each emotion type.
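As a rough illustration of the blending logic (not the specialized audio-morphing software typically used to build such stimuli), a continuum of this kind can be sketched as a weighted average of two sets of acoustic features. Everything below, including the feature values and the simple frame-wise interpolation, is an assumption for demonstration purposes.

```python
# Minimal sketch of the morphing logic: linearly blending acoustic features of
# a neutral and a full-intensity recording in 10% steps. Real voice morphing
# uses dedicated software; this frame-wise interpolation is illustrative only.
import numpy as np

def morph_continuum(neutral_features, emotional_features, step=0.1):
    """Return feature blends from 0% to 100% emotional, in `step` increments."""
    weights = np.linspace(0.0, 1.0, int(round(1 / step)) + 1)
    return [(1 - w) * neutral_features + w * emotional_features for w in weights]

# Hypothetical 100-frame pitch contours (Hz) for neutral and angry renditions
neutral = np.full(100, 180.0)            # flat, lower-pitched contour
angry = np.linspace(220.0, 300.0, 100)   # rising, higher-pitched contour
continuum = morph_continuum(neutral, angry)
print(f"{len(continuum)} steps; the 50% blend starts at {continuum[5][0]:.0f} Hz")
```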

[Figure: varying curves of increasing recognition accuracy (y-axis) across increasing levels of emotional intensity (x-axis), with a distinct curve for each emotion]

We found that listeners’ ability to identify each emotion did not increase linearly with the recordings’ intensity level: even though the acoustic characteristics of the recordings changed linearly with intensity, listeners’ accuracy did not follow suit. Further, the exact pattern of change in accuracy across intensity levels varied for different emotions. For example, anger was easy for listeners to recognize, even at low intensity levels (see blue arrow in the figure above). In contrast, listeners could not reliably identify sadness until it was expressed with more than 50% emotional intensity (see green arrow).
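One common way to quantify such patterns is to fit a psychometric (logistic) curve to accuracy at each intensity level, then compare the fitted slopes and midpoints across emotions. The sketch below uses invented accuracy values and assumes a logistic form; it is not the paper's actual model.

```python
# Hedged sketch: fitting a logistic (psychometric) curve per emotion to
# estimate how quickly recognition accuracy rises with intensity. Accuracy
# values below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(intensity, slope, midpoint):
    """Logistic curve rising from chance (assumed ~0.2 here) up to 1.0."""
    chance = 0.2
    return chance + (1 - chance) / (1 + np.exp(-slope * (intensity - midpoint)))

intensity = np.linspace(0.1, 1.0, 10)  # 10% ... 100% emotional intensity
# Illustrative group-mean accuracies: anger recognized well even at low
# intensity; sadness only once intensity exceeds ~50%.
anger_acc = np.array([0.55, 0.70, 0.80, 0.85, 0.90, 0.92, 0.93, 0.94, 0.95, 0.95])
sad_acc   = np.array([0.20, 0.20, 0.25, 0.30, 0.45, 0.65, 0.75, 0.85, 0.90, 0.92])

for name, acc in [("anger", anger_acc), ("sadness", sad_acc)]:
    (slope, midpoint), _ = curve_fit(psychometric, intensity, acc, p0=[10.0, 0.5])
    print(f"{name}: slope = {slope:.1f}, midpoint = {midpoint:.2f}")
```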

These results tell us that some emotions may be much harder than others to identify at low intensities. This is important, because those low-intensity versions are likely more common than full-intensity expressions in the real world. The findings of the current study point us to emotion types that may be particularly challenging to identify at low intensities, such as happiness. They also make us think twice about assuming that listeners’ accuracy is a direct reflection of linear changes in the vocal cues associated with different emotions.

To read more, visit: https://psycnet.apa.org/record/2021-86253-001

Ongoing maturation of neural responses to voices—but not faces—in adolescence

With age, we become better able to understand the meaning behind others’ nonverbal cues. In other words, we become more skilled at identifying others’ emotional states or attitudes based on their facial expressions, postures or gestures, or tone of voice. Previous research has found that the ability to recognize emotions in vocal cues (i.e., the way in which someone says something, beyond their verbal content) follows a more protracted developmental trajectory throughout adolescence than the same ability with facial expressions. Does the neural representation of these two types of nonverbal cues also mature along different trajectories?

The current study examined age-related changes in a) facial and vocal emotion recognition skills and b) neural activation to both types of stimuli in adolescence. A group of 8- to 19-year-old participants completed both a facial and a vocal emotion recognition task—in which they were asked to identify the intended emotion in other teenagers’ facial expressions and voices—while undergoing functional magnetic resonance imaging (fMRI).

We found that accuracy on the emotion recognition tasks began to plateau around age 14 for faces, but continued to increase linearly throughout adolescence for voices. At a neural level, a variety of subcortical regions, visual-motor association areas, prefrontal regions, and the right superior temporal gyrus responded to both faces and voices. While there were no age-related changes in activation within these areas when responding to faces, prefrontal regions (specifically, the inferior frontal cortex and dorsomedial prefrontal cortex) were more engaged when hearing voices in older adolescents.

These findings suggest that vocal emotion recognition skills, and the associated neural responses in frontal regions of the brain, continue to develop throughout adolescence, following a more protracted trajectory than other social cognitive skills like the ability to interpret facial emotions. This may make it harder for teenagers to navigate social situations in which they must rely on vocal cues—for instance, when others are wearing masks.
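To make the "plateau vs. linear" contrast concrete, here is a small sketch with simulated accuracy data; the growth curves, noise levels, and model-comparison rule are all assumptions for illustration, not the study's analysis.

```python
# Illustrative sketch (simulated data): checking whether accuracy across age is
# better described by a linear trend (voices) or a plateauing curve (faces).
import numpy as np

rng = np.random.default_rng(1)
ages = np.linspace(8, 19, 50)
# Simulated accuracies: faces level off mid-adolescence; voices keep improving.
faces = 0.9 - 0.5 * np.exp(-(ages - 8) / 3) + rng.normal(0, 0.02, ages.size)
voices = 0.4 + 0.03 * (ages - 8) + rng.normal(0, 0.02, ages.size)

for label, acc in [("faces", faces), ("voices", voices)]:
    lin_resid = np.polyfit(ages, acc, 1, full=True)[1][0]   # residual SS, linear
    quad_resid = np.polyfit(ages, acc, 2, full=True)[1][0]  # residual SS, quadratic
    # Call the trajectory "plateauing" only if the curved fit is clearly better.
    shape = "plateauing" if quad_resid < 0.9 * lin_resid else "linear"
    print(f"{label}: best described as {shape}")
```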


Emotional faces elicit less activation in face-processing regions of the brain in youth with epilepsy


Youth with epilepsy sometimes report having a hard time forming and maintaining relationships with others. This may be due to a variety of factors, but some research has suggested that deficits in “emotion recognition”—the ability to interpret emotion in others’ facial expressions or tone of voice—may make it more challenging for youth with epilepsy to navigate social interactions. Difficulties in emotion recognition tend to be more pronounced in adults with childhood-onset epilepsy, suggesting that recurrent seizures may disrupt the integrity of the brain circuits that support this social-cognitive skill in youth. However, though previous studies had investigated the neural representation of emotional faces in adults with epilepsy, none had examined the neural correlates of emotional face processing in youth with the disorder.

The current study examined whether emotional faces elicited different neural responses in the brains of youth with and without epilepsy—and whether such differences were related to deficits in emotion recognition. Participants completed a facial emotion recognition task, in which they were asked to identify the emotion in other teenagers’ facial expressions, while undergoing functional magnetic resonance imaging (fMRI).

We found that, compared to typically developing youth, youth with epilepsy were less accurate on the facial emotion recognition task. In addition, youth with epilepsy showed blunted activation in the fusiform gyrus and right posterior superior temporal sulcus—two regions that play an important role in the processing of faces and social information. Reduced activation in these regions was correlated with poorer accuracy on the facial emotion recognition task. Together, our results suggest that reduced engagement of brain regions involved in processing socio-emotional signals may contribute to the difficulties in social cognition experienced by youth with epilepsy.
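As a toy illustration of this kind of brain-behavior correlation, the snippet below tests whether simulated activation values track task accuracy. The numbers are invented; only the statistical logic mirrors the analysis described above.

```python
# Toy example: correlating simulated ROI activation with task accuracy.
# All values are made up; a positive correlation would mean that lower
# activation goes with poorer emotion recognition accuracy.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
fusiform_activation = rng.normal(0.5, 0.2, size=30)  # simulated beta weights
accuracy = 0.5 + 0.4 * fusiform_activation + rng.normal(0, 0.05, size=30)

r, p = pearsonr(fusiform_activation, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")
```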

Read more about the study here.

Loneliness in adolescents is associated with the recognition of vocal fear and friendliness


During the teenage years, adolescents typically begin forming complex social networks and spending more time with friends than with their parents. However, not all teenagers experience the same level of social connection at this age. Feelings of loneliness can be hard to manage, and may impact the way in which teenagers interpret social information. Previous research has shown that lonely individuals are highly attuned to social information, including both cues of social threat and signals of affiliation. Relatedly, loneliness has been linked to better recognition of negative emotions conveyed by others’ facial expressions. However, little is known about whether loneliness has similar associations with the interpretation of non-facial information, such as others’ tone of voice.

To answer this question, we asked 11- to 18-year-old adolescents to report on their feelings of loneliness and to complete a vocal emotion recognition task, in which they were asked to select the emotion they thought was being conveyed in recordings of emotional voices. Contrary to our expectations, we found that loneliness was linked to poorer recognition of fear (a negative emotion), but better recognition of friendliness (an affiliative expression), in others’ voices.

We speculated that the differences from previous findings may stem from the extended timecourse over which vocal emotion unfolds: though negative cues may initially grab listeners’ attention, lonely individuals’ tendency to avoid threat may interfere with their accurate interpretation of this type of social cue. This work provides some evidence that youth’s cognitive response to social information is likely relevant to their social experiences, but highlights the importance of extending our assessment of social information processing to non-facial modalities.

More details about this work can be found here: https://tandfonline.com/doi/full/10.1080/02699931.2019.1682971

Morningstar, M., Nowland, R., Dirks, M. A., & Qualter, P. (2019). Links between feelings of loneliness and the recognition of vocal socio-emotional expressions in adolescents. Cognition & Emotion. https://doi.org/10.1080/02699931.2019.1682971

Age-related changes in adolescents’ neural connectivity and activation when hearing vocal prosody

girls-914823_1920.jpg

The ability to understand others' emotional states based on their tone of voice (vocal emotional prosody) develops throughout adolescence. Does neural activation to vocal prosody also change with age during the teenage years? We asked 8- to 19-year-old youth to complete a vocal emotion recognition task, in which they had to identify speakers' intended emotion based on their prosody, while in the MRI scanner. Age was associated with greater functional activation in regions of the frontal lobe often associated with language processing and emotional categorization. Further, age was linked to greater structural and functional connectivity between these frontal regions and the temporo-parietal junction, an area crucial for social cognition. These maturational changes were associated with greater accuracy in identifying the intended emotion in others' voices, suggesting that these neurodevelopmental processes may support the growth of vocal emotion recognition skills during adolescence.