Brain regions involved in ‘mentalizing’ process vocal emotions differently in youth with epilepsy

Previous research has found that youth with epilepsy are at risk for poorer social and relational outcomes. Although this is not true of everyone with epilepsy, many children and adolescents with this neurological disorder report having a hard time making and maintaining friendships. Perhaps relatedly, youth with epilepsy often also struggle to interpret others’ emotions: they tend to be less accurate than youth without epilepsy on emotion recognition tasks, in which people are asked to identify the intended emotion in facial or vocal expressions. Why might this be? Some researchers have suggested that the brains of youth with epilepsy may respond differently to emotional faces than those of youth without epilepsy. Could something similar be happening with emotional voices?

To answer this question, the current study recruited youth who had been diagnosed with intractable epilepsy (meaning they still experienced seizures despite taking medication to prevent them), as well as youth without epilepsy. Participants listened to recordings of emotional voices (e.g., angry or fearful voices) while in an MRI scanner. After each recording, they indicated which emotion they thought was being expressed. We examined how accurately participants identified the intended emotion in each recording, and how their brains responded to the different types of voices.

We found that youth with epilepsy were less accurate than youth without epilepsy on this vocal emotion recognition task, especially at younger ages. In addition, we found six regions of the brain that responded differently to the emotional voices in youth with vs. without epilepsy. Activation patterns in these areas (including the right temporo-parietal junction, the right hippocampus, and the right medial prefrontal cortex) could predict whether any given participant had been diagnosed with epilepsy. Interestingly, many of these six regions are often involved in ‘mentalizing’ tasks, in which participants are asked to make judgments about others’ emotions, thoughts, and beliefs. Our findings suggest that these brain areas may respond differently in youth with epilepsy when they try to interpret others’ emotions from tone of voice. We don’t yet know whether these different patterns of activation are actually related to emotion recognition accuracy or to social difficulties; they could simply reflect an alternative “strategy” for processing vocal emotional cues. Although more research is needed to determine this, our findings contribute to our understanding of how neurodiverse brains process social and emotional information.

Higher emotional intensity doesn’t always make vocal cues easier to recognize: it depends on the emotion type

We know that someone’s tone of voice can be a good indicator of what they are feeling in that moment. Someone who is speaking quickly, loudly, and with a high-pitched voice is likely to be feeling angry—and many of us would be able to infer this from their vocal cues. But what if these cues were more subtle? Would we still be able to understand that they were upset based on their tone of voice?

The current study examined whether listeners’ ability to identify a speaker’s emotional state depended on the level of ‘emotional intensity’ with which an emotion was conveyed. Intuitively, it makes sense that high-intensity expressions would be better recognized than low-intensity ones. But less is known about how fine-grained variations in vocal cues influence the accuracy of listeners’ interpretations.

To answer our questions, we created a series of recordings that varied from neutral expressions to full-intensity expressions of different emotions, in 10% intervals. We used actors’ full-intensity and neutral expressions from a previous study as end points, and merged them together to create recordings that were 10% neutral and 90% angry, 20% neutral and 80% angry, 30% neutral and 70% angry, etc. We presented these recordings to listeners in increasing order of intensity, and asked them to tell us what emotion was being conveyed in each. From this, we obtained an estimate of the slope of listeners’ accuracy across increasing levels of emotional intensity, for each emotion type.
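For readers curious about the mechanics, the blending step can be sketched in a few lines of Python. This is a simplified illustration only: it mixes raw waveforms, whereas vocal morphing tools typically interpolate acoustic parameters such as pitch and timing, and all names and values below are hypothetical rather than taken from the study.

```python
import numpy as np

def morph(neutral: np.ndarray, emotional: np.ndarray, intensity: float) -> np.ndarray:
    """Linearly blend two time-aligned, equal-length audio signals.

    intensity=0.0 returns the neutral recording; intensity=1.0 returns
    the full-intensity emotional recording; values in between mix the two.
    """
    if neutral.shape != emotional.shape:
        raise ValueError("recordings must be time-aligned and equal in length")
    return (1.0 - intensity) * neutral + intensity * emotional

# Placeholder 1-second clips at 16 kHz standing in for real recordings
neutral = np.zeros(16_000)
angry = np.ones(16_000)

# Build a continuum in 10% steps: 0%, 10%, ..., 100% emotional
continuum = [morph(neutral, angry, i / 10) for i in range(11)]
```

The endpoints of the resulting list reproduce the original neutral and full-intensity recordings, and each intermediate step shifts the mixture by 10%.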

Figure: recognition accuracy (y-axis) across increasing levels of emotional intensity (x-axis), with a different curve for each emotion

We found that listeners’ ability to identify each emotion did not increase linearly with the recordings’ intensity level. Even though the acoustic characteristics of the recordings changed linearly with intensity, listeners’ accuracy did not track those changes. Further, the exact pattern of change in accuracy across intensity levels varied by emotion. For example, anger was easy for listeners to recognize even at low intensity levels (see blue arrow in the figure above). In contrast, listeners could not reliably identify sadness until it was expressed with more than 50% emotional intensity (see green arrow).
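One simple way to summarize such curves is to find, for each emotion, the intensity level at which accuracy first crosses 50%. The sketch below illustrates this with made-up accuracy values chosen to mimic the qualitative pattern described above (anger recognized at low intensity, sadness only above roughly 50%); it is not the study’s actual analysis or data.

```python
import numpy as np

def logistic(x, x0, k):
    """Idealized psychometric curve: accuracy rises around threshold x0 with steepness k."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

intensity = np.linspace(0.0, 1.0, 11)   # 0%, 10%, ..., 100% emotional intensity

# Hypothetical accuracy curves (illustrative values, not the study's data)
anger_acc = logistic(intensity, 0.2, 12.0)    # recognized even at low intensity
sadness_acc = logistic(intensity, 0.6, 12.0)  # only recognized above ~50%

def threshold_50(accuracy):
    """Intensity at which accuracy first crosses 50%, by linear interpolation.

    Assumes accuracy increases monotonically with intensity, as in these curves.
    """
    return float(np.interp(0.5, accuracy, intensity))
```

With these illustrative curves, `threshold_50` returns a much lower value for anger than for sadness, capturing the idea that some emotions become recognizable at far lower intensities than others.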

These results tell us that some emotions may be much harder than others to identify at low intensities. This is important, because low-intensity versions are likely more common than full-intensity expressions in the real world. The findings of the current study point us to emotion types that may be particularly challenging to identify at low intensities, such as happiness. They also make us think twice about assuming that listeners’ accuracy directly reflects linear changes in the vocal cues associated with different emotions.

To read more, visit: https://psycnet.apa.org/record/2021-86253-001

Emotional faces elicit less activation in face-processing regions of the brain in youth with epilepsy

Youth with epilepsy sometimes report having a hard time forming and maintaining relationships with others. This may be due to a variety of factors, but some research has suggested that deficits in “emotion recognition”—the ability to interpret emotion in others’ facial expressions or tone of voice—may make it more challenging for youth with epilepsy to navigate social interactions. Difficulties in emotion recognition tend to be more pronounced in adults with childhood-onset epilepsy, suggesting that recurrent seizures may disrupt the integrity of brain circuits involved in this social-cognitive skill during youth. However, though previous studies had investigated the neural representation of emotional faces in adults, none had examined the neural correlates of emotional face processing in youth with epilepsy.

The current study examined whether emotional faces elicited different neural responses in the brains of youth with and without epilepsy—and whether such differences were related to deficits in emotion recognition. Participants completed a facial emotion recognition task, in which they were asked to identify the emotion in other teenagers’ facial expressions, while undergoing functional magnetic resonance imaging (fMRI).

We found that, compared to typically developing youth, youth with epilepsy were less accurate on the facial emotion recognition task. In addition, youth with epilepsy showed blunted activation in the fusiform gyrus and right posterior superior temporal sulcus—two regions that play an important role in the processing of faces and social information. Reduced activation in these regions was correlated with poorer accuracy on the task. Together, our results suggest that reduced engagement of brain regions involved in processing socio-emotional signals may contribute to the difficulties in social cognition experienced by youth with epilepsy.

Read more about the study here.

Loneliness in adolescents is associated with the recognition of vocal fear and friendliness

During the teenage years, adolescents typically begin forming complex social networks and spending more time with friends than with their parents. However, not all teenagers experience the same level of social connection at this age. Feelings of loneliness can be hard to manage, and may affect the way teenagers interpret social information. Previous research has shown that lonely individuals are highly attuned to social information, including both cues of social threat and signals of affiliation. Relatedly, loneliness has been linked to better recognition of negative emotions conveyed by others’ facial expressions. However, little is known about whether loneliness has similar associations with the interpretation of non-facial information, such as others’ tone of voice.

To answer this question, we asked 11- to 18-year-old adolescents to report on their feelings of loneliness and to complete a vocal emotion recognition task, in which they selected the emotion they thought was being conveyed in recordings of emotional voices. Contrary to our expectations, we found that loneliness was linked to poorer recognition of fear (a negative emotion), but better recognition of friendliness (an affiliative expression), in others’ voices. We speculate that this difference from previous findings may stem from the time course over which vocal emotion unfolds: though negative cues may initially grab listeners’ attention, lonely individuals’ tendency to avoid threat may interfere with their accurate interpretation of this type of social cue.

This work provides some evidence that youth’s cognitive responses to social information are relevant to their social experiences, and highlights the importance of extending our assessment of social information processing to non-facial modalities.

More details about this work can be found here: https://tandfonline.com/doi/full/10.1080/02699931.2019.1682971

Morningstar, M., Nowland, R., Dirks, M. A., & Qualter, P. (2019). Links between feelings of loneliness and the recognition of vocal socio-emotional expressions in adolescents. Cognition & Emotion. https://doi.org/10.1080/02699931.2019.1682971