Recent research

Brain regions involved in ‘mentalizing’ process vocal emotions differently in youth with epilepsy

Previous research has found that youth with epilepsy are at risk for poorer social and relational outcomes. Although this is not true of everyone with epilepsy, many children and adolescents with this neurodevelopmental disorder report having a hard time making and maintaining friendships. Perhaps related to this, youth with epilepsy often also struggle to interpret others’ emotions: they tend to be less accurate than youth without epilepsy on emotion recognition tasks, in which they are asked to identify the intended emotion in facial or vocal expressions. Why might this be? Some researchers have suggested that the brains of youth with epilepsy may respond differently to emotional faces, compared to youth without epilepsy. Could something similar be happening with emotional voices?

To answer this question, the current study recruited youth who had been diagnosed with intractable epilepsy (meaning they still experienced seizures despite taking medication to prevent them), as well as youth without an epilepsy diagnosis. Participants listened to recordings of emotional voices (e.g., angry or fearful voices) while in an MRI scanner. After each recording, they were asked to indicate what emotion they thought was being expressed. We examined how accurately they identified the intended emotion in each recording, and how their brains responded to the different types of voices.

We found that youth with epilepsy were less accurate than youth without epilepsy on this vocal emotion recognition task, especially at younger ages. In addition, we identified six brain regions that responded differently to the emotional voices in youth with versus without epilepsy. Activation patterns in these areas (including the right temporo-parietal junction, the right hippocampus, and the right medial prefrontal cortex) could predict whether a given participant had been diagnosed with epilepsy. Interestingly, many of these six regions are often implicated in ‘mentalizing’ tasks, in which participants make judgments about others’ emotions, thoughts, and beliefs. Our findings suggest that, in youth with epilepsy, these brain areas may respond differently when interpreting others’ emotions from tone of voice. We don’t yet know whether these different activation patterns are actually related to emotion recognition accuracy or to social difficulties; they could simply reflect an alternative “strategy” for processing vocal emotional cues. Although more research is needed to determine this, our findings contribute to our understanding of how neurodiverse brains process social and emotional information.
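To make the idea of predicting diagnosis from activation patterns concrete, here is a minimal, purely illustrative sketch: a cross-validated logistic regression over per-region activation values. The data, variable names, and model choice are assumptions for demonstration; this is not the study’s actual analysis pipeline.

```python
# Illustrative only: classify diagnostic group from region-wise activation.
# All data here are randomly generated stand-ins, not real measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_participants = 40
n_regions = 6  # e.g., right TPJ, right hippocampus, right mPFC, ...

# Hypothetical activation estimates: one value per region per participant
activations = rng.normal(size=(n_participants, n_regions))
# Hypothetical labels: 1 = epilepsy, 0 = comparison group
labels = rng.integers(0, 2, size=n_participants)

# With real data, above-chance cross-validated accuracy would indicate that
# activation patterns carry information about diagnostic group membership.
clf = LogisticRegression()
scores = cross_val_score(clf, activations, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```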

Higher emotional intensity doesn’t always make vocal cues easier to recognize: it depends on the emotion type

We know that someone’s tone of voice can be a good indicator of what they are feeling in that moment. Someone who is speaking quickly, loudly, and with a high-pitched voice is likely to be feeling angry—and many of us would be able to infer this from their vocal cues. But what if these cues were more subtle? Would we still be able to understand that they were upset based on their tone of voice?

The current study examined whether listeners’ ability to identify a speaker’s emotional state depended on the level of ‘emotional intensity’ with which an emotion was conveyed. Intuitively, it makes sense that high-intensity expressions would be better recognized than low-intensity ones. But less is known about how fine-grained variations in vocal cues influence the accuracy of listeners’ interpretations.

To answer our questions, we created a series of recordings that ranged from neutral expressions to full-intensity expressions of different emotions, in 10% increments. We used actors’ full-intensity and neutral expressions from a previous study as end points, and merged them to create recordings that were 10% neutral and 90% angry, 20% neutral and 80% angry, 30% neutral and 70% angry, and so on. We presented these recordings to listeners in increasing order of intensity, and asked them to tell us what emotion was being conveyed in each. From this, we obtained an estimate of the slope of listeners’ accuracy across increasing levels of emotional intensity, for each emotion type.
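As a rough illustration of this continuum logic, the sketch below mixes a neutral and a full-intensity recording in 10% steps using simple linear interpolation. This is a simplification of convenience: real vocal morphing typically interpolates acoustic parameters (pitch, timing, spectrum) with specialized resynthesis software, and the signals and function here are hypothetical placeholders.

```python
# Toy sketch of the neutral-to-emotional intensity continuum (10% steps).
# Real stimulus construction would use dedicated voice-morphing tools.
import numpy as np

def morph_continuum(neutral: np.ndarray, full: np.ndarray, step: float = 0.1):
    """Return {intensity: mixed_signal} for intensities 0.0, 0.1, ..., 1.0.

    Assumes the two recordings are time-aligned and of equal length.
    """
    assert neutral.shape == full.shape, "recordings must be aligned"
    n_levels = int(round(1.0 / step)) + 1
    weights = np.linspace(0.0, 1.0, n_levels)
    return {round(float(w), 2): (1.0 - w) * neutral + w * full for w in weights}

# Hypothetical usage with placeholder signals (1 second at 16 kHz)
neutral = np.zeros(16000)  # stand-in for a neutral utterance
angry = np.ones(16000)     # stand-in for a full-intensity angry utterance
continuum = morph_continuum(neutral, angry)
print(sorted(continuum))   # 0.0, 0.1, ..., 1.0
```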

Figure: curves of recognition accuracy (y-axis) across increasing levels of emotional intensity (x-axis), one per emotion type

We found that listeners’ ability to identify each emotion did not increase linearly with the recordings’ intensity level. Even though the acoustic characteristics of the recordings changed linearly with intensity, listeners’ accuracy did not track those changes. Further, the exact pattern of change in accuracy across intensity levels varied by emotion. For example, anger was easy for listeners to recognize even at low intensity levels (see blue arrow in the figure above). In contrast, listeners could not reliably identify sadness until it was expressed with more than 50% emotional intensity (see green arrow).
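A standard way to quantify curves like these is to fit a psychometric (logistic) function to each emotion’s accuracy-by-intensity data and compare the fitted thresholds and slopes. The sketch below does this with made-up numbers that merely echo the reported pattern (anger recognizable at low intensity, sadness only above roughly 50%); it is not the study’s actual analysis, and the simple logistic ignores details like chance-level guessing.

```python
# Illustrative fit of per-emotion psychometric curves to hypothetical data.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(intensity, threshold, slope):
    """Logistic curve: accuracy rises toward 1 around `threshold`."""
    return 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))

intensity = np.linspace(0.1, 1.0, 10)

# Made-up accuracy profiles echoing the reported pattern
accuracy = {
    "anger":   np.array([.55, .65, .75, .80, .85, .88, .90, .92, .93, .95]),
    "sadness": np.array([.10, .12, .15, .20, .30, .55, .70, .80, .85, .90]),
}

for emotion, acc in accuracy.items():
    (threshold, slope), _ = curve_fit(psychometric, intensity, acc, p0=[0.5, 5.0])
    print(f"{emotion}: threshold={threshold:.2f}, slope={slope:.1f}")
```

A low fitted threshold for anger versus a high one for sadness would capture, in two numbers per emotion, why added intensity helps recognition of some emotions far more than others.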

These results tell us that some emotions may be much harder than others to identify at low intensities. This is important, because those low-intensity versions are likely more common than full-intensity expressions in the real world. The findings of the current study point us to emotion types that may be particularly challenging to identify at low intensities, such as happiness. They also make us think twice about assuming that listeners’ accuracy is a direct reflection of linear changes in the vocal cues associated with different emotions.

To read more, visit: https://psycnet.apa.org/record/2021-86253-001