Social cognition

Higher emotional intensity doesn't always make vocal cues easier to recognize: it depends on the emotion type

[Image: man speaking angrily into a phone]

We know that someone’s tone of voice can be a good indicator of what they are feeling in that moment. Someone who is speaking quickly, loudly, and with a high-pitched voice is likely to be feeling angry—and many of us would be able to infer this from their vocal cues. But what if these cues were more subtle? Would we still be able to understand that they were upset based on their tone of voice?

The current study examined whether listeners’ ability to identify a speaker’s emotional state depended on the level of ‘emotional intensity’ with which an emotion was conveyed. Intuitively, it makes sense that high-intensity expressions would be better recognized than low-intensity ones. But less is known about how fine-grained variations in vocal cues influence the accuracy of listeners’ interpretations.

To answer our questions, we created a series of recordings that varied from neutral expressions to full-intensity expressions of different emotions, in 10% intervals. We used actors’ full-intensity and neutral expressions from a previous study as endpoints, and blended them to create recordings that were 10% neutral and 90% angry, 20% neutral and 80% angry, 30% neutral and 70% angry, and so on. We presented these recordings to listeners in increasing order of intensity and asked them to identify the emotion conveyed in each. From this, we obtained an estimate of how listeners’ accuracy changed across increasing levels of emotional intensity, for each emotion type.
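
The blending itself would typically be done with dedicated voice-morphing software. As a conceptual sketch only, assuming the blend can be approximated as a weighted average of acoustic feature tracks such as the pitch contour (all functions and values below are hypothetical):

```python
import numpy as np

def blend_feature_tracks(neutral: np.ndarray, emotional: np.ndarray,
                         emotion_weight: float) -> np.ndarray:
    """Weighted blend of a neutral and a full-intensity acoustic feature
    track (e.g., a pitch contour sampled at fixed time points).
    emotion_weight=0.9 yields a '90% angry / 10% neutral' track."""
    assert neutral.shape == emotional.shape
    return (1.0 - emotion_weight) * neutral + emotion_weight * emotional

# Hypothetical pitch contours (in Hz), for illustration only.
neutral_f0 = np.full(100, 120.0)            # flat, calm contour
angry_f0 = np.linspace(230.0, 180.0, 100)   # higher, falling contour

# Build the continuum in 10% steps: 0%, 10%, ..., 100% angry.
continuum = [blend_feature_tracks(neutral_f0, angry_f0, w)
             for w in np.linspace(0.0, 1.0, 11)]
```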

[Figure: recognition accuracy (y-axis) plotted against increasing levels of emotional intensity (x-axis), with differently shaped curves for each emotion]

We found that listeners’ ability to identify each emotion did not increase linearly with the recordings’ intensity level. Even though the acoustic characteristics of the recordings changed linearly with intensity, listeners’ accuracy did not track these changes. Further, the exact pattern of change in accuracy across intensity levels varied across emotions. For example, anger was easy for listeners to recognize even at low intensity levels (see blue arrow in the figure above). In contrast, listeners could not reliably identify sadness until it was expressed with more than 50% emotional intensity (see green arrow).
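
The paper's exact model isn't specified here, but one common way to characterize such accuracy-by-intensity curves is to fit a sigmoid to each emotion and compare the fitted threshold and slope. A hypothetical sketch, with invented accuracy values that mirror the anger-versus-sadness pattern described above:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(intensity, threshold, slope):
    """Accuracy as a sigmoid of emotional intensity (on a 0-1 scale)."""
    return 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))

intensity = np.linspace(0.1, 1.0, 10)  # 10%, 20%, ..., 100% morphs

# Invented mean accuracies: anger is recognized even at low intensity,
# sadness only becomes identifiable above roughly 50% intensity.
acc_anger = np.array([.55, .70, .80, .86, .90, .92, .94, .95, .96, .97])
acc_sad   = np.array([.10, .12, .15, .20, .35, .55, .70, .80, .86, .90])

for label, acc in [("anger", acc_anger), ("sadness", acc_sad)]:
    (thr, slp), _ = curve_fit(logistic, intensity, acc, p0=[0.5, 5.0])
    print(f"{label}: threshold = {thr:.2f}, slope = {slp:.1f}")
```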

These results tell us that some emotions may be much harder than others to identify at low intensities. This is important, because those low-intensity versions are likely more common than full-intensity expressions in the real world. The findings of the current study point us to emotion types that may be particularly challenging to identify at low intensities, such as happiness. They also make us think twice about assuming that listeners’ accuracy is a direct reflection of linear changes in the vocal cues associated with different emotions.

To read more, visit: https://psycnet.apa.org/record/2021-86253-001

Youth with and without epilepsy differ in 'social brain' connectivity during a social cognitive task, but not at rest

Deficits in social cognition are common in people with epilepsy. This means that individuals with epilepsy may struggle to understand others' intentions in social situations, may find it harder to interpret others' facial expressions or tone of voice, or may have trouble forming social connections. We know that epilepsy is associated with atypical functioning in regions of the brain thought to be involved in social cognition, but most existing research has examined patterns of brain connectivity at rest, that is, when participants are not engaged in any particular task.

The current study investigated whether youth with epilepsy showed different brain connectivity patterns in these 'social brain' areas while completing a social cognition task. To answer this question, we compared brain connectivity within the "mentalizing network" (involved in theory of mind and other social cognitive functions) and within a network centered around the amygdala (involved in processing salient social information) in youth with and without epilepsy, while they were either completing a facial emotion recognition task or at rest.
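
'Connectivity' in this context means functional connectivity, commonly quantified as the correlation between the fMRI signal time courses of two brain regions. A minimal sketch of that idea, assuming region-averaged time series and skipping the preprocessing steps a real pipeline would include:

```python
import numpy as np

def functional_connectivity(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Pearson correlation between two regions' average BOLD time series.
    Higher values mean the regions' activity rises and falls together."""
    return float(np.corrcoef(roi_a, roi_b)[0, 1])

# Simulated time series (one value per fMRI volume) for two hypothetical
# nodes of the mentalizing network, sharing a common signal component.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
psts = shared + 0.5 * rng.standard_normal(200)
mpfc = shared + 0.5 * rng.standard_normal(200)

print(f"pSTS-mPFC connectivity: {functional_connectivity(psts, mpfc):.2f}")
```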

Compared to typically-developing youth, youth with epilepsy showed weaker connectivity between the left posterior superior temporal sulcus and the medial prefrontal cortex when viewing facial expressions in the emotion recognition task. These regions are thought to work together during social cognitive tasks, so decreased connectivity between them may indicate that these network nodes aren't communicating as efficiently as they could be in youth with epilepsy.

On the flip side, we found that youth with epilepsy had greater connectivity within the temporal lobe (between the left temporo-parietal junction and the anterior temporal cortex, to be precise) compared to typically-developing adolescents. This pattern was associated with poorer accuracy on the facial emotion recognition task. It is possible that youth with epilepsy use a different 'strategy' in the task that results in different connectivity patterns in the temporal lobe, but we would need to test this possibility explicitly in future studies.

In contrast to these findings, youth with and without epilepsy did not differ in their connectivity within either social brain network during resting-state scans (i.e., when they weren't doing a task).


Overall, our findings highlight that there may be important differences in how regions associated with social cognition are connected to one another during social cognitive tasks in youth with and without epilepsy. Although this is only a first step in understanding this phenomenon, our results indicate that looking at neural connectivity patterns during relevant tasks may be important to understanding the association between epilepsy and social cognitive deficits.

Find out more and read the paper here: https://www.sciencedirect.com/science/article/abs/pii/S0028393221001330

Ongoing maturation of neural responses to voices—but not faces—in adolescence

With age, we become better able to understand the meaning behind others’ nonverbal cues. In other words, we become more skilled at identifying others’ emotional states or attitudes based on their facial expressions, postures or gestures, or tone of voice. Previous research has found that the ability to recognize emotions in vocal cues (i.e., the way in which someone says something, beyond their verbal content) follows a more protracted developmental trajectory throughout adolescence than the same ability with facial expressions. Do the neural representations of these two types of nonverbal cues also differ in their maturational trajectories?

The current study examined age-related changes in (a) facial and vocal emotion recognition skills and (b) neural activation to both types of stimuli during adolescence. Participants aged 8 to 19 years completed both a facial and a vocal emotion recognition task—in which they were asked to identify the intended emotion in other teenagers’ facial expressions and voices—while undergoing functional magnetic resonance imaging (fMRI).

We found that accuracy on the emotion recognition tasks began to plateau around age 14 for faces, but continued to increase linearly throughout adolescence for voices. At the neural level, a variety of subcortical regions, visual-motor association areas, prefrontal regions, and the right superior temporal gyrus responded to both faces and voices. While there were no age-related changes in activation within these areas in response to faces, prefrontal regions (specifically, the inferior frontal cortex and dorsomedial prefrontal cortex) were more engaged by voices in older adolescents.

These findings suggest that vocal emotion recognition skills, and the associated neural responses in frontal regions of the brain, continue to develop throughout adolescence, following a more protracted trajectory than other social cognitive skills such as the ability to interpret facial emotions. This may make it harder for teenagers to navigate social situations in which they must rely on vocal cues—for instance, when others are wearing masks.
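
One way to formalize 'plateaus around age 14 for faces, keeps rising for voices' is to fit competing growth models to accuracy-by-age data and compare their fit. The sketch below illustrates that logic with simulated data; it is not the analysis reported in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(age, b0, b1):
    """Accuracy keeps increasing steadily with age."""
    return b0 + b1 * age

def plateau(age, b0, b1, knee):
    """Accuracy increases linearly up to a 'knee' age, then levels off."""
    return b0 + b1 * np.minimum(age, knee)

# Simulated face-task accuracy that levels off near age 14.
ages = np.linspace(8, 19, 60)
rng = np.random.default_rng(1)
acc = plateau(ages, 0.3, 0.04, 14.0) + 0.02 * rng.standard_normal(60)

for name, model, p0 in [("linear", linear, [0.3, 0.03]),
                        ("plateau", plateau, [0.3, 0.03, 13.0])]:
    params, _ = curve_fit(model, ages, acc, p0=p0)
    rss = np.sum((acc - model(ages, *params)) ** 2)
    aic = len(ages) * np.log(rss / len(ages)) + 2 * len(params)
    print(f"{name}: AIC = {aic:.1f}")  # lower AIC indicates a better fit
```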


Emotional faces elicit less activation in face-processing regions of the brain in youth with epilepsy


Youth with epilepsy sometimes report having a hard time forming and maintaining relationships with others. This may be due to a variety of factors, but some research has suggested that deficits in “emotion recognition”—the ability to interpret emotion in others’ facial expressions or tone of voice—may make it more challenging for youth with epilepsy to navigate social interactions. Difficulties in emotion recognition tend to be more pronounced in adults with childhood-onset epilepsy, suggesting that recurrent seizures may disrupt the integrity of brain circuits involved in this social-cognitive skill during youth. However, though previous studies had investigated the neural representation of emotional faces in adults, none had examined the neural correlates of emotional face processing in youth with epilepsy.

The current study examined whether emotional faces elicited different neural responses in the brains of youth with and without epilepsy—and whether such differences were related to deficits in emotion recognition. Participants completed a facial emotion recognition task, in which they were asked to identify the emotion in other teenagers’ facial expressions, while undergoing functional magnetic resonance imaging (fMRI).

We found that, compared to typically-developing youth, youth with epilepsy were less accurate on the facial emotion recognition task. In addition, youth with epilepsy showed blunted activation in the fusiform gyrus and right posterior superior temporal sulcus—two regions that play an important role in the processing of faces and social information. Reduced activation in these regions was correlated with poorer accuracy on the facial emotion recognition task. Together, our results suggest that reduced engagement of brain regions involved in processing socio-emotional signals may contribute to the difficulties in social cognition experienced by youth with epilepsy.

Read more about the study here.

Age-related changes in adolescents’ neural connectivity and activation when hearing vocal prosody


The ability to understand others' emotional states based on their tone of voice (vocal emotional prosody) develops throughout adolescence. Does neural activation to vocal prosody also change with age during the teenage years? We asked 8- to 19-year-old youth to complete a vocal emotion recognition task, in which they had to identify speakers' intended emotion based on their prosody, while in the MRI scanner.

Age was associated with greater functional activation in regions of the frontal lobe often associated with language processing and emotional categorization. Further, age was linked to greater structural and functional connectivity between these frontal regions and the temporo-parietal junction, an area crucial for social cognition. These maturational changes were associated with greater accuracy in identifying the intended emotion in others' voices, suggesting that these neurodevelopmental processes may support the growth of vocal emotion recognition skills during adolescence.
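
As in the studies above, 'associated with age' reflects a regression of each neural measure on participant age. A minimal sketch with simulated values, purely for illustration:

```python
import numpy as np
from scipy.stats import linregress

# Simulated sample: 50 hypothetical participants aged 8-19, with frontal
# activation that increases modestly with age plus individual noise.
rng = np.random.default_rng(2)
ages = rng.uniform(8, 19, 50)
activation = 0.05 * ages + 0.3 * rng.standard_normal(50)

# Regress activation on age; a positive, significant slope would be
# reported as 'greater activation with age'.
result = linregress(ages, activation)
print(f"slope = {result.slope:.3f}, p = {result.pvalue:.4f}")
```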