How does neural connectivity during emotion recognition change over time in youth?

By Sophie Ye

How do we identify what emotions others are expressing? Our facial and vocal emotion recognition (ER) abilities help us decode visual cues from other people's facial expressions and auditory cues from their voices. Facial and vocal ER are supported by hierarchically organized systems in the brain, meaning the information is first processed in primary sensory areas (the visual and auditory cortices, respectively) and then by more complex networks. The neural correlates of vocal and facial ER may develop at different rates during adolescence, which could help explain why adolescents’ performance in facial ER tasks reaches adult-like levels before their performance in vocal ER tasks. So far, research has mostly focused on how activation in different brain areas during ER changes across childhood and adolescence. Because processing emotional stimuli requires connections between many areas, the current study examined how neural connectivity during these tasks changes over time.

What did we expect to find?

Our study set out to examine youth’s brain connectivity while they completed facial and vocal ER tasks. We hypothesized that connectivity patterns would differ depending on the modality tested (i.e., face vs. voice; Hypothesis 1). We also hypothesized that changes in connectivity would emerge across the one-year span of this longitudinal study, based on existing research on the maturation of several facial and vocal processing areas in the brain and on developmental changes in their connectivity (Hypothesis 2). Furthermore, since vocal ER skills take longer to develop and undergo a more pronounced maturation across adolescence, we expected to see more changes over time in connectivity (Hypothesis 3) and task accuracy (Hypothesis 4) for the vocal than for the facial ER task.

What did we do?

To test these hypotheses, we looked at functional magnetic resonance imaging (fMRI) data from 41 youth participants, aged 8 to 19 years, who completed facial and vocal ER tasks in an fMRI scanner. The scanner allows us to measure brain activity by detecting changes in blood flow, and we looked specifically at how different regions of the brain worked together during the task (i.e., neural connectivity). In both tasks, participants were presented with emotional stimuli produced by other teens and asked to identify which emotion was being expressed: anger, fear, happiness, sadness, or neutral. One year later, the participants completed the same tasks in the fMRI scanner again.

What did we find?

After analyzing our data, we found several significant results. First, we found that vocal and facial ER were supported by distinguishable brain networks, meaning that regions of the brain are differentially connected with one another when completing a facial vs. a vocal ER task, consistent with Hypothesis 1. Second, there were developmental changes in functional connectivity from Time 1 to Time 2 for both facial and vocal ER, consistent with Hypothesis 2. Some ‘edges’ (i.e., connections between two nodes, or regions of the brain, in a network) showed stronger connectivity at Time 1, whereas others showed stronger connectivity at Time 2; overall, these developmental changes were greater for vocal ER (consistent with Hypothesis 3).

We also found that participants’ task performance (measured as sensitivity, i.e., the proportion of trials on which they correctly identified the emotion minus the proportion on which they were incorrect) was 0.85 for the facial ER task and 0.29 for the vocal ER task. (You can think of this as being 85% vs. 29% accurate on the task, once both correct and incorrect responses are taken into account!) There was a significant effect of modality on sensitivity, whereby faces were better recognized than voices, as well as a significant effect of age, whereby older participants recognized both types of expressions better than younger participants. Although behavioural accuracy improved more over time for vocal than for facial ER, this trend was not statistically significant, only partially supporting Hypothesis 4.
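To make the sensitivity score concrete, here is a minimal sketch of one plausible way to compute it, assuming it is the number of correct identifications minus the number of incorrect ones, scaled by the total number of trials (the exact formula used in the study may differ):

```python
def sensitivity(n_correct: int, n_incorrect: int) -> float:
    """Proportion of correct responses minus proportion of incorrect ones.

    This is an illustrative reading of the 'correct minus incorrect'
    description, not necessarily the study's exact computation.
    """
    n_trials = n_correct + n_incorrect
    return (n_correct - n_incorrect) / n_trials

# e.g., 37 correct vs. 3 incorrect identifications out of 40 trials
print(sensitivity(37, 3))  # → 0.85
```

Under this reading, a score of 0.85 means correct responses outnumbered incorrect ones by 85% of all trials, while a score near 0 means correct and incorrect responses were roughly balanced.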

These results suggest that ongoing network integration across development may support the growth of ER skills during childhood and adolescence. Although more research in larger samples is required to corroborate our preliminary evidence for the neural mechanisms underlying ER, our findings show that measuring ER in different modalities is important for fully understanding the development of key social cognitive skills in youth.

Morningstar, M., Hughes, C., French, R.C., Grannis, C., Mattson, W.I., & Nelson, E.E. (2024). Functional connectivity during facial and vocal emotion recognition: Preliminary evidence for dissociations in developmental change by nonverbal modality. Neuropsychologia, 202, 108946. doi: 10.1016/j.neuropsychologia.2024.108946