How does neural connectivity during emotion recognition change over time in youth?

By Sophie Ye

How do we identify what emotions others are expressing? Our facial and vocal emotion recognition (ER) abilities help us to decode visual cues from other people's facial expressions and auditory cues from their voices. Facial and vocal ER are supported by hierarchically organized systems in the brain, meaning the information is first processed in primary sensory areas (like the visual and auditory cortices, respectively) and then by more complex networks. The neural correlates of vocal and facial ER may be developing at different rates during adolescence, which could help explain why adolescents’ performance in facial ER tasks reaches adult-like levels before their performance in vocal ER tasks. So far, research has mostly focused on changes in how different areas of the brain are activated during ER, across childhood and adolescence. Because processing emotional stimuli requires connections between many areas, the current study wanted to see how neural connectivity during these tasks changed over time.

What did we expect to find?

Our study set out to look at youth’s brain connectivity while they completed facial and vocal ER tasks. We hypothesized that connectivity patterns would be different depending on the type of modality tested (i.e., face vs. voice; Hypothesis 1). We also hypothesized that changes in connectivity would be noted across the one-year time span of this longitudinal study, based on existing research on the maturation of several facial and vocal processing areas in the brain and their changes in connectivity with development (Hypothesis 2). Furthermore, since vocal ER skills take longer to develop and undergo a more pronounced maturation across adolescence, we expected to see more changes over time in connectivity (Hypothesis 3) and task accuracy (Hypothesis 4) for the vocal than the facial ER task.

What did we do?

To test these hypotheses, we looked at functional magnetic resonance imaging (fMRI) data from 41 youth participants, aged 8 to 19 years old, who completed facial and vocal ER tasks in an fMRI scanner. The scanner allows us to measure brain activity by detecting changes in blood flow, and we looked specifically at how different regions of the brain worked together during the task (i.e., neural connectivity). In both tasks, participants were presented with emotional stimuli produced by other teens and asked to identify which emotion was being expressed: anger, fear, happiness, sadness, or neutral. One year later, the participants completed the same task in the fMRI scanner again.

What did we find?

After analyzing our data, we found several significant results. First, we found that vocal and facial ER were supported by distinguishable brain networks, meaning that regions of the brain are differentially connected with one another when completing a facial vs. a vocal ER task, consistent with Hypothesis 1. Second, there were developmental changes in functional connectivity from Time 1 to Time 2 for both facial and vocal ER, consistent with Hypothesis 2. Some ‘edges’ (i.e., the strength of the relationship between two nodes, or regions of the brain, in a network) had stronger connectivity at Time 1 whereas others had stronger connectivity at Time 2, but overall these developmental changes were greater for vocal ER (consistent with Hypothesis 3).
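For readers curious what an ‘edge’ looks like in practice, here is a minimal sketch in Python (our illustration, not the study’s actual analysis pipeline): an edge’s strength can be estimated as the correlation between two regions’ activity over the course of a scan.

    import numpy as np

    # Hypothetical fMRI data: 200 time points for 3 brain regions (nodes).
    rng = np.random.default_rng(0)
    timeseries = rng.standard_normal((200, 3))  # columns = regions A, B, C

    # The connectivity matrix holds one 'edge' per pair of nodes:
    # the correlation between their activity across the scan.
    connectivity = np.corrcoef(timeseries.T)

    # Strength of the edge between region A and region B
    print(connectivity[0, 1])

Comparing such edge strengths between tasks (face vs. voice) and between timepoints is, conceptually, the kind of comparison the network analyses in this study involved.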

We also found that participants’ task performance (in terms of sensitivity, i.e., the proportion of trials on which they correctly identified the emotion minus the proportion on which they were incorrect) was 0.85 for the facial ER task and 0.29 for the vocal ER task. (You can think of this as being roughly 85% vs. 29% accurate on the task, once both correct and incorrect responses are accounted for!) There was a significant effect of modality on sensitivity, whereby faces were better recognized than voices, as well as a significant effect of age, whereby older participants recognized both types of expressions better than younger participants. While we found a more pronounced development in behavioural accuracy over time for vocal ER, this trend was not statistically significant, partially supporting Hypothesis 4.
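As a back-of-the-envelope illustration of how a score like this could be computed (our assumed formula, based on the description above; the paper’s exact scoring may differ):

    # Hypothetical participant: 40 facial ER trials, 37 answered correctly.
    correct, total = 37, 40
    incorrect = total - correct

    # Assumed formula: proportion correct minus proportion incorrect.
    sensitivity = (correct - incorrect) / total
    print(sensitivity)  # 0.85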

These results suggest that ongoing network integration across development may support the growth of ER skills during childhood and adolescence. Although more research in bigger samples is required to back up our preliminary evidence for neural mechanisms underlying ER, our findings show that measuring ER in different modalities is important to fully understand the development of key social cognitive skills in youth.

Morningstar, M., Hughes, C., French, R.C., Grannis, C., Mattson, W.I., & Nelson, E.E. (2024). Functional connectivity during facial and vocal emotion recognition: Preliminary evidence for dissociations in developmental change by nonverbal modality. Neuropsychologia, 202, 108946. doi: 10.1016/j.neuropsychologia.2024.108946

Teens' tone of voice when responding to peer provocation influences how they are perceived by other adolescents

By Daniel Nault

Did you know that up to 25% of teenagers in Canada report being bullied by their peers at least twice per week? These rates are alarming given what we know about how bullying can lead to negative mental health outcomes and academic challenges. In efforts to reduce bullying, researchers have identified some of the ways that teenagers respond to bullying episodes. When asked what they would do in made-up situations involving bullying, teenagers often report either responding aggressively (e.g., by calling the person a mean name), asserting their needs (e.g., by telling the person that they don’t like what they’re doing), or withdrawing from the situation (e.g., by walking away and saying/doing nothing about it). While we know that some responses (e.g., assertion) might be more effective than others (e.g., aggression) at stopping bullying, not all responses work equally well for all teens. We don’t know very much about why that is. In this study, we tested one possible reason why some responses might be more effective for some students than others: how the response is said with the voice.

Imagine the following scenario: after being shoved by another kid, Caleb tells them to “stop doing that”. Usually, that would be considered an assertive response. But it may not solve the problem if Caleb says it in a timid, tentative-sounding voice. We set out to test whether the tone of voice someone uses when responding to bullying influences how that response is perceived by other teens.

To do this, we first asked a group of 39 teenagers (SPEAKERS) to fill out a questionnaire about how they would respond to made-up situations involving bullying. Their choices were broken down into the three main ways of responding that have been identified in previous research: (1) aggression, (2) assertion, and (3) withdrawal. We also made audio recordings of them saying responses to bullying. Speakers listened to some made-up situations involving bullying and were asked to tell us how they would say the following sentences if those situations happened to them: (1) “I didn’t know that” and (2) “Why are you saying that?”. We used speech analysis to measure the pitch (highness/lowness), intensity (volume), and speech rate (speed) of their voice when saying these sentences. These acoustic cues allow us to quantify each speaker’s tone of voice when speaking.
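To give a concrete flavour of this kind of acoustic analysis, here is a rough sketch in Python using the librosa library (our choice for illustration; not necessarily the software used in the study), with a hypothetical file name and a hand-counted syllable total:

    import numpy as np
    import librosa

    # Load one (hypothetical) recording of a spoken response.
    y, sr = librosa.load("response.wav", sr=None)

    # Pitch: fundamental frequency (Hz), estimated frame by frame.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    mean_pitch = np.nanmean(f0)  # unvoiced frames come back as NaN

    # Intensity: root-mean-square energy per frame (a proxy for loudness).
    mean_intensity = librosa.feature.rms(y=y).mean()

    # Speech rate: syllables per second (syllable count entered by hand here).
    n_syllables = 5  # "I did-n't know that"
    speech_rate = n_syllables / librosa.get_duration(y=y, sr=sr)

    print(mean_pitch, mean_intensity, speech_rate)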

Next, we recruited a separate group of 133 teenagers (LISTENERS) to listen to the audio recordings of the speakers saying their responses to bullying. They listened to each recording and were asked how mean/friendly each response sounded to them on a scale from (1) Mean to (7) Friendly.

When we analyzed the data, we found that there was a relationship between speakers’ responses on the self-report questionnaire and listeners’ meanness/friendliness ratings of speakers’ vocal responses. Speakers who were more likely to say they would respond to bullying episodes using aggressive strategies tended to say their responses to bullying in ways that listeners rated as sounding meaner. We also found that speakers who said their responses with a faster speech rate were rated by listeners as sounding friendlier.

The findings from our study shed light on the importance of paying attention not only to what youth say and/or do when responding to bullying, but also to how they say it. Bullying prevention programs mostly teach teenagers what to say and/or do to stop bullying, but often pay little or no attention to teaching teenagers how to say and/or do it. Our study found some evidence that talking faster when responding to bullying sounds friendlier to other teens… and might help to de-escalate peer conflict.

Mothers with and without a history of depression use different acoustic cues when expressing happiness

By Emma Ilyaz

Depression is related to a whole host of outcomes, including changes to nonverbal emotional expressions. These can include a person’s facial expressions, body language and posture, and tone of voice (“vocal prosody”: not what you say, but HOW you say it). Despite some evidence that people with depression have altered vocal prosody, we don’t yet know how vocal prosody differs during emotional speech (e.g., when expressing anger or happiness). This type of speech is thought to be especially important for the parent-child relationship, because parents often use emotional-sounding speech (called “infant-directed speech”) when talking to their young ones. Is emotional vocal prosody different in parents who experienced depression during their child’s lifetime, vs. those who did not? Our study set out to understand whether mothers with or without a recent history of depression had differences in their vocal prosody during emotional expression.

Our study included eighty-one mothers and their 3- to 4-year-old children. Thirty-nine of the mothers had a history of depression during their child’s lifetime, while the rest of them did not. Mothers were recorded while they portrayed twenty different sentences in a neutral, angry, and happy tone of voice. The sentences were made to align with common phrases a mother might say to their child (e.g., “Did you put your shoes away?”). After this, we used specialized software to objectively characterize their tone of voice! This involved editing each recording to ensure it had good sound quality, then extracting the values of its pitch (the up-and-down pattern of speech), intensity (loudness), and speech rate (how fast or slow the mother was speaking). Additionally, we asked four hundred and ten other adults to listen to these recordings and tell us how intense, recognizable, and authentic mothers’ emotional speech was. We looked to see whether there were differences between mothers with and without a history of depression on our objective speech analysis, and whether these differences could be picked up by listeners.
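As an illustrative sketch of the kind of group comparison this involves (with invented numbers and Python’s scipy; not the study’s actual data or statistical software), pitch range during happy speech could be compared between the two groups like this:

    import numpy as np
    from scipy import stats

    # Hypothetical pitch ranges (Hz) during happy sentences, one value per
    # mother. These numbers are invented purely for illustration.
    history_of_depression = np.array([80, 95, 70, 88, 76, 90])
    no_history = np.array([120, 110, 135, 125, 118, 130])

    # Independent-samples t-test: do the two groups differ, on average?
    t_stat, p_value = stats.ttest_ind(history_of_depression, no_history)
    print(t_stat, p_value)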

In our objective speech analysis, we found that when expressing happiness, mothers with a history of depression had less range in their pitch (i.e., sounded more monotone) and spoke slower than mothers without a history of depression. While we did not find that mothers with a history of depression were rated any differently by the independent listeners, we did find that recordings with less pitch range and slower speech rate were rated as less intensely, recognizably, and authentically happy. This signals to us that while there are objective differences in speech between mothers with and without a history of depression, whether listeners pick up on these differences may depend on each individual mother’s speech pattern.

These findings suggest that mothers with depression may be speaking in a way that blurs the emotional intent of their message. In turn, this might be affecting the way that their children learn about emotion. There is still much to learn about how different vocal prosody impacts children. But, since vocal prosody can easily be modified, an early suggestion might be to encourage mothers who have a history of depression to exaggerate their happy voice when speaking to their young children to help clarify their emotional messages!

Ilyaz, E., Feng, X., Fu, X., Nelson, E.E., & Morningstar, M. (2024). Vocal emotional expressions in mothers with and without a history of major depressive disorder. Journal of Nonverbal Behavior. doi: 10.1007/s10919-024-00462-z

How teenage girls' brains respond to peer rejection and acceptance explains the link between risk for depression and later symptoms

Adolescence is a time in which rates of depression increase dramatically. Only 2% of children experience depression, but 15-20% of adolescents do! This suggests that adolescence is a time of heightened risk for the emergence of depressive symptoms.

There are many changes in adolescence that could explain this phenomenon. First, teens are very sensitive to social feedback from their peers. Their brains show a heightened response to signs of rejection in regions involved in processing emotional and social information. Teens who struggle with depression show atypical responses to rejection in these brain regions, and there is some indication that youth who are at familial risk for depression (i.e., aren’t currently depressed but have a parent who has been) do as well. Does this pattern of neural response explain why some youth at risk for depression develop later symptoms?

We set out to test this question by recruiting three groups of teenage girls who were at different levels of risk for depression. Because depression is heritable, people whose parents had or have a diagnosis of depression are at higher risk of developing depression themselves. This doesn’t mean that they will develop depression, but they could be more likely to. Our three groups were:

  • Girls who were not currently depressed, and did not have parents who had been/were depressed (“low risk” group)
  • Girls who were not currently depressed, but who had at least one parent with a current or past diagnosis of depression (“high risk” group)
  • Girls who were currently depressed (“depressed” group)

All groups completed a “Chatroom” game while they were in an MRI (magnetic resonance imaging) scanner. During this game, participants saw photos of two other teenagers with whom they could choose to discuss various topics. The other teenagers also made these decisions: they decided whether to chat with the participant (acceptance) or not (rejection). If participants were rejected by the other teen, a big red X would appear on their photo! If they were accepted, their photo was surrounded by a green border to indicate they had been selected. Of course, the game was rigged so that participants would get the same amount of accepting vs. rejecting feedback from these fictitious peers over time. In our analyses, we looked at how participants’ brains responded to rejection and acceptance. We examined a) whether the three different risk groups showed different patterns of neural response to being rejected or accepted by peers in the Chatroom game, and b) whether those patterns explained (mediated) the link between their risk status and depressive symptoms 6 and 12 months later.
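To give a flavour of how such ‘rigged’ feedback can be scheduled (a hypothetical sketch in Python; this is not the task’s actual code), the key constraint is that every participant receives the same balanced mix of acceptance and rejection, in a shuffled order:

    import random

    def make_feedback_schedule(n_trials=40, seed=1):
        """Build a balanced, shuffled sequence of peer feedback: half the
        trials are 'accept' and half 'reject', so every participant gets
        the same amounts of each over the course of the game."""
        assert n_trials % 2 == 0
        schedule = ["accept"] * (n_trials // 2) + ["reject"] * (n_trials // 2)
        random.Random(seed).shuffle(schedule)
        return schedule

    print(make_feedback_schedule()[:6])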

What did we find?

First, being accepted by fictitious teenagers in the Chatroom task (compared to being rejected) elicited activation in several regions of the social brain! These included the anterior insula, the medial prefrontal cortex, and the ventral striatum. Activation in the right insula, as well as in the right temporo-parietal junction (a.k.a., the “TPJ”: an area of the brain typically involved in social cognition), was highest in the high risk girls, for both acceptance and rejection. This means that these girls, who weren’t themselves experiencing depression but were at higher familial risk for depression, showed greater activation to the task overall than the other groups of girls! In contrast, girls in the depressed group showed blunted response to being accepted in the subgenual anterior cingulate cortex—a region that often shows atypical response to emotional stimuli in people experiencing depression. For girls experiencing depression, being selected to chat with a peer didn’t create the amount of activation in that region that other groups were showing!

Did these patterns matter for later depression symptoms? Greater right TPJ activation to rejection (like that seen in the high risk group) was associated with fewer depressive symptoms in the high risk girls… but greater TPJ response to acceptance was linked to more symptoms later on! This pattern of neural response explained some of the association between risk group and later symptoms. We were surprised by this finding, because we thought that responding more (at a neural level) to peer rejection might put someone more at risk of depressive symptoms, especially if they were already experiencing depression or at risk for it. That wasn’t the case! Our findings instead suggest that greater response to rejection in the TPJ might represent an adaptive process that protects against later symptoms in girls at higher risk for depression. But, high TPJ activation to acceptance might signal some “overprocessing” of positive peer feedback, in ways that may predict later struggles with depression.

Our findings tell us that our brains process social feedback from peers in nuanced ways. How youth’s brains responded to being rejected or accepted by peers seemed to interact with their depression risk to predict their psychological well-being in the future. Additional research is needed to understand this phenomenon, but our findings have important implications for interventions: it might be important to consider how youth at risk for depression are perceiving, interpreting, and dealing with both peer acceptance and rejection.

Read more about our study and findings:

Stroud, L.R., Morningstar, M., Vergara-Lopez, C., Bublitz, M.H., Lee, S.Y., Sanes, J.N., Dahl, R.E., Silk, J.S., Nelson, E.E., & Dickstein, D.P. (2023). Neural activation to peer acceptance and rejection in relation to concurrent and prospective depression risk in adolescent girls. Biological Psychology, 181, 108618. doi: 10.1016/j.biopsycho.2023.108618

Brain network specialization may support growth of vocal emotion recognition skills in adolescence


As youth mature, they become better able to identify other people’s emotional states based on their facial and vocal expressions (“emotion recognition”, or ER). How does this happen? This is still an open question! Amongst other changes to cognition and social interactions, changes to brain functioning during childhood and adolescence could play a role in the growth of emotion recognition skills. Although previous research has investigated how brain activation to facial expressions of emotion changes with age and across time, this study examined change in neural response to vocal emotional expressions.

We recruited children and teenagers aged 8 to 19 to participate in the study. While they were in an MRI scanner, we asked them to complete a vocal emotion recognition task: they were presented with recordings of other people saying things in different tones of voice, and had to select what emotion the speaker was expressing. Participants then came back one year later to do the same task again. We looked at how participants’ brains activated when they were hearing the emotional voices, compared to the neutral voices that we used as a baseline.

We asked three main questions:

  1. Did participants’ brain response to the emotional voices depend on their age?

  2. Did brain response change across 1 year’s time (i.e., from visit 1 to visit 2)?

  3. And, did the rate of change in brain response across visits depend on participants’ age?

We found that certain areas of the brain did respond differently in younger vs. older adolescents (Question 1, above): notably, there was an area of the prefrontal cortex that responded more strongly to emotional voices (than to neutral voices) in the older participants than the younger ones. This was a bit of a surprising finding, because previous research that asked similar questions about how the brain responds to emotional faces found that activation in this area of the brain decreased with age! We’re not sure why this result is different, but this may be because voices elicit different patterns of neural response than do faces.

We also found that regions like the dorsal striatum and inferior frontal gyrus—which have previously been implicated in coding the value of certain outcomes, and in ‘mentalizing’ tasks where people are asked to infer others’ emotions or mental states—showed less activation at the second visit than at the first (Question 2). Although we can only speculate about what this pattern means, it could reflect increased efficiency (i.e., needing less response in these areas) with repeated practice. Lastly, and perhaps most interestingly, we found that activation patterns in the right temporo-parietal junction (TPJ)—an area thought to be crucial to understanding others’ emotions and intentions—varied as a function of both age and time (Question 3). For younger teens, we saw increases in TPJ response between visits. But, for older teens, we found the opposite pattern. This inverted U-shaped pattern is consistent with the predictions of a prominent theory of neurodevelopment called the “interactive specialization” model (Johnson et al., 2000). According to this theory, early development is characterized by broad and diffuse patterns of activation (meaning that a given task would require lots of activation in many parts of the brain); with maturation, however, less activation is required, and fewer parts of the brain need to participate.
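For readers wondering how an age-by-time pattern like this can be tested statistically, here is a schematic sketch using a linear mixed model in Python’s statsmodels (hypothetical file and variable names; not the authors’ actual analysis code):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per participant per visit, where
    # 'tpj' is right TPJ activation to emotional (vs. neutral) voices.
    df = pd.read_csv("tpj_activation.csv")  # columns: subject, age, visit, tpj

    # Random intercept per participant; the age:visit interaction term tests
    # whether change across visits depends on participants' age.
    model = smf.mixedlm("tpj ~ age * visit", data=df, groups=df["subject"])
    print(model.fit().summary())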


What do these patterns of activation mean in the real world? Some of them were linked to performance on the task. Specifically, people who showed decreases in dorsal striatum and TPJ activation across timepoints did best on the vocal emotion recognition task. Therefore, our findings suggest that some of these changes in neural activation to emotional voices may be part of the neural mechanisms supporting increases in vocal ER skills across childhood and adolescence. Our findings help shed some light on how changes to our brain’s function may be helping us better navigate our social world during adolescence.

To learn more, visit https://doi.org/10.1093/scan/nsac021

Brain regions involved in ‘mentalizing’ process vocal emotions differently in youth with epilepsy

Previous research has found that youth with epilepsy are at risk for poorer social and relational outcomes. Although this is not true of everyone with epilepsy, many children and adolescents with this neurodevelopmental disorder report having a hard time making and maintaining friendships. Perhaps related to this, youth with epilepsy often also struggle to interpret others’ emotions: they tend to be less accurate than youth without epilepsy on emotion recognition tasks, where people are asked to identify the intended emotion in facial or vocal expressions. Why might this be? Some researchers have suggested that the brains of youth with epilepsy may respond differently to emotional faces, compared to youth without epilepsy. Could something similar be happening with emotional voices?

To answer this question, the current study recruited youth who had been diagnosed with intractable epilepsy (meaning they still experienced seizures despite taking medication to prevent them), and some who had not. Participants were asked to listen to recordings of emotional voices (e.g., angry voices, fearful voices, etc.) while they were in an MRI scanner. After each recording, participants were asked to indicate what emotion they thought was being expressed. We examined how accurate they were at determining the intended emotion in each recording, and how their brains responded to the different types of voices.

We found that youth with epilepsy were less accurate than youth without epilepsy on this vocal emotion recognition task—especially at younger ages. In addition, we found six regions of the brain that responded differently to the emotional voices in youth with vs. without epilepsy. Activation patterns in these areas (including regions like the right temporo-parietal junction, the right hippocampus, and the right medial prefrontal cortex) could actually predict whether any given participant had been diagnosed with epilepsy or not. Interestingly, many of these six regions are often found to be involved in ‘mentalizing tasks’, where participants are asked to make judgments about others’ emotions, thoughts, and beliefs. Our findings suggest that these brain areas might be responding differently when trying to interpret others’ emotions (based on their tone of voice) in youth with epilepsy. We don’t yet know whether these different patterns of activation are actually related to emotion recognition accuracy, or to social difficulties; they could simply reflect an alternative “strategy” when processing vocal emotional cues. Although more research is needed to determine this, our findings contribute to our understanding of how neurodiverse brains process social and emotional information.
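As a loose illustration of what ‘predicting diagnosis from activation patterns’ can mean in practice (a sketch with randomly generated data and scikit-learn; the study’s actual classification approach may differ):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical data: one row per participant, one column per brain
    # region's activation to emotional voices (6 regions, as in the study).
    rng = np.random.default_rng(0)
    activations = rng.standard_normal((40, 6))
    has_epilepsy = rng.integers(0, 2, size=40)  # 0 = no, 1 = yes

    # Cross-validated logistic regression: can activation patterns tell
    # apart participants with and without epilepsy better than chance?
    accuracy = cross_val_score(LogisticRegression(), activations,
                               has_epilepsy, cv=5)
    print(accuracy.mean())  # ~0.5 here, since this toy data is pure noise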

Higher emotional intensity doesn't always make vocal cues easier to recognize: it depends on the emotion type


We know that someone’s tone of voice can be a good indicator of what they are feeling in that moment. Someone who is speaking quickly, loudly, and with a high-pitched voice is likely to be feeling angry—and many of us would be able to infer this from their vocal cues. But what if these cues were more subtle? Would we still be able to understand that they were upset based on their tone of voice?

The current study examined whether listeners’ ability to identify a speaker’s emotional state depended on the level of ‘emotional intensity’ with which an emotion was conveyed. Intuitively, it makes sense that high-intensity expressions would be better recognized than low-intensity expressions. But, less is known about how fine-grained variations in vocal cues influence listeners’ accuracy of interpretation.

To answer our questions, we created a series of recordings that varied from neutral expressions to full-intensity expressions of different emotions, in 10% intervals. We used actors’ full-intensity and neutral expressions from a previous study as end points, and merged them together to create recordings that were 10% neutral and 90% angry, 20% neutral and 80% angry, 30% neutral and 70% angry, etc. We presented these recordings to listeners in increasing order of intensity, and asked them to tell us what emotion was being conveyed in each. From this, we obtained an estimate of the slope of listeners’ accuracy across increasing levels of emotional intensity, for each emotion type.
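To show how such a slope can be estimated (a sketch with made-up accuracy values; the paper’s exact curve-fitting approach may differ), one option is to fit a logistic function to each emotion’s accuracy-by-intensity curve:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, midpoint, slope):
        """Accuracy modelled as a logistic function of emotional intensity."""
        return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

    # Intensity levels (proportion of full emotion, in 10% steps) and
    # hypothetical accuracy values for one emotion (invented numbers).
    intensity = np.arange(0.1, 1.01, 0.1)
    accuracy = np.array([0.10, 0.15, 0.20, 0.35, 0.55,
                         0.70, 0.80, 0.85, 0.90, 0.92])

    params, _ = curve_fit(logistic, intensity, accuracy, p0=[0.5, 5.0])
    print(params)  # fitted midpoint and slope for this emotion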

[Figure: listeners’ recognition accuracy (y-axis) across increasing levels of emotional intensity (x-axis), with a separate curve for each emotion]

We determined that listeners’ ability to identify each emotion did not increase linearly alongside the recordings’ intensity level. Even though the acoustic characteristics of the recordings changed linearly with intensity, listeners’ accuracy did not map onto that. Further, the exact pattern of change in accuracy across intensity levels varied for different emotions. For example, anger was easy for listeners to recognize, even at low intensity levels. In contrast, listeners didn’t start to be able to identify sadness until it was expressed with more than 50% emotional intensity.

These results tell us that some emotions may be much harder than others to identify at low intensities. This is important, because those low-intensity versions are likely more common than full-intensity expressions in the real world. The findings of the current study point us to emotion types that may be particularly challenging to identify at low intensities, such as happiness. They also make us think twice about assuming that listeners’ accuracy is a direct reflection of linear changes in the vocal cues associated with different emotions.

To read more, visit: https://psycnet.apa.org/record/2021-86253-001

Youth with and without epilepsy differ in 'social brain' connectivity during a social cognitive task, but not at rest

Deficits in social cognition are common in people with epilepsy. This means that individuals with epilepsy may struggle to understand others' intentions in social situations, may find it harder to interpret others' facial expressions or tone of voice in social interactions, or may have trouble forming social connections with others. We know that epilepsy is associated with atypical functioning in regions of the brain that are thought to be involved in social cognition, but most existing research has examined patterns of brain connectivity at rest (that is, when participants are not engaged in any particular task). The current study wanted to investigate whether youth with epilepsy showed different brain connectivity patterns in these 'social brain' areas when participants were completing a social cognition task. To answer this question, we compared brain connectivity within the "mentalizing network" (involved in theory of mind and other social cognitive functions) and within a network centered around the amygdala (involved in processing salient social information) in youth with and without epilepsy, while they were either completing a facial emotion recognition task or were at rest.

Compared to typically-developing youth, youth with epilepsy showed weaker connectivity between the left posterior superior temporal sulcus and the medial prefrontal cortex of the brain when seeing facial expressions in the emotion recognition task. These regions are thought to work together during social cognitive tasks, so decreased connectivity between these areas may indicate that these network nodes aren't communicating as efficiently or as well as they could be in youth with epilepsy. On the flip side, we found that youth with epilepsy had greater connectivity within the temporal lobe (between the left temporo-parietal junction and the anterior temporal cortex, to be precise) compared to typically-developing adolescents. This pattern was associated with poorer accuracy on the facial emotion recognition task. It is possible that youth with epilepsy are using a different 'strategy' in the task that results in different brain connectivity patterns in the temporal lobe, but we would need to test this possibility explicitly in future studies. In contrast to these findings, youth with and without epilepsy did not differ in their connectivity within either social brain network during resting-state scans (i.e., when they weren't doing a task).


Overall, our findings highlight that there may be important differences in how regions associated with social cognition are connected to one another during social cognitive tasks in youth with and without epilepsy. Although this is only a first step in understanding this phenomenon, our results indicate that looking at neural connectivity patterns during relevant tasks may be important to understanding the association between epilepsy and social cognitive deficits.

Find out more and read the paper here: https://www.sciencedirect.com/science/article/abs/pii/S0028393221001330

Ongoing maturation of neural responses to voices—but not faces—in adolescence

With age, we become better able to understand the meaning behind others’ nonverbal cues. In other words, we become more skilled at identifying others’ emotional states or attitudes based on their facial expressions, postures or gestures, or tone of voice. Previous research has found that the ability to recognize emotions in vocal cues (i.e., the way in which someone says something, beyond their verbal content) follows a more protracted developmental trajectory throughout adolescence than the same ability with facial expressions. Are there similar differences in the maturational trajectory of the neural representation of both types of nonverbal cues?

The current study examined age-related changes in a) facial and vocal emotion recognition skills, and b) neural activation to both types of stimuli in adolescence. A group of 8- to 19-year-old participants completed both a facial and a vocal emotion recognition task—in which they identified the intended emotion in other teenagers’ facial expressions and voices—while undergoing functional magnetic resonance imaging (fMRI).

We found that accuracy on the emotion recognition tasks began to plateau around age 14 for faces, but continued to increase linearly throughout adolescence for voices. At a neural level, a variety of subcortical regions, visual-motor association areas, prefrontal regions, and the right superior temporal gyrus responded to both faces and voices. While there were no age-related changes in activation within these areas when responding to faces, prefrontal regions (specifically, the inferior frontal cortex and dorsomedial prefrontal cortex) were more engaged when hearing voices in older adolescents. These findings suggest that vocal emotion recognition skills, and the associated neural responses in frontal regions of the brain, continue to mature throughout adolescence, following a more protracted trajectory than other social cognitive skills like the ability to interpret facial emotions. This may make it harder for teenagers to navigate social situations in which they must rely on vocal cues—for instance, when others are wearing masks.


Social information processing in pediatric anxiety and depression

Anxiety and depression increase in prevalence during the teenage years. Adolescence is considered a sensitive period for the development of internalizing disorders, due in part to the dramatic changes in body, brain, and behaviour that occur at this time. Shifting interactions between limbic and executive function networks during adolescence may underlie maladaptive processing of positive and negative stimuli. For instance, typically-developing youth generally show enhanced amygdala and reduced prefrontal responses to emotional stimuli. In contrast, youth with anxiety and depression show heightened activation to negative or threatening stimuli in both regions, and reduced amygdala response to positive stimuli. These neural patterns translate to heightened processing of threat cues but reduced response to reward, both of which are hallmarks of anxiety and depression. Alterations in social information processing in the brain may have effects on behaviour and associated psychosocial well-being. Early intervention to ameliorate deficits in social information processing may be effective in preventing the long-term consequences of pediatric affective disorders.

Read more in this chapter of the Oxford Handbook of Developmental Cognitive Neuroscience.
