Scientists trace facial gestures back to their source: before a smile appears, the brain has already decided
New study in Science reveals a neural hierarchy that converts intention into expression before the face even moves
Every time we smile, grimace, or flash a quick look of surprise, it feels effortless, but the brain is quietly coordinating an intricate performance. This study shows that facial gestures aren’t controlled by two separate “systems” (one for deliberate expressions and one for emotional ones), as scientists long assumed. Instead, multiple face-control regions in the brain work together, using different kinds of signals: some are fast and shifting, like real-time choreography, while others are steadier, like a held intention. Remarkably, these brain patterns appear before the face even moves, meaning the brain starts preparing a gesture in advance, shaping it not just as a movement but as a socially meaningful message. That matters because facial expressions are among our most powerful tools for communication, and understanding how the brain builds them helps explain what can go wrong after brain injury or in conditions that affect social signaling. It may eventually guide new ways to restore or interpret facial communication when it’s lost.
When someone smiles politely, flashes a grin of recognition, or tightens their lips in disapproval, the movement is tiny, but the message can be enormous. Facial gestures are among the most powerful forms of communication in primate societies, delivering emotion, intention, and social meaning in fractions of a second.
Now, a new study published in Science uncovers how the brain prepares and produces these gestures through a temporally organized hierarchy of neural “codes,” including signals that appear well before movement begins.
The research was led by Prof. Winrich A. Freiwald of The Rockefeller University in New York and Prof. Yifat Prut of ELSC at the Hebrew University, working with Dr. Geena Ianni and Dr. Yuriria Vázquez of The Rockefeller University.
For decades, neuroscience has leaned on a tidy division: lateral cortical areas in the frontal lobe control deliberate, voluntary facial movements, while medial areas govern emotional expressions. This view was shaped in part by clinical evidence from individuals with focal brain lesions.
But by directly measuring the activity of individual neurons across both cortical regions, the researchers found something striking: both regions encode both voluntary and emotional gestures, and they do so in ways that are distinguishable well before any visible facial movement occurs.
In other words, facial communication appears to be orchestrated not by two separate systems, but by a continuous neural hierarchy, where different regions contribute information at different time scales, some fast-changing and dynamic, others stable and sustained.
Dynamic vs. Stable: Two Neural Languages Working Together
The team discovered that the brain uses area-specific timing patterns that form a continuum:
- Dynamic neural activity reflects the rapid unfolding of facial motion, like the shifting muscle choreography involved in an expression.
- Stable neural activity functions more like a sustained “intent” or “context” signal, persisting over time to support socially appropriate output.
Together, these activity patterns allow the brain to generate coherent facial gestures that match the context: deliberate or spontaneous, socially calibrated, and communication-ready.
Why This Matters
Facial gestures are not just physical movements. They are social actions, and the brain treats them as such.
This discovery offers a new framework for understanding:
- How facial gestures are coordinated in real time
- How communication-related motor control is structured in the brain
- What may go wrong in disorders where facial signaling is disrupted—whether through neurological injury or conditions affecting social communication
And it reframes facial expression as something more sophisticated than a reflex or a simple decision: it is the product of a coordinated neural hierarchy that bridges emotion, intention, and action.
By showing that multiple brain regions work in parallel, each contributing different timing-based codes, the study opens new pathways for exploring how the brain produces socially meaningful behavior.
“Facial gestures may look effortless,” the researchers note, “but the neural machinery behind them is remarkably structured and begins preparing for communication well before movement even starts.”
Journal
Science
Method of Research
Experimental study
Subject of Research
Animals
Article Title
Facial gestures are enacted through a cortical hierarchy of dynamic and stable codes
Article Publication Date
8-Jan-2026
How the brain creates facial expressions
New work demonstrates how neural circuits in the brain and muscles of the face work together to respond physically to social cues
Rockefeller University
Image: Winrich Freiwald (Credit: Matthew Septimus/The Rockefeller University)
When a baby smiles at you, it’s almost impossible not to smile back. This spontaneous reaction to a facial expression is part of the back-and-forth that allows us to understand each other’s emotions and mental states.
Faces are so important to social communication that we’ve evolved specialized brain cells just to recognize them, as Rockefeller University’s Winrich Freiwald has discovered. It’s just one of a suite of groundbreaking findings the scientist has made in the past decade that have greatly advanced the neuroscience of face perception.
Now he and his team in the Laboratory of Neural Systems have turned their attention to the counterpart of face perception: facial expression. How neural circuits in the brain and muscles of the face work together to, for example, form a smile has remained largely unknown—until now. As they published in Science, Freiwald’s team has discovered a facial motor network and the neural mechanisms that keep it operating.
In this first systematic study of the neural mechanisms of facial movement control, they found that both lower-level and higher-level brain regions are involved in encoding different types of facial gestures—contrary to long-held assumptions. It had long been thought that these activities were segregated, with emotional expressions (such as returning a smile) originating in the medial frontal lobe and voluntary actions (such as eating or speaking) in the lateral frontal lobe.
“We had a good understanding of how facial gestures are received, but now we have a much better understanding of how they're generated,” says Freiwald, whose research is supported by the Price Family Center for the Social Brain at Rockefeller.
“We found that all regions participated in all types of facial gestures but operate on their own distinct timescales, suggesting that each region is uniquely suited to the ‘job’ it performs,” says co-lead author Geena Ianni, a former member of Freiwald’s lab and a neurology resident at the Hospital of the University of Pennsylvania.
Where facial expressions come from
Our need to communicate through facial expressions runs deep—all the way down to the brain stem, in fact. It’s there that the so-called facial nucleus is located, which houses the motoneurons that control facial muscles. These motoneurons, in turn, receive input from multiple cortical regions, including different areas of the frontal cortex, which contribute to both motor function and complex thinking.
Neuroanatomical work has demonstrated that there are multiple regions in the cortex that directly access the muscles of facial expression—a unique feature of primates—but how each one specifically contributes has remained largely unknown. Studies of people with brain lesions suggest different regions may code for different facial movements. When people have damage to the lateral frontal cortex, for example, they lose the ability to make voluntary movements, such as speaking or eating, while lesions in the medial frontal cortex lead to the inability to spontaneously express an emotion, such as returning a smile.
“They don’t lose the ability to move their muscles, just the ability to do it in a particular context,” Freiwald says.
“We wondered, could these regions make unique contributions to facial expressions? It turns out that no one had really investigated this,” Ianni says.
Adopting an innovative approach designed by the Freiwald lab, they used an fMRI scanner to visualize the brain activity of macaque monkeys while they produced facial expressions. In doing so, they located three cortical areas that directly access the facial musculature: the cingulate motor cortex (medially located) and the primary motor and premotor cortices (laterally located), as well as the somatosensory cortices.
Mapping the network
Using these methods, they were able to map out a facial motor network composed of neural activity from different regions of the frontal lobe—the lateral primary motor cortex, ventral premotor cortex, and medial cingulate motor cortex—and the primary somatosensory cortex in the parietal lobe.
Using this targeted map, the researchers were then able to record neural activity in each cortical region while the monkeys produced facial expressions. The researchers studied three types of facial movements: threatening, lipsmacking, and chewing. A threatening look from a macaque involves staring straight ahead with an open jaw and bared teeth, while lipsmacking involves rapidly puckering the lips while flattening the ears against the skull. These are both socially meaningful, contextually specific facial gestures that macaques use to navigate social interactions. Chewing, by contrast, is neither social nor emotional, but voluntary.
The researchers used a variety of dynamic stimuli to elicit these expressions in the lab, including direct interaction with other macaques, videos of other macaques, and artificial digital avatars controlled by the researchers themselves.
They were able to link neural activity from these regions to the coordinated movement of specific regions of the face: eyes and eyebrows; the upper and lower mouth; and the lower face and ears.
The researchers found that both higher and lower cortical regions were involved in producing both emotional and voluntary facial expressions. However, not all of that activity was the same: The neurons in each region operated at a distinct tempo when producing facial gestures.
“Lateral regions like the primary motor cortex housed fast neural dynamics that changed on the order of milliseconds, while medial regions like the cingulate cortex housed slow, stable neural dynamics that lasted for much longer,” says Ianni.
In related work based on the same data, the team recently documented in PNAS that the different cortical regions governing facial movement work together as a single interconnected sensorimotor network, adjusting their coordination based on the movement being produced.
“This suggests facial motor control is dynamic and flexible rather than routed through fixed, independent pathways,” says Yuriria Vázquez, co-lead author and a former postdoc in Freiwald’s lab.
“This is contrary to the standard view that they work in parallel and act separately,” Freiwald adds. “That really underscores the connectivity of the facial motor network.”
Better brain-machine interfaces
Now that Freiwald’s lab has gained significant insights into both facial perception and expression in separate experiments, he’d next like to study these complementary elements of social communication simultaneously.
“We think that will help us better understand emotions,” he says. “There's a big debate in this field about how motor signals relate to emotions internally, but we think that if you have perception on one side and a motor response on the other, emotions somehow happen in between. We would like to find the areas controlling emotional states—we have ideas about where they are—and then understand how they work together with motor areas to generate different kinds of behaviors.”
Vázquez sees two possible future avenues of research that could build on their findings. The first involves understanding how dynamic social cues (faces, eye gaze), internal states, and reward influence the facial motor system. These insights would be crucial for explaining how decisions about facial expression production are made. The second relates to using this integrated network for clinical applications.
The findings may also help improve brain-machine interfaces. “As with our approach, those devices also involve implanting electrodes to decode brain signals, and then they translate that information into action, such as moving a limb or a robotic arm,” Freiwald says. “Communication has proven far more difficult to decode. And because of the importance of facial expression to communication, it will be very useful to have devices that can decode and translate these kinds of facial signals.”
Adds Ianni, “I hope our work moves the field, even the tiniest bit, towards more naturalistic and rich artificial communication designs that will improve the lives of patients after brain injury.”