Research Spotlight: Using artificial intelligence to reveal the neural dynamics of human conversation
Jing Cai, PhD, of the Department of Neurosurgery at Massachusetts General Hospital, is the lead author of a paper published in Nature Communications, “Natural language processing models reveal neural dynamics of human conversation.”
Mass General Brigham
Image: By combining AI with electrical recordings of brain activity, researchers were able to track the language exchanged during conversations and the corresponding neural activity in different brain regions. Credit: Mass General Brigham
What were you investigating?
We investigated how our brains process language during real-life conversations. Specifically, we wanted to understand which brain regions become active when we're speaking and listening, and how these patterns relate to the specific words and context of the conversation.
What methods did you use?
We employed artificial intelligence (AI) to take a closer look at how our brains handle the back-and-forth of real conversations. We combined advanced AI, specifically language models like those behind ChatGPT, with neural recordings using electrodes placed within the brain. This allowed us to simultaneously track the linguistic features of conversations and the corresponding neural activity in different brain regions.
By analyzing these synchronized data streams, we could map how specific aspects of language–like the words being spoken and the conversational context–were represented in the dynamic patterns of brain activity during conversation.
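To make this approach concrete, here is a minimal, hypothetical sketch in Python of how word embeddings from a language model can be related to simultaneously recorded neural signals. The array shapes, the simulated data, and the ridge-regression encoding model are illustrative assumptions, not the analysis pipeline used in the paper.

```python
# Hypothetical sketch: relating language-model word embeddings to neural activity.
# Shapes, simulated data, and the ridge-regression encoding model are illustrative
# assumptions, not the authors' actual analysis pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_words = 500      # words spoken or heard during a conversation
embed_dim = 768    # dimensionality of a language-model embedding
n_channels = 64    # intracranial electrode channels

# Assume these were extracted elsewhere:
#   word_embeddings[i] = contextual embedding of the i-th word in the conversation
#   neural_activity[i] = activity on each electrode in a window around word i
word_embeddings = rng.standard_normal((n_words, embed_dim))
neural_activity = rng.standard_normal((n_words, n_channels))

# Fit a linear (ridge) encoding model per electrode: can the word and context
# features predict neural activity? Cross-validated R^2 indicates which
# electrodes carry word- and context-specific information.
scores = []
for ch in range(n_channels):
    r2 = cross_val_score(Ridge(alpha=10.0), word_embeddings,
                         neural_activity[:, ch], cv=5, scoring="r2")
    scores.append(r2.mean())

print("Electrodes with the strongest linguistic encoding:", np.argsort(scores)[-5:])
```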
What did you find?
We found that both speaking and listening during a conversation engage a widespread network of brain areas in the frontal and temporal lobes. What's interesting is that these brain activity patterns are highly specific, changing depending on the exact words being used and on the context and order of those words.
We also observed that some brain regions are active during both speaking and listening, suggesting a partially shared neural basis for these processes. Finally, we identified specific shifts in brain activity that occur when people switch from listening to speaking during a conversation.
Overall, our findings illuminate the dynamic way our brains organize themselves to produce and understand language during a conversation.
What are the implications?
Our findings offer significant insights into how the brain pulls off the seemingly effortless feat of conversation. They highlight just how distributed and dynamic the neural machinery for language is: it's not just one spot lighting up, but a network across different brain regions. The fact that these patterns are so finely tuned to the specifics of words and context shows the brain's remarkable ability to process the nuances of language as it unfolds.
The partial overlap we saw between the brain regions involved in speaking and listening hints at an efficient neural system, potentially a shared mechanism that gets repurposed depending on whether we're sending or receiving information. This could tell us a lot about how we efficiently switch roles during a conversation.
What are the next steps?
The next step involves semantic decoding. This means moving beyond simply identifying which brain regions are active during conversation to decoding the meaning of the words and concepts being processed.
Ultimately, this level of decoding could provide profound insights into the neural representation of language. This work could contribute to the development of brain-integrated communication technologies that can help individuals whose speech is affected by neurodegenerative conditions like amyotrophic lateral sclerosis (ALS).
Authorship: In addition to Jing Cai, Mass General Brigham authors include Alex Hadjinicolaou, Angelique Paulk, Daniel Soper, Tian Xia, Alexander Wang, John Rolston, R. Mark Richardson, and senior authors Ziv Williams and Sydney Cash.
Paper cited: Cai J, et al. “Natural language processing models reveal neural dynamics of human conversation.” Nature Communications. DOI: 10.1038/s41467-025-58620-w
Funding: Jing Cai is supported by the Mussallem Transformative Award and the American Association of University Women. Ziv Williams is supported by NIH R01DC019653 and NIH U01NS123130. Sydney Cash, Alex Hadjinicolaou, Angelique Paulk and Daniel Soper are supported by NIH U01NS098968.
Disclosures: None
Journal
Nature Communications
Method of Research
Observational study
Subject of Research
People
Article Title
Natural language processing models reveal neural dynamics of human conversation
Article Publication Date
19-Apr-2025
How thoughts influence what the eyes see
A surprising study could point to new approaches for AI systems
Columbia University School of Engineering and Applied Science
Image: Early visual areas in the brain adapt their representations of the same visual stimulus depending on what task we're trying to perform. Credit: Rungratsameetaweemana lab/Columbia Engineering
When you see a bag of carrots at the grocery store, does your mind go to potatoes and parsnips or buffalo wings and celery?
It depends, of course, on whether you’re making a hearty winter stew or getting ready to watch the Super Bowl.
Most scientists agree that categorizing an object — like thinking of a carrot as either a root vegetable or a party snack — is the job of the prefrontal cortex, the brain region responsible for reasoning and other high-level functions that make us smart and social. In that account, the eyes and visual regions of the brain are kind of like a security camera collecting data and processing it in a standardized way before passing it off for analysis.
However, a new study led by biomedical engineer and neuroscientist Nuttida Rungratsameetaweemana, an assistant professor at Columbia Engineering, shows that the brain’s visual regions play an active role in making sense of information. Crucially, the way they interpret that information depends on what the rest of the brain is working on.
If it’s Super Bowl Sunday, the visual system sees those carrots on a veggie tray before the prefrontal cortex knows they exist.
Published April 11 in Nature Communications, the study provides some of the clearest evidence yet that early sensory systems play a role in decision-making — and that they adapt in real time. It also points to new approaches for designing AI systems that can adapt to new or unexpected situations.
We sat down with Rungratsameetaweemana to learn more about the research.
What’s exciting about this new study?
Our findings challenge the traditional view that early sensory areas in the brain are simply “looking at” or “recording” visual input. In fact, the human brain’s visual system actively reshapes how it represents the exact same object depending on what you’re trying to do. Even in the visual areas that sit closest to the raw input arriving from the eyes, the brain has the flexibility to tune its interpretation and responses based on the current task. This gives us a new way to think about flexibility in the brain and opens up ideas for how to build more adaptive AI systems modeled after these neural strategies.
How did you come to this surprising conclusion?
Most previous work looked at how people learn categories over time, but this study zooms in on the flexibility piece: How does the brain rapidly switch between different ways of organizing the same visual information?
What were your experiments like?
We used functional magnetic resonance imaging (fMRI) to observe people’s brain activity while they put shapes in different categories. The twist was that the “rules” for categorizing the shapes kept changing. This let us determine whether the visual cortex was changing how it represented the shapes depending on how we had defined the categories.
We analyzed the data using computational machine learning tools, including multivariate classifiers. These tools allow us to examine patterns of brain activation in response to different shape images, and measure how clearly the brain distinguishes shapes in different categories. We saw that the brain responds differently depending on what categories our participants were sorting the shapes into.
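As a rough illustration of this kind of analysis, the sketch below decodes shape category from simulated voxel activation patterns with a cross-validated linear classifier. The voxel and trial counts, the synthetic data, and the linear SVM decoder are assumptions for illustration, not the study's exact methods; running the same decoding separately for each rule set would show whether the same shapes are represented differently across tasks.

```python
# Hypothetical sketch of multivariate pattern classification on fMRI data.
# Voxel counts, trial counts, synthetic data, and the linear SVM decoder are
# illustrative assumptions, not the study's exact analysis.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

n_trials = 200   # shape presentations under one categorization rule
n_voxels = 300   # voxels from an early visual region of interest

# Assume these were extracted elsewhere:
#   patterns[i] = voxel activation pattern evoked by the shape shown on trial i
#   labels[i]   = category of that shape under the currently active rule
patterns = rng.standard_normal((n_trials, n_voxels))
labels = rng.integers(0, 2, size=n_trials)

# Cross-validated decoding accuracy: how clearly do visual-cortex patterns
# separate the two categories under this rule?
decoder = make_pipeline(StandardScaler(), LinearSVC())
accuracy = cross_val_score(decoder, patterns, labels, cv=5).mean()
print(f"Decoding accuracy for this rule: {accuracy:.2f}")
```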
What did you see in the data from these experiments?
Activity in the visual system — including the primary and secondary visual cortices, which handle data straight from the eyes — changed with practically every task. These areas reorganized their activity depending on which decision rules people were using: the activation patterns became more distinctive when a shape fell near the grey area between categories. Those were the most difficult shapes to tell apart, so that is exactly when extra processing would be most helpful.
We could actually see clearer neural patterns in the fMRI data in cases when people did a better job on the tasks. That suggests the visual cortex may directly help us solve flexible categorization tasks.
What are the implications of these findings?
Cognitive flexibility is a hallmark of human cognition, and even state-of-the-art AI systems still struggle with flexible task performance. Our results may contribute to designing AI systems that can better adapt to new situations. They may also help us understand how cognitive flexibility breaks down in conditions like ADHD and other cognitive disorders. It’s also a reminder of how remarkable and efficient our brains are, even at the earliest stages of processing.
What’s next for this line of research?
We’re pushing the neuroscience further by studying how flexible coding works at the level of neural circuits. With fMRI, we were looking at large populations of neurons. In a new follow-up study, we are investigating the circuit mechanisms of flexible coding by recording neural activity directly from inside the skull. This lets us ask how individual neurons and neuronal circuits in the human brain support flexible, goal-directed behavior.
We’re also starting to explore how these ideas might be useful for artificial systems. Humans are really good at adapting to new goals, even when the rules change, but current AI systems often struggle with that kind of flexibility. We’re hoping that what we’re learning from the human brain can help us design models that adapt more fluidly, not just to new inputs, but to new contexts.
Image: Nuttida Rungratsameetaweemana, assistant professor of biomedical engineering. Credit: Rungratsameetaweemana lab/Columbia Engineering
Journal
Nature Communications
Article Title
Dynamic categorization rules alter representations in human visual cortex
Article Publication Date
18-Apr-2025