Brain scans reveal what happens in the mind when insight strikes
Research sheds light on how ‘aha!’ moments help you remember what you learn
Image: Example of the hidden picture puzzles: two-tone black-and-white image on the left; corresponding real-world picture on the right.
Credit: Courtesy of Maxi Becker
DURHAM, N.C. -- Have you ever been stuck on a problem, puzzling over something for what felt like ages without getting anywhere, but then suddenly the answer came to you like a bolt from the blue?
We’ve all experienced that “aha! moment,” that sudden clarity or magical epiphany you feel when a new idea or perspective pops into your head as if out of nowhere.
Now, new evidence from brain imaging research shows that these flashes of insight aren’t just satisfying — they actually reshape how your brain represents information, and help sear it into memory.
Led by researchers at Duke University and Humboldt and Hamburg Universities in Germany, the work has implications for education, suggesting that fostering “eureka moments” could help make learning last beyond the classroom.
If you have an aha experience when solving something, “you're actually more likely to remember the solution,” said first author Maxi Becker, a postdoctoral fellow at Humboldt University in Berlin.
The findings were published May 9 in the journal Nature Communications.
In the study, the researchers used a technique called functional magnetic resonance imaging (fMRI) to record people’s brain activity while they tried to solve visual brain teasers. The puzzles required them to "fill in the blanks" of a series of two-tone images with minimal detail, using their perception to complete the picture and identify a real-world object.
Such hidden picture puzzles serve as small-scale proxies for bigger eureka moments. “It's just a little discovery that you are making, but it produces the same type of characteristics that exist in more important insight events,” said senior author Roberto Cabeza, a professor of psychology and neuroscience at Duke.
For each puzzle the participants thought they solved, the researchers asked whether the solution just popped into their awareness in a flash of sudden insight, or whether they worked it out in a more deliberate and methodical way, and how certain they were of their answer.
The results were striking.
Participants tended to recall solutions that came to them in a flash of insight far better than ones they arrived at without this sense of epiphany. Furthermore, the more conviction a person felt about their insight at the time, the more likely they were to remember it five days later when the researchers asked them again.
“If you have an ‘aha! moment’ while learning something, it almost doubles your memory,” said Cabeza, who has been studying memory for 30 years. “There are few memory effects that are as powerful as this.”
The researchers identified several changes in the brain that may explain why “aha! moments” stick so well in memory.
They discovered that flashes of insight trigger a burst of activity in the brain’s hippocampus, a cashew-shaped structure buried deep in the temporal lobe that plays a major role in learning and memory. The more powerful the insight, the greater the boost.
They also found that the activation patterns across the participants’ neurons changed once they spotted the hidden object and saw the image in a new light -- particularly in certain parts of the brain’s ventral occipito-temporal cortex, the region responsible for recognizing visual patterns. The stronger the epiphany, the greater the change in those areas.
“During these moments of insight, the brain reorganizes how it sees the image,” said Becker, who did the work in the Cabeza lab.
Lastly, stronger “aha!” experiences were associated with greater connectivity between these different brain regions. “The different regions essentially communicate with each other more efficiently,” Cabeza said.
The current study looked at brain activity at two specific moments: before and after the lightbulb appeared. As a next step, the researchers plan to look more closely at what happens during the few seconds in between that allow people to finally see the answer.
“Insight is key for creativity,” Cabeza said. In addition to shedding light on how the brain comes up with creative solutions, the findings also lend support for inquiry-based learning in the classroom.
“Learning environments that encourage insight could boost long-term memory and understanding,” the researchers wrote.
This research was funded by the Einstein Foundation Berlin (EPP-2017-423, RC) and by the Sonophilia Foundation.
CITATION: "Insight Predicts Subsequent Memory via Cortical Representational Change and Hippocampal Activity," Maxi Becker, Tobias Sommer, Roberto Cabeza. Nature Communications, May 9, 2025. DOI: 10.1038/s41467-025-59355-4
Credit: Courtesy of Maxi Becker
Journal: Nature Communications
Method of Research: Imaging analysis
Subject of Research: People
Article Title: Insight Predicts Subsequent Memory via Cortical Representational Change and Hippocampal Activity
Article Publication Date: 15-May-2025
Energy and memory: A new neural network paradigm
A dynamic energy landscape is at the heart of theorists' new model of memory retrieval
(Santa Barbara, Calif.) — Listen to the first notes of an old, beloved song. Can you name that tune? If you can, congratulations — it’s a triumph of your associative memory, in which one piece of information (the first few notes) triggers the memory of the entire pattern (the song), without you actually having to hear the rest of the song again. We use this handy neural mechanism to learn, remember, solve problems and generally navigate our reality.
“It’s a network effect,” said UC Santa Barbara mechanical engineering professor Francesco Bullo, explaining that associative memories aren’t stored in single brain cells. “Memory storage and memory retrieval are dynamic processes that occur over entire networks of neurons.”
In 1982 physicist John Hopfield translated this theoretical neuroscience concept into the artificial intelligence realm, with the formulation of the Hopfield network. In doing so, not only did he provide a mathematical framework for understanding memory storage and retrieval in the human brain, he also developed one of the first recurrent artificial neural networks — the Hopfield network — known for its ability to retrieve complete patterns from noisy or incomplete inputs. Hopfield won the Nobel Prize for his work in 2024.
However, according to Bullo and collaborators Simone Betteti, Giacomo Baggio and Sandro Zampieri at the University of Padua in Italy, the traditional Hopfield network model is powerful, but it doesn’t tell the full story of how new information guides memory retrieval. “Notably,” they say in a paper published in the journal Science Advances, “the role of external inputs has largely been unexplored, from their effects on neural dynamics to how they facilitate effective memory retrieval.” The researchers suggest a model of memory retrieval they say is more descriptive of how we experience memory.
“The modern version of machine learning systems, these large language models — they don’t really model memories,” Bullo explained. “You put in a prompt and you get an output. But it’s not the same way in which we understand and handle memories in the animal world.” While LLMs can return responses that can sound convincingly intelligent, drawing upon the patterns of the language they are fed, they still lack the underlying reasoning and experience of the physical real world that animals have.
“The way in which we experience the world is something that is more continuous and less start-and-reset,” said Betteti, lead author of the paper. Most treatments of the Hopfield model have tended to regard the brain as if it were a computer, he added, taking a very mechanistic perspective. “Instead, since we are working on a memory model, we want to start with a human perspective.”
The main question inspiring the theorists was: As we experience the world that surrounds us, how do the signals we receive enable us to retrieve memories?
As Hopfield envisioned, it helps to conceptualize memory retrieval in terms of an energy landscape, in which the valleys are energy minima that represent memories. Memory retrieval is like exploring this landscape; recognition is when you fall into one of the valleys. Your starting position in the landscape is your initial condition.
“Imagine you see a cat’s tail,” Bullo said. “Not the entire cat, but just the tail. An associative memory system should be able to recover the memory of the entire cat.” According to the traditional Hopfield model, the cat’s tail (stimulus) is enough to put you closest to the valley labeled “cat,” he explained, treating the stimulus as an initial condition. But how did you get to that spot in the first place?
“The classic Hopfield model does not carefully explain how seeing the tail of the cat puts you in the right place to fall down the hill and reach the energy minimum,” Bullo said. “How do you move around in the space of neural activity where you are storing these memories? It’s a little bit unclear.”
The researchers’ Input-Driven Plasticity (IDP) model aims to address this lack of clarity with a mechanism that gradually integrates past and new information, guiding the memory retrieval process to the correct memory. Instead of applying the two-step algorithmic memory retrieval on the rather static energy landscape of the original Hopfield network model, the researchers describe a dynamic, input-driven mechanism.
“We advocate for the idea that as the stimulus from the external world is received (e.g., the image of the cat tail), it changes the energy landscape at the same time,” Bullo said. “The stimulus simplifies the energy landscape so that no matter what your initial position, you will roll down to the correct memory of the cat.” Additionally, the researchers say, the IDP model is robust to noise — situations where the input is vague, ambiguous, or partially obscured — and in fact uses the noise as a means to filter out less stable memories (the shallower valleys of this energy landscape) in favor of the more stable ones.
“We start with the fact that when you’re gazing at a scene your gaze shifts in between the different components of the scene,” Betteti said. “So at every instant in time you choose what you want to focus on but you have a lot of noise around.” Once you lock into the input to focus on, the network adjusts itself to prioritize it, he explained.
Choosing what stimulus to focus on, a.k.a. attention, is also the main mechanism behind another neural network architecture, the transformer, which has become the heart of large language models like ChatGPT. While the IDP model the researchers propose “starts from a very different initial point with a different aim,” Bullo said, there’s a lot of potential for the model to be helpful in designing future machine learning systems.
“We see a connection between the two, and the paper describes it,” Bullo said. “It is not the main focus of the paper, but there is this wonderful hope that these associative memory systems and large language models may be reconciled.”
Journal: Science Advances
Article Title: Input-Driven Dynamics for Robust Memory Retrieval in Hopfield Networks