Humans and machines learn differently
Bielefeld researchers publish article in “Nature Machine Intelligence”
Bielefeld University
Image: Professors Dr Benjamin Paaßen and Dr Barbara Hammer from Bielefeld University are involved in the publication. Credit: TRR 318 and Bielefeld University
How do humans manage to adapt to completely new situations and why do machines so often struggle with this? This central question is explored by researchers from cognitive science and artificial intelligence (AI) in a joint article published in the journal “Nature Machine Intelligence”. Among the authors are Professor Dr. Barbara Hammer and Professor Dr. Benjamin Paaßen from Bielefeld University.
“If we want to integrate AI systems into everyday life, whether in medicine, transportation, or decision-making, we must understand how these systems handle the unknown,” says Barbara Hammer, head of the Machine Learning Group at Bielefeld University. “Our study shows that machines generalize differently from humans, and this is crucial for the success of future human–AI collaboration.”
Differences between humans and machines
The technical term “generalization” refers to the ability to draw meaningful conclusions about unknown situations from known information, that is, to flexibly apply knowledge to new problems. In cognitive science, this often involves conceptual thinking and abstraction. In AI research, however, generalization serves as an umbrella term for a wide variety of processes: from machine learning beyond known data domains (“out-of-domain generalization”) to rule-based inference in symbolic systems, to so-called neuro-symbolic AI, which combines logic and neural networks.
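As a toy illustration, not taken from the article itself, of what “out-of-domain generalization” means in practice: a model fit on a narrow input range can track the data well there yet fail badly when asked about inputs far outside that range. The sketch below fits a straight line to samples of y = x² observed only on [0, 1] and compares its error inside and outside that domain.

```python
# Illustrative sketch (assumption: not the paper's method): a linear model
# fit on a narrow input range extrapolates poorly outside it -- a minimal
# example of the out-of-domain generalization problem.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

# Training data: the true relation is y = x^2, observed only on [0, 1].
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [x ** 2 for x in xs]
slope, intercept = fit_line(xs, ys)

def predict(x):
    return slope * x + intercept

in_domain_err = abs(predict(0.5) - 0.5 ** 2)      # small: the line tracks x^2 locally
out_domain_err = abs(predict(10.0) - 10.0 ** 2)   # large: extrapolation fails
print(f"in-domain error:     {in_domain_err:.3f}")
print(f"out-of-domain error: {out_domain_err:.3f}")
```

The model is accurate where it was trained and off by roughly a factor of ten at x = 10, which is the kind of failure mode the “out-of-domain” literature studies.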
“The biggest challenge is that ‘generalization’ means completely different things for AI and for humans,” explains Benjamin Paaßen, junior professor for Knowledge Representation and Machine Learning in Bielefeld. “That is why it was important for us to develop a shared framework along three dimensions: What do we mean by generalization? How is it achieved? And how can it be evaluated?”
Significance for the future of AI
The publication is the result of interdisciplinary collaboration among more than 20 experts from internationally leading research institutions, including the universities of Bielefeld, Bamberg, Amsterdam, and Oxford. The project began with a joint workshop at the Leibniz Center for Informatics at Schloss Dagstuhl, co-organized by Barbara Hammer.
The project also highlights the importance of bridging cognitive science and AI research. Only through a deeper understanding of their differences and commonalities will it be possible to design AI systems that can better reflect and support human values and decision-making logics.
The research was conducted within the collaborative project SAIL – Sustainable Life-Cycle of Intelligent Socio-Technical Systems. SAIL investigates how AI can be designed to be sustainable, transparent, and human-centered throughout its entire life cycle. The project is funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia.
Journal
Nature Machine Intelligence
Method of Research
Computational simulation/modeling
Subject of Research
People
Article Title
Aligning generalization between humans and machines
Article Publication Date
15-Sep-2025
Why AI is never going to run the world
Researcher explains how ‘primal intelligence’ helps humans succeed
COLUMBUS, Ohio – The secret to human intelligence can’t be replicated or improved on by artificial intelligence, according to researcher Angus Fletcher.
Fletcher, a professor of English at The Ohio State University’s Project Narrative, explains in a new book that AI is very good at one thing: logic. But many of life’s most fundamental problems require a different type of intelligence.
“AI takes one feature of intelligence – logic – and accelerates it. As long as life calls for math, AI crushes humans,” Fletcher writes in the book “Primal Intelligence.”
“It’s the king of big-data choices. The moment, though, that life requires commonsense or imagination, AI tumbles off its throne. This is how you know that AI is never going to run the world – or anything.”
Instead, Fletcher has developed a program to help people build their primal intelligence, one that has been used successfully with groups ranging from the U.S. Army to elementary school students.
At its core, primal intelligence is “the brain’s ancient ability to act smart with limited information,” Fletcher said.
In many cases, the most difficult problems people face involve situations where they have limited information and need to develop a novel plan to meet a challenge.
The answer is what Fletcher calls “story thinking.”
“Humans have this ability to communicate through stories, and story thinking is the way the brain has evolved to work,” he said.
“What makes humans successful is the ability to think of and develop new behaviors and new plans. It allowed our ancestors to escape the predator. It allows us to plan, to plot our actions, to put together a story of how we might succeed.”
Humans have four “primal powers” that allow us to act smart with little information.
Those powers are intuition, imagination, emotion and commonsense. In the book, Fletcher expands on each of these and the role they have in helping humans innovate.
In essence, he says these four primal powers are driven by “narrative cognition,” the ability of our brain to think in story. Shakespeare may be the best example of how to think in story, he said.
Fletcher, who has an undergraduate degree in neuroscience and a PhD in literature, discusses in the book how Shakespeare’s innovations in storytelling have inspired innovators well beyond literature. He quotes people from Abraham Lincoln to Albert Einstein to Steve Jobs about the impact reading Shakespeare had on their lives and careers.
Many of Shakespeare’s characters are “exceptions to rules” rather than archetypes, which encourages people to think in new ways, Fletcher said.
What Shakespeare has helped these pioneers – and many other people – do is see stories in their own lives and imagine new ways of doing things and overcoming obstacles, he said.
That’s something AI can’t do, he said. AI collects a lot of data and then works out probable patterns, which is great if you have a lot of information.
“But what do you do in a totally new situation? Well, in a new situation you need to make a new plan. And that’s what story thinking can do that AI cannot,” he said.
The U.S. Army was so impressed with Fletcher’s program that it brought him in to help train soldiers in its Special Operations unit. After seeing it in action, the Army awarded Fletcher its Commendation Medal for his “groundbreaking research” that helped soldiers see the future faster, heal quicker from trauma and act wiser in life-and-death situations.
In the book, Fletcher gave an example of how one Army recruit used his primal intelligence to overcome obstacles in the most literal sense.
As part of its curriculum, Army Special Operations had a final test for recruits: an obstacle course of logs and ropes. The recruits were told they had to ring the bell at the end of the course before time expired in order to pass the test.
This particular recruit knew he couldn’t beat the clock. At the starting line, he thought of a new plan: he ran around the obstacle course, rather than through it, ringing the bell in record time.
While other military schools would have flunked him, Special Operations passed him, based on his ingenuity in passing the test, Fletcher said. As the Army monitored his career after graduation, it found he outperformed many of his classmates on field missions.
Primal intelligence is valuable in all walks of life, including business. While business often emphasizes management, Fletcher said primal intelligence shines when leadership is needed.
“Management is optimizing existing processes. But the main challenge of the future is not optimizing things that already work,” Fletcher said.
“The challenge of the future is figuring things out when we don’t know what works. That’s what leadership is all about, and that’s what story thinking is all about.”
In business and elsewhere, Fletcher said AI has a role. But it should not be seen as a replacement for human intelligence.
“Humans are able to say, this could work but it hasn’t been tried before. That’s what primal intelligence is all about,” he said.
“Computers and AI are only able to repeat things that have worked in the past or engage in magical thinking. That’s not going to work in many situations we face.”
Study: Reviewers increasingly divided on the use of generative AI in peer review
IOP Publishing
Image: Global reviewer survey shows growing divide on use of AI in peer review. Credit: IOP Publishing
A new global reviewer survey from IOP Publishing (IOPP) reveals a growing divide in attitudes among reviewers in the physical sciences regarding the use of generative AI in peer review. The study follows a similar survey conducted last year showing that while some researchers are beginning to embrace AI tools, others remain concerned about the potential negative impact, particularly when AI is used to assess their own work.
Currently, IOPP does not allow the use of AI in peer review as generative models cannot meet the ethical, legal, and scholarly standards required. However, there is growing recognition of AI’s potential to support, rather than replace, the peer review process.
Key Findings:
- 41% of respondents now believe generative AI will have a positive impact on peer review (up 12 percentage points from 2024), while 37% see it as negative (up 2 points). Only 22% are neutral or unsure, down from 36% last year, indicating growing polarisation in views.
- 32% of researchers have already used AI tools to support them with their reviews.
- 57% would be unhappy if a reviewer used generative AI to write a peer review report on a manuscript they had co-authored, and 42% would be unhappy if AI were used to augment a peer review report.
- 42% believe they could accurately detect an AI-written peer review report on a manuscript they had co-authored.
Women tend to feel less positive about the potential of AI compared with men, suggesting a gendered difference in perceptions of AI’s usefulness in peer review. Meanwhile, more junior researchers appear more optimistic about the benefits of AI, compared to their more senior colleagues, who express greater scepticism.
When it comes to reviewer behaviour and expectations, 32% of respondents reported using AI tools to support them during the peer review process in some form. Notably, over half (53%) of those using AI said they apply it in more than one way. The most common use (21%) was editing grammar and improving the flow of text, and 13% said they use AI tools to summarise or digest articles under review, raising serious concerns around confidentiality and data privacy. A small minority (2%) admitted to uploading entire manuscripts into AI chatbots and asking them to generate a review on their behalf.
“These findings highlight the need for clearer community standards and transparency around the use of generative AI in scholarly publishing. As the technology continues to evolve, so too must the frameworks that support ethical and trustworthy peer review”, said Laura Feetham-Walker, Reviewer Engagement Manager at IOP Publishing and lead author of the study.
“One potential solution is to develop AI tools that are integrated directly into peer review systems, offering support to reviewers and editors without compromising security or research integrity. These tools should be designed to support, rather than replace, human judgment. If implemented effectively, such tools would not only address ethical concerns but also mitigate risks around confidentiality and data privacy, particularly the issue of reviewers uploading manuscripts to third-party generative AI platforms,” adds Feetham-Walker.