AI use makes us overestimate our cognitive performance
New research warns we shouldn’t blindly trust Large Language Models with logical reasoning –– stopping at one prompt limits ChatGPT’s usefulness more than users realise
Aalto University
Image: Researcher Robin Welsch. Credit: Matti Ahlgren / Aalto University
When it comes to estimating how good we are at something, research consistently shows that we tend to rate ourselves as slightly better than average. This tendency is stronger in people who perform poorly on cognitive tests. It’s known as the Dunning-Kruger Effect (DKE) –– the worse people are at something, the more they tend to overestimate their abilities, while the “smarter” people are, the more likely they are to underestimate theirs.
However, a study led by Aalto University reveals that when it comes to AI –– specifically, Large Language Models (LLMs) –– the DKE doesn’t hold: researchers found that all users showed a significant inability to assess their performance accurately when using ChatGPT. In fact, across the board, people overestimated their performance. On top of this, the researchers identified a reversal of the Dunning-Kruger Effect –– the users who considered themselves more AI literate were the most prone to overestimate their abilities.
‘We found that when it comes to AI, the DKE vanishes. In fact, what’s really surprising is that higher AI literacy brings more overconfidence,’ says Professor Robin Welsch. ‘We would expect people who are AI literate to not only be a bit better at interacting with AI systems, but also at judging their performance with those systems – but this was not the case.’
The finding adds to a rapidly growing body of research indicating that blindly trusting AI output comes with risks such as ‘dumbing down’ people’s ability to source reliable information and even workforce de-skilling. While people did perform better when using ChatGPT, it’s concerning that they all overestimated that performance.
‘AI literacy is truly important nowadays, and therefore this is a very striking effect. AI literacy might be very technical, and it’s not really helping people actually interact fruitfully with AI systems’, says Welsch.
‘Current AI tools are not enough. They are not fostering metacognition [awareness of one’s own thought processes] and we are not learning about our mistakes,’ adds doctoral researcher Daniela da Silva Fernandes. ‘We need to create platforms that encourage our reflection process.’
The article was published on October 27th in the journal Computers in Human Behavior.
Why a single prompt is not enough
The researchers designed two experiments in which some 500 participants completed logical reasoning tasks from the US’s famous Law School Admission Test (LSAT). Half of the group used AI and half didn’t. After each task, subjects were asked to assess how well they had performed –– and if they did so accurately, they were promised extra compensation.
‘These tasks take a lot of cognitive effort. Now that people use AI daily, it’s typical that you would give something like this to AI to solve, because it’s so challenging’, Welsch says.
The data revealed that most users rarely prompted ChatGPT more than once per question. Often, they simply copied the question, put it in the AI system, and were happy with the AI’s solution without checking or second-guessing.
‘We looked at whether they truly reflected with the AI system and found that people just thought the AI would solve things for them. Usually there was just one single interaction to get the results, which means that users blindly trusted the system. It’s what we call cognitive offloading, when all the processing is done by AI’, Welsch explains.
This shallow level of engagement may have limited the cues needed to calibrate confidence and allow for accurate self-monitoring. Therefore, it’s plausible that encouraging or experimentally requiring multiple prompts could provide better feedback loops, enhancing users’ metacognition, he says.
So what’s the practical solution for everyday AI users?
‘AI could ask the users if they can explain their reasoning further. This would force the user to engage more with AI, to face their illusion of knowledge, and to promote critical thinking,’ Fernandes says.
Full article: ‘AI makes you smarter but none the wiser: The disconnect between performance and metacognition’, Computers in Human Behavior, 10.1016/j.chb.2025.108779
Journal
Computers in Human Behavior
Research has found that using AI gives us a false sense of confidence.
Credit
Matti Ahlgren / Aalto University
Article Title
AI makes you smarter but none the wiser: The disconnect between performance and metacognition
Article Publication Date
27-Oct-2025
AI produces shallower knowledge than web search
PNAS Nexus
Learning about a topic by interacting with AI chatbots like ChatGPT rather than following links provided by web search can produce shallower knowledge. Advice given on the basis of this shallow knowledge tends to be sparser, less original, and less likely to be adopted by others. Shiri Melumad and Jin Ho Yun conducted seven experiments with thousands of online participants who were randomly assigned to learn about various topics –– including how to plant a vegetable garden, how to lead a healthier lifestyle, or how to cope with financial scams –– using either large language models (LLMs) or traditional Google web search links. Participants then wrote advice based on what they learned.
Participants who used LLMs spent less time engaging with search results and reported developing shallower knowledge compared to those using web links, even when the underlying facts were identical. When forming advice, LLM users invested less effort and produced content that was objectively shorter, contained fewer factual references, and showed greater similarity to other participants’ advice. In an experiment with 1,501 independent evaluators, recipients –– who were unaware of where the advice came from –– rated advice written after LLM searches as less helpful, less informative, and less trustworthy than advice based on web search, and were less willing to adopt the LLM-derived advice.
While LLMs are undeniably efficient, relying on pre-synthesized summaries can transform learning from an active quest into a passive activity. According to the authors, LLMs are thus potentially less useful than web search when the goal is developing procedural knowledge –– an understanding of how to actually do things.
Journal
PNAS Nexus
Article Title
Experimental evidence of the effects of large language models versus web search on depth of learning
Article Publication Date
28-Oct-2025
Pusan National University researchers show how AI can help in fashion trend prediction
Using a novel prompting technique, researchers showcase the potential of ChatGPT to capture upcoming fashion trends
Pusan National University
Image: Using the novel top-down prompting strategy and expert guidance, ChatGPT can become a valuable tool for fashion trend prediction, especially for fashion students and small brands. Credit: Yoon Kyung Lee from Pusan National University
Fashion trend forecasting helps companies predict which clothes will be popular in upcoming seasons. Traditionally, this has relied on experts’ intuition, experience and creativity. More recently, big-data analysis has been incorporated, offering deeper insights into consumer behavior. However, such methods pose technical barriers and remain out of reach for fashion students or small brands.
Recent developments in artificial intelligence (AI) can balance the scales. Large language models (LLMs) like ChatGPT have made big data analysis readily available to the public. LLMs draw from vast societal and cultural data and can potentially be used for predicting fashion trends. However, given their current limitations, such as hallucinations and factual errors, it is imperative to verify their suitability and to develop structured prediction methods.
In a new study, Assistant Professor Yoon Kyung Lee and Master’s student Chaehi Ryu, from the Department of Clothing and Textiles at Pusan National University, South Korea, developed a new approach for predicting fashion trends using ChatGPT. “Rather than simply asking ‘What fashion will be popular in the future?’, we designed a systematic strategy for prompting the AI for more specific and consistent answers,” explains Dr. Lee. “We also compared ChatGPT's predictions to an actual trend agency’s report.” Their findings were published in the Clothing and Textiles Research Journal on September 26, 2025.
The researchers first verified ChatGPT’s characteristics in fashion trend prediction using general prompts about 2023 fall/winter men’s fashion trends. Based on the results, they developed a new Top-Down Prompting (TDP) technique, built on the Lotus Blossom (LB) brainstorming approach, for more accurate and specific forecasting.
TDP starts with a central problem prompt –– a general query to predict “Fashion Trends” –– and then expands into sub-problem prompts, including queries on “Silhouette,” “Materials,” “Key Items,” “Garment Details,” “Decorative Elements,” “Color,” “Moods,” and “Prints and Patterns.”
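The central-prompt-plus-sub-prompts structure described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ code: the function name, the prompt wording, and the aggregation step are all hypothetical; only the eight sub-topics come from the study.

```python
# Illustrative sketch of the Top-Down Prompting (TDP) structure.
# The prompt wording and function names are hypothetical; the eight
# sub-topics are the ones listed in the study.

# Central problem prompt: the general trend query.
CENTRAL_PROMPT = "Predict men's fashion trends for fall/winter 2024."

# Sub-problem prompts expand the central query, Lotus Blossom-style,
# into the specific dimensions examined in the paper.
SUB_TOPICS = [
    "Silhouette", "Materials", "Key Items", "Garment Details",
    "Decorative Elements", "Color", "Moods", "Prints and Patterns",
]

def build_tdp_prompts(central: str, topics: list[str]) -> list[str]:
    """Expand one central prompt into a focused sub-prompt per topic."""
    return [f"{central} Focus specifically on: {topic}." for topic in topics]

prompts = build_tdp_prompts(CENTRAL_PROMPT, SUB_TOPICS)
# In practice, each sub-prompt would be sent to the chat model in turn
# and the answers combined into a structured forecast.
```

The point of the structure is that the model is never asked the vague top-level question directly; every query it actually receives is narrowed to one dimension, which the study found yields more specific and consistent answers.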
The researchers used this approach to predict men’s fashion trends for fall/winter 2024 using ChatGPT-3.5 and ChatGPT-4 Classic. The responses were compared and validated against the fall/winter 2024 men’s fashion trend predictions by the Official Fashion Trend Information Company (OFTIC), and the analysis was reviewed by two fashion experts.
Analysis showed that ChatGPT’s predictions mostly reflected established or generalized fashion ideas, rather than forward-looking or innovative designs. Moreover, it accurately identified only 9 out of 39 trends predicted in OFTIC’s report. Notably, however, both models predicted emerging themes, including gender fluidity and statement coats.
“While the prediction accuracy of ChatGPT is low, what's intriguing is that it captured new trends not found in existing data,” notes Dr. Lee. “AI can sense cultural shifts and open up new creative directions.”
Although ChatGPT should not yet be viewed as a definitive forecasting tool, it can effectively complement expert-led analysis. This is especially valuable for fashion students and small brands, helping them achieve more accurate and nuanced trend forecasts. For fashion education, the researchers also developed a TDP-based hybrid conceptual framework for fashion trend forecasting, integrating AI analysis and expert knowledge.
Overall, this study shows how AI tools can make fashion trend forecasting more systematic and accessible.
***
Reference
DOI: 10.1177/0887302X251371969
About Pusan National University
Pusan National University, located in Busan, South Korea, was founded in 1946 and is now the No. 1 national university of South Korea in research and educational competency. The multi-campus university also has other smaller campuses in Yangsan, Miryang, and Ami. The university prides itself on the principles of truth, freedom, and service and has approximately 30,000 students, 1,200 professors, and 750 faculty members. The university comprises 14 colleges (schools) and one independent division, with 103 departments in all.
Website: https://www.pusan.ac.kr/eng/Main.do
About Assistant Professor Yoon Kyung Lee
Dr. Yoon Kyung Lee is an Assistant Professor of Fashion Design in the Department of Clothing and Textiles at Pusan National University. Her research interests include sustainability in fashion, creativity, innovative design education, digital fashion technologies, and AI and neuroscience in fashion. Before taking up her current position, she worked as an assistant research professor at Seoul National University and completed postdoctoral training in DeLong’s lab at the University of Minnesota. She received a PhD in Aesthetics in Dress from Seoul National University and an MFA in Fashion and Textile Design from the Istituto Europeo di Design (IED) in Milan, Italy.
Journal
Clothing and Textiles Research Journal
Method of Research
Computational simulation/modeling
Subject of Research
Not applicable
Article Title
How the Field of Fashion can use ChatGPT to Predict Fashion Trends