Students prefer AI chatbots, until they know it is one
University of Cincinnati nursing professor studies AI chatbots in higher education advising
University of Cincinnati
image:
Dr. Joshua Lambert is an associate professor in the University of Cincinnati College of Nursing.
Credit: Photo provided by the University of Cincinnati.
Do chatbots have a role in higher education?
It’s a question Joshua Lambert, an associate professor and biostatistician in the University of Cincinnati College of Nursing, is pondering. He turned to a group of his students to gauge how helpful and satisfying they found a custom AI education chatbot.
Lambert piloted his custom chatbot by examining how a small group of Doctor of Nursing Practice (DNP) students evaluated answers to a set of questions from three different sources: a professor, a graduate assistant and a chatbot.
The results of the study have been published in the Journal of Nursing Education. This pilot project used a randomized, blinded, within-subjects comparison study with survey-based evaluation.
Seven doctoral students in the study submitted statistical questions related to their capstone projects and received blinded responses from the professor, graduate assistant and chatbot. They rated each response on helpfulness, satisfaction and likelihood of use on a scale of one to five, with five being best. They then guessed which response came from the chatbot.
“Students first gave us their questions and then we gave them three responses back in a blinded and randomized fashion so students were unaware which response came from either the professor, graduate assistant or chatbot,” explains Lambert. “The students ranked each response in terms of helpfulness, overall satisfaction and guessed which of the three responses came from the chatbot.”
“The students rated the chatbot’s response the highest in terms of overall satisfaction and helpfulness,” adds Lambert.
The students preferred the chatbot’s responses, but Lambert found the data told a more nuanced story. When students were asked to guess which response came from the chatbot, they tended to pick the responses they had rated lowest in helpfulness and satisfaction.
“Students preferred the large language model (LLM) chatbot’s responses when blinded yet demonstrated a bias against it when the source was suspected,” explains Lambert. “This bias is likely rooted in a lack of trust, and trust may influence AI adoption by both students and professors.
“That the students rated the chatbot’s responses the highest, yet consistently guessed the lowest rated response was the chatbot’s, was very interesting and somewhat unexpected. Yet when we read the current academic literature on this topic, we found that user trust is an important component in almost all AI research right now,” says Lambert.
Other researchers in the study from the UC College of Nursing include Robyn Stamm, DNP, associate professor of clinical nursing; Shannon White, DNP, assistant professor in the doctor of nursing practice program; and Melanie Kroger-Jarvis, DNP, associate dean for graduate clinical learning programs. Bailey Martin, PhD, a postdoctoral research fellow at the University of Colorado Anschutz Medical Campus, is also a co-author of the study.
Researchers in the study acknowledge that while the small sample size is appropriate for a pilot study, it is insufficient for drawing conclusions about effectiveness. They suggest that larger studies, replicated across multiple sites with additional qualitative and quantitative data, are needed to thoroughly evaluate AI chatbot tools in nursing education and advising.
“For this reason, the descriptive results should be considered an initial ‘first step’ toward understanding how such a tool may assist in student learning and consultation,” the researchers wrote in their study.
Lambert says he considered using the chatbot because students, like others, are sometimes hesitant to ask another person, particularly a professor, questions that might seem silly or make them appear less knowledgeable.
A chatbot, however, won’t judge students for their questions, he adds.
“Sometimes the topics we cover are challenging or intimidating,” says Lambert. “Educators want something that will lower the barrier so students can ask any questions they like.”
Funding for the study came from an internal grant from the University of Cincinnati College of Nursing to support conference attendance, participant reimbursement and software fees. The authors of the study report no potential conflicts of interest.
Journal
Journal of Nursing Education
Method of Research
Survey
Subject of Research
People
Article Title
Blinded But Biased: Students Prefer Chatbot Until They Know It Is One
Article Publication Date
1-Apr-2026
Teachers tend to help the same kids repeatedly when using AI-powered tutoring tools
A new study finds teachers tend to provide assistance to similar subsets of students when using AI-powered educational tools, rather than touching base regularly with everyone in their classes. The findings could be used to develop tools that help teachers track their classroom interactions to ensure they are giving each student the attention they need.
“AI-powered tools are increasingly common in K-12 classrooms, but teachers still play a critical role,” says Qiao Jin, first author of the study and an assistant professor of computer science at North Carolina State University. “For this study, we wanted to examine how teachers who use AI-powered tools determine which students need help – and how those teachers actually distribute their time among their students.”
For this study, the researchers looked specifically at teachers using intelligent tutoring systems (ITS) to teach middle-school math. An ITS is AI-powered software that responds to student activity by providing customized assistance through hints and feedback, and by tracking student performance.
For the first part of the study, researchers interviewed nine middle school math teachers who used ITS in their classrooms. The interviews helped researchers understand how the teachers determine which students require an intervention (a teacher visit) and what kind of help the teachers provide.
“While teachers said it would be ideal to spend one-on-one time with every student, they noted that this is not possible,” Jin says.
Instead, the teachers made decisions about who to help based on many factors. Two of the most significant factors were whether a student had required assistance in the past, and a student’s “engagement state.”
“ITS can notify teachers when students have been consistently entering incorrect answers or have not interacted with the system for an extended time,” Jin says. “Those are engagement states called ‘struggle’ and ‘idle,’ respectively. And either of those engagement states might lead a teacher to touch base with the relevant students.”
To see how these teacher behaviors are reflected in practice, the researchers drew on data covering 1,437,055 interactions between students and an ITS. The data covers 339 students enrolled in 14 middle and high school math classes across 10 U.S. schools during the 2022-23 school year. All of the data the researchers looked at is data that the relevant teachers had access to via their ITS dashboards.
“We found that teachers are more likely to interact with students that they have interacted with before, even after considering who is engaged and disengaged in the classroom,” says Jin. “Basically, if a teacher has intervened to help a student in the past, they are more likely to intervene to help that student in the future.
“Teachers have their own definitions of fairness and their own understanding of student needs, based on their training and experiences,” says Jin. “We believe our findings can be used to develop software tools, such as dashboard features, that support teachers by giving them information they can use to make decisions about how they allocate their time in a way that is consistent with their definitions of fairness and student need.
“Teachers have a difficult job and developing better tools to help them do that job effectively is worthwhile.”
The paper, “Sticky Help, Bounded Effects: Session-by-Session Analytics of Teacher Interventions in K-12 Classrooms,” will be presented at the 16th Annual Learning Analytics & Knowledge Conference (LAK26) being held April 27-May 1 in Bergen, Norway. The paper was co-authored by YiChen Yu of NC State; and by Conrad Borchers, Ashish Gurung, Sean Jackson, Sameeksha Agarwal, Cancan Wang, Pragati Maheshwary and Vincent Aleven of Carnegie Mellon University.
The work was done with support from the Institute of Education Sciences of the U.S. Department of Education, under grant R305A240281.
Method of Research
Observational study
Subject of Research
People
Article Title
Sticky Help, Bounded Effects: Session-by-Session Analytics of Teacher Interventions in K-12 Classrooms
JMIR Publications examines AI-driven discovery bottleneck: scientific evidence trapped in a predigital system
JMIR Publications
image:
Boon-How Chew, MD, MMed, PhD., JMIR Correspondent
Credit: Boon-How Chew, MD, MMed, PhD.
(Toronto, April 6, 2026) — JMIR Publications today announced the release of a timely new article in its News and Perspectives section, showcasing the urgent need to modernize the scientific record. The article, “Our AI-Powered Discoveries Are Trapped in a Predigital System,” details how shifting from a static, paper-based model to a data-native ecosystem can bridge the widening gap between rapid AI innovation and slow formal validation.
Authored by Dr. Boon-How Chew, JMIR Correspondent, the report highlights the growing chasm between the speed of evidence generation and the glacial pace of traditional scholarly communication. The article argues that while AI is accelerating diagnostics and drug discovery, a publishing infrastructure rooted in the 17th century has become a direct threat to the promise of data-driven medicine.
The Crisis of Trust and Speed in Global Research
Traditional academic publishing remains a significant bottleneck for digital health innovations, governed by an economic and structural model that creates profound access and equity issues. The report also emphasizes that while an ecosystem of AI assistants such as Paperpal, Elicit, and ResearchRabbit has emerged, these tools only patch symptoms: they help authors write papers faster but do not change the fact that the final output remains non-interactive and largely unverifiable.
The analysis highlights several key findings:
The High Cost of Access: Top-tier research universities report annual subscription expenditures exceeding $10 to $15 million, while author-facing processing charges can range from $5,000 to over $11,000 per article.
The Reproducibility Crisis: The foundation of scientific evidence faces ongoing threats, with estimates suggesting that 50% to 90% of published research findings are not reproducible across various disciplines.
The Static Article Constraint: By focusing on opaque narrative summaries that decouple claims from underlying data, the current system makes verification nearly impossible for complex AI models.
"The black box of a clinical AI model cannot be built on the black box of a nonreproducible study," says Dr. Chew. "We need a new operating system for science that is dynamic, transparent, and data-driven."
Transitioning to a New Operating System for Science
While a chaotic ecosystem of AI tools currently offers fragmented help by optimizing the creation of traditional manuscripts, the article argues that the future unit of publication must move toward enriched dynamic research objects. In this new model, data, methods, analysis logs, and peer validation are structurally and permanently linked to ensure rigorous reporting and transparency by design.
"The technology is almost here," adds Dr. Chew. "What is required now is the collective will to build, adopt, and apply a publishing model that is worthy of the future."
Please cite as:
Chew B
Our AI-Powered Discoveries Are Trapped in a Predigital System
J Med Internet Res 2026;28:e96018
URL: https://www.jmir.org/2026/1/e96018
DOI: 10.2196/96018
About JMIR Publications News and Perspectives
JMIR Publications is a leading open access publisher of digital health research. The News and Perspectives section is the newest addition to its portfolio, established to bring the rigor and integrity of academic publishing to scientific journalism. The section features well-researched, expert-driven content from the Scientific News Editor, Kayleigh-Ann Clegg, PhD, and a network of specialist JMIR Publications Correspondents to keep the digital health community informed, inspired, and ahead of the curve.
About JMIR Publications
JMIR Publications is a leading open access publisher of digital health research and a champion of open science. With a focus on author advocacy and research amplification, JMIR Publications partners with researchers to advance their careers and maximize the impact of their work. As a technology organization with publishing at its core, we provide innovative tools and resources that go beyond traditional publishing, supporting researchers at every step of the dissemination process. Our portfolio features a range of peer-reviewed journals, including the renowned Journal of Medical Internet Research.
To find out more about JMIR Publications, visit jmirpublications.com or connect with them on Bluesky, X, LinkedIn, YouTube, Facebook, and Instagram.
Media Contact:
Dennis O’Brien, Vice President, Communications & Partnerships
JMIR Publications
communications@jmir.org
+1 416-583-2040
The content of this communication is licensed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, published by JMIR Publications, is properly cited.
Journal
Journal of Medical Internet Research
Method of Research
Commentary/editorial
Subject of Research
People
Article Title
Our AI-Powered Discoveries Are Trapped in a Predigital System