Tuesday, April 07, 2026

 

Students prefer AI chatbots, until they know it is one


University of Cincinnati nursing professor studies AI chatbots in higher education advising



University of Cincinnati

Image: Dr. Joshua Lambert is an associate professor in the University of Cincinnati College of Nursing. (Credit: Photo provided by the University of Cincinnati.)





Do chatbots have a role in higher education?

It’s a question Joshua Lambert, an associate professor and biostatistician in the University of Cincinnati College of Nursing, is pondering. He turned to a group of his students to find out how helpful and satisfying they found a custom AI education chatbot.

Lambert piloted his custom chatbot by examining how a small group of Doctor of Nursing Practice (DNP) students evaluated answers to a set of questions from three different sources: a professor, a graduate assistant and a chatbot. 

The results of the study have been published in the Journal of Nursing Education. The pilot used a randomized, blinded, within-subjects comparison with survey-based evaluation.

Seven doctoral students in the study submitted statistical questions related to their capstone projects and received blinded responses from the professor, graduate assistant and chatbot. They rated each response on helpfulness, satisfaction and likelihood of use on a scale of one to five, with five being best. They then guessed which response came from the chatbot.

“Students first gave us their questions and then we gave them three responses back in a blinded and randomized fashion so students were unaware which response came from either the professor, graduate assistant or chatbot,” explains Lambert. “The students ranked each response in terms of helpfulness, overall satisfaction and guessed which of the three responses came from the chatbot.”
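The blinded, randomized presentation described above can be sketched in code. This is only an illustration of the general technique, not the study's actual materials; the data structure and the A/B/C labels are hypothetical:

```python
import random

def blind_responses(responses, rng):
    """Hide the source of each response behind a neutral label.

    responses: dict mapping source name -> response text (hypothetical format).
    Returns (blinded, key): blinded maps labels 'A'/'B'/'C' to response text
    in a randomized order; key records which source each label hides,
    for unblinding after the ratings are collected.
    """
    sources = list(responses)
    rng.shuffle(sources)  # randomize presentation order per student
    blinded = {label: responses[src] for label, src in zip("ABC", sources)}
    key = dict(zip("ABC", sources))
    return blinded, key

# Toy example: one student's question answered by all three sources.
rng = random.Random(0)
responses = {
    "professor": "Use a paired t-test.",
    "graduate assistant": "A paired t-test fits your design.",
    "chatbot": "Given the within-subjects design, a paired t-test is appropriate.",
}
blinded, key = blind_responses(responses, rng)
```

The student would then rate `blinded["A"]`, `blinded["B"]` and `blinded["C"]` on the one-to-five scales and guess which label is the chatbot, with `key` consulted only afterward.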

“The students rated the chatbot’s response the highest in terms of overall satisfaction and helpfulness,” adds Lambert.

Students preferred the chatbot’s responses, but Lambert found the data told a more nuanced story. When students were asked to guess which response came from the chatbot, they tended to pick the responses they had rated lowest in helpfulness and satisfaction.

“Students preferred the large language model (LLM) chatbot’s responses when blinded yet demonstrated a bias against it when the source was suspected,” explains Lambert. “This bias is likely rooted in a lack of trust, and trust may influence AI adoption by both students and professors.

“That the students rated the chatbot’s responses the highest yet consistently guessed the lowest-rated response to be the chatbot’s was very interesting and somewhat unexpected. Yet when we read the current academic literature on this topic, we found that user trust is an important component in almost all AI research right now,” says Lambert.

Other researchers in the study from the UC College of Nursing include Robyn Stamm, DNP, associate professor of clinical nursing; Shannon White, DNP, assistant professor in the doctor of nursing practice program; and Melanie Kroger-Jarvis, DNP, associate dean for graduate clinical learning programs. Bailey Martin, PhD, a postdoctoral research fellow at the University of Colorado Anschutz Medical Campus, is also a co-author of the study.

Researchers in the study acknowledge that while the small sample size is appropriate for a pilot study, it is insufficient to determine effectiveness. They suggest that larger studies, replicated across multiple sites and incorporating additional qualitative and quantitative data, are needed to thoroughly evaluate AI chatbot tools in nursing education and advising.

“For this reason, the descriptive results should be considered an initial ‘first step’ toward understanding how such a tool may assist in student learning and consultation,” the researchers wrote in their study.

Lambert says he considered using the chatbot because students, like others, are sometimes hesitant to ask another person, particularly a professor, questions that might seem silly or make them look less knowledgeable.

However, the chatbot won’t judge you based on your questions, he adds.

“Sometimes the topics we cover are challenging or intimidating,” says Lambert. “Educators want something that will lower the barrier so students can ask any questions they like.”

Funding for the study came from an internal grant from the University of Cincinnati College of Nursing to support conference attendance, participant reimbursement and software fees. The authors of the study report no potential conflicts of interest.

Read the story on the UC website.

Teachers tend to help the same kids repeatedly when using AI-powered tutoring tools




North Carolina State University





A new study finds teachers tend to provide assistance to similar subsets of students when using AI-powered educational tools, rather than touching base regularly with everyone in their classes. The findings could be used to develop tools that help teachers track their classroom interactions to ensure they are giving each student the attention they need.

“AI-powered tools are increasingly common in K-12 classrooms, but teachers still play a critical role,” says Qiao Jin, first author of the study and an assistant professor of computer science at North Carolina State University. “For this study, we wanted to examine how teachers who use AI-powered tools determine which students need help – and how those teachers actually distribute their time among their students.”

For this study, the researchers looked specifically at teachers using intelligent tutoring systems (ITS) to teach middle school math. An ITS is AI-powered software that responds to student activity, providing customized assistance through hints and feedback and tracking student performance.

For the first part of the study, researchers interviewed nine middle school math teachers who used ITS in their classrooms. The interviews helped researchers understand how the teachers determine which students require an intervention (a teacher visit) and what kind of help the teachers provide.

“While teachers said it would be ideal to spend one-on-one time with every student, they noted that this is not possible,” Jin says.

Instead, the teachers made decisions about who to help based on many factors. Two of the most significant factors were whether a student had required assistance in the past, and a student’s “engagement state.”

“ITS can notify teachers when students have been consistently entering incorrect answers or have not interacted with the system for an extended time,” Jin says. “Those are engagement states called ‘struggle’ and ‘idle,’ respectively. And either of those engagement states might lead a teacher to touch base with the relevant students.”

To see how these teacher behaviors are reflected in practice, the researchers drew on data covering 1,437,055 interactions between students and an ITS. The data covers 339 students enrolled in 14 middle and high school math classes across 10 U.S. schools during the 2022-23 school year. All of the data the researchers examined was available to the teachers via their ITS dashboards.

“We found that teachers are more likely to interact with students that they have interacted with before, even after considering who is engaged and disengaged in the classroom,” says Jin. “Basically, if a teacher has intervened to help a student in the past, they are more likely to intervene to help that student in the future.
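As a toy illustration of the pattern Jin describes (not the paper's actual statistical model, which also controls for engagement state), one can compare a teacher's visit rate for students helped in an earlier session against the rate for students never yet helped; the log format here is hypothetical:

```python
from collections import defaultdict

def repeat_intervention_rates(log):
    """log: list of (session_id, student_id, visited) tuples in time order
    (hypothetical format). Returns visit rates conditioned on whether the
    student was visited by the teacher in any earlier record."""
    helped_before = set()
    counts = defaultdict(lambda: [0, 0])  # prior-help flag -> [visits, total]
    for _, student, visited in log:
        prior = student in helped_before
        counts[prior][1] += 1
        if visited:
            counts[prior][0] += 1
            helped_before.add(student)
    return {flag: visits / total for flag, (visits, total) in counts.items()}

# Toy log: the teacher keeps returning to s1 and, once visited, to s3.
log = [
    (1, "s1", True), (1, "s2", False), (1, "s3", False),
    (2, "s1", True), (2, "s2", False), (2, "s3", True),
    (3, "s1", True), (3, "s2", False), (3, "s3", True),
]
rates = repeat_intervention_rates(log)
# rates[True]: visit rate for previously helped students;
# rates[False]: visit rate for students not yet helped.
```

A large gap between `rates[True]` and `rates[False]` in real logs would be the "sticky help" signature; the study's analysis additionally accounts for which students were struggling or idle at the time.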

“Teachers have their own definitions of fairness and their own understanding of student needs, based on their training and experiences,” says Jin. “We believe our findings can be used to develop software tools, such as dashboard features, that support teachers by giving them information they can use to make decisions about how they allocate their time in a way that is consistent with their definitions of fairness and student need.

“Teachers have a difficult job and developing better tools to help them do that job effectively is worthwhile.”

The paper, “Sticky Help, Bounded Effects: Session-by-Session Analytics of Teacher Interventions in K-12 Classrooms,” will be presented at the 16th Annual Learning Analytics & Knowledge Conference (LAK26) being held April 27-May 1 in Bergen, Norway. The paper was co-authored by YiChen Yu of NC State; and by Conrad Borchers, Ashish Gurung, Sean Jackson, Sameeksha Agarwal, Cancan Wang, Pragati Maheshwary and Vincent Aleven of Carnegie Mellon University.

The work was done with support from the Institute of Education Sciences of the U.S. Department of Education, under grant R305A240281.
