Tuesday, August 01, 2023

Researchers find little evidence of cheating with online, unsupervised exams


Peer-Reviewed Publication

IOWA STATE UNIVERSITY

Students work on laptops above “Gene Pool,” a tile mosaic by Andrew Leicester inside the Molecular Biology Building at Iowa State University. 

Credit: Photo by Christopher Gannon/Iowa State University.

AMES, IA — When Iowa State University switched from in-person to remote learning halfway through the spring semester of 2020, psychology professor Jason Chan was worried. Would unsupervised, online exams unleash rampant cheating?

His worry turned to surprise as the test results rolled in. Individual student scores were slightly higher than, but consistent with, their results from in-person, proctored exams. Students receiving B's before the COVID-19 lockdown were still pulling in B's when the tests were online and unsupervised, and this pattern held true for students up and down the grading scale.

“The fact that the student rankings stayed mostly the same regardless of whether they were taking in-person or online exams indicated that cheating was either not prevalent or that it was ineffective at significantly boosting scores,” says Chan.

To find out whether this pattern held more broadly, Chan and Dahwi Ahn, a Ph.D. candidate in psychology, analyzed test score data from nearly 2,000 students across 18 classes during the spring 2020 semester. Their sample ranged from large, high-enrollment lecture courses, such as introductory statistics, to advanced courses in engineering and veterinary medicine.

Across different academic disciplines, class sizes, course levels and test styles (i.e., predominantly multiple choice or short answer), the researchers found the same results. Unsupervised, online exams produced scores very similar to in-person, proctored exams, indicating they can provide a valid and reliable assessment of student learning.

The research findings were recently published in Proceedings of the National Academy of Sciences.

“Before conducting this research, I had doubts about online and unproctored exams, and I was quite hesitant to use them if there was an option to have them in-person. But after seeing the data, I feel more confident and hope other instructors will, as well,” says Ahn.

Both researchers say they've continued to give exams online, even for in-person classes. Chan says the format provides more flexibility for students who have part-time jobs or travel for sports and other extracurriculars. It also expands options for teaching remote classes. Ahn taught her first online course over the summer.

Why might cheating have had a minimal effect on test scores?

The researchers say students who are more likely to cheat may be underperforming in the class and anxious about failing. Perhaps they've skipped lectures, fallen behind on studying or feel uncomfortable asking for help. Even with the option of searching Google during an unmonitored exam, students may struggle to find the correct answer if they don't understand the content. In their paper, the researchers point to evidence from previous studies comparing scores on open-book and closed-book exams.

Another factor that may deter cheating is academic integrity or a sense of fairness, something many students value, says Chan. Those who have studied hard and take pride in their grades may be more inclined to protect their exam answers from students they view as freeloaders.

Still, the researchers say instructors should be aware of potential weak spots with unsupervised, online exams. For example, some platforms have the option of showing students the correct answer immediately after they select a multiple-choice option. This makes it much easier for students to share answers in a group text.

To counter this and other forms of cheating, instructors can:

  • Wait to release exam answers until the test window closes.
  • Use larger, randomized question banks.
  • Add more answer options to multiple-choice questions and make the correct choice less obvious.
  • Adjust grade cutoffs.

COVID-19 and ChatGPT

Chan and Ahn say the spring 2020 semester provided a unique opportunity to study the validity of online exams for evaluating students. However, the study had some limitations. For example, it wasn't clear what effect stress and other COVID-19-related disruptions had on students, faculty and teaching assistants. Perhaps instructors were more lenient with grading or gave students longer windows of time to complete exams.

The researchers said another limitation was not knowing whether the 18 classes in the sample normally get easier or harder as the semester progresses. In an ideal experiment, half of the students would have taken online exams for the first half of the semester and in-person exams for the second half.

They attempted to account for these two concerns by examining older test score data from a subset of the 18 classes during semesters when those courses were fully in-person. The researchers found that the distribution of grades in each class was consistent with the spring 2020 semester and concluded that the material covered in the first and second halves of the semester did not differ in difficulty.

At the time of data collection for this study, ChatGPT wasn't available to students. But the researchers acknowledge that AI writing tools are a game-changer in education and could make it much harder for instructors to evaluate their students. How instructors should approach online exams in the era of ChatGPT is something Ahn intends to research.

The study was supported by a National Science Foundation Science of Learning and Augmented Intelligence Grant.
