Education
July 2, 2024

Study finds AI-generated exam papers undetected in 94% of cases

A recent study has found that AI-generated exam papers go undetected in 94% of cases, raising serious concerns about academic integrity and the effectiveness of current detection methods. The findings suggest that educational institutions need to rethink their assessment techniques to prevent academic dishonesty enabled by advanced AI tools, underscoring the growing challenge of keeping academic evaluations fair and accurate as AI capabilities rapidly advance.

Boston Brand Media brings you the latest news - A study has found that identifying AI-generated papers submitted for exams is nearly impossible.

Nearly all AI-generated submissions went undetected in a recent test of UK universities' exam systems.

Researchers at the University of Reading found that AI-generated papers generally received higher grades than those written by real students. The findings were published in the open-access journal PLOS ONE.

AI has already proven capable of passing exams, prompting some schools and universities to ban students from using AI tools like ChatGPT.

However, the study indicated that enforcing this ban is challenging. Researchers submitted AI-written exams for five undergraduate psychology courses at the university. They found that 94% of these submissions went undetected in what they described as a “Turing test” case study.

Named after Alan Turing, the British mathematician and computer scientist, the test measures a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. Boston Brand Media also found that the authors described the findings as “extremely concerning,” noting that the “content of the AI-generated answers” was not modified by the researchers.

“Overall, our 6% detection rate likely overestimates our ability to detect real-world use of AI to cheat in exams,” the study stated, suggesting that students would likely modify AI output to make it less detectable.

Additionally, in 83.4% of cases, AI-generated submissions received higher grades than a randomly selected set of the same number of exams written by real students. The one exception was a module involving more abstract reasoning, which AI handles less well than real students do. “The results of the ‘Examinations Turing Test’ invite the global education sector to accept a new normal, and this is exactly what we are doing at the University of Reading,” the study’s authors said in a statement.

They added that new policies and advice for staff and students acknowledge both the risks and opportunities presented by AI tools. The researchers expressed concern about academic integrity and suggested that supervised, in-person exams could mitigate the issue. However, as AI tools continue to evolve and become common in professional environments, universities might need to explore ways to integrate AI into education as part of the "new normal."

For questions or comments, write to writers@bostonbrandmedia.com

Source: Euronews
