There’s no simple solution to universities’ AI worries


I enjoyed the letter from Dr Craig Reeves (17 June) in which he argues that higher education institutions are consciously choosing not to address widespread cheating using generative AI so as not to sacrifice revenues from international students. He is right that international students are propping up the UK’s universities, of which more than two-fifths will be in deficit by the end of this academic year. But it is untrue that universities could simply spot AI cheating if they wanted to. Dr Reeves says that they should use AI detectors, but the studies that he quotes rebut this argument.

The last study he cites (Perkins et al, 2024) shows that AI detectors were accurate in fewer than 40% of cases, and that this fell to just 22% in “adversarial” cases – when the use of AI was deliberately obscured. In other words, AI detectors failed to spot that AI had been used roughly three-quarters of the time.

That is why it is wrong to say there is a simple solution to the generative AI problem. Some universities are pursuing academic misconduct cases with verve against students who use AI. But because AI leaves no trace, it is almost impossible to definitively show that a student used AI, unless they admit it.

In the meantime, institutions are switching to “secure” assessments, such as the in-person exams he celebrates. Others are designing assessments assuming students will use AI. No one is saying universities have got everything right. But we shouldn’t assume conspiracy when confusion is the simpler explanation.
Josh Freeman
Policy manager, Higher Education Policy Institute; author, Student Generative AI Survey 2025

The use of AI to “write” things in higher education has prompted significant research and discussion in institutions, and the accurate reporting of that research is obviously important. Craig Reeves mentions three papers in support of the Turnitin AI checker, claiming that universities opted out of this function without testing it because of fears over false-positive flagging of human-written texts as AI-generated. One of those papers says: “The researchers conclude that the available detection tools are neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI-generated text” (Weber-Wulff et al); and a second found Turnitin to be the second worst of the seven AI detectors tested for flagging AI-generated texts, with 84% undetected (Perkins et al). An AI detector can easily avoid false positives by not flagging any texts.

We need to think carefully about how we are going to assess work when, at a click, almost limitless quantities of superficially plausible text can be produced.
Prof Paul Johnson
University of Chester

In an otherwise well-thought-out critique of the apparent (and possibly convenient) blind spot higher education has for the use of AI, Craig Reeves appears to be encouraging a return to traditional examinations as a means of rooting out the issue.

While I sympathise (and believe strongly that something should be done), I hope that this return to older practices will not happen in a “one size fits all” manner. I have marked examinations for well over 30 years. During that period I have regularly been impressed by students’ understanding of a topic, yet I can remember enjoying reading only one examination essay. The others, no matter how good, read like paranoid streams of consciousness. A central transferable skill that degrees in the humanities offer is the ability to write well and cogently about any given topic after research. Examinations don’t – can’t – offer that.

I would call for a move towards more analytical assessment, where students are faced with new material that must be considered within a brief period. I think that the move away from traditional essays as the sole form of assessment might help to lessen (not, of course, halt) the impact of external input. From experience, this focus also helps students move towards applying new understanding rather than passively digesting ideas.
Prof Robert McColl Millar
Chair in linguistics and Scottish language, University of Aberdeen
