Artificial intelligence and academic integrity: how British universities are responding

Between 2023 and 2024, British universities recorded approximately 7,000 confirmed cases of academic misconduct directly linked to the use of artificial intelligence, according to a journalistic investigation published by The Guardian.
The rise in AI-related violations points to a new trend: more and more students are turning to generative tools to complete their academic assignments. In the 2023–2024 academic year, the average rate of such violations reached 5.1 cases per thousand students, roughly three times the previous year's figure. The forecast for the current period is already 7.5 per thousand.
Traditional plagiarism, the copying of text without attribution, is meanwhile becoming less common. Where there were once up to 19 such incidents per thousand students, the figure has fallen to 15.2 and continues to decline.
The Guardian collected data from 131 universities, more than a quarter of which still do not record AI misuse as a separate category of academic misconduct. This points to the absence of a systematic approach to the problem, and many cases likely remain outside the official statistics.

AI misuse is harder to detect than plagiarism

Educators admit that proving the use of artificial intelligence is far harder than catching a student cheating in the traditional way. Specialized tools can flag the probable use of generative models, but their output alone is not sufficient grounds for disciplinary sanctions, and universities are wary of accusing a student without clear evidence. Switching entirely to offline, in-person assessment would be too costly and complicated to organize.
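A back-of-the-envelope Bayes calculation shows why a detector flag alone is weak evidence. All the numbers below (detector accuracy, false positive rate, and the share of genuinely AI-written submissions) are hypothetical assumptions chosen purely for illustration, not figures from the Guardian investigation:

```python
# Hypothetical illustration: why a positive AI-detector flag is not proof.
# None of these numbers come from the Guardian investigation; they are
# assumptions chosen only to make the arithmetic concrete.
base_rate = 0.10           # assumed share of submissions actually written with AI
true_positive_rate = 0.90  # assumed chance the detector flags a real AI text
false_positive_rate = 0.05 # assumed chance it wrongly flags a human-written text

# Bayes' rule: P(AI | flagged) = P(flagged | AI) * P(AI) / P(flagged)
p_flagged = (true_positive_rate * base_rate
             + false_positive_rate * (1 - base_rate))
p_ai_given_flag = true_positive_rate * base_rate / p_flagged

print(f"Probability a flagged text was actually AI-written: {p_ai_given_flag:.0%}")
# With these assumptions the answer is about 67%: roughly one flagged
# student in three would be accused falsely, which is why a flag alone
# is not treated as sufficient evidence for sanctions.
```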
Some students openly admit that they use ChatGPT and similar services not to cheat but as an auxiliary tool for generating ideas, structuring an argument, or selecting sources. Others deliberately rework AI-generated text with paraphrasing tools or style editors to make it sound less “machine-like” and evade detection.

AI as a tool for inclusion

At the same time, artificial intelligence opens up new educational opportunities for students with special needs. Students with dyslexia, for example, use AI to better structure their thoughts and organize their written work logically.
The government has also joined the transformation of education: the UK is allocating funds to develop young people's digital skills and issuing guidance on the safe and effective use of AI in education. Researchers and experts insist that assessment needs to be reformed, with less emphasis on memorization and more on “non-automatable” competencies such as communication, creativity, and teamwork.

The American experience: how ChatGPT changed the university system in the US

A striking example of large-scale AI adoption in education is the California State University (CSU) system, which became the first in the world to integrate ChatGPT system-wide.
Results of implementation:
  • More than 460,000 students and over 63,000 faculty and staff have gained access to ChatGPT Edu, a version of ChatGPT adapted for educational use.
  • Students can personalize their coursework, build individual learning paths, and receive automated help with study materials.
  • Teachers use AI to design new courses, automate administrative processes, and improve communication with students (see the sketch after this list).
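As an illustration of the course-design use case, here is a minimal sketch using OpenAI's public Python SDK. It is not CSU's actual ChatGPT Edu integration, which is a managed product; the model name, prompt, and helper function are assumptions for demonstration only:

```python
# A generic sketch of course-material drafting with OpenAI's public Python SDK
# (pip install openai). This is NOT CSU's ChatGPT Edu integration; the model
# name and prompt here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def draft_syllabus_outline(course_title: str, weeks: int = 12) -> str:
    """Ask the model for a week-by-week outline a teacher can then edit."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are an assistant that drafts course outlines."},
            {"role": "user",
             "content": f"Draft a {weeks}-week syllabus outline for a course "
                        f"titled '{course_title}'."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_syllabus_outline("Introduction to Academic Integrity"))
```

The model output here is a starting draft, not a finished syllabus: the point of such tooling is to automate the boilerplate while the teacher keeps editorial control.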
This case study demonstrates that artificial intelligence is not only a challenge but also a powerful tool for the development of the modern education system.
Text author: Columbia Proof

12 July 2025
