Is using ChatGPT cheating? Reflections on student fraud in the age of generative AI


The use of generative artificial intelligence is now widespread among new generations of students, disrupting established norms and posing new challenges for the assessment of knowledge. This raises a number of dilemmas for universities. How can they rethink their exams to maintain the credibility of their degrees?


If truly disruptive innovations exist in education, the uses of generative artificial intelligence could be among them. Nothing less than a new relationship with knowledge is emerging before our very eyes. At university, it is probably the assessment of learning and the risk of cheating that raise the most questions.

Fraud is difficult to detect. By definition, cheating is hidden and hard to distinguish from legitimate uses of generative artificial intelligence. Moreover, no robust study in France currently allows it to be characterized or quantified, especially since plagiarism detection platforms have proven ineffective: they produce both false positives and false negatives, as shown by the work of William H. Walters and of Philippe Dessus and Daniel Seyve.

On the other hand, we know that students make massive use of generative artificial intelligence. A survey by the Digital Education Council, published in August 2024 and covering a panel of 16 countries including France, found that 86% of students use it, while a more recent study by the Higher Education Policy Institute, carried out in February 2025, estimates that 92% of British students use it, 88% of them for activities that count toward an assessment.

Faced with this twofold situation, universities seem rather helpless. The collapse of their capacity to maintain traditional assessment formats calls for a radical rethinking of their aims and methods in order to maintain the effectiveness of their programs and the credibility of their degrees.

What is the purpose of assessments?

In education, as elsewhere, assessment is usually defined as a value judgment made about a measurement and intended to inform decision-making. At university, this means offering students activities, whether specific or general, that will allow their knowledge and/or skills to be measured. These activities can take various forms, including written exams, oral presentations, research papers, or internship reports.

Evaluation is a process serving two very different purposes, potentially complementary but most often confused.

The first approach aims to support students by providing them with qualitative (analysis of progress and difficulties, advice on how to overcome them, etc.) and/or quantitative (grades) information about their learning. This information allows students to guide and adjust their efforts, while encouraging faculty to adapt their teaching to students’ needs. For these reasons, this form of assessment is called “formative” and plays a crucial role in student success.

The other purpose, most often described as “summative,” aims to assess students’ knowledge and/or skills at a given stage of their training, often at the end, in order to authorize further studies, award a certificate, or grant a diploma. The results of a summative assessment are most often communicated using quantitative methods (grades).

Whatever the purpose of an assessment, its quality depends first and foremost on its alignment with the intended learning objectives. It must accurately reflect what is expected in terms of knowledge and/or skills. Furthermore, it must be reliable, meaning it must measure what it is supposed to measure and do so with sufficient precision. Finally, it must be fair, taking into account difficulties encountered by students that may mask their learning, such as invisible disabilities like dyslexia.

What does it mean to cheat with generative artificial intelligence?

It is important to clearly distinguish fraud from all the other situations in which students delegate all or part of their assigned tasks to generative artificial intelligence. Beyond assessment, the help students expect from generative artificial intelligence raises a pedagogical issue of great importance, but one that does not undermine their relationship to academic rules.

Cheating is established if a student’s work is part of an assessment process when the use of generative artificial intelligence has been prohibited. Thus, solving a statistics problem in a final exam by surreptitiously using these tools, despite their being forbidden by the faculty, constitutes cheating. Using the same generative artificial intelligence to assist in solving the same problem, with the agreement and guidance of the instructor, does not.

In fact, cheating with generative artificial intelligence undermines the quality of assessment, particularly its reliability, since the assessment no longer measures what it is supposed to measure. Similarly, this fraud leads to unequal treatment in the assessment process. More generally, academic fraud refers to prohibited and/or deceptive student practices intended to gain an advantage in the evaluation of their performance.

Why do students cheat?

Cheating must be considered in relation to what assessment represents for students. A recent scientific publication highlights the importance students place on the assessment of their learning, but also their criticisms of assessments in their current forms: stressful, unfair, opaque, and lacking in feedback.

This evaluative pressure is exerted within a social and academic context where individualism, competition, and short-termism are so prevalent that the rise of a utilitarian view of university studies, and consequently the weakening of moral standards, should come as no surprise. The “fraud diamond” model (Wolfe and Hermanson, 2004) identifies four main factors that can explain (and predict) any cheating: rationalization of the activity, opportunity to cheat, motivation, and perceived capability.

Comparing this model to the issue of academic fraud is enlightening. The four factors are relevant in the university context:

  • Cheating lends itself to strong rationalization: it maximizes results while minimizing effort.
  • The opportunity to cheat is significant since the performance of generative artificial intelligences makes it possible to respond quite effectively to most assessment formats (answering course questions, analyzing a text, processing data, coding, etc.).
  • The strong motivation to cheat stems from the utilitarian value attributed to studies, which leads students to prioritize obtaining a diploma over the intrinsic interest of learning. Perhaps surprisingly, it also reflects a desire for rebalancing: students feel that, if they do not use generative artificial intelligence, they are at a disadvantage compared to those who do.
  • Perceived capability, finally, is high, since generative artificial intelligences are easy to use: even the clumsiest novice attempts produce interesting results.

What can universities do?

Since maintaining current assessment methods is not an option, and since it is impossible either to effectively prohibit the use of generative artificial intelligence or to detect it after the fact, universities will have to rethink their assessment doctrine.

Using oral assessment more frequently, increasing exam monitoring, revising exam charters, imposing harsher penalties for fraud, developing and disseminating codes of conduct, and designing tests that are more resistant to generative artificial intelligence are important avenues to explore.

However, these measures cannot solve the problem on their own, especially since they are very time-consuming, and time is a scarce and expensive commodity in universities. Another avenue is to help students find meaning in assessments, which in turn encourages them not to cheat.

To achieve this, one approach is to strictly separate assessments designed to support students in their learning journeys, with the analysis of their difficulties and ways to help them overcome them (formative assessments), from those designed to formally validate the stages of their training, with grades or validations of skills (summative assessments).

For summative assessments, this separation would safeguard their reliability. A drastic reduction in their number, while not eliminating all risk of cheating, would make it possible to concentrate more resources on mitigating that risk.

Thus freed from any summative stakes, all other assessments could be designed around their formative purpose, encouraging students to be sincere in their work so that they can be better supported.

It is true that this organization runs counter to the logic of continuous summative assessment implemented in recent years. There is therefore no miracle solution, but an important undertaking that must be begun, without forgetting to involve faculty and students, who are not only the most directly affected but also the only ones with an intimate understanding of the situation.

Author Bio: Jean-François Cerisier is a Professor of Information and Communication Sciences at the University of Poitiers
