ChatGPT at university: why supervision is better than prohibition or laissez-faire

Since the arrival of ChatGPT, schools and universities have faced a major challenge: how can they integrate these tools without compromising their fundamental educational principles?

Generative artificial intelligence (AI) offers a multitude of promising prospects for higher education institutions: pedagogical innovation, personalized learning, time savings and the stimulation of creativity. However, it also raises serious ethical concerns: the risk of cheating, weakened critical thinking, unequal access and even a loss of the sense of effort. In this context, should we ban these technologies, allow them freely or regulate their use? And if so, how?

Our research highlights a paradox: when students can freely use ChatGPT and other generative AI tools, the institution is perceived as more innovative, but its practices are judged less ethical. Conversely, a total ban is reassuring on ethics, but at the cost of an old-fashioned image. Faced with this dilemma, institutions need to find the winning combination.

ChatGPT in the classroom: finding the balance between innovation and ethics

Generative AI tools (ChatGPT, Midjourney, DALL-E, etc.) can produce text, images and even code in seconds. In 2023, a study estimated that these tools increased professionals’ productivity by 40% and improved the quality of their work by 18%.

The widespread use of generative AI, including in universities and sometimes without teachers’ knowledge, is not without its problems. Several reports warn of key ethical issues related to this use, such as accountability, human oversight, transparency, and inclusivity.

To better understand how generative AI usage policies influence an institution’s image, we conducted two experimental studies with over 500 students. In each, participants were randomly assigned to one of three groups, and each group read a short description of a school with a different policy on generative AI. In one case, the school banned its use completely. In another, it allowed it without any rules. In the last, it allowed AI but with clear rules, such as requiring students to indicate when it had been used or limiting its use to certain tasks.

Participants were then asked to rate the school’s image: how innovative, ethical and trustworthy it seemed to them, and whether they would want to support it or study there.

The results are clear, regardless of the context (e.g., a homework assignment, a class project, a dissertation). Schools that completely ban AI are considered fairly ethical, but they are significantly penalized in terms of innovation. Those that allow unsupervised AI appear fairly innovative, but suffer from an unethical image, generating the least support from students. Schools that allow the supervised and regulated use of AI manage to achieve a double advantage, being perceived as both the most innovative and the most ethical.

In other words, vagueness or a lack of framework undermines credibility. Conversely, a clear policy, authorizing the use of generative AI with defined rules, strengthens student confidence and buy-in.

The AI question: a strong signal for students

Parents, students, teachers and recruiters want to know whether a school truly prepares students for the challenges of the 21st century. Embracing AI without a framework certainly projects a tech-savvy image, but also an irresponsible one. Rejecting AI outright may be reassuring at first, but ultimately amounts to ignoring changes in society.

Only a structured strategy can combine innovation with educational values, thereby meeting everyone’s expectations and needs. For students, AI must remain a useful tool, but one whose use is supervised to avoid any risk of fraud. For their part, teachers need clear guidelines for integrating these technologies without compromising academic values.

Parents expect schools to combine rigor and modernity. As for policymakers, they have every interest in defining a coherent regulatory framework, rather than leaving each institution to devise its own response.

Set out clear rules

Our study shows that regulating the use of AI in universities is essential for achieving innovation while maintaining ethics. Without clear rules, the use of AI can undermine academic integrity and devalue degrees. It is therefore crucial that institutions take this issue seriously, and that public authorities support them with specific and consistent recommendations.

Several pioneering institutions have already implemented simple rules to govern the use of generative AI. These include transparency regarding the tools used and the prompts given to the AI, explicit disclosure of any assistance received with assignments, and clear limits on permitted tasks (such as paraphrasing or grammar help, but not full writing). Some schools are also integrating AI ethics into their curricula from the first year and encouraging the pedagogical use of AI in specific exercises.

Some have formalized these rules in charters, as evidenced by the European Commission’s report on the responsible use of AI in education. In a world where technologies evolve faster than institutions, only schools capable of combining rigor and adaptability will gain the trust of their communities.

Integrating AI should not be seen as a passing fad, but as a concrete response to a major educational challenge: preserving the ethical values of teaching while remaining open to innovation. Institutions capable of meeting this challenge will be best placed to train critical thinkers ready to face the challenges of tomorrow.

Author Bios: Karine Revet is Professor of Strategy at Burgundy School of Business; Malak El Halabi is a doctor of marketing (consumer behavior) at Rennes School of Business; Sumayya Shaikh is a PhD student in marketing (consumer behavior) at Grenoble School of Management (GEM); and Xingming Yang is Assistant Professor of Marketing at Neoma Business School.
