
The widespread and uncritical adoption of generative artificial intelligence (GAI), embodied in large language models such as ChatGPT, can degrade the university experience and undermine the mission of the university. While this technology offers interesting applications, emerging studies indicate that, when used without reflection or pedagogical integration, it tends to empty learning of content, devalue academic work, and reinforce inequalities and forms of control.
From innovation to enshittification
GAI has been introduced into teaching with the promise of personalizing learning and increasing efficiency, but it may end up homogenizing tasks and assessments and degrading the pedagogical experience. There are already indications that GAI produces sufficient, though not excellent, work, repetitive templates, and routine assessments that encourage adaptation to the format rather than reflection.
GAI is marketed to universities as a tool capable of personalizing assessment, saving time by analyzing thousands of texts, and generating immediate feedback. In practice, however, it tends to favor standardized structures (introduction, three arguments, conclusion; short sentences; neutral vocabulary), while penalizing more complex, creative, or risky styles, because its models have been trained on conventional texts.
As a result, students learn what kind of writing the system prefers and adjust their work to maximize their grades, even if it means diminishing the content and avoiding original perspectives. In other words, a technology presented as personalized ends up homogenizing tasks, standardizing assessment, and encouraging writing for the machine.
The concept of “enshittification,” coined by technology journalist Cory Doctorow, helps to explain this: at first, the technology seems to provide value, but little by little its operation becomes geared toward metrics, data extraction, and dependence on providers, so that it serves corporate interests more than educational ones.
Thus, digital classrooms become spaces saturated with mediocre automation, where the main focus is on generating effective prompts and traceable content.
Cognitive damage
Intensive use of GAI to generate assignments can trigger a process of cognitive offloading and cognitive debt. Some studies associate increased GAI use with impaired critical thinking. Its continued use for essay writing could reduce cognitive engagement and lead to poorer performance at the neuronal level, even if the task is perceived as smoother and easier.
Thus emerges the concept of the “chatversity,” a contraction of “chatbot” and “university”: the goal shifts from understanding the world to meeting deadlines and quickly producing competent texts. This dynamic can erode tolerance for ambiguity and sustained intellectual effort, pillars of an emancipatory education, and weaken habits of verification, in-depth reading, and argumentative discussion.
Domesticated creativity
GAI works by recombining historical data that reflects an imperfect society, and thus tends to reinforce dominant patterns, biases, and established norms rather than promoting disruptions or minority perspectives. The result is polished but conformist texts, resistant to imagining radical solutions or giving centrality to feminist, decolonial, or Global South epistemologies.
Relatedly, artificial intelligence appears to increase perceived creativity while limiting the diversity of the stories produced. Furthermore, GAI recommends fewer works by women and reproduces gender hierarchies in the visibility of authorship.
Thus, AI-based educational systems incorporate normative assumptions that tend to privilege hegemonic profiles and knowledge, while their uncritical use in teaching can reinforce exclusions. UNESCO has likewise warned that AI in education tends to render the work of female authors invisible and to reinforce stereotypes, fostering a less pluralistic ecology of knowledge.
When faculty and students use it to select readings and generate case studies, data biases creep into the selection of references and voices. This risks further invisibility for female authors, critical disciplines, and marginalized communities, normalizing a more homogeneous university.
Ghost authorship
Writing papers, solving problems, coding, and drafting comments with this technology also calls into question what it means to be an author. If a substantial part of the work is done by the machine, university degrees can become mere credentials, increasingly detached from the intellectual effort of the person who earns them.
To curb this, many institutions are resorting to unreliable and discriminatory AI detectors and surveillance systems that can generate false positives and affect students who write in a second language or in non-standard styles. This can erode trust between faculty and students, rather than strengthen accountability, integrity, and support.
Data extraction
Another concern is political and economic: some North American universities are becoming providers of data, legitimacy, and captive users for large AI companies, often through opaque contracts. Behaviors, course content, and assignments are used as raw material to train models, under clauses that permit indiscriminate use or make meaningful consent difficult to obtain, thereby weakening academic autonomy.
Furthermore, the maintenance of these systems relies on precarious labor chains in the Global South (data labeling, moderation of traumatic content) and on environmental damage. Meanwhile, in the Global North, staff cuts and program closures are justified in the name of modernization. In this scenario, the university risks being reconfigured as a distribution and training hub for corporate infrastructures, diverting resources from teaching and stable employment toward funding private platforms.
Erosion of the democratic mission
On the other hand, chatbots and AI study assistants can also generate emotional dependence, even leading to extreme situations of self-harm. For vulnerable students, simulated empathy, inappropriate responses, and a lack of reliable risk detection can exacerbate loneliness and delay access to professional help.
On the epistemological and democratic level, the expansion of synthetic content, a post-literate culture (skimming, with skills focused on producing summaries and prompts), and dependence on closed infrastructures can undermine the university’s capacity to serve as a space for critical reflection.
Bear in mind that agreements with AI providers are sometimes made without the participation of students, unions, and faculty, while precisely the areas best equipped to question these processes, such as gender studies, philosophy, or critical media studies, are being cut.
What can be done from the university?
The aim is not to banish GAI from the university, but to subordinate it to a project focused on critical thinking, creativity, and equality. This implies at least three things:
- Strengthen critical digital literacy and work on disinformation, bias, the limits of AI, and ways of critically responding to its outputs at all educational levels.
- Develop governance frameworks for AI systems based on fundamental rights, transparency, participation, and accountability, especially in high-risk uses such as assessment or psychological support.
- Protect and fund the disciplines and pedagogies that underpin deep reading, creativity, political imagination, and alternative perspectives. This means resisting the shift toward a university understood as a factory of platform-optimized credentials.
There is evidence that the misuse of AI in universities can cause harm that goes beyond plagiarism: AI can reshape what it means to learn, teach, and research. This forces institutions to decide whether they will succumb to the corporate logic of enshittification or transform this technology through a commitment to fairness.
Author Bio: Miren Gutierrez is a Researcher at the University of Deusto