
The rise of generative artificial intelligence has weakened the traditional university assessment model of “independent work + practical activities + final exam”. Today, a student can produce a well-structured, academic-sounding report in minutes.
Banning AI or turning assessment into a cheating hunt doesn’t fix the underlying problem: what’s at stake is not the technology, but how we demonstrate that there is real learning, authorship and thinking, beyond a polished final product.
A pedagogical problem
Assessment based on independent work, practice, and cumulative exams was already weak before AI: it favors the reproduction of content, penalizes mistakes as “failures” instead of integrating them into learning, and leaves little trace of the learning process. AI doesn’t create this problem; it makes it visible.
The assessment that higher education needs today is not a specific technique, but a shift in logic and paradigm. Instead of asking, “How do we detect AI?”, we should be asking, “What evidence demonstrates that the student has learned and can transfer what they have learned?” Framing the problem in this way moves us from control to pedagogical quality and forces us to re-examine what we understand by learning, teaching, and assessment.
Evaluate processes, not results
How can we assess students fairly and in a way that is more in line with the times? How can we ensure that they acquire the necessary skills and competencies? We can implement different types of tasks that allow us to assess reasoning and ensure the authorship of responses, such as oral presentations, micro-tasks with immediate feedback, academic interviews, or guided debates.
This method of assessment can also be applied to work done at home, collecting evidence of the process in different phases – drafts, revisions, explanations, reflections – which allow us to see the evolution of learning.
In all these cases, artificial intelligence is integrated ethically and transparently, asking the student to explain how they used it, what the tool contributed and what they contributed, so that their critical thinking, their ability to detect errors and their judgment when making decisions can be evaluated.
Assessment microtasks
“Microtasks” are very short exercises in which the student explains what they would do in a specific situation and why, making their reasoning and authorship visible.
For example, when faced with a simple problem – such as choosing the best strategy to resolve a conflict in a team – we simply ask them to explain the steps they would follow, how they would verify the information (including that generated by AI), and why they chose a particular solution. The evaluation thus arises from their thought process, not from an exam.
Focus on the real world
The tasks assigned, both for the classroom and for homework, should be similar to those they will face in their professional lives: tackling open problems, real or plausible cases, exploring multiple solutions, and assuming ethical, social, or professional restrictions—that is, limits that condition how they can act.
For example, if they have to propose a solution to improve a public service, a restriction could be respecting data privacy, complying with professional regulations, adhering to a specific budget, or ensuring that the proposal does not discriminate against any group; these limits force them to make responsible decisions, just like in real life.
Dialogic and explanatory evaluation
Explaining and discussing are part of the learning process. That’s why teaching strategies such as oral presentations, academic interviews, guided debates, and decision justification are so important in this approach.
Oral communication allows students to verify what they have learned and understood, reduce inequalities, demonstrate their own thinking, and reinforce intellectual responsibility. It encourages students to argue, clarify their doubts, defend their ideas, and show how they arrived at them.
Evaluate metacognition
Students can work on metacognition when they answer questions such as: What have you learned? What was the most difficult thing for you? What mistakes did you make? What would you do differently? What role did AI play in your learning process?
These types of questions reinforce autonomy, strengthen motivation, and connect assessment and learning.
Evaluate your own use of AI
This competency-based approach does not prohibit artificial intelligence, but rather integrates it critically. This is achieved by evaluating the student’s judgment when using it. When submitting an assignment, for example, we can ask them to include a brief statement explaining how they used the tool within the work itself—at the end of the document, in an appendix, or right after the activity.
In that section, which could be titled “Use of artificial intelligence in my process”, the student would explain how and why they used it, what tools they used, what parts of the work were their own, what decisions they made, what limits they set, and how they verified the information generated.
Thus, the evaluation remains focused on the student’s judgment, reflection, and responsibility.
Formative and fair assessment
Finally, assessment should be formative: that is, it should guide the student. For corrections or feedback to be meaningful, they must help the student improve.
In a traditional model, assessment categorizes and acts as a filter: the student completes a task, receives a grade, and the process ends there. The grade functions as a label that determines whether the student “met the standard” or “did not,” without offering useful information for improvement or space to review mistakes; the implicit message is that making mistakes has negative consequences, but not learning opportunities.
Formative assessment transforms that very moment into a guiding process: the teacher analyzes the work, highlights strengths, clearly identifies areas for improvement, and explains how to address them. This allows the student to understand what they have learned and what steps they can take to progress and improve, making assessment a form of ongoing support. Thus, instead of closing off paths, assessment becomes a resource that opens possibilities, strengthens understanding, and helps students learn more effectively.
Visible learning
In short, assessment in the age of AI compels us to investigate how students think, decide, make mistakes, learn, and act with sound judgment. It means shifting from certifying products (exams, assignments) to making learning visible, from measuring responses to understanding processes, from penalizing errors to recognizing them as evidence of thought.
Because when the shortcut is perfect, what is truly transformative is not prohibiting it, but daring to change the path.
Author Bio: Ángeles Caballero García is a university professor of Research and Diagnostic Methods in Education at the Faculty of Education, Camilo José Cela University.