How to rethink assessment in higher education


International university rankings are now a fact of globalization, and they cast institutions in a different light than their historical reputations or the evaluation reports produced about them. They rest on purely quantitative comparative performance measures that must be interpreted within their limits: What data are used? What are the indicators? What are the calculation algorithms?

For example, the Shanghai ranking covers only research, obscuring the other fundamental missions of universities: transmitting knowledge, awarding degrees, and preparing graduates for professional life.

However important these rankings are for our international influence, they are often out of step with the information needs of citizens, who, with limited budgets, look first for the best options in their own region for educating their children. Concretely, they care more about a particular degree programme, an IUT (university institute of technology), an engineering or master's course, than about the international recognition of the university itself.

We also know very well that in France, behind the word “university”, it is in fact whole ecosystems that are being ranked, very often with an undeniable contribution from research organizations.

Complex environment

Merely mentioning “evaluation” in higher education and research quickly stirs tensions rooted in our history and our practices, on subjects such as student guidance, selective programmes, tuition fees, or academic freedom. In particular, institutional evaluation, carried out by a committee of peers, should be distinguished from control, inspection, or audit.

The current debates on the multiyear research programming law clearly illustrate the question that interests us in this article: how institutional evaluation is organized, and what it is for.

There are many points for discussion: What place does evaluation hold, and what is it for? Is it accepted by the communities being assessed? What is its impact? What practices could better communicate its results and make them fully legible to all stakeholders, above all prospective students?

Peer review of a higher education and public research entity (university, school, laboratory, research organization, etc.) is organized around three actors:

  • the entity assessed;
  • the expert committee (peers);
  • the organizer: Hcéres (the High Council for the Evaluation of Research and Higher Education), the Cti (the Commission for Engineering Qualifications), or other evaluation agencies, including foreign ones.

The context in which an evaluation is organized is complex and combines several parameters. The relationship between assessors and assessed must rest on trust and the absence of conflicts of interest. Keeping evaluation at a distance from decision-making (awarding a label, allocating resources, etc.) is essential. The current health crisis underlines, in particular, the importance of scientific integrity, in research but also in the training of doctoral students and students.

Evaluation cannot have the sole objective of sanctioning and regulating the system, at the risk of leading the actors to adapt their behaviour to the measure. It must be designed with a threefold purpose: helping the evaluated entities develop, supporting the decision-making of supervisory authorities, and informing the public and the users of higher education.

Current issues

The peer review procedures put in place by Hcéres, under the current law on higher education and research, thus move from adopted criteria to the observation of reality (self-assessment report, indicators, committee visit), each a step essential to forming a judgment (the expert committee's report). These methods are also part of a quality assurance and continuous improvement process formalized at the European level, a consequence of the Bologna Process.

The current obligation to evaluate every degree programme and every research unit nevertheless raises the question of the evaluation system's effectiveness. Given the workload induced by this “industrialization” of a very large number of expert reviews (several hundred for one university every five years), it leaves no room for investigations likely to generate greater added value for a given institution.

In addition, the institutional assessment carried out by Hcéres covers around fifty institutions each year, while other institutions, for example private ones without a contract with the state, or specific bodies such as ENA, have never been evaluated by Hcéres.

The granularity of evaluation, that is, the components to be evaluated within a university (degrees, faculties, schools, institutes, UFR, teaching departments, research departments, laboratories, research teams), should not be set in stone, because institutional autonomy has led to different organizational models.

It is therefore advisable to define a flexible framework that allows diverse institutions to express their specific characteristics and strategies, rather than forcing them back into a stereotype. It is in this sense that updating the law is essential.

Possible developments

Scientific and educational life, creativity, and student success cannot, however, be reduced to standardized, fixed indicators or rankings. Risk-taking and the detection of “weak signals” in innovation are, for example, fundamental to progress.

How can we develop a measure of performance that is not normative, that adapts to the diversity of the people, institutions, and ecosystems being assessed, and that stimulates institutional dynamics? In particular, this means assessing the levers institutions use to improve the efficiency of their action and their performance.

An overall change in operating methods cannot be reduced to an isolated initiative by an assessment agency to compare entities, especially since the grading of laboratories in the past exposed the limits of such an approach (and led to its rejection), if only because of the limited territorial scope of the comparisons made.

We could thus consider a broader approach involving not only assessment agencies but also institutions and their line ministries, building acceptance by the communities concerned within the institutions into the process. A few avenues are worth discussing:

  • for training and student success, distinguish the licence (bachelor's) level (with the issues raised by the law on student guidance and success) from the master's and doctoral levels (with issues tied to research), and draw on public data certified by the institutions on student outcomes, updated annually at the national level, as the Cti already does for engineering schools;
  • for research, identify the contribution of laboratories to the institution's strategy, supplemented by national analyses by major disciplinary field (involving the Science and Technology Observatory, coordinated evaluation of research teams within the same field, national disciplinary overviews) to assess France's position.

To preserve a climate of trust, the proposal is therefore to change current assessment methods gradually, rather than through an abrupt, radical transformation that carries risks of rejection, by henceforth placing the institution, as is done in other European countries, at the heart of the evaluation process, as the main actor in its own internal and then external evaluation.

These structural, and structuring, reflections are all the more topical today as they take place in a context deeply affected by the climate transition and by the health crisis, which, by imposing a shift to a society of physical distancing, will necessarily lead us to change course.

Author Bio: Michel Robert is Professor of Microelectronics at the University of Montpellier
