On the illogics of the Times Higher Education World Reputation Rankings (2013)

Amidst all the hype and media coverage surrounding the just-released Times Higher Education World Reputation Rankings (2013), it's worth reflecting on just how small a proportion of the world's universities are captured in this exercise (see below). As I noted last November, the term 'world university rankings' does not reflect the reality of the exercise the rankers are engaged in; they focus only on a minuscule corner of the institutional ecosystem of the world's universities.

The firms associated with rankings have also normalized an annual rankings cycle, despite this being an illogical exercise (unless you are interested in selling advertising space in a magazine and on a website). As Alex Usher pointed out earlier today in 'The Paradox of University Rankings' (and I quote in full):

"By the time you read this, the Times Higher Education's annual Reputation Rankings will be out, and will be the subject of much discussion on Twitter and the Interwebs and such. Much as I enjoy most of what Phil Baty and the THE do, I find the hype around these rankings pretty tedious.

Though they are not an unalloyed good, rankings have their benefits. They allow people to compare the inputs, outputs, and (if you’re lucky) processes and outcomes at various institutions. Really good rankings – such as, for instance, the ones put out by CHE in Germany – even disaggregate data down to the departmental level so you can make actual apples-to-apples comparisons by institution.

But to the extent that rankings are capturing “real” phenomena, is it realistic to think that they change every year? Take the Academic Ranking of World Universities (ARWU), produced annually by Shanghai Jiao Tong University (full disclosure: I sit on the ARWU’s advisory board). Those rankings, which eschew any kind of reputational surveys, and look purely at various scholarly outputs and prizes, barely move at all. If memory serves, in the ten years since it launched, the top 50 has only had 52 institutions, and movement within the 50 has been minimal. This is about right: changes in relative position among truly elite universities can take decades, if not centuries.

On the other hand, if you look at the Times World Reputation Rankings (found here), you’ll see that, in fact, only the position of the top 6 or so is genuinely secure. Below about tenth position, everyone else is packed so closely together that changes in rank order are basically guaranteed, especially if the geographic origin of the survey sample were to change somewhat. How, for instance, did UCLA move from 12th in the world to 9th overall in the THE rankings between 2011 and 2012 at the exact moment the California legislature was slashing its budget to ribbons? Was it because of extraordinary new efforts by its faculty, or was it just a quirk of the survey sample? And if it’s the latter, why should anyone pay attention to this ranking?

This is the paradox of rankings: the more important the thing you're measuring, the less useful it is to measure it on an annual basis. A reputation ranking done every five years might, over time, track some significant and meaningful changes in the global academic pecking order. In an annual ranking, however, most changes are going to be the result of very small fluctuations or methodological quirks. News coverage driven by those kinds of things is going to be inherently trivial."

The real issues to ponder are not relative placement in the ranking and how the position of universities has changed, but instead why this ranking was created in the first place, and whose interests it serves.
