University rankings: good intentions, image polishing and more bureaucracy

Some UK universities will be cheering, some groaning, after the release of rankings under the Teaching Excellence Framework (TEF). My own university received a silver, so we’re shrugging. Whatever the reaction, we don’t know whether to expect any impact on the quality of teaching. What we do know is that the exercise will lead to large-scale image polishing, a mushrooming of rankings-related bureaucracy, judicious gaming of the new rules, and cynicism amongst professors and lecturers.

When the universities minister, Jo Johnson, announced the TEF, he had good intentions. He hoped to address the widely recognised problem that academics were rewarded for obscure research read by a handful of people. Teaching commitments were being neglected and Johnson worried this meant that students suffered.

Now, those students can see whether their £9,000-a-year tuition fees are spent at gold, silver or bronze-rated universities. The government hopes this will create transparency and allow for more informed choices. Johnson also hoped that the rankings would drive up teaching quality across the sector. I have spent much of my academic career studying how knowledge-intensive organisations, including universities, build reputations and respond to new challenges. I fear that Johnson may be disappointed.

Quibbles

Of course this isn’t the first rankings system. In 1910, James McKeen Cattell’s directory, American Men of Science, ranked US institutions on the basis of the concentration of distinguished people. In 1983, the US News and World Report Best College Ranking attempted a comprehensive national ranking. Since 2003, there has been a flowering of global ranking systems, including the Academic Ranking of World Universities, the THE-Thomson Reuters World University Ranking and QS World University Rankings.

There have always been quibbles over whether these are meaningful indicators of university quality. How reliable are they? Are the right questions being asked? How do they deal with missing information? How are different indicators weighted? How much should small differences affect relative ranking? The TEF now faces the same questions.

In truth, though, rankings are more about perception than performance: more about PR than accurate information about how a university operates. The results will make it on to promotional material one way or another – however the rankings come out. When the results of the most recent Research Excellence Framework were announced, I read an email from one disappointed dean who nevertheless saw cause to celebrate his achievement – of ensuring the school was ranked top of the list in the West Midlands.

These image-polishing activities are not in vain. An OECD study found that students do indeed use rankings as a way to filter information about institutions. One US study found that rankings had a genuine effect on the number of student applications.

Impact assessment

Aside from PR puffery, do rankings actually affect how universities operate? Well, yes. But that may not be a good thing. Generally speaking, as soon as you create a ranking system, you also create a whole system for gaming the rankings.

In some cases, this has involved outright lying as institutions have fabricated statistics about various things including graduation rates, staff-student ratios and test scores. This is relatively rare. What is more common is known in US law schools as “jukin’ the stats” – manipulating the results to get a favourable ranking.

Common tricks include recruiting students you know will perform well, report high satisfaction and go on to earn high salaries. This, of course, might boost your scores, but it can mean many “non-traditional” students face discrimination.

One effect of the gaming of the system is that universities become increasingly standardised. To fit in with rankings, universities can spend big on building up attributes and offerings which they hope will push them up the table.

We can see this effect in how global systems such as the Shanghai rankings have encouraged universities across the world to model themselves on large US science-intensive universities. The Finnish government ploughed tens of millions into merging three institutions in Helsinki to create a “Nordic MIT”, an exercise in reputation-building aimed at improving its standing in the rankings. Of course, this may turn out to be a wonderful investment for the university and students alike, but you have to wonder about the motivation behind it.

In other cases, it has led to universities putting on the appearance that they have changed. The French government recently formed a single research-intensive university in Paris by pushing together individual institutions in the region. This changed little about how the institution operated. But it did create a new brand which could climb up the global rankings.

Rankings rituals

Rankings have also created vast bureaucracies. Many universities have entire offices devoted to processing the wide range of accreditation and ranking exercises in which they participate. Rankings often require academics and administrators to engage in shallow bureaucratic rituals. As a result, faculty time is taken up with tasks that do nothing to increase the quality of teaching or research, but simply grease the wheels of the rankings process. Academics might find themselves inputting data on fleeting interactions with students, laboriously documenting the most minor “teaching innovations” or attending poor-quality teacher training courses.

That’s where the cynicism sets in. In research on business schools, we found that academics would often be deeply cynical about their role in producing the rankings, but would participate in the process anyway. Many talked about it as “playing the game”, helping to pull in the punters by whatever means necessary.

The TEF may yet drive up teaching quality in UK universities, but the certainties are less benign. It must be a concern that the troubling baggage of rankings systems – the gaming, the bureaucracy, the cynicism – will end up undermining that primary goal.

Andre Spicer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond the academic appointment above.
