Strategy Over Surrender in Business School Rankings

When we take the rankings too literally, we demean the intelligence of the institution—and ultimately harm higher education.

By Kai Peters, pro vice chancellor of business and law at Coventry University in the United Kingdom.

"Rankings are really theater”—Trevor Barratt, Managing Director, Times Higher Education

These words were said to me in conversation this past March, and they have stuck with me ever since. Business school rankings have been published for 30 years now, and over those decades they have changed and grown into an increasingly subjective measure of b-school quality. But how have business schools responded and adapted to these changes? Let us first trace the history of the rankings to see how we got where we are today, and then consider what I believe business school administrators can do to benefit from the rankings rather than be at their mercy.

In 2007, I wrote an article titled “Business School Rankings: Content and Context” for the Journal of Management Development (restricted access). As will surprise no one, the article pointed out that the different rankings are all subjective and value different attributes in different ways, that it is truly impossible to say in any objective way what makes a business school great, and that subtle or not-so-subtle changes come along as the rankings evolve from year to year. Further, I asserted that static rankings, in which there is no institutional movement, would be boring, and that rankings are indeed theater.

Now, more than 10 years later, deans invariably need to come to terms with the myriad rankings of business schools and programs, led by the MBA rankings. John Byrne, former editor of what was once known as just BusinessWeek, launched the MBA rankings in 1988. This ranking inspired two other publications to follow suit: U.S. News & World Report launched its ranking in 1990, and nine years later the Financial Times debuted its own. In addition to these “big” MBA rankings, a host of other national and international rankings also have become influential.

In my article of a decade ago, I showed why rankings matter. I calculated a “rankings versus tuition” line of best fit for MBA programs. This analysis clearly showed that the top-ranked schools commanded a significant price premium and that the top 20 schools saw a tremendous increase in inquiries and applications. It was absolutely clear that excellent rankings led to increased demand and a subsequent ability to increase prices. Because the rankings are not only theater but also big business, schools dissect each criterion and try to optimize each one.
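
For readers curious about what such a calculation involves, the sketch below fits an ordinary least-squares line to a handful of made-up rank and tuition figures. The numbers are purely illustrative assumptions, not the data behind the 2007 article.

```python
# Illustrative only: hypothetical rank/tuition pairs, not the data from the 2007 article.
import numpy as np

ranks = np.array([1, 5, 10, 20, 40, 60, 80, 100])        # published MBA rank
tuition = np.array([150, 140, 120, 95, 70, 55, 45, 40])  # tuition in $000s (made up)

# Ordinary least-squares line of best fit: tuition ~ slope * rank + intercept
slope, intercept = np.polyfit(ranks, tuition, deg=1)

print(f"slope = {slope:.2f} ($000s per rank place)")      # negative: price falls as the rank number rises
print(f"intercept = {intercept:.2f} ($000s at rank 0)")   # extrapolated price at the very top of the table
```

A clearly negative slope is what a “price premium for top-ranked schools” looks like in this framing: each place lost in the table is associated with lower tuition.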

“Working” these rankings in this way is now common. I’ve seen students admitted because schools believe their pre- and post-MBA salaries will maximize economic value added; I’ve seen advisory boards designed specifically to maximize internationalism and gender diversity; and I’ve seen faculty rewarded massively for publishing in the journals that count toward various rankings. Perhaps some of these factors actually do add value to a business school. Alas, it does not always stop there: schools have also been caught “gaming” the rankings by the auditors from the “Big Four” firms who have, unfortunately, become a feature of the rankings world.

MBA rankings are largely international, whereas undergraduate rankings, or “league tables” as they are called here in the U.K., are more nationally oriented, as the largest numbers of undergraduate students are recruited nationally. In the U.K., these rankings began in earnest some years after the MBA rankings theater started. The Times ranking began in 1992, the Guardian’s in 1999, and the Complete University Guide in 2007. As with the MBA rankings, many more publications have joined the rankings business over the years.

Researching the history of rankings is actually quite a challenge—type in “history of rankings” and you get the ranking of history as a subject. With some diligence, however, I did make progress. The most interesting finding for me was that much of the work on the history of rankings, by Rachel Bowden in 2000 and by Sarah Amsler and Chris Bolsmann in 2012, criticized rankings as a form of social exclusion. Just as interesting is that there is invariably some intrigue surrounding the authorship of the various rankings. The U.K.’s Complete University Guide began when Bernard Kingston, the author of the Times ranking, no longer saw eye to eye with the Times and set out on his own.

Given that rankings not only move markets but also shift publications, it is surprising that overall university rankings began only in the early 2000s. The Academic Ranking of World Universities by Shanghai Jiao Tong University began in 2003 and was quickly followed in 2004 by the ever-enterprising duo of Nunzio Quacquarelli and Matt Symonds—the Q and S of QS. The QS ranking originally appeared in Times Higher Education (THE), but THE’s managers soon thought they could do better themselves and went their separate way. QS remains in business and claims 50 different ranking variants as part of its portfolio. Times Higher Education launched its own rankings portfolio in 2010 and has repositioned itself from a news magazine about higher education into a data metrics business about higher education. It has increased its market value considerably by doing so.

University presidents and vice chancellors obsess as much about THE and the QS global university rankings as business school deans worry about Bloomberg Businessweek and the Financial Times. So, what should they do about the rankings?

In my opinion, the first step should be to think seriously about their own institution’s strategy and mission. What is their positioning? Which potential students, faculty members, and stakeholders are they seeking to influence? Is their audience regional, national, or international? Is it undergraduate, graduate, or executive education? Is there an element of a ranking (internationalization, for example), a specific ranking (online programs), or a compendium of rankings (all-around European school in the FT) that will help make their position clear? If so, they should reflect on whether getting into, or making progress in, that particular ranking is realistic.

Making progress is not necessarily a matter of making wholesale changes but often a matter of understanding the ranking factors really well and “answering the exam questions” properly—counting publications accurately, checking on faculty members’ international experience, and tracking alumni well enough to be able to reach them when the rankings surveys go out. The outcome target need not be coming out at the top of the table but simply coming out better than one’s main competitive set of schools, so that the marketing proposition can be “most international in the city,” “best school in the state,” or “highest employability in the region.”

Rankings have now been a core feature in the world of business schools and higher education in general for the past three decades, and they are not going away. A nuanced strategy, as described above, can yield positive results for a school. When we take the rankings too literally, we demean the intelligence of the institution—and ultimately harm higher education. We should not slavishly view what other people think as the ultimate mark of quality.

Perhaps a sensible last word on rankings is to quote Michael Rosen, author of the children’s book We’re Going on a Bear Hunt. Rosen writes, “We can’t go over it, we can’t go under it, we’ve got to go through it.” He’s writing about a dark cave, which does seem entirely appropriate, don’t you think?

(This article is a collaboration with AACSB International for MBA International Business magazine and was published in BizEd on June 15, 2018.)
