“Is there a deliberate bias towards English-speaking higher education institutions, particularly in the UK and US, or a vile conspiracy to demote and disparage institutions in low- and middle-income countries? These suspicions would have remained under wraps as hearsay until vociferous protests by South Korean universities, which called for a global boycott of QS, brought many uncomfortable truths to light.”
Many academicians across geographies were utterly dismayed that Utrecht University, a four-century-old public research university in the Netherlands, was omitted from the Times Higher Education (THE) World University Ranking 2024. The reason: Utrecht University made a deliberate decision not to submit data for the rankings. It believed that rankings placed too much emphasis on comparison and competition, while its focus was on collaboration and Open Science. The university was convinced that it is impossible to capture the full value of its educational and research programmes in a single rating, since universities vary widely in size, budget, and objectives, and excel in different areas.
How Did the Land of the Morning Calm (Han-guk) Become So Disquieted?
In an audacious and unprecedented move in 2023, South Korea’s leading research universities united in a collective effort to challenge the global university rankings issued by QS. The universities went ballistic over what they saw as a lack of transparency in the changes made to that year’s QS rankings, which they claimed were riddled with “mathematical flaws” and deep-seated bias.
A total of 52 South Korean universities formed the University Rankings Forum of Korea (URFK) in a concerted effort to address these concerns, a show of national unity more typically reserved for confrontations with North Korea. Even Sejong University, which rose by 150 places, joined URFK to boycott the rankings. This unified stance came in response to significant methodology changes in the QS World University Rankings released at the end of June 2023. These changes notably affected the rankings of numerous South Korean universities, with nearly every university in the country (except one) experiencing a drop, along with several prominent universities in Japan, Hong Kong, and Taiwan.
The universities argued that the main cause of these declines lay in the new International Research Network (IRN) indicator introduced by QS that year. According to URFK, the IRN indicator was particularly disadvantageous for non-English-speaking countries such as South Korea, and was an unreasonable measure of global research performance. The new indicator had reportedly benefited Australian universities, which saw an improvement in their rankings as a result.
The URFK included the country’s most prestigious institutions, such as Seoul National University (SNU), Korea University, and Yonsei University (often referred to as SKY universities), as well as leading research and technology schools like the Korea Advanced Institute of Science and Technology (KAIST) and Pohang University of Science and Technology (POSTECH), among others.
Some South Korean universities saw their rankings drop by as much as 200 to 300 places following the new methodology. However, the URFK had emphasized that their main concern was not the specific drops in rankings for individual universities, but rather the overall significant decline for Korean universities as a group. They argued that the methodology changes had a disproportionate impact on the country’s higher education sector as a whole.
In contrast, QS has defended its methodology, suggesting that the primary factor behind the drop in Korean universities’ rankings was not the IRN indicator, but rather the reduced weight given to the faculty-student ratio, which was halved in that year’s rankings. QS argued that the change in emphasis on this indicator had a more significant impact on the rankings of Korean universities.
Apparently, QS adopted “whataboutism”: the strategy of responding to an accusation with a counter-accusation instead of a defense against the original accusation.
The primary factor affecting the performance of Korean universities in the 2023 rankings was the decision to reduce the weight of the Faculty-Student Ratio indicator from 20 percent to 10 percent. This indicator has traditionally been one of the strongest for Korean institutions. According to Ben Sowter, senior vice president at QS, the rationale behind this change rests on several recent developments in higher education. These include the advancement of learning technologies and the increasing use of teaching assistants in classrooms, such as postgraduate students and postdoctoral fellows, who do not necessarily count towards faculty numbers.
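A stylized bit of arithmetic shows why halving that weight stings institutions that lean on it. The figures below are invented, not QS data, and the sketch skips QS’s normalization steps; it only illustrates the weighted-sum effect.

```python
# Hypothetical illustration: halving one indicator's weight drags down the
# composite score of a university that is strong on that indicator.
faculty_student_score = 95.0  # a traditional strength of Korean universities
other_indicators_avg = 60.0   # invented average across all other indicators

def composite(fsr_weight: float) -> float:
    # The remaining weight is spread across the other indicators.
    return fsr_weight * faculty_student_score + (1 - fsr_weight) * other_indicators_avg

print(f"{composite(0.20):.1f}")  # old 20% weighting -> 67.0
print(f"{composite(0.10):.1f}")  # new 10% weighting -> 63.5
```

A 3.5-point swing from a bookkeeping change alone, before any institution has taught a class or published a paper differently.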
More and more, reports are emerging of prestigious universities choosing to opt out of the rankings game altogether.
One notable instance came in 2022 when the law schools of Harvard and Yale, followed by the University of California, Berkeley, and others, decided to withdraw from the US News & World Report rankings. As Yale’s law dean explained, “We have reached a point where the rankings process is undermining the core commitments of the legal profession.”
Not long after, a similar move occurred within medical schools. Harvard, Stanford, Columbia, and the University of Pennsylvania all chose to pull out of the US News rankings, citing the same concerns about the negative impact of rankings on academic integrity and institutional values. A parallel development took place in China in 2022 when three esteemed universities—Renmin, Nanjing, and Lanzhou—removed themselves from all international rankings, citing a desire to preserve their autonomy.
The Story Goes from the Sublime to the Ridiculous: India in Focus
The 2024 THE World University Rankings were released on October 10th and once again excluded all Indian universities from the top 250. The older Indian Institutes of Technology (IITs) have been boycotting these rankings for years, citing concerns over their lack of transparency. However, the Indian Institute of Science (IISc), truly India’s world-class research university in the sciences, with Nobel laureate physicist C. V. Raman as its first Indian director, continued to participate, as it consistently maintained a position in the league tables streets ahead of its Indian counterparts.
However, the autumn 2024 rankings were an aberration. Notably, 30% of the overall ranking was based on “Research Quality,” the central criterion. The breakdown of the different parameters and their respective weightages is shown below (Fig 1).
In the “Research Quality” parameter, IISc, which is undeniably the top research institution in India, was ranked 50th. “This ranking raised significant questions about the methodology behind these rankings and how such an oversight could occur,” writes Achal Agrawal, founder of India Research Watch, in the Medium article “Basic Flaws in Times Higher Education (THE) World University Rankings.” The list, reproduced in the table below, prompts one to wonder who is behind these rankings and how such discrepancies happen.
What left many shell-shocked and stopped in their tracks was that Saveetha Dental College (SDC), Chennai, ranked first on the list of the most-cited dentistry institutions in the world in the QS World University Rankings, and placed high in the World University Rankings. Investigative reports published in India’s leading newspaper The Hindu revealed that the institute had engaged in an unethical practice called extreme self-citation to boost its citation score, a common but fallible measure of research quality, and propel itself to the top of these and other ranking schemes.
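A toy computation makes the mechanics of such gaming plain. All numbers are invented for illustration, not SDC’s actual figures, and real citation indicators are more elaborate than this single ratio.

```python
# Invented numbers: how extreme self-citation inflates a citation metric.
external_citations = 4_000   # citations from unaffiliated researchers
self_citations = 36_000      # the institution's papers citing one another
faculty_count = 200

honest = external_citations / faculty_count                       # 20.0
inflated = (external_citations + self_citations) / faculty_count  # 200.0
print(f"without self-citations: {honest}, with: {inflated}")
```

A tenfold jump in the metric with no change in underlying research quality, which is why rankers and citation databases increasingly try to discount self-citations.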
How Did Reed College Become the Bellwether of the Uprising Against Rankings?
In 1995 and 1996, Reed College, a small private liberal arts institution in Portland, Oregon, became the first U.S. college to refuse participation in higher education rankings. The college has maintained that stance ever since.
Colin Diver, former president of Reed College, commented on the recent withdrawals of prestigious institutions from rankings, saying, “You might dismiss Reed College opting out, but you can’t dismiss Yale Law School opting out. You can’t dismiss Harvard Medical School opting out.”
Diver’s remarks highlight a deeper, more conceptual critique of rankings. As he explained, his objection centers around what he calls “Best College” rankings, which take a range of educational performance criteria and distill them into a single, formulaic number. This number is then presented as a supposed definitive measure of relative quality. “I don’t care what formula you use, what data you use, or what criteria you apply,” Diver stated. “That approach is fundamentally flawed because there are so many different types of institutions.”
The Egregious Failings of the Ranking Systems
An old adage goes, “don’t trust any statistics that you haven’t falsified yourself.” Bernd Brabec, an anthropology researcher, cautions that the limitations of these rankings are apparent in their inherent biases, a point echoed in many widely available critical analyses. On its website, THE proudly emphasizes its “World Reputation Rankings,” stating that they “are created using the world’s largest invitation-only academic opinion survey — a unique piece of research”. However, a pertinent question arises: who exactly receives these invitations?
Global university rankings have become one of the most influential and widely cited benchmarks in higher education today. Every year, major ranking organizations such as QS World University Rankings, Times Higher Education (THE), and the Shanghai Ranking (Academic Ranking of World Universities, ARWU) release their lists, claiming to rank universities based on their performance across a variety of indicators. These rankings are often used by prospective students, faculty, governments, and industry leaders to measure and compare institutions of higher learning. Their importance in shaping university reputations, funding, and admissions cannot be overstated. Tens of thousands of students on both sides of the Atlantic consult the rankings to make admission decisions, move countries, and spend billions. University donors take them seriously, and journalists popularise them. Governments and university leaders around the world use the data to support their strategic direction.
However, the politics behind these rankings—how they are constructed, how data is collected, and the implications of the results—are fraught with complexities, biases, and controversies. This feature delves into the “vicious politics” of global university rankings, exploring how these rankings are formed, their limitations, and how they impact universities, students, and global higher education policy.
Many experts in higher education contend that rankings can shift universities’ focus away from teaching and social responsibility, instead prioritizing the types of scientific research that align with the indicators used in ranking systems. “There are also concerns that applying a narrow set of criteria to evaluate global universities, coupled with the intense pressure to appear in the top 200, leads to the homogenization of institutions,” says Hiren Raval, chief executive of C3S Business School, a leading business school in Spain. “This trend reduces their ability to be responsive and relevant to their local contexts.”
Additionally, rankings are often criticized for reinforcing the advantages held by the top 200 institutions, which has significant implications for equity in higher education.
Analyzing the big-ticket ranking agencies such as the QS World University Rankings and Times Higher Education, one can unpack the implications of their methodologies and the power dynamics they perpetuate.
It is not uncommon for individual universities to have to explain drops in their global rankings from year to year, or for changes in rankings to trigger strong protests. In some cases, small groups of universities, such as certain Indian Institutes of Technology and Chinese universities, have even resorted to ‘boycotting’ rankings. These institutions withhold their data from the Times Higher Education (THE) rankings while demanding revisions to the methodology.
Jo Adetunji, editor of The Conversation UK, who served on an expert committee convened by the United Nations University International Institute for Global Health to look critically at ranking systems, says there is a conceptual problem with rankings: “It’s not sensible to put all institutions in one basket and come up with something useful.”
The expert group concluded that the rankers’ methods are unclear and that some appear to be invalid. “We would not accept research with poor methods for publication, yet somehow rankers can get away with sloppy methods.”
The experts also noted that rankings were massively overvalued and reinforced global, regional and national inequalities, and, lastly, that too much attention to rankings inhibited thinking about education systems as a whole.
Rankings have become a significant commercial enterprise. They generate revenue, attract students to the highest-ranked institutions, and can cause enrollment numbers to drop for universities that fail to make the list. Tied to a $4.4 trillion (£3.16 trillion) global education industry, rankings are often part of a broader package of services that contribute to the rising cost of education. Corporate interests play a key role in shaping research priorities. Research that challenges the status quo or offers critical perspectives—such as independent studies highlighting the risks of certain drugs, the environmental harm caused by large corporations, or the prevalence of rape culture on university campuses—is frequently sidelined or marginalized.
The Hegemony of U.S. and U.K. Universities
For decades, the U.S. and U.K. have dominated global university rankings, particularly those compiled by organizations such as QS and THE. American institutions such as Harvard, MIT, Stanford, and Princeton, alongside U.K. stalwarts like Oxford, Cambridge, and Imperial College London, consistently occupy the top spots. These rankings are often perceived as the definitive measure of academic excellence, and they play a significant role in shaping the global reputation of universities.
This dominance has not been without criticism. One of the major issues is that the metrics used by these rankings tend to favor large, well-established institutions that already benefit from significant financial resources, international recognition, and vast research output. As a result, universities in developing countries or those from non-English-speaking regions often struggle to climb the rankings, even if they offer high-quality education or significant regional impact.
Furthermore, the focus on research output—especially publications and citations—overemphasizes disciplines and institutions that are primarily research-driven, while neglecting teaching quality, local impact, and other factors crucial for a more holistic view of higher education.
Who Are These Priggish, Holier-than-Thou Agencies of the Ranking Cult, and Who Are Their Clientele?
Ranking organizations are private, for-profit entities that generate revenue in various ways. They typically collect data from universities, which they then monetize; sell advertising space; offer consultancy services to both universities and governments; and organize fee-paying conferences.
Each ranking organization adopts its own methodology, but all ultimately generate an index or score based on the data they gather. However, the processes by which they derive these scores are often unclear. These organizations are not fully transparent about the criteria they measure or how much weight each component carries.
For example, Times Higher Education conducts a survey in which academics are invited to rate universities, either their own or others. The outcomes of such surveys can be heavily influenced by response rates, the backgrounds of the respondents, and their knowledge—or lack thereof—about the institutions they are evaluating.
While it is easy to produce a numerical score from such a survey, the validity of that score is questionable. Does it accurately reflect reality? Is it free from bias? If I work at a particular institution, might I be inclined—whether consciously or unconsciously—to rate it highly? Alternatively, if I am dissatisfied with my institution, I might rate it poorly. Either way, such surveys do not offer an objective or accurate reflection of an institution’s true standing.
Ranking organizations also use other indicators, such as the number of publications a university produces, which can be considered more objective. However, research has shown that biases exist in both the publication process itself and in what is actually counted by the rankings. Furthermore, ranking organizations tend to prioritize certain fields, particularly those in science, technology, engineering, and mathematics (STEM). They do not account for all types of research equally, nor do they disclose how they weight the various components they do measure.
The idea of ranking universities on a global scale is relatively new, emerging in the early 2000s with the rise of the internet and increasing access to data. The Shanghai Ranking appeared first, in 2003. In 2004, the QS World University Rankings (initially published as the Times Higher Education–QS World University Rankings) followed, and Times Higher Education launched its own standalone World University Rankings in 2010 after parting ways with QS. These rankings, which aim to measure the academic performance of institutions, have grown in influence as universities around the world seek to improve their standing.
University rankings are seen as a tool for assessing the quality of institutions, providing prospective students with a “report card” of universities based on factors like research output, teaching quality, and internationalization. The idea that one can compare the “best” universities globally appeals to the increasing internationalization of higher education. Rankings have become an easy shorthand for understanding where an institution stands in the global higher education system.
QS World University Rankings
The QS World University Rankings is an annual list of the world’s top universities, first published by Quacquarelli Symonds in 2004. In 2024, the rankings included 1,500 institutions, with the Massachusetts Institute of Technology, Imperial College London, the University of Oxford, Harvard University, and the University of Cambridge occupying the top five positions. It’s important to distinguish the QS rankings from the Times Higher Education World University Rankings. From 2004 to 2009, the QS rankings were released in partnership with Times Higher Education under the name “Times Higher Education-QS World University Rankings.” However, in 2010, QS took over the full responsibility of publishing the rankings after Times Higher Education separated from QS and partnered with Thomson Reuters to develop its own ranking methodology. Additionally, the QS rankings were once published in the U.S. by U.S. News & World Report under the name “World’s Best Universities.” In 2014, U.S. News & World Report introduced its own global university ranking, titled “Best Global Universities,” with its first edition released in October of that year.
The QS World University Rankings have been based on six key indicators: academic reputation (40%), employer reputation (10%), faculty/student ratio (20%), international faculty ratio and international student ratio (5% each), and citations per faculty (20%). The first two indicators—academic reputation and employer reputation—are based on global surveys, where academic and industry leaders are asked to provide their assessments of universities.
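To make the mechanics concrete, here is a minimal sketch of the weighted-sum logic such a composite implies. The indicator scores belong to a fictional institution, and QS’s real pipeline includes normalization steps omitted here.

```python
# Weighted composite of indicator scores on a 0-100 scale.
weights = {
    "academic_reputation":    0.40,
    "employer_reputation":    0.10,
    "faculty_student_ratio":  0.20,
    "citations_per_faculty":  0.20,
    "international_faculty":  0.05,
    "international_students": 0.05,
}
scores = {  # invented scores for a fictional university
    "academic_reputation":    72,
    "employer_reputation":    65,
    "faculty_student_ratio":  88,
    "citations_per_faculty":  54,
    "international_faculty":  40,
    "international_students": 35,
}
overall = sum(w * scores[k] for k, w in weights.items())
print(f"{overall:.2f}")  # 67.45 -- six dimensions collapsed into one number
```

The critics’ point is visible in the last line: whatever an institution’s texture, the output is a single scalar, and everything turns on the weights chosen.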
The QS ranking has faced criticism for its heavy reliance on subjective indicators and reputation surveys, which can fluctuate over time, creating a feedback loop. There are also concerns about the global consistency and integrity of the data used to compile the QS rankings. The development and production of the rankings are overseen by Ben Sowter, QS Senior Vice President, who was ranked 40th in Wonkhe’s 2016 Higher Education Power List—a compilation of the 50 most influential figures in British higher education.
“The use of reputation surveys has been a point of contention,” says Dr Aida Mehrad, head of academics at C3S Business School in Barcelona. “Many academicians among us, and I largely concur with them, argue that the reliance on subjective perceptions skews rankings toward institutions with historically strong reputations, particularly in the U.S. and the U.K. This bias perpetuates the dominance of a few elite universities while disadvantaging institutions from emerging economies, even if those universities may have strong research outputs or excellent teaching.”
QS has acknowledged data-collection errors related to citations per faculty in some of its previous rankings. One issue concerns the differences between the Scopus and Thomson Reuters databases. While these two systems generally capture similar publications and citations for major global universities, Scopus includes more non-English language publications and journals with smaller circulation, particularly for less prominent institutions. This has led some critics to argue that citation averages are disproportionately influenced by English-speaking universities, which could disadvantage institutions where English is not the primary language.
Maria Fernanda Dugarte, dean and director of Institutional Affairs at C3S Business School, believes that the emphasis on faculty/student ratio and internationalization can disproportionately benefit universities that attract significant international talent and resources, such as those in Western countries. “This leads to a cycle where wealthier institutions, particularly those in the U.S. and U.K., can continually bolster their rankings by recruiting top faculty and international students, creating an environment where universities with fewer resources may struggle to improve their standing.”
Times Higher Education Rankings
From 2004 to 2009, Times Higher Education (THE), which began life as a supplement of The Times newspaper in the UK, collaborated with Quacquarelli Symonds (QS) to produce the annual Times Higher Education–QS World University Rankings. During this period, THE published a ranking of the top 200 universities, while QS ranked approximately 500 institutions online, in book form, and through media partners. However, on 30 October 2009, THE parted ways with QS and teamed up with Thomson Reuters to create a new set of world university rankings called the Times Higher Education World University Rankings. The 2015/16 edition of these rankings included the top 800 universities globally, with the 2016/17 edition expanding to rank the top 980 universities.
On 3 June 2010, Times Higher Education unveiled the methodology it would use to compile the new rankings. This methodology introduced 13 distinct performance indicators, a significant increase from the six used in the previous rankings. After further consultation, the criteria were consolidated into five broad categories to generate the final rankings. THE published its first rankings based on the new methodology on 16 September 2010, a month earlier than usual. Alongside this, THE launched the THE 100 Under 50 ranking, which highlights universities under 50 years old, as well as the Alma Mater Index.
In 2010, The Globe and Mail referred to the Times Higher Education World University Rankings as “arguably the most influential.” Research conducted by professors at the University of Michigan in 2011 also showed that the early THES rankings played a disproportionate role in shaping the global hierarchy of research universities.
Essentially, what began as a small publication in London, originally known as The Times Higher Education Supplement, has transformed into a global commercial enterprise. In the early 2000s, it was seen as a niche curiosity, but today, it profits by continuously recycling data—much of it sourced directly from the universities themselves.
The ranking organizations have skillfully positioned themselves as both auditors and consultants to universities around the world. Now, commercial ranking firms offer universities paid “masterclasses” on how to improve their academic practices in order to perform better in the very rankings they produce.
“THE World University Rankings, while influential, face criticism for potential biases, overemphasis on research output, and a lack of comprehensive metrics, potentially disadvantaging universities outside the English-speaking world and those prioritizing teaching and community engagement over research,” says Dr Marc Sanso, head of academics at Aspire Business School in Barcelona. “The rankings heavily rely on citations, which are more common in English-language publications and journals. This can disadvantage universities in non-English-speaking countries or those with strengths in disciplines (like the humanities) where books are the primary form of publication, which are not always covered by citation databases.”
The focus on research output and citations can also lead to a skewed view of university quality, potentially overlooking institutions with strong teaching reputations or those that prioritize community engagement and outreach.
The THE World University Rankings use 13 performance indicators, grouped into five key areas: teaching (30%), research (30%), citations (30%), international diversity (7.5%), and industry income (2.5%). The research indicators in THE’s methodology are weighted heavily, reflecting the importance placed on a university’s research output and influence. Research quality is measured using citations, which is another indicator that has drawn criticism. Citation counts can be easily skewed by a few prolific researchers, which does not necessarily reflect the overall quality of teaching or student experience.
The weighting of international diversity, which includes international students and faculty, has been seen as a way of reinforcing the dominance of English-speaking countries, particularly the U.S. and the U.K. Critics point out that this global ranking system disproportionately rewards institutions that have the means to attract global talent, while institutions from smaller or developing countries that may not have the same international reach suffer.
More significantly, THE places significant emphasis on citations as a key metric for generating university rankings. However, using citations to measure educational effectiveness has been criticized in various ways, particularly for disadvantaging universities where English is not the primary language. Since English is the dominant language in most academic journals and international academic societies, publications and citations in non-English languages are less common. This has led to criticisms that the citation-based methodology is inadequate and fails to offer a comprehensive assessment. Another issue arises within disciplines like the social sciences and humanities, where books—rather than articles—are the primary medium for publishing research. Unfortunately, books are often not covered or are underrepresented in citation databases. Moreover, the rankings have also faced scrutiny over biases against universities in the Arab region, with scholars calling for new methodologies that address institutional disparities and ensure fair representation.
Additionally, THE’s reliance on reputation surveys for academic and employer reputation introduces similar biases as those found in the QS ranking system, further reinforcing the status quo of global academic powerhouses.
Further, Times Higher Education has been accused of favoring universities that excel in ‘hard sciences’ and produce high-quality research in these fields, often to the detriment of institutions focused on other subjects such as social sciences and humanities. For example, the London School of Economics (LSE) was ranked 11th in the world in 2004 and 2005 in the previous THE-QS World University Rankings, but saw a significant drop to 66th and 67th in the 2008 and 2009 editions. In response, in January 2010, THE acknowledged that the methodology used by Quacquarelli Symonds, the organization conducting the rankings on their behalf, was flawed, leading to bias against certain institutions like LSE.
A representative from Thomson Reuters, THE’s new partner, commented on the controversy, noting that LSE’s low ranking was a clear error, which was later corrected. However, after switching to Thomson Reuters as the data provider, LSE’s ranking continued to fall to 86th place, which was defended as a more accurate reflection of the university’s standing. Despite consistently ranking highly in national assessments, LSE has struggled to maintain a strong position in the global rankings, as have other institutions like Sciences Po, which have suffered due to inherent biases in the ranking methodology. For example, Trinity College Dublin’s rankings in 2015 and 2016 were significantly impacted by a data error, which went unnoticed, prompting criticism of the limited data verification process employed by the ranking organization.
The broader issue of who the rankings actually serve remains unclear. Many undergraduate students are not particularly concerned with the scientific research output of universities, and the cost of education is not factored into the rankings. This means that private universities in North America are often compared directly with public universities in Europe, where countries like France, Sweden, and Germany offer free higher education. This disparity further highlights the limitations and lack of context in global rankings.
In 2021, the University of Tsukuba in Japan was accused of submitting falsified data regarding the number of international students enrolled for inclusion in THE World University Rankings. The incident led to an investigation by THE and prompted the university to revise its data submission processes. However, the case also raised concerns among faculty members about the potential for manipulation within THE’s ranking system. The issue was even brought up in Japan’s National Diet on April 21, 2021.
In response to concerns over transparency, seven Indian Institutes of Technology (IITs)—Bombay, Delhi, Kanpur, Guwahati, Madras, Roorkee, and Kharagpur—decided to boycott THE rankings starting in 2020. These institutions expressed dissatisfaction with the lack of transparency in the ranking process, highlighting ongoing issues with the reliability and fairness of global university rankings.
The THE rankings often fail to consider factors beyond research output and reputation, such as the quality of teaching, student-to-staff ratios, and the impact of universities on their local communities. Some argue that the rankings are too focused on a narrow definition of “excellence” and do not adequately reflect the diverse missions and values of different universities. Universities might prioritize research output and citations over other important aspects of higher education in an attempt to improve their rankings. This can lead to a focus on quantity over quality and a neglect of other important aspects of the university experience.
THE, like other ranking systems, uses reputational surveys to gauge the perceived quality of universities. These surveys have been criticized for being subjective and potentially skewed towards well-known institutions. THE does not disclose the exact methodology used to calculate rankings, which can make it difficult to assess the validity of the results. The rankings can influence university policies and resource allocation, potentially leading to a focus on research and a neglect of other important aspects of higher education. This can also lead to a focus on attracting high-profile researchers and publications, rather than on creating a positive learning environment for students.
Some experts go further, arguing that university rankings are not a reliable measure of quality at all, and that they can even be harmful to higher education.
Shanghai Ranking (ARWU)
The Shanghai Ranking, or Academic Ranking of World Universities (ARWU), is perhaps the most research-focused of the major global rankings. It uses six indicators grouped into four weighted categories: quality of education (10%), quality of faculty (40%), research output (40%), and per capita academic performance (10%). Unlike QS and THE, the Shanghai Ranking does not rely on reputation surveys. Instead, it uses objective measures such as the number of Nobel laureates and Fields Medalists among alumni and faculty, publications in prestigious journals, and the volume of research citations.
While the Shanghai Ranking is considered to be more objective than QS and THE, it still faces criticism for privileging research output over other important aspects of university life, such as teaching, student satisfaction, and societal impact. The emphasis on elite achievements like Nobel Prizes also skews the ranking toward a handful of prestigious institutions, particularly those in the U.S. and Europe, while undervaluing universities that may contribute significantly to regional or local development.
U.S. News & World Report
U.S. News & World Report (USNWR) is an American media company that publishes news, consumer advice, rankings, and analysis. Founded in 1948 through the merger of the domestic-focused weekly newspaper U.S. News and the international-oriented World Report, the company quickly gained prominence. One of its flagship products, the Best Colleges Ranking, was first published in 1983 and has since become one of the most influential college and university rankings in the United States.
The flaws and failings of U.S. News & World Report rankings have been widely discussed and critiqued over the years. Both of these ranking systems, while influential, have come under scrutiny for various reasons.
The Best Colleges Rankings have sparked significant controversy and criticism from various education experts. Critics argue that the rankings rely heavily on self-reported data from institutions, some of which has been found to be fraudulent. These rankings also encourage institutions to manipulate data in an effort to improve their standing. By assigning a precise ordinal ranking to institutions based on questionable data, U.S. News creates a false sense of accuracy. Moreover, the rankings are said to contribute to the competitive admissions environment, often overemphasizing prestige while neglecting individual fit, especially by comparing institutions with vastly different missions on the same scale.
In 2022, the rankings were further scrutinized when Columbia University dropped from second place to 18th after a report by Columbia mathematics professor Michael Thaddeus revealed that the university had misreported data to U.S. News & World Report. Despite this significant finding, other universities within the “national universities” category were not re-ranked.
Why is it so Difficult to Rank?
Chris Brink, a former vice-chancellor of Newcastle University, wrote in his book “The Soul of a University” that the phenomenon of ‘university world rankings’ is really just a global confidence trick. At the time, this was a minority opinion. Five years later, there is evidence that the tide is beginning to turn: “This change should give pause for thought to all those university leaders who still fawn on the commercial rankers.”
The critique of “university world rankings” based on methodology is well-established and has been reiterated numerous times. At its core, the argument is simple: in order to create a ranking, one must make numerous arbitrary decisions between equally valid options, rendering the final result ultimately meaningless.
“It is not difficult to construct a university ranking. What is needed is not so much any technical skill as enough blind self-confidence to tell the world that the arbitrary choices you have made in constructing your ranking actually represent reality.”
The conceptual argument against rankings is even simpler than the methodological argument: all ‘university world rankings’ are conceived in sin. Any such ranking suffers from the original sin of purporting to capture something which there is no reason to believe exists: a one-dimensional ordering in terms of quality of all universities in the world.
The Methodologies behind the Rankings
The influence of global rankings on universities cannot be overstated. Institutions across the world invest considerable resources into improving their standings, with many universities viewing higher rankings as a way to attract better funding, more top-tier students, and world-class faculty. However, this has led to a range of negative consequences, including the homogenization of universities, the marginalization of teaching and community engagement, and the growing influence of market-driven metrics in academic decision-making.
While university rankings are widely used, their methodologies are not always transparent or universally agreed upon. Each ranking system employs different criteria and weightings, leading to varying results. The QS, Times Higher Education, and Shanghai rankings each have distinct approaches to assessing universities.
- Reputation and Prestige
One of the most significant impacts of global rankings is their role in shaping the reputation and prestige of universities. Higher-ranked universities are often perceived as more prestigious, which can attract more students, researchers, and funding. This creates a feedback loop where universities at the top of the rankings continue to receive more resources, while those at the bottom struggle to compete.
For example, institutions like Harvard, Oxford, and Stanford consistently top the rankings in all major systems. These universities have extensive networks, large endowments, and high-profile alumni, allowing them to perpetuate their top-tier status. Meanwhile, universities in less wealthy or developing countries, even if they have strong programs or regional influence, are often relegated to lower positions due to their smaller research outputs or less international visibility.
- Impact on Teaching and Student Life
Rankings also shift the focus away from teaching and student experience, prioritizing research output and other quantitative metrics. This has led to a situation where universities may focus more on publishing research papers and boosting citation counts to improve their rankings, rather than investing in high-quality teaching, student support services, or community engagement.
This can create a situation where universities are incentivized to produce research-intensive environments at the expense of a more holistic educational experience. Moreover, universities that cannot compete in research output may be overlooked by students, despite offering excellent teaching or strong programs in certain fields.
- Internationalization and Global Competition
The focus on international diversity in rankings has increased competition among universities for global talent. While internationalization can have positive effects—such as fostering a more diverse academic community—it also drives inequality. Wealthier institutions are better positioned to recruit international students and faculty, while universities with fewer resources may find it difficult to compete.
The global focus on rankings has also led to an intense “race” for position, where universities engage in tactics such as hiring star faculty, increasing research output, and offering incentives for international students, all with the aim of improving their global standing. This can contribute to a “winner-takes-all” system where only a few institutions dominate the top rankings, exacerbating inequalities within higher education.
- Policy Implications
Governments around the world are increasingly using global rankings to shape their higher education policies. High-ranking universities attract significant government funding, research grants, and international collaborations, which further reinforce their dominance. Countries like the U.S. and the U.K., which have a large concentration of top-ranked institutions, benefit from this system, while countries with less prestigious institutions are often left out of the global conversation.
This political dimension extends to international partnerships, where universities in higher-ranked countries are often prioritized for collaborations and funding. This can leave universities in developing countries at a disadvantage, even if they have valuable local expertise or innovative approaches to education.
The Emergence of Alternative World University Rankings
In recent years, the landscape of global university rankings has evolved, with the emergence of alternative rankings challenging the long-standing dominance of U.S. and U.K. institutions. Traditional rankings such as the QS World University Rankings and the Times Higher Education (THE) rankings have historically been dominated by a small group of elite universities from these two countries. However, new ranking systems like the Leiden Ranking, the Center for World University Rankings (CWUR), Round University Ranking (RUR), SCImago Institutions Rankings, U-Multirank, and Eduniversal are offering alternative perspectives, reflecting a more global and diverse approach to evaluating universities.
These alternative rankings not only serve as counterweights to the Anglo-American hegemony in higher education but also impact student admissions, faculty recruitment, and the direction of research and development globally. This feature will explore how these emerging rankings are influencing the higher education ecosystem, particularly how they are shaping student choices, faculty engagement, and global research collaborations. The discussion will highlight how these rankings are providing a more nuanced and comprehensive view of academic excellence and are challenging traditional power structures within the global academic landscape.
As a response to the limitations of traditional rankings, a number of alternative university ranking systems have emerged. These rankings seek to provide a more comprehensive assessment of universities, taking into account a wider range of factors and offering a more global perspective that challenges the hegemony of U.S. and U.K. institutions. Let’s explore these rankings in detail and examine their implications for student admission, faculty engagement, and research and development.
1. Leiden Ranking (Netherlands)
The Centre for Science and Technology Studies at Leiden University publishes a European and global ranking of the top 500 universities, focusing on the number and impact of Web of Science-indexed publications produced each year. These rankings compare research institutions by accounting for variations in language, academic discipline, and institutional size. Several ranking lists are released, each based on different bibliometric normalization methods and impact indicators, such as the total number of publications, citations per publication, and field-normalized impact per publication.
Unlike many other rankings, the Leiden Ranking uses bibliometric indicators to assess the research output of universities, specifically focusing on the number and impact of scientific publications. This ranking provides a detailed analysis of research performance based on citations, distinguishing itself from others that blend research with factors like teaching quality and internationalization.
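Under the simplifying assumption that each paper is compared against a single field-wide average, a toy version of field normalization looks like this (CWTS’s actual indicators are considerably more sophisticated):

```python
# Field-normalized impact: each paper's citations are divided by the mean
# citation count of its field, so citation-sparse fields (e.g., mathematics)
# are not swamped by citation-rich ones (e.g., cell biology).
papers = [
    {"citations": 12, "field_average": 4.0},   # a well-cited maths paper
    {"citations": 40, "field_average": 25.0},  # an average biology paper
]
normalized = [p["citations"] / p["field_average"] for p in papers]
mean_impact = sum(normalized) / len(normalized)
print(normalized, f"{mean_impact:.2f}")  # [3.0, 1.6] 2.30
```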
The Leiden Ranking’s emphasis on research impact rather than just quantity challenges the traditional dominance of U.S. and U.K. universities by highlighting the contributions of institutions in other parts of the world. For example, institutions from Europe, Asia, and even Latin America have started to feature more prominently in the Leiden Ranking, which may lead students and researchers to consider universities outside the traditional powerhouses. This encourages a more globalized approach to higher education, where the focus shifts from prestige to the actual influence of research.
For faculty, the Leiden Ranking can help identify universities that are making significant contributions to scientific knowledge, which may influence where they choose to work. The emphasis on research quality also encourages universities to focus on producing impactful research, rather than merely increasing their publication output. This shift in focus helps foster more meaningful academic collaborations and innovations.
2. Aggregate Ranking of Top Universities (Meta Ranking)
The Aggregate Ranking of Top Universities (ARTU) is a meta-ranking that combines the results of several major university rankings to provide an overall view of global university performance. By blending rankings from QS, THE, the Shanghai Ranking, and others, ARTU aims to provide a more comprehensive and balanced perspective on university quality.
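A minimal sketch of rank aggregation, the general idea behind such meta-rankings, follows; the universities and positions are invented, and ARTU’s actual inputs and tie-breaking rules differ.

```python
# Sum-of-ranks aggregation across three hypothetical ranking systems.
ranks = {
    "Univ A": {"QS": 3, "THE": 5, "ARWU": 4},
    "Univ B": {"QS": 1, "THE": 9, "ARWU": 12},
    "Univ C": {"QS": 7, "THE": 2, "ARWU": 2},
}
# Lower totals are better: a university must place well everywhere, not
# only in the one system whose methodology happens to suit it.
totals = {name: sum(r.values()) for name, r in ranks.items()}
for name, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(name, total)  # Univ C 11, Univ A 12, Univ B 22
```

Note how “Univ B”, first in one system, falls to the bottom of the aggregate; that smoothing of single-system quirks is the meta-ranking’s selling point.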
This meta-ranking approach offers a more nuanced view of universities’ strengths and weaknesses. By considering multiple rankings and methodologies, ARTU allows universities to showcase their performance in different areas, from teaching and research to internationalization and industry collaboration. It also provides prospective students with a broader set of data points to consider when choosing an institution, moving beyond the traditional dominance of U.S. and U.K. universities. For faculty, ARTU presents a more balanced view of potential academic environments, encouraging recruitment from a wider array of institutions worldwide.
For research and development, ARTU highlights the diverse strengths of universities across different regions, encouraging international collaborations that may have previously been overlooked due to the overwhelming influence of rankings dominated by a few countries.
3. Center for World University Rankings (CWUR) (UAE)
Based in the United Arab Emirates, the Center for World University Rankings (CWUR) publishes an annual ranking that evaluates universities based on factors such as education quality, alumni employment, faculty quality, and research output. Unlike other rankings, CWUR does not rely on subjective surveys of academic reputation. Instead, it uses objective indicators to assess the real-world impact of universities.
CWUR’s methodology includes measures such as the number of Nobel laureates and Fields Medalists among alumni and faculty, the number of papers published in top-tier journals, and the level of industry collaboration. This provides a more concrete measure of university performance, especially in research and innovation, and offers a clearer picture of the universities that are genuinely shaping global industries and advancing scientific knowledge.
By focusing on measurable factors, CWUR is able to offer a ranking system that is less influenced by the biases that often favor U.S. and U.K. institutions. It encourages a more global outlook and can help break the traditional dominance of these countries by highlighting the contributions of universities in Asia, the Middle East, and Europe. For students, CWUR offers a new way of evaluating institutions, based on tangible outcomes rather than reputation alone.
4. Round University Ranking (RUR) (Russia)
The Round University Ranking (RUR), published by the independent RUR Rankings Agency based in Moscow, Russia, evaluates the performance of 750 leading universities worldwide using 20 indicators spread across four key areas: teaching, research, international diversity, and financial sustainability. The ranking has a broad international scope and is designed to serve as a valuable tool for key stakeholders in higher education, including prospective students, academic representatives, and university administrators. It aims to offer a transparent and comprehensive framework for benchmarking universities, intended for a wide audience of students, analysts, and decision-makers involved in higher education policy and development at both the institutional and national levels.
RUR’s methodology combines data from major international sources and provides a comprehensive assessment of global universities. It is one of the few rankings that places a significant emphasis on financial sustainability, which can be an important factor in a university’s long-term success.
By focusing on a broader set of indicators, including financial sustainability, RUR provides a more holistic view of universities, considering not just academic performance but also the institutional health that supports it. This allows institutions from a wider range of countries, particularly emerging economies, to be evaluated on more equal footing with traditionally dominant institutions.
For students, RUR offers a different perspective on what makes a university successful, moving beyond the traditional metrics of research output and teaching quality. Faculty members, especially in developing countries, may find RUR’s focus on financial sustainability appealing, as it highlights universities that are likely to provide long-term stability and opportunities for academic growth.
5. SCImago Institutions Rankings (Spain)
The SCImago Institutions Rankings (SIR) is an initiative based in Spain that evaluates universities and research institutions on their research output, innovation, and societal impact, drawing on data from the Scopus database. Since 2009, SIR has published the SIR World Report, an international ranking of research institutions worldwide, produced by a group comprising members from the Spanish National Research Council (CSIC), the University of Granada, Charles III University of Madrid, the University of Alcalá, the University of Extremadura, and other educational institutions across Spain. The ranking evaluates institutions on factors such as publication volume and rate, citations, normalized impact, and international collaboration.
The SIR rankings are unique in that they also measure the societal impact of research, which provides a more comprehensive assessment of universities’ roles in addressing global challenges. This helps shift the focus from purely academic metrics to broader societal contributions, which is particularly important for universities in non-English-speaking regions.
The SIR rankings are also notable for their inclusion of universities from Latin America, Asia, and Africa, which often perform well in terms of research collaborations and innovation. This challenges the dominance of U.S. and U.K. institutions by recognizing the global reach and impact of universities in other parts of the world. For students, the SIR rankings provide an alternative to the traditional metrics, encouraging them to look at universities that are making a real difference in society, not just in academic prestige.
6. U-Multirank (European Union)
U-Multirank is a European Commission-supported initiative aimed at enhancing transparency regarding the various missions and performances of higher education institutions and research organizations. The feasibility study for U-Multirank was designed to help achieve the European Commission’s goal of providing clearer insights into the diverse roles of universities. The initiative was officially launched on 13 May 2011 at a press conference in Brussels by Androulla Vassiliou, the European Commissioner responsible for education and culture. She stated that U-Multirank “will be beneficial to each participating higher education institution as a tool for planning and self-assessment. By offering students clearer information to inform their study choices, it introduces a new approach to ensuring more quality, relevance, and transparency in European higher education.”
Unlike traditional rankings, U-Multirank allows users to compare universities based on a range of indicators, including teaching, research, knowledge transfer, international orientation, and regional engagement. This flexibility allows students and faculty to prioritize the factors that matter most to them, rather than relying on a single ranking metric.
U-Multirank’s approach breaks away from the traditional rankings by offering a more personalized and detailed comparison of universities, which can be particularly useful for students looking for institutions that align with their specific academic interests or career goals. For faculty, U-Multirank highlights institutions that excel in specific areas of research or teaching, encouraging more targeted academic collaborations.
U-Multirank’s focus on regional engagement also challenges the hegemony of U.S. and U.K. universities by showcasing institutions that are having a significant local impact, particularly in Europe. This shift encourages a more localized perspective on higher education, where universities are seen as integral parts of their communities and regions.
7. Eduniversal (France)
Eduniversal, based in France, provides global rankings of business schools and universities, focusing on factors such as academic reputation, international diversity, and alumni outcomes. Unlike many other rankings, Eduniversal emphasizes the international mobility of students and faculty, as well as the internationalization of research. The ranking is owned by the French consulting company and rating agency SMBG, and covers master’s and MBA programmes across nine geographical zones spanning the five continents.
Eduniversal’s rankings are particularly important for business schools, where international experience and networking play a key role in the success of graduates. By highlighting institutions that excel in global engagement, Eduniversal encourages universities to strengthen their international partnerships and attract students from around the world.
For students, Eduniversal provides an alternative to traditional rankings by emphasizing global mobility and international experience. This helps challenge the dominance of U.S. and U.K. institutions by showcasing universities that offer strong international networks and career opportunities.
Don’t Chase the Holy Grail of Educational Excellence
Finally, ranking should not always be about educational excellence. Michelle Stack, who teaches in the Department of Educational Studies at the University of British Columbia and is the author of Global University Rankings and the Mediatization of Higher Education, warns that when universities chase the holy grail of ‘excellence’, big capital calls the tune and some students go hungry.
The challenges facing higher education institutions—regardless of their global ranking—seem to be multiplying. In the United States, for instance, 157 colleges are currently under federal investigation for mishandling cases of sexual assault. All too often, students report that academic leadership is more focused on protecting the institution’s reputation than addressing the safety and well-being of students. However, the way an institution responds to issues like systemic violence is not factored into determining its standing as a world-class university.
Moreover, many top-ranked institutions have been caught attempting to manipulate the system to boost their rankings. Some universities have even encouraged applications from students they know will not be admitted, thereby increasing their selectivity and improving their rankings. The more students a university rejects, the higher its score for student selectivity, which also contributes to a better rank. Peer reputation surveys, along with surveys of employers and high school counselors, play a significant role in many ranking systems. In response, universities with substantial financial resources invest heavily in promotional materials to ensure their name reaches the right people at the right time—namely, those filling out the surveys that influence rankings.
Universities should serve and be accountable to the communities they belong to—whether local, national, or international—but many major global rankings prioritize one key stakeholder: corporate interests. What’s often overlooked in these rankings is the broader population—the 97 to 99 percent of people who are not part of the elite institutions at the top. While universities invest significant resources to improve their rankings, tuition fees continue to rise, and students’ unions across the world are reporting an increasing number of students relying on food banks. The growing wealth of a select few is being built on the deepening poverty of the many.
“By focusing on a narrow definition of excellence, rankings limit universities’ potential to be truly relevant to the communities that support them,” asserts Dr Dababrata Chowdhury (Daba), a Reader in Digital Entrepreneurship at Canterbury Christ Church University, UK. “These communities provide research participants, land, tax exemptions, and funding. However, major global rankings fail to assess how well universities address the needs and concerns of these very communities.”
“While university rankings may offer useful information to stakeholders, they are also subject to significant methodological limitations and can have unintended consequences on institutional priorities and national higher education systems,” said Dr. Sophia Emerson, a well-known authority in higher education. “Rankings can incentivize universities to prioritize activities that are rewarded in the methodologies, potentially at the expense of other important missions like community engagement and undergraduate education,” warned Dr. Emerson.
Looking Underneath for Insight Rather than Scratching the Surface
The politics behind global university rankings are complex and far-reaching. “While these rankings offer a convenient way of comparing institutions, they also reinforce existing power structures, perpetuate inequality, and diminish the broader purposes of higher education,” says Dr P. R. Datta, executive chair at the Centre for Business & Economic Research (CBER), a London-based seminar and research organisation. “The reliance on subjective reputation surveys, research output, and international diversity rewards wealthy institutions, often at the expense of smaller or less prestigious universities.”
The vicious cycle of global university rankings highlights the importance of critically engaging with these systems. Universities, students, and policymakers must be aware of the biases inherent in these rankings and push for more nuanced, inclusive, and holistic assessments of higher education institutions. In the end, university rankings should serve as a tool for improving higher education, not a mechanism for perpetuating inequality and reinforcing the dominance of a few institutions.
“The rise of alternative world university rankings represents a significant shift in the global higher education landscape,” says Hiren Raval, chief executive of C3S Business School. “These rankings challenge the dominance of U.S. and U.K. universities by offering a more diversified and global view of academic excellence.”
Indeed, these alternate ranking systems impact student admissions by providing a broader range of institutions to consider, based on factors that go beyond traditional prestige and reputation. They also influence faculty engagement by highlighting universities that excel in specific areas, such as research, teaching, and societal impact, rather than simply ranking institutions based on research output alone. Finally, these rankings promote more global and diverse research collaborations, helping to break the hegemony of the U.S. and U.K. in shaping the future of higher education. Through this evolution in university rankings, the world of academia is becoming more inclusive, offering opportunities for institutions from all regions to shine on the global stage.
“It is crucial for policymakers, institutional leaders, and other stakeholders to critically examine the underlying approaches of these ranking systems and develop a nuanced understanding of their strengths and weaknesses,” urged Dr. Emerson.
Author: Sarat C. Das, Editor, Manager