College Rankings Are Everywhere, but What Do They Really Tell You?
S. Barari
Research Methodologist
September 2025
We live in a world that loves to measure. From professors to playlists to restaurants, these days nearly everything can be ranked, rated, and compared.
I knew this long before I became a quantitative social scientist. As a millennial, I came of age in the early days of online blogs, social media, and big data, when sites like College Confidential and Naviance GPA calculators turned college admissions into a numbers game. Of course, U.S. News & World Report had already been ranking colleges for decades, but by the 2010s, the cultural message was clear: college was about winners and losers, and rankings told us which was which.
Parents, high-school students, journalists, and university leaders all watch the college rankings with a mix of anticipation and dread. Where does my school stand? Has it risen or fallen? What does that say about its quality? And, of course: how did my rival institution do and how can I rub it in if they dropped a few spots? Given the skyrocketing costs of higher ed tuition, it is not surprising that the stakes of “optimizing” the college experience feel so high.
Last year, my NORC colleagues and I assessed five major college ranking systems. We asked a simple but important question: are these rankings really measuring what they aspire to measure? Our analysis highlighted several challenges to this goal, but three in particular stand out:
- Opaque methodologies. It’s not always clear why the various ranking systems include certain indicators or apply a seemingly arbitrary cut-off for institutional eligibility (say, a minimum enrollment of 1,000 students instead of 2,000). These decisions are consequential because they can change, and possibly narrow, the universe of institutions that students are exposed to. And because these methodological decisions change over time, it can be hard to know whether a school’s movement up or down reflects changes in quality or changes in methods.
- Subjectivity baked in. Critical ranking components, such as reputational surveys, rely on the subjective judgments of internal experts, which are hard to compare in any rigorous way. But the most consequential subjectivity lies in the weighting: why, for instance, should affordability count for 20 percent of an institution’s “score” while another factor counts for far less? Rankings often give good reasons why experts chose certain weights (large weights for affordability and salary measures, for example, reflect growing national concern about student debt), but the importance of these weights varies from student to student.
- Fundamental issues with ranks themselves. Reducing complex, multidimensional institutions to a single number creates problems of interpretation. Rankings imply equal intervals between positions that don’t really exist: the underlying gap between #1 and #2 may not be the same as the gap between #2 and #3. Rankings also rarely display uncertainty, even though communicating margins of error is best practice in virtually every other type of measurement. (The short sketch after this list makes the interval problem concrete.)
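To see why equal rank steps can mislead, here is a minimal sketch in Python. The schools and composite scores are entirely made up for illustration; no real ranking data is used.

```python
# Hypothetical composite scores: equal one-step differences in rank
# can hide very unequal differences in the underlying scores.
scores = {"School A": 95.1, "School B": 94.9, "School C": 88.0}

# Sort schools by score to assign ranks (1 = highest score).
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for rank, (school, score) in enumerate(ranked, start=1):
    print(f"#{rank}: {school} (score = {score})")

# The #1-vs-#2 gap is 0.2 points; the #2-vs-#3 gap is 6.9 points.
# Both collapse into a single rank step, so the rank alone loses this information.
gap_12 = ranked[0][1] - ranked[1][1]
gap_23 = ranked[1][1] - ranked[2][1]
print(f"Gap between #1 and #2: {gap_12:.1f} points")
print(f"Gap between #2 and #3: {gap_23:.1f} points")
```

A student comparing #1 and #2 here is looking at a near-tie; a student comparing #2 and #3 is not. The ranks alone cannot distinguish the two cases.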
In other words, while rankings wield enormous influence, they often rest on shaky measurement foundations.
But rankings don’t stop at the big players like U.S. News & World Report or The Wall Street Journal. Today there are hundreds of such lists, ranging from the whimsical (“most haunted campuses”) to the highly specialized (“best colleges for pre-law students”). In follow-on research, we’ve taken a closer look at another especially influential corner of this landscape: international rankings—those that extend beyond American universities.
International rankings have brought welcome attention to global comparisons and the role of U.S. universities on the world stage. With a much wider universe of institutions in play, these systems have pushed schools to think differently about research output, collaboration between faculty, and student exchange.
However, even innovation comes with trade-offs. International rankings often try to compare institutions across vastly different contexts, so they default to the lowest common denominator: indicators that are easy to collect globally, like publication counts or faculty-to-student ratios. The resulting data are more comparable, but not necessarily more meaningful to prospective students. And because these indicators are often proxies for qualities that are unavailable or impossible to measure directly, universities can and do find ways to game the system once they know what’s being measured.
One of the biggest issues remains the illusion of precision. Rankings present themselves as definitive hierarchies, but they rarely communicate uncertainty. Is School A really “better” than School B, or is that difference within the margin of error? Their ranks alone cannot tell you.
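One way to make that question concrete is to treat each school’s composite score as an average of noisy indicators and bootstrap it. The sketch below uses entirely hypothetical indicator values; the point is the method, not the numbers.

```python
# Bootstrap two hypothetical composite scores and ask how often the
# nominally "better" school actually comes out on top across resamples.
import random

random.seed(42)

# Hypothetical standardized sub-scores for each school (made up).
school_a = [78, 85, 90, 72, 88, 81, 79, 92]
school_b = [80, 83, 86, 75, 84, 82, 85, 87]

def bootstrap_means(values, n_boot=10_000):
    """Resample the indicators with replacement; return bootstrap means."""
    n = len(values)
    return [sum(random.choices(values, k=n)) / n for _ in range(n_boot)]

means_a = bootstrap_means(school_a)
means_b = bootstrap_means(school_b)

# Share of bootstrap replicates in which School A's composite beats School B's.
a_wins = sum(a > b for a, b in zip(means_a, means_b)) / len(means_a)
print(f"School A mean: {sum(school_a) / len(school_a):.1f}")
print(f"School B mean: {sum(school_b) / len(school_b):.1f}")
print(f"Share of resamples where School A outranks School B: {a_wins:.2f}")
```

In this toy example, School A’s average edges out School B’s, yet the ordering flips in a large share of resamples. A published ranking would simply list School A above School B, with no hint of that instability.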
So where do we go from here? I believe the future lies in personalization. Rankings should serve the individual, not the other way around. That means acknowledging that what matters most in choosing a college—size, location, co-curricular opportunities, even the weather—is subjective, but still important.
Imagine a system that allows each student to define their own weights, trade-offs, and deal-breakers, much like a guidance counselor might help them do. With the right guardrails, AI could actually play a constructive role here, helping students explore fit without pretending there’s a single “best” college for everyone.
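As a rough illustration of what such a system might do under the hood, here is a hypothetical sketch. The school names, attribute scores, weights, deal-breaker threshold, and the personalized_score helper are all invented for this example rather than drawn from any real ranking.

```python
# Personalized "ranking": each student supplies their own weights and
# deal-breakers, and the ordering is recomputed just for them.
schools = {
    "School A": {"affordability": 0.9, "research": 0.6, "size": 0.3, "location": 0.8},
    "School B": {"affordability": 0.5, "research": 0.9, "size": 0.7, "location": 0.6},
    "School C": {"affordability": 0.7, "research": 0.4, "size": 0.9, "location": 0.9},
}

# One student's priorities (weights sum to 1) and a deal-breaker cutoff.
my_weights = {"affordability": 0.5, "research": 0.1, "size": 0.1, "location": 0.3}
deal_breakers = {"affordability": 0.6}  # drop any school scoring below this

def personalized_score(attrs, weights):
    """Weighted sum of attribute scores using the student's own weights."""
    return sum(weights[k] * attrs[k] for k in weights)

# Apply deal-breakers first, then rank the survivors by personalized score.
eligible = {
    name: attrs
    for name, attrs in schools.items()
    if all(attrs[k] >= cutoff for k, cutoff in deal_breakers.items())
}

for name, attrs in sorted(eligible.items(),
                          key=lambda kv: personalized_score(kv[1], my_weights),
                          reverse=True):
    print(f"{name}: {personalized_score(attrs, my_weights):.2f}")
```

For this student, School B never even enters the comparison because it fails the affordability deal-breaker, and the remaining schools are ordered by the student’s weights rather than a publisher’s. A different student, with different weights, would get a different list from the same data.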
A world with more college rankings does not automatically mean better information for prospective college students. The real opportunity is to move from a fixation on winners and losers toward genuine insights that help people make thoughtful, informed decisions.
Suggested Citation
Barari, S. (2025, September 18). College rankings are everywhere, but what do they really tell you? [Web blog post]. NORC at the University of Chicago. Retrieved from www.norc.org.