
Colleges and universities are in the business of knowledge—creating and disseminating information, ideas, and facts. Some, such as community and small liberal arts colleges, almost exclusively disseminate ideas created by others, while larger universities aspire to be powerhouses in creating new knowledge. But virtually all of them work assiduously at preventing the general public from gaining much knowledge about their own successes and failures. Indeed, university public relations offices typically present a highly distorted account of achievements—reporting genuine advances in research, for example, but often ignoring or concealing failures. Implicitly, universities spend lots of money on offices of disinformation. Accrediting agencies could provide good information but don't: their detailed reports are rarely circulated to the public and are kept private by the schools involved. Seldom does a college lose its accreditation simply because it is bad, and regional and subject-area accreditors typically share little or no meaningful information with the public.
Yet, it is abundantly clear that the quality of colleges and universities varies enormously, and that graduates of schools generally perceived to be of middling or poor quality are increasingly having trouble getting good entry-level jobs. New York Federal Reserve data suggest that roughly 41 percent of recent college graduates are “underemployed,” working in jobs that do not require a college degree. Between 2010 and 2025, college enrollments nationwide actually declined, but the top schools experienced no decline and generally saw significantly increased numbers of applicants. There is a flight to quality, but how do we determine which schools are the best, especially when there are no national measures of student achievement, such as tests similar to the ACT or SAT administered to graduating college seniors?
[RELATED: The Problem with College Rankings]
As usual, the competitive market economy provides a solution: rankings of schools by quality. Led decades ago by U.S. News & World Report (USNWR), several organizations began ranking universities and their components, such as law schools and M.B.A. programs. Full disclosure: about 15 years ago, I started one of the major rankings, published in Forbes.
Opinions diverge markedly on what constitutes excellence in higher education, as the differences among the latest rankings indicate. During my years in the ranking business, I found that merely changing the weights assigned to various perceived determinants of excellence could significantly alter the reported rank of any specific institution.
Let me give you an example.
[RELATED: Is Berkeley America’s Safest College Town? Depends on Who You Ask]
Suppose 10 factors are used in assessing the performance of a university—some of minor importance, weighted at five percent of the total ranking, others much more heavily, at 15 or 20 percent. Sometimes, my brainy student assistant, Dan Garrett, now a finance professor at the Wharton School at the University of Pennsylvania, and I would play little games to see how much changing the weights assigned to the variables could change rankings. Suppose I wanted to raise the ranking of my university, Ohio University (OHIO), relative to its traditional rival, Miami University in Ohio (Miami). Suppose OHIO had a good student retention rate between the freshman and sophomore years but a mediocre four-year graduation rate, whereas Miami's retention rate was so-so but its graduation rate was good. If we increased the weight placed on retention rates from five percent to 15 percent—but lowered the graduation-rate weight from 15 to five percent—we very conceivably could raise the overall ranking of OHIO by more than 100 places—from, say, 400th out of 600 schools to maybe 280th—while lowering Miami from 240th to 295th. Voilà! OHIO surpasses Miami in purported overall excellence!

At least once, an obnoxious Forbes editor interjected some heated objection to the ranking of a favored school—I think his alma mater—which we largely ignored. Of course, Forbes was paying our salaries, so we had to show at least some feigned interest in the editor's rants. And perhaps his perceptions of excellence were superior to ours.
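To make the arithmetic of the example above concrete, here is a minimal Python sketch. The factor scores and weights are hypothetical, invented only to mirror the retention-versus-graduation story; the 80 percent of the composite assumed identical for both schools is omitted, and the remaining 20 percent is rescaled to sum to one.

```python
# Minimal sketch of how swapping two weights flips a two-school comparison.
# All scores and weights are hypothetical; they are not real rankings data.

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Composite score: sum of each factor score times its weight."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[f] * weights[f] for f in weights)

# Hypothetical 0-100 scores on the two factors where the schools differ.
ohio  = {"retention": 85, "graduation": 55}   # good retention, weak graduation
miami = {"retention": 70, "graduation": 80}   # so-so retention, good graduation

# Scheme A: graduation weighted 15%, retention 5% (rescaled to the 20% that differs).
scheme_a = {"retention": 0.05 / 0.20, "graduation": 0.15 / 0.20}
# Scheme B: the same two weights swapped, now favoring retention.
scheme_b = {"retention": 0.15 / 0.20, "graduation": 0.05 / 0.20}

for name, weights in [("A (graduation-heavy)", scheme_a),
                      ("B (retention-heavy)", scheme_b)]:
    o, m = weighted_score(ohio, weights), weighted_score(miami, weights)
    leader = "OHIO" if o > m else "Miami"
    print(f"Scheme {name}: OHIO={o:.1f}, Miami={m:.1f} -> {leader} ranks higher")
```

Under scheme A, Miami scores 77.5 to OHIO's 62.5; swap the two weights and OHIO wins 77.5 to 72.5. Nothing about either school has changed; only the ranker's opinion of what matters did.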
Ultimately, rankings are based on individual perceptions of excellence. And with time, the original ranker, USNWR, has significantly shifted its emphasis. For example, in 2005, USNWR had no public university in its top 20 but included all eight Ivy League schools. Now, the University of California at Berkeley, UCLA, and the University of Michigan are among the top 20—likely due in part to changing weights assigned to the factors used in assessing quality, which may reflect public criticism that the rankings overly favored values prized by elite private institutions.
Image: “Ohio University College Green” by Orchysterium on Wikimedia Commons