Two Problems with the New Doctoral Rankings

The National Research Council has finally issued its rankings of doctoral programs, and coverage has already appeared in several outlets. Right now, everybody is trying to assimilate the results, which are more complicated than those in the 1995 report. The “Data-Based Assessment” runs to 282 pages, the “Guide to the Methodology” to 57 pages, and each contains numerous cautionary notes about the conclusiveness of the findings.
Still, however confusing and tentative the results, they bear tremendous authority, and universities will plumb them for good news and trumpet them in years to come. For this reason, any criterion that plays a role in the rankings process has a powerful, long-term impact on post-graduate education and research in the United States. If the NRC used one, the logic goes, then it’s a settled norm, and universities looking to rise in the next version of the rankings should consider it well.
It is troubling, then, to find two criteria implemented in the project that are, in truth, dubious measures of the quality of research in at least one area of graduate study, the humanities.


One is the variable “Diversity of the academic environment.” As part of the survey, researchers asked respondents questions about each program’s diversity and included the findings among the “dimensional measures” (as outlined in the NRC’s downloadable statement of methodology).
The trouble with “diversity” starts with the restrictive definition. In the report it means three things: “the percent of faculty and the percent of students who are from underrepresented minority groups, the percent of faculty and the percent of students who are female, and the percent of students who are international (that is, in the United States on a temporary visa).”
In other words, diversity is understood entirely in terms of group identity. Nothing about income diversity, religious diversity, or ideological diversity. The insertion of “underrepresented” signifies, too, the exclusion of Asians from the diversity calculation.
With those kinds of diversity removed, the NRC may assert, “Diversity among the faculty has improved impressively since the 1995 NRC study.” New PhDs are 45 percent female (Inside Higher Ed reported recently that more women than men obtained doctorates in 2008-09). Minorities (excluding Asians) earned only 7.4 percent of PhDs in 1993, but 13.5 percent in 2006. Furthermore, while white males earned 12,867 doctorates in 1993, that figure fell sharply to 7,297 in 2006.
As always, such arrangements raise dozens of questions, such as whether a program that has more female doctoral students than male students gets a lower rating than one with a 50-50 breakdown. One can also ask whether the diversity rate correlates with the quality of research produced by faculty members and graduate students in a department.
The report itself acknowledges the latter question and answers unequivocally: “a program that is more diverse may be preferable to many students, although diversity bears only a tenuous relation with the usual measures of scholarly productivity.” That is, a program might score highly in “research activity” as measured by the number of publications produced and citations garnered from the members of the faculty while having a low diversity score. Moreover, “The diversity measures did not appear as major factors in determining the overall perceived quality of programs.”
While we join the NRC in applauding the rising representation of women and underrepresented minorities on campus, the trend says little about other scholarly measures of quality. Instead, the diversity variable applies to the social climate of a department, more specifically, to the perception of how “welcoming” and inclusive it is. And underlying that claim are assumptions common to diversity-thinking about sensitivity, role models, and disproportionate outcomes. That it now plays a significant role in the ultra-important rankings of graduate research programs shows just how thoroughly diversity has entered the evaluation arena.
The other troubling criterion in the study is faculty output. Research productivity stands first in the assessments, or rather, not simple productivity but a larger constellation of activities associated with research in a field. “Research activity is the dimensional measure that most closely tracks the overall measures of program quality,” the methodology statement says, and that includes publications, citations, grants, and honors and awards. There is no reason to argue with that, until we discover a glaring gap in the data. It lies in the evaluation of humanities departments, for while the NRC counted publications in humanities departments, it did not count citations. This is likely because the apparatus used to calculate citations in the sciences, the Thomson Reuters database (formerly the Institute for Scientific Information), picks up only a portion of humanities journals and no humanities monographs.
Without any citation-tracking, and with grants and awards occurring much less frequently in the humanities than in the sciences, we are left with the production side of scholarship as virtually the sole measure of research activity in English, Spanish, and other humanities departments. The focus ends up gauging quantity, not quality. A faculty member who produced in the last 15 years only one monograph on Renaissance drama, but one that proved influential to dozens of scholars in the field, contributes less to a department’s ranking than a faculty member who produced four books on Romantic poetry that attracted hardly any attention at all.
This emphasis on sheer productivity is particularly risky in areas in which books and articles have flooded the field but evidence of their assimilation by other scholars is slim. In the fields of language and literature, productivity has exploded, with the annual production of scholarship rising from 14,000 items in the early Sixties to more than 70,000 today. The vast majority of those items go unheeded. Literary monographs now sell only a few hundred copies, almost all of them to academic libraries, and scholarly essays usually receive only a couple of citations in professional journals in subsequent years (according to Google Scholar, the best citation search of humanities journals).
This is to say that the NRC rankings method follows a dubious formula, namely, “output = value.” It rewards the mass-producer, the professor who churns out books and articles with imposing regularity. The professor who spends ten years doing archival research, testing ideas and interpretations in classrooms and in conversation with colleagues, rewriting sentences and paragraphs into eloquent prose, and disdaining the fashions of the moment becomes a liability.
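To make the incentive concrete, here is a minimal sketch of a publications-only scoring rule. To be clear, this is not the NRC’s actual formula; the function name, weights, and numbers are invented for illustration, and the only assumption taken from the report is that humanities citations go uncounted.

    # Hypothetical sketch: a linear "research activity" score in which
    # publications always count but citations count only where the field's
    # citation data is tracked. Weights are invented, not the NRC's.
    def research_activity_score(publications: int, citations: int,
                                citations_counted: bool) -> float:
        pub_weight, cite_weight = 1.0, 0.1  # invented weights
        score = pub_weight * publications
        if citations_counted:
            score += cite_weight * citations
        return score

    # One influential monograph vs. four little-read books, in a
    # humanities field where citations are not tracked:
    influential = research_activity_score(publications=1, citations=200,
                                          citations_counted=False)
    prolific = research_activity_score(publications=4, citations=3,
                                       citations_counted=False)
    print(influential, prolific)  # 1.0 vs. 4.0: quantity wins

Flip citations_counted to True, as it effectively is in the sciences, and the influential monograph wins, 21.0 to 4.3, which is exactly the correction the humanities data cannot make.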
Likewise, a graduate student who speeds toward graduation makes the department look stronger than the graduate student who plods deliberately through a dissertation that requires many months of library work. As one note in a chart that displays faculty assessments of “student support and outcomes” puts it, “Time to degree has a negative weight reflecting that a shorter time is better.” A department that rushes graduate students to a humanities PhD in five years beats a department that holds them for eight years.
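The arithmetic of a negative weight is simple to illustrate. The sketch below is hypothetical: the weights and the completion-rate term are invented, and only the sign of the time-to-degree weight comes from the report’s note.

    # Hypothetical "student support and outcomes" component. The negative
    # weight on time to degree follows the report's note that "a shorter
    # time is better"; everything else here is invented for illustration.
    def outcomes_score(completion_rate: float, years_to_degree: float) -> float:
        completion_weight = 10.0  # invented
        time_weight = -1.0        # negative: each extra year lowers the score
        return completion_weight * completion_rate + time_weight * years_to_degree

    fast_program = outcomes_score(completion_rate=0.8, years_to_degree=5)  # 3.0
    slow_program = outcomes_score(completion_rate=0.8, years_to_degree=8)  # 0.0
    print(fast_program, slow_program)  # the five-year program scores higher

Nothing in such a formula can register whether the three extra years produced a deeper dissertation; the weight sees only the clock.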
Unfortunately, the fact that some humanities fields demand broad generalist knowledge as well as research specialization doesn’t fit that calculation. On this measure, as with productivity in general, the NRC accepts the streamlined quantification of post-graduate study and faculty research. If humanities departments were wise, they would respond to the NRC report with effusive thanks and a polite request to recalibrate the rankings for their own fields, this time removing speed and output from the diagnostic, and adding a few other diversity elements to it as well.

Author

  • Mark Bauerlein

    Mark Bauerlein is a professor emeritus of English at Emory University and an editor at First Things, where he hosts a podcast twice a week. He is the author of five books, including The Dumbest Generation Grows Up: From Stupefied Youth to Dangerous Adults.
