Last week on its “Working It Out” page, The Atlantic Monthly posed a far-reaching question:
Should each college be required to
prominently post consumer information for prospective students — a kind of
nutrition label for higher ed?
Writer Marty Nemko wonders whether colleges should have
to post consumer information, “externally-audited,” so that prospective
students know more about what they are getting into when they enroll. After all, given the huge investment of time
and money, not to mention the piles of information (most of it hyped) that
colleges produce about themselves, an independent report could only be a
benefit to college-bound kids. It might
also push colleges away from glossy attractions (a great gym, etc.). But if the ensuing report focuses too
much on consumer information, such as the employment prospects of recent graduates,
colleges may respond by narrowing their offerings to market-driven majors,
leaving out much of the liberal arts.
Apparently, the question sparked a huge response. In a follow-up, Senior
Editor Derek Thompson noted the polarized opinions of the people who
wrote in. Those asking for more
information from colleges “called on schools to publish specs like
debt-after-graduation and earnings-by-degree,” he reports. “In the anti camp, we heard most loudly from
people who consider the college experience unquantifiable and easily perverted
by simple metrics.”
People bring up student debt, graduation rates,
tight job markets, and “alumni satisfaction,” but one thing doesn’t come up,
and it’s the most important of all for distinguishing one school from
another once cost is taken into account. Consumer reports like to evaluate
performance, and so what we need to find out in higher education is how much
students learn relative to their grades.
What do English majors at State U learn if they earn a B+ average over
their undergraduate career? What do
chemistry majors know after two years of study that yields a B?
The best thing to do would be to fashion pre- and
post-tests and compare “value-added” performance by different schools. We need the grades in order to account for
student effort and talent. If we rely
only on the tests in the discipline, and do not correlate those scores with the
grades the university assigned to those students along the way, then we can’t
be sure what standards the university applies.
If a student doesn’t do well on the tests and doesn’t do well in grades,
then we can’t blame the university. If a
student does well in grades but poorly on the tests, then that university
suffers from too much grade inflation.
If a student enters an English major with a certain amount of knowledge
and improves that amount negligibly over the next two years and still earns B+
grades, the school doesn’t have a rigorous curriculum. And so on.
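The case-by-case logic above can be sketched as a toy classifier. Everything here is an illustrative assumption of mine, not anything from the article: the 0–100 test scale, the 4.0 GPA scale, and the thresholds for a “real” learning gain or a “strong” grade average are all invented for the sake of the sketch.

```python
# A toy sketch of the grades-vs-tests diagnosis described above.
# Scales and thresholds are illustrative assumptions, not real standards.

def diagnose(pre_score, post_score, gpa,
             gain_threshold=10, gpa_threshold=3.0):
    """Classify a student's record into the essay's four cases.

    pre_score, post_score: discipline test scores (0-100) at entry and exit.
    gpa: average grade on a 4.0 scale.
    """
    gain = post_score - pre_score          # the "value-added" measure
    strong_tests = gain >= gain_threshold
    strong_grades = gpa >= gpa_threshold

    if not strong_tests and not strong_grades:
        return "student struggled; can't blame the university"
    if not strong_tests and strong_grades:
        return "grade inflation or lax curriculum suspected"
    if strong_tests and not strong_grades:
        return "rigorous grading; learning outpaces grades"
    return "grades track learning"

# The B+ English major whose knowledge improves negligibly:
print(diagnose(pre_score=60, post_score=62, gpa=3.4))
```

Aggregated over many students, a pattern of the second outcome at one school and the fourth at another is exactly the distinction the hypothetical consumer report would surface.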
This would fill out the “consumer report,” though it
may prove to embarrass a fair number of schools. And there’s another roadblock: who is
going to make up the tests in those value-heavy disciplines such as history,
English, ethnic studies . . .?