For yet another glimpse of what’s wrong with higher education, read “Teaching Them How To Think,” the story of George Plopper, associate professor in the Department of Biomedical Engineering at Rensselaer Polytechnic Institute. After attending a conference on teaching and learning in 2004, Plopper had an epiphany of sorts, and now uses Bloom’s Taxonomy to assess student learning in his two upper-level courses at RPI. According to the article, this “has dramatically changed his approach to teaching and to determining what his students learn. No longer content to lecture from the front of the room and convey a series of complicated facts about cancer biology and extracellular matrix interactions, Plopper now makes the process and expectations of learning an explicit part of the syllabus.” All this “has changed his teaching, and made assessment part of the learning process—for both himself and his students.”
If he has found a technique that has made him a better teacher, bully for him, but modern academe’s tacit and unquestioning acceptance of pseudo-scientific techniques like Bloom’s Taxonomy to “measure” and “assess” appropriate “student learning outcomes” is very bad news indeed. Such techniques have already choked the K-12 system and have now begun to put a stranglehold on higher education, stifling the autonomy of college teachers and subverting the aims of liberal education.
According to the article, Plopper now “asks his students to sort through the subject matter, digest it, and teach it to one another, and he puts students in real-world scenarios they might encounter as scientists.” Such exercises “force students to harness and analyze information in ways they never truly had to do when he asked them to attend his lectures, deliver a presentation of their own, and take a final exam.” An A in his class “is a very different A than it used to be. . . . An A carries a much higher expectation of your ability to think.” That’s what assessment is all about, says Pat Hutchings of the Carnegie Foundation for the Advancement of Teaching. “Assessment means asking if students are learning what I think I’m teaching.”
Well, no kidding. Assessment (formerly “testing” and “grading”) is central to the acts of teaching and learning, and part of what (I suspect) teachers have always been doing. Any teacher worth his salt determines the extent to which his students have mastered the subject matter. This is commonly done by testing their grasp of certain facts and information, evaluating the logic of their arguments, determining whether they have drawn reasonable conclusions based upon evidence or reasonable inferences, holding them accountable for their assertions, forcing them to examine and explain their premises, and even challenging their opinions. Teachers are paid to use their professional judgment to evaluate the performance of students in their classes.
But now, professional judgment is not enough. Driven by pressure from state education departments and accrediting bodies, and by the ongoing need to ensure student success rates and institutional survival, teachers in colleges and universities are increasingly expected to use learning outcomes assessments that essentially aim to reduce student learning to quantifiable standards. That’s what Plopper is said to do so well—and why he’s featured in Inside Higher Ed. According to the article, he “backs up this assertion by charting what his students do on a grid on which Bloom’s Taxonomy is mapped. He can identify, quantify and document the instances—whether in an exam, class discussions or a presentation—in which his students have demonstrated the kind of learning that goes beyond memorization up to higher order thinking” (emphasis added). In other words, Plopper illustrates that teaching and learning can be scientific too: He has identified specific problems of student learning, developed methods to solve them, applied those methods, studied the feedback and results, measured the outcomes.
How Can You Prove Good Teaching?
No one denies that teachers must assess how much their students are learning and the effectiveness of their teaching. As Kevin Carey, policy director of Education Sector, writes in the Chronicle of Higher Education (December 2010), “If all you have to offer is unusually good teaching, you’re out of luck. How can you prove it? How would anyone know?” And few would deny, after the appearance of such books as Higher Education? and Academically Adrift, that too many professors and administrators are failing to hold students accountable but are letting them slide through college without learning much. If Pat Hutchings of the Carnegie Foundation had her way, assessment would be treated “as a form of scholarship” that plays a central role in “tenure and promotion processes,” and it would be linked “more clearly to teaching and learning.”
But there’s no clear-cut evidence that this does or would produce greater student learning outcomes. The article points to Plopper’s use of “project-based learning” (the vogue phrase is “collaborative learning”) to illustrate how he’s linked assessment with teaching and learning. Project-based learning, “not terribly rare in the realm of educational practice, particularly at the K-12 level,” is said to be effective because it “requires students to take on more complicated, multifaceted tasks that require them to deploy different skills,” often as “members of a team.” The assumption is not only that group learning actually works, but that it is necessarily a good thing. Personally, I never cared for group learning projects when I was an undergraduate, nor did I think they were effective. Neither do Richard Arum and Josipa Roksa, authors of Academically Adrift, who conclude that group work and collaborative learning have not lived up to the hype.
Most professors will tell you that group learning does little to promote self-knowledge and the habits of thought and mind essential to liberal education. “Listening to one another, students sometimes change their opinions,” writes Professor Mark Edmundson of the University of Virginia, but what they generally can’t do “is acquire a new vocabulary, a new perspective, that will cast issues in a fresh light.” Students must have their ideas driven back by continuous questioning; their minds must be turned upside down through engaging lectures and discussions, and group work can accomplish neither. Most professors will also tell you that group learning annuls the individual, suppresses individual intelligence, and reinforces the belief that the group is always right, as Gilbert Highet reminds us.
The one collective emotion that group work almost certainly engenders is resentment. Good students resent being forced to work with indifferent and weak students who take advantage of them; indifferent students resent group work because they are forced to do something they don’t want to do anyway (but at least they get to sponge off the good students); perhaps the only students who like group work are the weak ones because it encourages safety in numbers. Those who push it do a great disservice to students by inflating the hopes of weaker students and making them think that they are as capable of performing as the better students. Of course the article dismisses such claims, pointing out that Plopper “has devised safeguards” which “are rooted in how he designed and assesses his courses” (emphasis added).
The allure of the outcomes and assessment movement is that it’s shrouded in such language—the language of science. However, the unstated assumption is that, if teaching and learning follow the pattern of science, then teachers and students will achieve demonstrable success in higher education. This thinking would be harmless if it weren’t so widespread. Assessment centers, centers for institutional research, and centers for teaching excellence (or something like them) have peddled these techniques on campus for more than forty years—adding yet another layer to college and university bloat.
A Troublesome Movement
The movement is troublesome for many reasons, but for two in particular. First, it makes the final outcome the goal—the grade, passing the course, the degree—and negates the process itself. This in turn puts the burden for student performance almost exclusively on the backs of teachers. Until recently, it was generally understood that teachers shouldn’t be responsible for student learning outcomes because they have no control over students’ preparation for college, their motivation, their commitment to learning, their inherent intelligence or capability to succeed at the college level. Nor should they be. Not all students learn at the same rate or are equally adept in all academic subjects. But now the thinking is, so long as teachers assess their students properly, all they have to do is adjust their teaching to achieve the desired results.
That’s what grading rubrics can do, according to the June 2009 edition of the Advocate, the monthly journal of the National Education Association. A rubric is defined as an “assessment” or a “scoring tool that divides an assignment into its component parts and objectives, and provides a description of what constitutes acceptable and unacceptable levels of performance for each part.” Teachers are told to pass out the rubric before students start the assignment and make students turn it in with their papers (this way they will at least have to look at the criteria of the assignment). Students can even use rubrics to “assess themselves.” All students—but in particular, “non-traditional students”—really “appreciate” rubrics because “one of their challenges is understanding and interpreting academic language.” I understand the importance of making the criteria of an assignment clear, but using a rubric to identify every single criterion for students is simply another example of the kind of coddling and spoon-feeding that has become endemic in higher education. One of the challenges of being a student is precisely to figure out—i.e., understand and interpret—what you have to do to fulfill the requirements of an assignment, answer the question, and do a thorough job. If necessary, you ask the professor for clarification. That promotes both critical thinking and personal responsibility. Apparently not anymore.
The only proof offered for the effectiveness of rubrics is comments from some of the newly converted. “I used to worry that I graded some papers differently from others, especially those that I graded first versus those at the end,” said one teacher. “I feel much more secure when I grade with rubrics that I’m being consistent in my grading. I know I’m using the same criteria for all the students.” Two things strike me about this statement. First, this person claims to be more “consistent in my grading” and to use “the same criteria for all the students”—thus achieving objectivity and standardization—but he or she bases this claim on a feeling, the antithesis of what he or she is actually trying to achieve by using a rubric. If a rubric truly produced neutral grading, then this neutrality should be demonstrable by some objective and quantifiable standard.
Second, this person is apparently untroubled by replacing his or her professional judgment with a scoring tool. Not all papers can or should be graded the same because not all students think and read and write the same. That statement illustrates the outcomes and assessment mentality: the belief that the mechanization of method can solve the problem of poor student writing in an age of mass education. Probably the most honest comment among those listed came from a person who wrote: “My students love the rubrics. My teaching evaluations have gone up!” Don’t get me wrong. Rubrics are convenient—I use them myself—but let’s not fool anyone. They are a facade. They merely give the impression of objectivity and serious evaluation, but they are nothing more than checklists devoid of substantive criticism and judgment. Their real purpose is to save time.
Second, the movement attempts to standardize teaching, which stifles creativity, leads toward greater uniformity, and rewards mediocrity in both teachers and students. It ignores the fact that teaching is an art that requires dexterity and finesse, and the ability of teachers to see and understand its relation to both individual students and liberal education as a whole. Imagine Socrates in The Republic charting his discussions with Glaucon and Adeimantus on a grid and mapping them according to Bloom’s Taxonomy, identifying, quantifying, and documenting whether they have demonstrated higher order thinking by using a grading rubric.
What Plato’s dialogues show is that the acts of teaching and learning are more subtle, delicate, and elusive than the outcomes and assessment movement would have us think. At its best, liberal education teaches students the grounds of knowledge: not what they know, but how they know. In Plato’s dialogues, Socrates generated active participation and higher order thinking among his students, not only because he demonstrated the intrinsic value of education, but because he knew that the process mattered most. He was never dogmatic or rigid, and therefore turned learning into that “peculiar dynamic state which people experience when they are fully immersed in an activity.” The Socratic method is the best example of total involvement, of doing and being at the same time, that is achieved when teaching is not boring for teachers, and learning no longer a chore for students. It raises insights and originates in students what they cannot originate themselves.
Socrates was a great teacher, writes philosopher and educator James P. Carse, because he initiated thinking in his students. He exposed the source and then he stepped back. “We know we have met such a teacher when we come away amazed not at what the teacher was thinking but at what we are thinking,” he adds. “We forget what the teacher is saying because we are listening to a source deeper than the teachings themselves.” This is the appropriate end of liberal education, the ultimate proof that it has succeeded, and why current modes of assessment and outcomes are misguided: You can’t assess becoming then being. The essence of teaching and learning is in the acts themselves, as former football coach Tony Dungy correctly put it: “It’s the journey that matters. Learning is more important than the test.”
We can agree with Kevin Carey when he writes that the “real debate shouldn’t be about whether we need a measuring stick for higher education. We need a debate about who gets to design the stick, who owns it, and who decides how it will be used.” But I doubt that this debate will produce assessments that can measure spontaneity, creativity, imagination, wit, and wisdom.