Tag Archives: test

Harvard Botches a ‘Cheating’ Scandal


By Harvey Silverglate and Zachary Bloom

At first blush, the ongoing cheating scandal at Harvard College appears to raise serious questions about academic integrity at that fabled institution. If the allegations that 125 students inappropriately shared notes and answers for a take-home exam in violation of the exam’s rules prove true, the result will be a massive blot on Harvard’s near-perfectly manicured public image–especially now that top athletes have been implicated.

But let’s remember that because of the course’s confusing rules and guidelines concerning collaboration, no one, likely not even the students themselves, can say right now whether their conduct was illicit. Worse yet, we may never know the truth, much less have a just verdict on the propriety of the students’ actions, now that the case is securely in the hands of the spooks haunting Harvard’s notorious Administrative Board.

Continue reading Harvard Botches a ‘Cheating’ Scandal

How Group Learning Invites Cheating

The most shocking thing about the Harvard cheating scandal was not that 125 students out of a class of 279 were found to have “committed acts of academic dishonesty” on an exam last spring, or even that the exam was for a course that was supposed to be an easy mark. It was that it happened at Harvard, the elite of the elite, where it is understood that only the smartest kids are accepted. Why would they have to cheat?

As the details became clear (at first, significantly enough, in the sports magazines), it developed that the course, Government 1310: Introduction to Congress, had the reputation of being a cinch to pass. But last spring the exam was harder. It was a take-home, open-book, and open-Internet assignment over a weekend, but this time students were expected to write essay answers, not just select answers from multiple choices. And when the papers were graded, more than half were found to have given answers that were the same as another student’s, word for word.

When the facts became public, there was no joy in Cambridge. The stars of Harvard’s outstanding basketball team were among the large proportion of athletes taking the course. It remained unclear what punishment awaited the guilty, since it could not be determined whether students had been collaborating on answers or plagiarizing outright from the Internet or each other.

Generosity Was the Excuse

One indignant Harvard student maintained that collaboration was “encouraged, expected.” That attitude also seemed to apply at Stuyvesant High School, New York City’s outstanding school, where a similar scandal was revealed. This time 140 students were involved, all receiving help from a classmate using his cell phone to send answers to his friends and those he wanted to become his friends.  The tests (the system was applied to several of them) were the prestigious Regents exams, important factors in college acceptances.  Ironically, the admitted aim of most Stuyvesant students, who face stiff competition getting into Stuyvesant and maintaining high grades once they get there, is to be admitted to Harvard.

Continue reading How Group Learning Invites Cheating

Harvard’s Cheating Scandal

Yesterday Harvard University announced its investigation of about 125 undergraduates who are believed to have improperly collaborated on a take-home final examination last spring. It is tempting to use this case to generalize about an Ivy League sense of entitlement, declining student morals in general, or perhaps the failure of Harvard and other universities to teach character and a sense of honor to their students along with their academic subjects. For now, though, we should focus on the specifics of this cheating incident, or at least what we know of them, since many of the precise details of the scandal have yet to emerge:

1. The class in question, “Introduction to Congress,” enrolled more than 250 students. If Harvard’s suspicions are correct, this means that half the class thought they could get away with violating a specific instruction in the exam itself: “[S]tudents may not discuss the exam with others–this includes resident tutors, writing centers, etc.” Most college cheating rings are relatively small groups of trusted friends. Not this one.

2. The cheating appeared to be careless and blatant. A graduate-student teaching fellow grading the exams uncovered the alleged collaboration after noticing that several of them contained the exact same words or strings of ideas in answering some of the exam questions. The students allegedly involved didn’t bother to disguise what they were doing very artfully (surprising for clever Harvardians)–because they thought they could get away with it.

3. Many students didn’t like the class very much. According to Harvard Crimson reporter Rebecca D. Robbins, Harvard’s “Q Guide” of student course evaluations gave “Introduction to Congress” a score of 2.54 out of a possible 5. Robbins noted that the average score for social-science courses at Harvard was 3.91. Some of the student evaluators took the course to task for lack of organization and difficult exam questions. One student wrote that she and about 15 other students, most of whom had stayed up all night working on the exam, gathered at a teaching fellow’s office for clarifications a few hours before the deadline because they didn’t understand one question worth 20 percent of the grade. “On top of this, one of the questions asked us about a term that had never been defined in any of our readings and had not been properly defined in class, so the TF had to give us a definition to use for the question,” the student wrote.

None of this excuses in the slightest what went on last spring. Students found to have collaborated on that exam deserve nothing less than to be suspended for a year–which is apparently Harvard’s maximum punishment. However, there’s a lot here we just don’t know.

Common Core Standards Can Save Us


It’s no secret that most high school graduates are unprepared for college. Every year, 1.7 million first-year college students are enrolled in remedial classes at a cost of about $3 billion annually, the Associated Press recently reported. Scores on the 2011 ACT college entrance exam showed that only 1 in 4 high school graduates was ready for the first year of college.

Continue reading Common Core Standards Can Save Us

Bad News for the University of Texas

On March 14, Washington Post reporter Daniel de Vise, in his piece “Trying to assess learning gives colleges their own test anxiety,” reported that the University of Texas at Austin ranks very low in student learning gains. “For learning gains from freshman to senior year,” writes de Vise, “UT ranked in the 23rd percentile among like institutions. In other words, 77 percent of universities with similar students performed better.” The Post obtained this data through a public records request. The standardized test used was the Collegiate Learning Assessment.

Continue reading Bad News for the University of Texas

It’s Not the Test’s Fault

Cross-posted from National Association of Scholars.


Fall 2011 has seen some major milestones for the SAT/ACT optional movement. DePaul University, for instance, initiated its first admission cycle sans test requirement. Clark University announced last month that it will offer test-optional admissions for the incoming class of 2013.

In his new book released this fall titled SAT Wars, sociologist Joseph A. Soares of Wake Forest University hails the success of test-optional admission policies. Wake Forest was the first of the top 30 U.S. News schools to go test-optional and is one of the most vocal cheerleaders of the movement through its blog Rethinking Admissions.  According to Soares, adopting policies that allow applicants to opt out of reporting their scores has successfully resulted in diversifying these campuses by race, gender, ethnicity, and class (groups he claims are excluded unfairly for underperforming on standardized tests) without compromising overall academic quality.

By all appearances, requirements for standardized testing in higher-ed admissions are on the long and ragged road out the door. To date, nearly 850 colleges and universities (40% of all accredited, bachelor-degree-granting schools in the country) have already bidden farewell to the test requirement in some form or another. Fifty-three of these institutions are currently listed in the top tier of the “Best Liberal Arts Colleges” list published by U.S. News and World Report, including Bowdoin, Smith, Bates, Holy Cross, and Mount Holyoke Colleges. Even some of U.S. News’ high-ranking national universities, such as Wake Forest University, Worcester Polytechnic Institute, and American University, are categorized as test-optional. It now seems likely that this trend will only gain in popularity and momentum in the coming years.

So is the SAT-optional movement a good thing? I have always loathed standardized tests myself, once conferring with my second-grade teacher because I was certain that my scores were insufficient and that I was falling behind my peers. It turned out that to be in the 94th percentile really was a good thing even if it was less than 100 – my eight-year-old mind just couldn’t comprehend this at the time.

Yet even after my elementary school pep talk on the nature of scaled grading, I always had this lingering feeling that standardized test scores were somehow an unfair representation of what I could do. Perhaps I simply fell into the category of being a “poor” test taker, getting easily muddled by my own bubble-filling perfectionism and the time constraints imposed by these acronymic tests. Or maybe it was because I could never wrangle up enough motivation to spend my free time studying methods for optimizing my score. And most of all, like any “free-thinking” member of my generation educated by the New Jersey public school curriculum of the 90s, it may have been because I was contentedly assured of being so much more than a number.

One would think given these facts that I would be all for the enforced disappearance of the SAT in favor of the new “holistic” entrance requirements offered by test optional schools. But like a wised-up adult now grateful that her mom made her eat vegetables as a child, I find myself in the curious position of lending support to this once bemoaned exam.

My reason for this change of heart is simple.  We need basic universal testing methods to separate out the prepared prospective students from the unprepared.

In his 2011 work, Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies, Howard Wainer uses the available statistical data to conclude that institutions considering SAT-optional policies should proceed with caution.

Making the SAT optional seems to guarantee that it will be the lower scoring students who withhold scores.  And these lower scoring students will also perform more poorly, on average, in their first-year college courses, even though the admissions office has found other evidence on which to offer them a spot.

For example, Wainer found that at Bowdoin College, a school at the forefront of test-optional admissions, students in the entering class of 1999 who chose not to report their SAT scores tested 120 points lower, on average, than those students who submitted scores with their application.  This gap does sound large at first glance, but when considering students who typically have combined scores of 1250 and above in the traditional math and verbal categories, does that 100-120 point spread really matter when deciding whether a student is college-ready?

Clearly, admissions administrators at schools like Bowdoin and Wake Forest don’t consider it to be a problem. And they might be somewhat justified in this assessment, even if – as Wainer found – the non-reporting students tend to have lower college GPAs than their test-reporting peers. Not everyone should be getting As in college, and there are plenty of middling students in solid programs who can still benefit from a college education.

But would these higher-ranked institutions really want to admit students who score 200 or 300 points below the institutions’ averages? Likely not, as the continued penchant of test-optional schools for purchasing the names of high-test scorers indicates. The test-optional philosophy of admissions might sound warm and fuzzy on the surface, but for many of these schools this still appears to be a numbers game, one that perpetuates the value of high scorers and high rankings, now precariously balanced with a goal of attaining the oh-so-necessary badges of inclusion and diversity (yet more statistics to tout).

Most of the students profiled by these SAT-optional schools to prove the success of their new admissions policies are ones who were already at the top of their high school classes and who would have been accepted to any number of decent schools, even with their horrifyingly “low” test scores.  Often colleges are willing to overlook mediocre scores if an applicant is salutatorian, captain of the volleyball team, or editor of the newspaper–achievements indicative of a certain level of discipline and focus.  And if what these test-optional schools claim is true–that there are students out there who are great fits for their campuses and who have everything in their applications except for a specific score range–the schools should have had the courage to admit (and maybe even recruit) them anyway, bad scores included.

It takes courage to admit low-scoring applicants because doing so all but guarantees lowering the SAT averages of these institutions and thereby risks knocking them down a few pegs on many of the popular college ranking lists that use test scores of incoming freshmen as a major factor in their rank calculations. Now, with these new non-reporting admissions options, some schools do not consider themselves obligated to factor in the scores of their test-optional applicants, thus allowing their middle 50% SAT range to represent only test-reporting students (presumably the best of their enrollment pool). Just look at what the oft-recurring footnote No. 9 on the U.S. News “Best Colleges” list has to say:

SAT and/or ACT may not be required by school for some or all applicants, and in some cases, data may not have been submitted in form requested by U.S. News. SAT and/or ACT information displayed is for fewer than 67 percent of enrolled freshmen.

If these schools truly believe that the tests are biased or inaccurate representations of student preparedness, then why should they care how their test medians rank or if they recruit the highest scorers for their incoming classes?

Apparent hypocrisy aside, my suspicion is that the schools profiled most frequently on this issue, and the debates surrounding their choice to step away from standardized tests, cover up the true harm the test-optional movement has on academe as a whole. For it seems to pose the most danger not to its leaders, many of whom still selectively accept students over the 80th percentile, but to the large number of other schools that are realistically following suit to lower their admissions standards and raise enrollment to make ends meet. A 100-point spread might not mean all that much to students with scores of 1250+, but it can definitely make a world of difference in schools whose means are already well below that threshold. The hard truth is that at some point being a well-rounded person ceases to compensate for not possessing quantifiably provable verbal and math skills.

And contrary to what Soares and his cohort claim, I think most would agree that high school GPA does not ensure the same universality of assessment offered by tests such as the SAT, because high school curricula are not created equal. Although I grew up in a school district where we started learning how to write research papers in the third grade, some of my college classmates never had to write more than a single double-spaced page at a time, and some were never required to read a book cover to cover in the course of their entire K-12 educations.

On the larger trend, we are not talking about straight A students at challenging high schools who happened to have the flu on test day, or who can’t afford to take test prep classes, or who don’t work well under pressure, as much as the test-optional proponents want us to believe this to be the case.  For the majority of those nearly 850 accredited institutions, this movement is about admitting students who are not prepared and quite possibly not capable of benefiting from a college level education.

Accepting students to college when they are not ready for college level course work is irresponsible and inexcusable.  It is time to get beyond the top schools in this discussion and consider the havoc test optional policies may wreak on the vast majority of higher ed institutions.  What seems like only a minor performance disparity outweighed by the benefits of “diversity” at schools like Wake Forest could spell the end to professional academic standards at lower ranking but still respectable institutions.

It also might be time for the proponents of test-optional admissions to stop and consider that maybe it really isn’t the test’s fault after all. Low-scoring but worthy students ready to tackle college coursework are probably the exception rather than the rule. Admissions officers should use individual discernment and admit such students, when deserved, with full knowledge of how they scored. This is exactly why we have people, not mathematical algorithms, make admissions decisions in the first place.

More broadly, if certain groups are genuinely disadvantaged by these tests and underperform, as researchers such as Soares and organizations like The National Center for Fair and Open Testing claim, we should continue to place emphasis on innovative solutions for K-12 reform instead of dispensing with standardized testing altogether. The chances are that the most notable demographic gaps in the test results reflect a deficiency in education quality or testing support – both areas we can improve over time through reform – more than any inherent flaw in the objective test itself. Not to mention that one of the primary methods used to identify policy weaknesses and demographic disparities, including by the test skeptics listed above, is the analysis of standardized test scores. Without any form of universal achievement testing, we risk missing demographic weaknesses altogether and could neglect the urgency to find solutions where legitimate problems exist.

The tests will never be perfect or comprehensive, but they continue to offer the most assured universal assessment of college preparedness, especially when considered alongside the many other factors traditionally used in admissions decisions.  To say that it is the test’s fault is both a juvenile and a nearsighted excuse. We do need to rethink college admissions, but implementing policies that let in more, not fewer, unprepared students is heading in the wrong direction – one that has no future in mind.

Check Out This Alternative to College

Institutions from charter schools to the White House are pushing hard for more young people to go to college, but with almost half of students at four-year colleges destined to leave without a degree, a counter-trend is starting to take hold: a loose coalition of people in the credentialing, training, and grant-making businesses are working to build an alternative to college for young people who are not academically inclined. The new paradigm centers around the National Career Readiness Certificate (NCRC) developed during the 1990s by ACT, the non-profit organization far better known for its SAT-style college-entrance exam. The NCRC and its assorted components and supplements, collectively known as WorkKeys, offer a path to employment success outside the conventional college track.

Scores on the WorkKeys assessments certify to prospective employers that job applicants have mastered enough specific, nationally recognized mental and interpersonal skills to qualify for the jobs they are seeking, no matter where they went to high school, what courses they took, or whether they had any college experience at all. In short, ACT’s NCRC strives to make bypassing college a viable, indeed an optimal choice for those who are either unlikely to succeed academically, or who are just turned off by the prospect of years of higher education. Alternatively, the test can help them get decent jobs while they pursue further specific training that could hoist them into even better ones.

Continue reading Check Out This Alternative to College

Cheating is the New Normal

A well-publicized cheating scandal at Great Neck High School featured a criminal entrepreneur taking SAT tests for college-bound high school students. My colleagues in the academy tell me cheating is endemic, with papers written by “service” organizations and plagiarism a national contagion. Teachers are routinely engaged in “scrubbing” various tests in an effort to increase the ratio of passing grades. The Atlanta school system was recently indicted for changing student grades in an effort to improve the schools’ performance profile.

These stories invite the obvious question: Are conditions worse now than earlier?

Continue reading Cheating is the New Normal

Why Do Multiple-Choice Tests Lie All the Time?

On Inside Higher Ed today “MathProf,” an anonymous poster, raised an original objection to multiple-choice tests: they are packed with lies. He said one student “pointed out to me that multiple-choice tests are inherently deceptive, featuring wrong answers deliberately designed to appear plausible. Is this really the skill we want to teach and reward: not knowledge, not reasoning, but the ability to choose the most acceptable answer in a forest of deliberately plausible lies?” Point taken. We here at Minding the Campus are opposed to lies, forests of plausible lies in particular, but the way out seems clear: let’s just make sure that all possible choices on these tests are correct. Every student will therefore get the same perfect score, a decisive boost for equality as well as truth in testing.

Applying ”Freakonomics” to Final Exams

One of my colleagues here at the University of Texas–Austin, the economist Daniel Hamermesh, recently complained in his New York Times “Freakonomics” blog about the common practice in many departments of assigning no final exams. I wish he had applied his own craft to this situation. The lack of final exams is merely one symptom of a general collapse of expectations. The average number of hours spent studying has fallen to twelve hours a week, according to a recent book. Why are college teachers expecting so little effort from their students? They are responding (in an economically rational way) to the incentives created by the modern research university. Teaching is a distraction from highly rewarded activities (research and administration). Insofar as teaching is rewarded at all, the measure of ‘good’ teaching consists solely of student evaluations, which (to put it mildly) are not improved by increasing students’ workload (including the assignment of final exams).
Some teachers continue to care about teaching and put high expectations on their students, from a sense of professional duty and the intrinsic enjoyment of being catalysts for learning. However, the system does its best to de-select such dinosaurs, favoring instead those who can bring in funds and raise institutional prestige through publication. Until we change the incentive structure, final exams (and other accoutrements of serious learning) will continue to be an endangered species.

Cut the Sniping—It’s a Great Book

The sniping has begun about Richard Arum and Josipa Roksa’s great new book Academically Adrift. Predictably, people are saying the test instruments used (especially the Collegiate Learning Assessment or CLA but also the National Survey of Student Engagement or NSSE) are imperfect, that they look at only a small number of relatively anonymous schools, etc. These complaints about the surveys have some validity, but the reality is the higher education community has not collected the data or developed the test instruments that could allow for a broader test. Why, for example, don’t we have a test of general knowledge, something of an extension of the Adult Civic Literacy Test developed by the Intercollegiate Studies Institute, that is administered widely at the beginning and end of the college careers of students at any institutions receiving (or whose students receive) federal grant or loan money? Why aren’t the NSSE results published for the hundreds of schools using it? Or, why not at least administer the National Assessment of Educational Progress exam given to 17-year-olds again to 21- or 22-year-olds near the end of their college careers? Higher education has fought transparency and accountability, so researchers have to use the limited information available.

Basically, Arum and Roksa argue that students work little in college and consequently learn little. Most of us who have been in higher education for decades know that this is true, even when we don’t want to admit it. But why? You don’t have to read very far in Academically Adrift to find the answers. Below are a series of quotes either from the authors or from sources they cite, one from each of the first 10 pages of the book:

Continue reading Cut the Sniping—It’s a Great Book

Students Who Learn Little or Nothing

I can’t recall a book on higher education that arrived with so much buzz, and drew so much commentary in the first two days after publication. The book is Academically Adrift: Limited Learning on College Campuses, by Richard Arum and Josipa Roksa (University of Chicago Press). Arum is a professor of sociology and education at New York University and Roksa is an assistant professor of sociology at the University of Virginia.
Inside Higher Ed reported on the work yesterday, hailing it, if that’s the right word, as “a damning new book… asserting that many college students graduate without actually learning anything.”
After looking at data from student surveys and transcript analysis of 2,300 students around the country, the authors concluded that 45 percent of students “did not demonstrate any significant improvement in learning” in their first two years of college, and 36 percent showed the same lack of significant progress over four years. Students improved on average only 0.18 standard deviations over the first two years and 0.47 over four years. “What this means,” Inside Higher Ed reported, “is that a student who entered college in the 50th percentile of students in his or her cohort would move up to the 68th percentile four years later—but that’s the 68th percentile of a new group of freshmen who haven’t experienced any college learning.”
Since our copy of the book arrived only today, we haven’t finished reading it, but we assume that its huge welcome in educational circles has a lot to do with the many books and articles deploring the lack of study on our campuses, the large number of college grads working at low-level jobs, books arguing that partying is the main activity of a great many collegians, and articles such as Peter Sacks’ here reporting on the all too common disengaged and academically tone deaf college students of today. We will have more to say later about Academically Adrift.

The Underperformance Problem

On average, black students do much worse on the SAT and many other standardized tests than whites. While encouraging progress was made in the 1970s and early 1980s in improving black SAT scores and reducing the black/white test score gap, progress in this direction came to a halt by the early 1990s, and today the gap stands pretty much where it was twenty years ago. Whereas whites and Asians today average a little over 500 on the math and reading portions of the SAT, blacks score only a little over 400 – in statistical terms, a gap of a full standard deviation. Only about one in six blacks does as well on the SAT as the average white or Asian.
This state of affairs is well known, uncomfortable though it may be to bring up in public. Less well known is what in the scholarly literature is called “the underperformance problem.” Once in college, blacks with the same entering SAT scores as whites and Asians earn substantially lower grades over their college careers and wind up with substantially lower class rankings. This gap in grade performance, moreover, is not reduced by adding high school grades or socio-economic status to the criteria for matching students. Blacks equally matched with whites or Asians in terms of their entering scholastic credentials and socio-economic backgrounds simply do not perform as well as their Asian and white counterparts in college. And the degree of underperformance is often very substantial.
This is contrary to what many people have been led to believe. Standardized tests are “culturally biased,” it is said, and do not fairly indicate the abilities or promise of racial minorities growing up outside the dominant white, middle-class, Anglo-Saxon culture. Often this claim is bolstered by reciting items on long outdated verbal tests asking for the meaning of words like “regatta” or “cotillion” that only upper-class whites are likely to know. The implication is usually that those from minority cultures will do better in college in terms of grades than their test scores would predict. The “cultural bias” argument, however, is not only questionable on its face — since the clearly non-Anglo Saxon Asians do better than whites on most standardized tests of mathematical abilities including the SAT, while the equally non-Anglo Saxon Ashkenazic Jews outperform everyone else on tests of English verbal ability — but fails to account for the fact that in terms of grade performance blacks in college consistently do worse, not better, than their standardized test scores would predict. Standardized tests such as the SAT and ACT overpredict, not underpredict, how well blacks will do in college, and in this sense the tests are predictively biased in favor of blacks, not against them.

Continue reading The Underperformance Problem

Faking Your Way Through Harvard–Almost

Here’s how easy it is to find out whether Adam Wheeler, the 23-year-old who allegedly faked his way into Harvard, was the preternaturally accomplished young scholar he said he was: Google. That’s how I spent a productive half-hour after I found Wheeler’s resume posted on the New Republic‘s website. Wheeler had submitted the resume when he applied for a literary internship at the magazine last fall (he did not get the job). That was either just before or just after he abruptly left Harvard during his senior year to avoid a disciplinary proceeding for allegedly getting himself admitted as a transfer student in 2007 (from MIT, he said) on the basis of forged transcripts, forged SAT scores, and forged letters of recommendation–and also for bilking Harvard out of $45,000 in financial aid, research money, and cash prizes for plagiarized student essays. He is now facing criminal prosecution on 20 counts of fraud, larceny, and identity theft.
So I typed into Google’s search box the title of one of the three lectures that Wheeler, who claimed to know classical Armenian, said on his resume that he had delivered to a meeting of the National Association of Armenian Studies and Research in 2009: “From Parthia to Robin Hood: The Armenian Version of the Epic of the Blind Man’s Son (Koroghlu).” The lecture was real enough, except that it had actually been delivered by James R. Russell, a professor of Armenian studies at Harvard. Russell had also delivered another of the esoterically titled Armenian-themed lectures that Wheeler attributed to himself: “The Rime of the Book of the Dove: Zoroastrian Cosmology, Armenian Heresiology, and the Russian Novel.”
Moving on, I Googled the titles of the four books that Wheeler said he had co-authored with Marc Shell, a professor in Harvard’s English department (Wheeler was an English major). Again, the books are real—Shell lists them on his own Harvard website—but they’re the sole work of Shell, with no credit given to co-authors. Shell had evidently captured Wheeler’s imagination, because Wheeler also stated on his resume that he had delivered three lectures at Shell’s Seven Days Work Educational Foundation on Grand Manan Island in New Brunswick in 2009 (a busy lecture year for Wheeler!). I admit that my Google search didn’t unearth any sources for those lectures—which deal with famous authors of the English Renaissance, including Thomas More, Shakespeare, and Andrew Marvell—but a visit to the Seven Days Work Foundation’s website (which took less than five minutes to find) led me to wonder how Renaissance poets and playwrights could have fit into the 2009 conference, which was devoted to the ecology and economy of Grand Manan, where Shell has a residence and an interest in the local culture.

Continue reading Faking Your Way Through Harvard–Almost

An Educator for Indians and Capitalism

In the year 2000, American Indian Public Charter School in Oakland, CA, was one of the worst-performing middle schools in the state. Not a single student tested above the fiftieth percentile on state or national exams in math, and only eight percent of sixth-graders and 17 percent of eighth-graders passed that bar in reading (the rate for seventh-graders was zero). Class attendance rates hovered around 65 percent. Junk lined the hallways; trash cluttered the sidewalks and alleys outside. Neighbors called the school “the zoo.”
In 2008, American Indian Public Charter School had the highest test scores of any public school in Oakland. It ranked fifth among middle schools across the state.
What happened? A new principal arrived, Ben Chavis. His story appears in a recent book by Chavis and Carey Blakely entitled Crazy Like a Fox: One Principal’s Triumph in the Inner City.
According to Chavis, among other things, the school was trapped in a culturalist fantasy. In an effort to instill racial pride and respect American Indian tradition, school leaders developed a curriculum that included courses in drumming and bead-making. The school day started late because they believed “American Indians couldn’t get up early in the morning.” The first hours brought everybody together for a session in which students and teachers discussed their feelings and interests and worries. Meanwhile, truancy, vandalism, and failure continued.
When Chavis took office, it all changed. He replaced the “culture” classes with basic math and reading coursework geared to explicit disciplinary standards. He extended the school year. He assigned detention freely for slight infractions, including a Saturday detention period. He gave out financial awards for perfect attendance. He brought local drug dealers and thugs into the school to meet the students and promised them $5 for every absent student they found on the streets and returned to campus. He implemented a four-part education model made up of 1) family, 2) accountability, 3) high expectations, and 4) free-market capitalism. In fact, he says, he insisted on “a free market capitalistic mind-set in our students and staff.” And he didn’t complain that the school needed more money.
There is much more to tell about the year-by-year progress of the school, including the firing of incompetent and lazy staff and the rooting out of what can only be called a racial pathology that was destroying the school until Chavis took over. It is a remarkable story of a man of solid work-ethic values and entrepreneurial vision working miracles.

That ”Hate America” Test

Candace de Russy’s January 7 post here, “Hate-America Sociology,” understandably attracted a lot of attention. It cited a 10-question Soc 101 quiz at an unnamed eastern college, complete with accusatory leftish questions and some simple-minded answers by a student who drew a mark of 100 for agreeing with the politics of his professor.
A few readers, and many more at other sites that linked to us, asked if the test and answers are authentic. I am satisfied that they are. The material came with assurances from Dr. de Russy, a former professor and trustee at the State University of New York. I know the college involved and have a copy of the test with answers filled in. I talked with the source for the story, who cannot be identified because of privacy concerns and fear of retaliation.
The blog Progressive Scholar saw nothing wrong with the test (“I don’t understand, what is the problem with this exam?”). Dr. de Russy replied, stressing what she saw as the “unremitting bias” of the test. Its point of view, she wrote, is “entirely anti-capitalist, anti-white, anti-male. No other perspective is included, even as a hypothetical.”
Readers who come across other politically loaded exams should send them to us at editor@campusmind.org or Minding the Campus, the Manhattan Institute, 52 Vanderbilt Avenue, New York, NY 10017.

Gaming The College Rankings

Test prep pioneer Stanley H. Kaplan, who died this week at the ripe old age of 90, was a living embodiment of the roller coaster changes that have roared through the college admissions scene over the last three decades. He also set the stage for students, and later colleges and universities, to game the system.
Kaplan began his career intent on showing how the SAT, designed in such a way as to preserve the elitist nature of U.S. higher education, could become a vehicle for broadening access. In doing so he helped unleash forces leading to the current situation, in which working the system is the norm for institutions and applicants alike. Stanley Kaplan got the car rolling, climbed aboard and had one heck of a ride.
I first met Stanley Kaplan at an academic conference in the 1980s. He was the last person I would have picked out of the crowd as a test prep baron whose name was anathema to college admissions officers. He was short, gentle and avuncular in manner and, as I recall, dressed in what seemed to be battle fatigues. He was a born educator who wrote in his autobiography that “while other children played doctor, I played teacher.” It was Stanley Kaplan the teacher who began tutoring students for the New York State Regents exams in the basement of his Brooklyn apartment and giving them a shot at higher education.

Continue reading Gaming The College Rankings

The SAT And Killing The Messenger

Average scores on the SAT dipped a bit for high school seniors who graduated in the class of 2009, and the usual suspects—our friends at the National Center for Fair and Open Testing (FairTest)—are already using the lower scores to attack the whole idea of standardized testing, a platform that includes not only the SAT but also the No Child Left Behind Act, with its emphasis on improving test results.
The falloff from last year’s average scores was actually minimal: from 502 last year to 501 this year (out of a range from 200 to 800) on the critical-reading section of the SAT, no change from last year’s average math score of 515, and a one-point drop in the average score for the writing portion of the test, from 494 to 493.
What was striking about the score changes was the “widening” (as the press called it) of the score gap between male and female test-takers, and between whites and Asian-Americans on one hand and blacks, American Indians, and Hispanics on the other. Average combined scores for whites fell by two points from last year, but they fell by four points for African-Americans, and scores for American Indians and Hispanics also slipped from last year. The biggest winners were Asian-Americans, whose average total combined score for all three parts of the SAT was a soaring 1635, compared with 1509 for all seniors in the class of 2009. Outstanding Asian math scores (587 was the average) accounted for most of the difference. Furthermore, males scored 27 points higher on average than females this year, compared with 24 points higher last year. Again, the difference was largely due to far higher male scores on the math portion of the test.

Continue reading The SAT And Killing The Messenger

The End Of Merit-Based Admission

Students applying for college admission now face a new reality—the SAT is increasingly optional at our colleges and universities. The test-optional movement, pioneered by FairTest, a political advocacy group supported by George Soros and the Woods Fund, now lists 815 schools that do not require SAT scores. That number may seem impressive, but it includes institutions that arguably should not be dependent on SAT scores at all, such as culinary institutes, seminaries and art schools.
Surprisingly, the National Association for College Admission Counseling (NACAC) has joined the critics of the SAT. Its September 2008 report, lauded by the New York Times and Inside Higher Ed, encouraged “institutions to consider dropping the admission test requirements if it is determined that the predictive utility of the test or the admission policies of the institution (such as open access) support that decision and if the institution believes that standardized test results would not be necessary for other reasons such as course placement, advising, or research” (italics in original).
If that sounds like a less than full-throated endorsement of the anti-testers, the reluctance to speak plainly is understandable. The SAT and ACT, the group now says, had been “interpreted by some as indications of the mental capacity of the individual test-taker as well as of the innate capabilities of ethnic groups.” Yet, when referring back to the SAT’s early years, they acknowledged its value as a tool for measuring the “academic potential of seniors at public high schools from all over the country who had not been specifically prepared” for admission to the nation’s top colleges.

Continue reading The End Of Merit-Based Admission

Score One For Yale

Yale made a sound decision yesterday. It said applicants must report all SAT scores, not just the highest of the three or four that some would-be Yalies take. That was the long-term policy of the College Board until last June, when Board officials announced they would let test-takers decide which scores to report. The stated reason was to reduce stress: if the student wasn’t up to par on testing day, he or she could always get tested again. But the policy also masked a financial reality—students from wealthier families could keep taking the test until they got the result they wanted; students from less well-off families often couldn’t. The predictive value of the test is marred by re-testing. And some who criticized the June decision pointed out that the Board had a financial stake: it stood to make more money by allowing unreported extra tests. Yale got it right. The class advantage of repeat test-takers will continue, but the fact of that advantage will now be clear and taken into account.

Peter Salins In The New York Times

Peter Salins’s October 15 essay here, “Does the SAT Predict College Success?,” attracted attention from many quarters, including the New York Times. Today the Times’s op-ed page published a fresh version of the Salins piece, which reported that at the State University of New York (SUNY), the colleges that decided to require higher SAT scores for admission significantly boosted their graduation rates. The Times did not have room for a full identification of Salins, a fellow at the Manhattan Institute and former provost of SUNY.

New Questions About LSAT Validity?

A just-released study from the University of California-Berkeley’s law school points out that the Law School Admission Test, a sort of SAT for applicants to law school, focuses lopsidedly on test-takers’ cognitive skills while overlooking key non-cognitive traits possessed by successful lawyers. And no, that doesn’t mean an aptitude for ambulance-chasing or filing phony class-action suits.
Instead, the 100-page report, prepared by former Berkeley law professor Marjorie Schultz and Berkeley psychology professor Sheldon Zedeck, asserts that the LSAT, which includes sections on reading comprehension and legal reasoning, “does not measure for skills such as creativity, negotiation, problem-solving or stress management.” Schultz and Zedeck pointed out that while one’s score on the LSAT correlates well with success as a first-year law student, it doesn’t correlate well with one’s future success as a lawyer. They had earlier identified 26 non-cognitive traits that they said did correlate with future success in the legal profession: “negotiating skills, problem-solving and stress management,” as the Wall Street Journal’s law blog summed them up. After identifying those traits, in interviews with thousands of successful California lawyers, the pair’s research team developed methods for measuring them in law school applicants, via biographical, personality, and “situational judgment” tests modeled on employers’ personality tests for prospective employees.
There is little doubt that good lawyering can depend as much on how lawyers interact with their clients and argue in courtrooms as on the grade they got in first-year constitutional law. Obviously lawyers need more than sheer cognitive facility to deal with ill-tempered judges or hold troubled clients’ hands—and testing people skills may well be a useful supplement to testing cognitive skills. Still, it’s hard not to conclude from leafing through the Schultz-Zedeck study that its authors have overemphasized the softer side of law. Jeffrey Brand, dean of the University of San Francisco School of Law, delivered a touchy-feely anti-LSAT manifesto in this vein to the Recorder, a legal newspaper in San Francisco: “We need lawyers with the kind of skill sets that the world needs — like empathy, persuasiveness and the willingness to have the courage to do the right thing — which the LSAT does not measure.” This ignores the fact that lawyers are also expected to win their cases—which means knowing something about the law.

Continue reading New Questions About LSAT Validity?

Who’s Acing The GREs?

Who are the smartest graduate students? You’ve probably already guessed that one: physicists. Second in the brains ranking are mathematicians, then computer scientists, then economists and practically any sort of engineer. Such are the results of an analysis, made in 2004 by Christian Roessler, a lecturer in economics at the University of Queensland in Australia, of the mean scores on the Graduate Record Examination of Ph.D. candidates in 28 different academic fields. Roessler’s findings were recently linked on the Carpe Diem blog of Mark J. Perry, an economics and finance professor at the University of Michigan-Flint’s business school, and also by education blogger Joanne Jacobs. Roessler derived his rankings by looking at doctoral candidates’ average scores in 2002 on the three components of intellectual ability tested by the GRE: quantitative, verbal, and analytical (the analytical section, then a multiple-choice test like the quantitative and verbal sections, has since been replaced by a written test of analytical reasoning).
And if physicists (No. 1), mathematicians (No. 2), computer scientists (No. 3), economists (No. 4), and engineers (Nos. 5, 6, 7, 8, 12, and 13) are the smartest young people, judging from their test scores, to enter graduate programs that will train them to conduct scholarly research and teach the next generation of scholars in their fields, who are the dumbest? The answer to that question may well be easy to guess, too: grad students in communication (No. 26), education (No. 27), and public administration (No. 28). The dismal mean scores for doctoral candidates at education schools (467 in verbal ability, 515 in quantitative)—giving new meaning to the adage “Those who can’t, teach”—prompted a commenter on Jacobs’s blog to write, “The fact that the dimmest bulbs in our colleges self-select themselves as being the ones who should influence the education of future generations explains many of the edu-fads we see, as well as our continued failure to improve educational outcomes across disadvantaged populations….”
By contrast, the physicists, mathematicians, computer scientists, economists, and engineers consistently scored on average either above 700 or close to it in quantitative ability, although their verbal scores tended to be mediocre (the top-ranking physicists, for example, scored only 536 on average in verbal ability, while civil engineers, ranked at No. 13 on Roessler’s list, scored a mere 469, just two points higher than the educators). The scientists tended, however, to make up for lost verbal points by their high scores—typically above 600—on the analytic component of the GRE, a feat the educators, testing on average at 532 in analytic ability, could not match.

Continue reading Who’s Acing The GREs?

Downgrading SATs Makes Sense

Many conservatives are groaning over a major new report from a commission of higher education luminaries calling on colleges to de-emphasize the SAT for college admissions.

The catcalls from the right erupted after the National Association for College Admission Counseling suggested that colleges should rethink their reliance on the SAT for admissions. The critics called the NACAC report wrongheaded, devolutionary, politically correct in the extreme, and devoid of common sense—a frontal attack on academic standards that will lead to the ruin of American higher education.

We’ve heard the dire warnings before, countless times. And countless times the cries that the sky is falling have been wrong.

The defense of the SAT as the linchpin of the college admissions process contains at least two major propositions, both of questionable merit.

Continue reading Downgrading SATs Makes Sense

Top Five Law School Ranking Scams

The Shark provides a list of the top five law school “Admissions Innovations” of 2008, with analysis.
The ludicrous Baylor case is ranked No. 1, but I hadn’t heard of several of the others. Take #3:

University of Michigan Law School’s Wolverine Scholars Program admits University of Michigan undergrads who have at least a 3.8 GPA and agree not to take the LSAT.
a. How it works:
i. There is no LSAT score to report to U.S. News—which suits the school fine—and the 3.8 GPA will boost the median GPA of Michigan’s entering class.
b. How much it matters:
i. Median LSAT – 12.5% of school’s total score.
ii. Median undergrad GPA – 10% of school’s total score.
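The arithmetic behind the scheme is easy to sketch. Here is a minimal illustration, using hypothetical numbers rather than real Michigan data: because U.S. News medians are computed only over reported scores, a student with no LSAT on file simply drops out of the LSAT median while still raising the GPA median.

```python
import statistics

# Hypothetical entering class (illustrative numbers, not real data).
regular_class = [
    {"gpa": 3.6, "lsat": 168},
    {"gpa": 3.7, "lsat": 170},
    {"gpa": 3.5, "lsat": 166},
    {"gpa": 3.8, "lsat": 171},
]

# Wolverine Scholars: 3.8+ GPA, no LSAT score to report.
wolverine = [
    {"gpa": 3.90, "lsat": None},
    {"gpa": 3.85, "lsat": None},
]

combined = regular_class + wolverine

# GPA median is taken over everyone; LSAT median only over reported scores.
gpa_before = statistics.median(s["gpa"] for s in regular_class)
gpa_after = statistics.median(s["gpa"] for s in combined)
lsat_after = statistics.median(s["lsat"] for s in combined if s["lsat"] is not None)

print(f"GPA median: {gpa_before} -> {gpa_after}")  # rises
print(f"Reported LSAT median: {lsat_after}")       # unaffected by the scholars
```

With these toy numbers the GPA median climbs from 3.65 to 3.75, while the reported LSAT median sits untouched at 169 — the school pockets the GPA boost (10% of its score) at no cost to the LSAT component (12.5%).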

See how clever law schools really are. Read the rest.