
Harvard Botches a ‘Cheating’ Scandal


Harvey Silverglate and Zachary Bloom

At first blush, the ongoing cheating scandal at Harvard College appears to raise serious questions about academic integrity at that fabled institution. If the allegations that 125 students inappropriately shared notes and answers for a take-home exam in violation of the exam’s rules prove true, the result will be a massive blot on Harvard’s near-perfectly manicured public image, especially now that top athletes have been implicated.

But let’s remember that because of the course’s confusing rules and guidelines concerning collaboration, no one, likely not even the students themselves, can say right now whether their conduct was illicit. Worse yet, we may never know the truth, much less have a just verdict on the propriety of the students’ actions, now that the case is securely in the hands of the spooks haunting Harvard’s notorious Administrative Board.


How Group Learning Invites Cheating

The most shocking thing about the Harvard cheating scandal was not that 125 students out of a class of 279 were found to have “committed acts of academic dishonesty” on an exam last spring, or even that the exam was for a course that was supposed to be an easy mark. It was that it happened at Harvard, the elite of the elite, where it is understood that only the smartest kids are accepted. Why would they have to cheat?

As the details became clear (at first, significantly enough, in the sports magazines), it developed that the course, Government 1310: Introduction to Congress, had the reputation of being a cinch to pass. But last spring the exam was harder. It was a take-home, open-book, open-Internet assignment over a weekend, but this time students were expected to write essay answers, not just select answers from multiple choices. And when the papers were graded, more than half were found to have given answers that were the same as another student’s, word for word.

When the facts became public, there was no joy in Cambridge. The stars of Harvard’s outstanding basketball team were among the large proportion of athletes taking the course. It remained unclear what punishment awaited the guilty as it could not be determined whether students had been collaborating on answers or plagiarizing outright from the Internet or each other.

Generosity Was the Excuse

One indignant Harvard student maintained that collaboration was “encouraged, expected.” That attitude also seemed to apply at Stuyvesant High School, New York City’s outstanding school, where a similar scandal was revealed. This time 140 students were involved, all receiving help from a classmate using his cell phone to send answers to his friends and those he wanted to become his friends.  The tests (the system was applied to several of them) were the prestigious Regents exams, important factors in college acceptances.  Ironically, the admitted aim of most Stuyvesant students, who face stiff competition getting into Stuyvesant and maintaining high grades once they get there, is to be admitted to Harvard.


Common Core Standards Can Save Us



It’s no secret that most high school graduates are unprepared for college. Every year, 1.7 million first-year college students are enrolled in remedial classes at a cost of about $3 billion, the Associated Press recently reported. Scores on the 2011 ACT college entrance exam showed that only 1 in 4 high school graduates was ready for the first year of college.


It’s Not the Test’s Fault

Cross-posted from National Association of Scholars.



Fall 2011 has seen some major milestones for the SAT/ACT optional movement. DePaul University, for instance, initiated its first admission cycle sans test requirement. Clark University announced last month that it will offer test-optional admissions for the incoming class of 2013.

In his new book released this fall titled SAT Wars, sociologist Joseph A. Soares of Wake Forest University hails the success of test-optional admission policies. Wake Forest was the first of the top 30 U.S. News schools to go test-optional and is one of the most vocal cheerleaders of the movement through its blog Rethinking Admissions.  According to Soares, adopting policies that allow applicants to opt out of reporting their scores has successfully resulted in diversifying these campuses by race, gender, ethnicity, and class (groups he claims are excluded unfairly for underperforming on standardized tests) without compromising overall academic quality.

By all appearances, the requirement of standardized testing in higher-ed admissions is on the long and ragged road out the door.  To date, nearly 850 colleges and universities (40% of all accredited, bachelor-degree-granting schools in the country) have already bidden farewell to the test requirement in some form or another. Fifty-three of these institutions are currently listed in the top tier of the “Best Liberal Arts Colleges” list published by U.S. News and World Report, including Bowdoin, Smith, Bates, Holy Cross, and Mount Holyoke Colleges. Even some of U.S. News’ high-ranking national universities, such as Wake Forest University, Worcester Polytechnic Institute and American University, are categorized as test-optional.  It now seems likely that this trend will only gain in popularity and momentum in the coming years.

So is the SAT-optional movement a good thing? I have always loathed standardized tests myself, once conferring with my second-grade teacher because I was certain that my scores were insufficient and that I was falling behind my peers.  It turned out that being in the 94th percentile really was a good thing even if it was less than 100; my eight-year-old mind just couldn’t comprehend this at the time.

Yet even after my elementary-school pep talk on the nature of scaled grading, I always had this lingering feeling that standardized test scores were somehow an unfair representation of what I could do.  Perhaps I simply fell into the category of being a “poor” test taker, getting easily muddled by my own bubble-filling perfectionism and the time constraints imposed by these acronymic tests.  Or maybe it was because I could never wrangle up enough motivation to spend my free time studying methods for optimizing my score.  And most of all, like any “free-thinking” member of my generation educated by the New Jersey public school curriculum of the 90s, it may have been because I was contentedly assured of being so much more than a number.

One would think given these facts that I would be all for the enforced disappearance of the SAT in favor of the new “holistic” entrance requirements offered by test optional schools. But like a wised-up adult now grateful that her mom made her eat vegetables as a child, I find myself in the curious position of lending support to this once bemoaned exam.

My reason for this change of heart is simple.  We need basic universal testing methods to separate out the prepared prospective students from the unprepared.

In his 2011 work, Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies, Howard Wainer uses the available statistical data to conclude that institutions considering SAT-optional policies should proceed with caution.

Making the SAT optional seems to guarantee that it will be the lower scoring students who withhold scores.  And these lower scoring students will also perform more poorly, on average, in their first-year college courses, even though the admissions office has found other evidence on which to offer them a spot.

For example, Wainer found that at Bowdoin College, a school at the forefront of test-optional admissions, students in the entering class of 1999 who chose not to report their SAT scores tested 120 points lower, on average, than those students who submitted scores with their application.  This gap does sound large at first glance, but when considering students who typically have combined scores of 1250 and above in the traditional math and verbal categories, does that 100-120 point spread really matter when deciding whether a student is college-ready?

Clearly, admissions administrators at schools like Bowdoin and Wake Forest don’t consider it to be a problem.  And they might be somewhat justified in this assessment, even if, as Wainer found, the non-reporting students tend to have lower college GPAs than their test-reporting peers.  Not everyone should be getting As in college, and there are plenty of middling students in solid programs who can still benefit from a college education.

But would these higher-ranked institutions really want to admit students who score 200 or 300 points below the institutions’ averages?  Likely not, as the continued penchant of test-optional schools for purchasing the names of high test scorers indicates.  The test-optional philosophy of admissions might sound warm and fuzzy on the surface, but for many of these schools this still appears to be a numbers game, one that perpetuates the value of high scorers and high rankings, now precariously balanced with the goal of attaining the oh-so-necessary badges of inclusion and diversity (yet more statistics to tout).

Most of the students profiled by these SAT-optional schools to prove the success of their new admissions policies are ones who were already at the top of their high school classes and who would have been accepted to any number of decent schools, even with their horrifyingly “low” test scores.  Often colleges are willing to overlook mediocre scores if an applicant is salutatorian, captain of the volleyball team, or editor of the newspaper–achievements indicative of a certain level of discipline and focus.  And if what these test-optional schools claim is true–that there are students out there who are great fits for their campuses and who have everything in their applications except for a specific score range–the schools should have had the courage to admit (and maybe even recruit) them anyway, bad scores included.

It takes courage to admit low-scoring applicants because doing so all but guarantees lowering the SAT averages of these institutions and thereby risks knocking them down a few pegs on many of the popular college-ranking lists that use the test scores of incoming freshmen as a major factor in their rank calculations.  Now, with these new non-reporting admissions options, some schools do not consider themselves obligated to factor in the scores of their test-optional applicants, thus allowing their middle-50% SAT range to represent only test-reporting students (presumably the best of their enrollment pool).  Just look at what the oft-recurring footnote No. 9 on the U.S. News “Best Colleges” list has to say:

SAT and/or ACT may not be required by school for some or all applicants, and in some cases, data may not have been submitted in form requested by U.S. News. SAT and/or ACT information displayed is for fewer than 67 percent of enrolled freshmen.

If these schools truly believe that the tests are biased or inaccurate representations of student preparedness, then why should they care how their test medians rank or if they recruit the highest scorers for their incoming classes?

Apparent hypocrisy aside, my suspicion is that the schools profiled most frequently on this issue, and the debates surrounding their choice to step away from standardized tests, cover up the true harm the test-optional movement does to academe as a whole.  For it seems to pose the most danger not to its leaders, many of whom still selectively accept students scoring above the 80th percentile, but to the large number of other schools that are following suit in order to lower their admissions standards and raise enrollment to make ends meet.  A 100-point spread might not mean all that much to students with scores of 1250+, but it can definitely make a world of difference at schools whose means are already well below that threshold.  The hard truth is that at some point being a well-rounded person ceases to compensate for not possessing quantifiably provable verbal and math skills.

And contrary to what Soares and his cohort claim, I think most would agree that high school GPA does not ensure the same universality of assessment offered by tests such as the SAT, because high school curricula are not created equal.  Although I grew up in a school district where we started learning how to write research papers in the third grade, some of my college classmates never had to write more than a single double-spaced page at a time, and some were never required to read a book cover to cover in the course of their entire K-12 educations.

On the larger trend, we are not talking about straight A students at challenging high schools who happened to have the flu on test day, or who can’t afford to take test prep classes, or who don’t work well under pressure, as much as the test-optional proponents want us to believe this to be the case.  For the majority of those nearly 850 accredited institutions, this movement is about admitting students who are not prepared and quite possibly not capable of benefiting from a college level education.

Accepting students to college when they are not ready for college level course work is irresponsible and inexcusable.  It is time to get beyond the top schools in this discussion and consider the havoc test optional policies may wreak on the vast majority of higher ed institutions.  What seems like only a minor performance disparity outweighed by the benefits of “diversity” at schools like Wake Forest could spell the end to professional academic standards at lower ranking but still respectable institutions.

It also might be time for the proponents of test-optional admissions to stop and consider that maybe it really isn’t the test’s fault after all.  Low-scoring but worthy students ready to tackle college coursework are probably the exception rather than the rule. Admissions officers should use individual discernment and admit such students, when they are deserving, with full knowledge of how they scored. This is exactly why we have people, not mathematical algorithms, making admissions decisions in the first place.

More broadly, if certain groups are genuinely disadvantaged by these tests and underperform, as researchers such as Soares and organizations like The National Center for Fair and Open Testing claim, we should continue to place emphasis on innovative solutions for K-12 reform instead of dispensing with standardized testing altogether.  The chances are that the most notable demographic gaps in the results reflect deficiencies in education quality or testing support, both areas we can improve over time through reform, more than any inherent flaw in the test itself.  Not to mention that one of the primary methods of identifying policy weaknesses and demographic disparities, used even by the test skeptics listed above, is the analysis of standardized test scores.  Without any form of universal achievement testing, we risk missing demographic weaknesses altogether and could lose the sense of urgency needed to find solutions where legitimate problems exist.

The tests will never be perfect or comprehensive, but they continue to offer the most assured universal assessment of college preparedness, especially when considered alongside the many other factors traditionally used in admissions decisions.  To say that it is the test’s fault is both a juvenile and a nearsighted excuse. We do need to rethink college admissions, but implementing policies that let in more, not fewer, unprepared students is heading in the wrong direction – one that has no future in mind.

Make-Believe Grades for Real Law Students

Almost every morning, after taking a shower, I get on the scale to see if I have lost some of the extra weight that I do not want or need. I have tried many ways of shedding the pounds, with diet and exercise at the top of the list. The pounds refuse to disappear. After reading Catherine Rampell’s piece, “In Law Schools, Grades Go Up, Just Like That,” in the New York Times, I realized that there is a simpler way. A slight adjustment to the scale, so that the measuring starts at minus 15 pounds rather than zero, could bring instant relief. I could truthfully — if not honestly — say that according to the scale, I was now less than 175 pounds.
This droll reverie faded to disappointment as I pondered the implications of adjusting law school grades in the fashion recounted in the Times. Grades entered on students’ transcripts at law school were adjusted upward several semesters later. The article told of law schools abandoning traditional grading standards to give their students an edge in the tough job market. Thus, each school’s scale was adjusted to give the appearance that students did better than they actually did. The schools named were Loyola, Georgetown, NYU, Tulane and Golden Gate University.

When I thought further about these modifications, I was reminded of other instances where expected objectivity gave way to subjective judgments. The New York State Board of Regents, for instance, has begun the practice of determining acceptable grades by assigning a passing grade to a raw score. The raw score required for passage is arrived at only after the tests are rated. Such a system allows the Board of Regents to crow about an 80% passage rate, notwithstanding the fact that the classification was entirely contrived. It has the feel of issuing traffic citations on the basis of a quota and claiming there is an epidemic of bad drivers.
The fact that this practice of grade adjustment has developed in law schools is, I believe, a much more serious matter. The rule of law, when properly applied, embodies honesty, fairness and impartial justice. The practice of adjusting grades to conceal the truth damages our expectation of the rule of law. That the law itself is made to yield to the dollar is particularly troublesome. As a professor quoted in the Times article put it, “if somebody’s paying $150,000 for a law school degree, you don’t want to call them a loser in the end. So you artificially call every student a success.” The result is a perverse version of the golden rule: “he who has the gold rules.”


The Achilles Heel of the U.S. News Rankings

In 1983 U.S. News & World Report came up with what Ben Wildavsky, a former education editor at the magazine, described as “a journalistic parlor game.” The magazine had just conducted a successful survey of U.S. leaders to identify the most influential Americans. Why not, the editors asked, use a similar approach to identify the country’s top colleges and universities?

So U.S. News sent surveys to college presidents around the country asking them to pick ten colleges that provided the best undergraduate education in their particular academic niche. The magazine published the results in 1983 and again in 1985, and by 1987 the project had morphed into a free-standing guidebook entitled America’s Best Colleges. “No one imagined that the rankings would become what some consider the 800-pound gorilla of American higher education,” recalled the late Alvin Sanoff, the longtime managing editor of the rankings project.

The gorilla continues to stalk U.S. higher education. Last week Daniel de Vise of the Washington Post reported that “a small but determined” group of college presidents in the Washington-Baltimore area is now boycotting the “peer assessments” questionnaire that U.S. News & World Report sends them every year as part of its process of updating its college rankings. Their protest follows a report last year that another group of college presidents across the country had pledged to do likewise.

It’s easy to understand why college presidents don’t like U.S. News butting into their affairs in the first place and might be inclined not to cooperate at all (as Reed College has done). John Burness, the former communications chief at Duke University, probably spoke for most of higher education when he observed in 2008 that the precision that U.S. News ascribes to its rankings “is, on the face of it, rather silly.”


Gaming The College Rankings

Test prep pioneer Stanley H. Kaplan, who died this week at the ripe old age of 90, was a living embodiment of the roller coaster changes that have roared through the college admissions scene over the last three decades. He also set the stage for students, and later colleges and universities, to game the system.
Kaplan began his career intent on showing how the SAT, designed in such a way as to preserve the elitist nature of U.S. higher education, could become a vehicle for broadening access. In doing so he helped unleash forces leading to the current situation in which working the system is the norm for institutions and applicants alike. Stanley Kaplan got the car rolling, climbed aboard and had one heck of a ride.
I first met Stanley Kaplan at an academic conference in the 1980s. He was the last person I would have picked out of the crowd as a test prep baron whose name was anathema to college admissions officers. He was short, gentle and avuncular in manner and, as I recall, dressed in what seemed to be battle fatigues. He was a born educator who wrote in his autobiography that “while other children played doctor, I played teacher.” It was Stanley Kaplan the teacher who began tutoring students for the New York State Regents exams in the basement of his Brooklyn apartment and giving them a shot at higher education.


The SAT And Killing The Messenger

Average scores on the SAT dipped a bit for high school seniors who graduated in the class of 2009, and the usual suspects, our friends at the National Center for Fair and Open Testing (FairTest), are already using the lower scores to attack the whole idea of standardized testing, a platform that includes not only the SAT but also the No Child Left Behind Act, with its emphasis on improving test results.
The falloff from last year’s average scores was actually minimal: from 502 last year to 501 this year (on a scale of 200 to 800) on the critical-reading section of the SAT, no change from last year’s average math score of 515, and a one-point drop in the average score for the writing portion of the test, from 494 to 493.
What was striking about the score changes was the “widening” (as the press called it) of the score gap between male and female test-takers, and between whites and Asian-Americans on one hand and blacks, American Indians, and Hispanics on the other. Average combined scores for whites fell by two points from last year, but they fell by four points for African-Americans, and they also slipped from last year for American Indians and Hispanics. The biggest winners were Asian-Americans, whose average total combined score for all three parts of the SAT was a soaring 1635, compared with 1509 for all seniors in the class of 2009. Outstanding Asian math scores (587 was the average) accounted for most of the difference. Furthermore, males in the class of 2009 scored 27 points higher on average than females, compared with 24 points higher last year. Again, the difference was largely due to far higher male scores on the math portion of the test.


When Campuses Became Dysfunctional

In recent years the stakes for entrance to the nation’s most prestigious colleges and universities have risen to absurd heights. Students (or their families) now pay significant sums not only for private school tuition (or the entry cost into good school districts, namely expensive housing), SAT training, and coaching for application writing, but increasingly for specialized services such as student “branding,” in which students (or their families) hire professionals to develop a marketing strategy for “selling” a student to the top universities, and even for such morally damnable practices as anonymously informing schools about the reprehensible qualities of competitors who apply to the same university. Clearly things have gotten out of control, but there are very few people, whether inside or outside the university system, who are willing, or even desire, to rock the boat by pointing out the absurdity of the current state of affairs.
The reason for this conspiracy of silence is that the current system benefits those who are best positioned to take advantage of the root causes of these absurdities: namely, families with the background, wherewithal and education to know how to “game” the system, and the elite colleges and universities whose denizens benefit in all sorts of financial and professional ways from their placement at this exceedingly small number of desirable schools. A confluence of interests bonds these financial and cultural elites in their ambition to maintain the current arrangement: the desperation of the families to put their children in a position to succeed, and the desperation of these elite institutions to be the exclusive grantors of the imprimatur for such success. In our profoundly competitive world order, in which increasingly few people can hope to emerge as the “winners” in a system that ruthlessly winnows out those who will not join the small club of the international elite (financial, political and cultural), all the stops must be pulled out, all measures pursued, all efforts expended.
In compensation for their success, students are privileged to join an elite group of similarly situated peers who harbor the same ambitions of worldly success and achievement. They are simultaneously thrown together as colleagues and competitors, a condition that will continue to define their relationships throughout their college years and beyond. The elite institutions are populated by star professors and a steady stream of noteworthy dignitaries, artists, public intellectuals, and so on; exposure to this class, as well as to the future incarnation of these winners in the form of their classmates, constitutes a considerable share of the education that takes place on today’s campuses: namely, a socialization in success, the learned capacity to emulate predecessors who have successfully navigated the shoals of hyper-competitive globalization and emerged as its leaders and beneficiaries.


The End Of Merit-Based Admission

Students applying for college admission now face a new reality: the SAT is increasingly optional at our colleges and universities. The test-optional movement’s pioneer, FairTest, a political advocacy group supported by George Soros and the Woods Fund, now lists 815 schools that do not require SAT scores. That number may seem impressive, but it includes institutions that arguably should not be dependent on SAT scores at all, such as culinary institutes, seminaries and art schools.
Surprisingly, the National Association for College Admissions Counseling (NACAC) has joined the critics of the SAT. Its September 2008 report, lauded by the New York Times and Inside Higher Education, encouraged “institutions to consider dropping the admission test requirements if it is determined that the predictive utility of the test or the admission policies of the institution (such as open access) support that decision and if the institution believes that standardized test results would not be necessary for other reasons such as course placement, advising, or research” (italics in original).
If that sounds like a less than full-throated endorsement of the anti-testers, the reluctance to speak plainly is understandable. The SAT and ACT, the group now says, had been “interpreted by some as indications of the mental capacity of the individual test-taker as well as of the innate capabilities of ethnic groups.” Yet, when referring back to the SAT’s early years, they acknowledged its value as a tool for measuring the “academic potential of seniors at public high schools from all over the country who had not been specifically prepared” for admission to the nation’s top colleges.


Score One For Yale

Yale made a sound decision yesterday. It said applicants must report all SAT scores, not just the highest of the three or four that some would-be Yalies take. That was the long-term policy of the College Board until last June, when Board officials announced they would let test-takers decide which scores to report. The stated reason was to reduce stress: if the student wasn’t up to par on testing day, he or she could always get tested again. But the policy also masked a financial reality—students from wealthier families could keep taking the test until they got the result they wanted; students from less well-off families often couldn’t. The predictive value of the test is marred by re-testing. And some who criticized the June decision pointed out that the Board had a financial stake: it stood to make more money by allowing unreported extra tests. Yale got it right. The class advantage of repeat test-takers will continue, but the fact of that advantage will now be clear and taken into account.

When College Rankings Are A Marketing Ploy

As author of a major college guide, I try to approach college admissions issues from the point of view of what’s best for college-bound high school students and their parents. I speak with lots of such students and their parents every year, and the one topic that is guaranteed to come up is: What should we make of the annual U.S. News & World Report college rankings?

Here’s what I tell them.

First, understand the real agenda of college rankings. The main reason that U.S. News compiles and publishes rankings is not to enrich the quality of U.S. higher education but to sell magazines. And there is nothing wrong with this. Americans love rankings, whatever the topic, and (for reasons discussed below) these rankings can be somewhat useful.

But keep in mind that static lists do not sell magazines. If the rankings were the same every year, no family would need to buy the updated list for a younger brother or sister. Since both the absolute and the relative quality of major colleges and universities evolve only over long periods of time, the best way to generate churn in the rankings is to change the formula. Which is what U.S. News does every year, for reasons both sound and dubious.


Peter Salins In The New York Times

Peter Salins’s October 15 essay here, “Does the SAT Predict College Success?,” attracted attention from many quarters, including the New York Times. Today the Times’s op-ed page published a fresh version of the Salins piece, which reported that at the State University of New York (SUNY), the colleges that decided to require higher SAT scores for admission significantly boosted their graduation rates. The Times did not have room for a full identification of Salins, a fellow at the Manhattan Institute and former provost of SUNY.

New Questions About LSAT Validity?

A just-released study from the University of California-Berkeley’s law school points out that the Law School Admissions Test, a sort of SAT for applicants to law school, focuses lopsidedly on takers’ cognitive skills while overlooking key non-cognitive traits possessed by successful lawyers. And no, that doesn’t mean an aptitude for ambulance-chasing or filing phony class-action suits.
Instead, the 100-page report, prepared by former Berkeley law professor Marjorie Schultz and Berkeley psychology professor Sheldon Zedeck, asserts that the LSAT, which includes sections on reading comprehension and legal reasoning, “does not measure for skills such as creativity, negotiation, problem-solving or stress management.” Schultz and Zedeck pointed out that while one’s score on the LSAT correlates well with success as a first-year law student, it doesn’t correlate well with one’s future success as a lawyer. They had earlier identified 26 different non-cognitive traits that they said did correlate with future success in the legal profession: “negotiating skills, problem-solving and stress management,” as the Wall Street Journal’s law blog summed them up. After identifying those traits in interviews with thousands of successful California lawyers, the pair’s research team developed methods for measuring them in law school applicants, via biographical, personality, and “situational judgment” tests modeled on employers’ personality tests for prospective employees.
There is little doubt that good lawyering can depend as much on how lawyers interact with their clients and argue in courtrooms as on the grade they got in first-year constitutional law. Obviously lawyers need more than sheer cognitive facility to deal with ill-tempered judges or hold troubled clients’ hands—and testing people skills may well be a useful supplement to testing cognitive skills. Still, it’s hard not to conclude from leafing through the Schultz-Zedeck study that its authors have overemphasized the softer side of law. Jeffrey Brand, dean of the University of San Francisco School of Law, delivered a touchy-feely anti-LSAT manifesto in this vein to the Recorder, a legal newspaper in San Francisco: “We need lawyers with the kind of skill sets that the world needs — like empathy, persuasiveness and the willingness to have the courage to do the right thing — which the LSAT does not measure.” This ignores the fact that lawyers are also expected to win their cases—which means knowing something about the law.

Continue reading New Questions About The LSAT’s Validity?

Who’s Acing The GREs?

Who are the smartest graduate students? You’ve probably already guessed that one: physicists. Second down in the brains ranking are mathematicians, then computer scientists, then economists and practically any sort of engineer. Such are the results of an analysis made in 2004 by Christian Roessler, a lecturer in economics at the University of Queensland, in Australia, of the mean scores on the Graduate Record Examination of Ph.D. candidates in 28 different academic fields. Roessler’s findings were recently linked on the Carpe Diem blog of Mark J. Perry, an economics and finance professor at the University of Michigan-Flint’s business school, and also by education blogger Joanne Jacobs. Roessler derived his rankings by looking at doctoral candidates’ average scores in 2002 on the three components of intellectual ability tested by the GRE: quantitative, verbal, and analytical (the analytical section, then a multiple-choice test like the quantitative and verbal sections, has since been replaced by a written test of analytical reasoning).
And if physicists (No. 1), mathematicians (No. 2), computer scientists (No. 3), economists (No. 4), and engineers (Nos. 5, 6, 7, 8, 12, and 13) are the smartest young people, judging from their test scores, to enter graduate programs that will train them to conduct scholarly research and teach the next generation of scholars in their fields, who are the dumbest? The answer to that question may well be easy to guess, too: grad students in communication (No. 26), education (No. 27), and public administration (No. 28). The dismal mean scores for doctoral candidates at education schools (467 in verbal ability, 515 in quantitative)—giving new meaning to the adage “Those who can’t, teach”—prompted a commenter on Jacobs’s blog to write, “The fact that the dimmest bulbs in our colleges self-select themselves as being the ones who should influence the education of future generations explains many of the edu-fads we see, as well as our continued failure to improve educational outcomes across disadvantaged populations….”
By contrast, the physicists, mathematicians, computer scientists, economists, and engineers consistently scored on average either above 700 or close to it in quantitative ability, although their verbal scores tended to be mediocre (the top-ranking physicists, for example, scored only 536 on average in verbal ability, while civil engineers, ranked at No. 13 on Roessler’s list, scored a mere 469, just two points higher than the educators). The scientists tended, however, to make up for lost verbal points by their high scores—typically above 600—on the analytic component of the GRE, a feat the educators, who averaged 532 in analytic ability, could not match.

Continue reading Who’s Acing The GREs?

Downgrading SATs Makes Sense

Many conservatives are groaning over a major new report from a commission of higher education luminaries calling on colleges to de-emphasize the SAT for college admissions.

The catcalls from the right erupted after the National Association for College Admission Counseling suggested that colleges should rethink their reliance on the SAT for admissions. Critics called the NACAC report wrongheaded, devolutionary, politically correct in the extreme, and devoid of common sense: a frontal attack on academic standards that will lead to the ruin of American higher education.

We’ve heard the dire warnings before, countless times. And countless times the cries that the sky is falling have been wrong.

The defense of the SAT as the linchpin of the college admissions process contains at least two major propositions, both of questionable merit.

Continue reading Downgrading SATs Makes Sense

Top Five Law School Ranking Scams

The Shark provides a list of the top five Law School “Admissions Innovations” of 2008, with analysis.
The ludicrous Baylor case is ranked No. 1, but I hadn’t heard of several of the others. Take #3:

University of Michigan Law School’s Wolverine Scholars Program admits University of Michigan undergrads who have at least a 3.8 GPA and agree not to take the LSAT.

How it works: There is no LSAT score to report to U.S. News, which is fine, and the 3.8 GPA will boost the median GPA of Michigan’s entering class.

How much it matters: Median LSAT counts for 12.5% of a school’s total score; median undergrad GPA counts for 10%.
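The mechanics are simple enough to sketch. In this Python toy, every number is hypothetical (only the 12.5%/10% weights come from The Shark’s analysis): admits with no LSAT score simply drop out of the reported median-LSAT calculation, while their high GPAs still count toward the reported median GPA.

```python
from statistics import median

# Hypothetical entering class: (LSAT score or None, undergrad GPA).
# "Wolverine Scholars"-style admits report no LSAT but carry GPAs >= 3.8.
entering_class = [
    (171, 3.6), (168, 3.5), (165, 3.4), (170, 3.6),
    (None, 3.9), (None, 3.85),  # no LSAT to report
]

# Only the scores that exist can enter the median-LSAT figure...
reported_lsat = median(s for s, _ in entering_class if s is not None)

# ...while every admit's GPA counts toward the median-GPA figure.
reported_gpa = median(g for _, g in entering_class)

print(reported_lsat, reported_gpa)  # -> 169.0 3.6
```

The no-LSAT admits raise the reported GPA median without dragging down the reported LSAT median, which is exactly the asymmetry the program exploits.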

See how clever law schools really are. Read the rest.

Does The SAT Predict College Success?

One of the hottest debates roiling American campuses today is whether the SAT and other standardized tests should continue to play a dominant role as a college admissions criterion. The main point of contention in this debate is whether the SAT or equivalent scores accurately gauge college preparedness, and whether they are valid predictors of college success, most particularly in comparison with high school grades. Behind this ostensible concern is the expressed fear that over-reliance on collegiate admissions tests will reduce “access” to college on the part of low-scoring applicants, many of them from poor or minority families and, thus, risk making American colleges and universities less demographically diverse.
First, let me address “access” and diversity: According to the most recent (2007) data, 45 percent of all colleges or universities, and 66 percent of public ones, have no admissions criteria at all. In the public sector – which accounts for three-quarters of all higher education slots – among the 34 percent of schools with some kind of admissions screen, 69 percent accept more than half of their applicants. Even among the remaining somewhat selective institutions, the majority either do not require admissions test scores or they accept most low-scoring applicants, with the result that the average verbal SAT for all college applicants is 532, and that for the math SAT is 537 (both out of a potential score of 800).
Second, regarding the sincerity of the most vociferous admissions test opponents: Virtually all of the schools calling for abandonment or downgrading of SATs and comparable admissions tests have always been highly selective – and intend to remain so. There should be absolutely no confusion on this score. These places have no intention of becoming academically more diverse, meaning they are not planning to admit academically inferior poor or minority students. As predominantly rich institutions, they have an army of admissions officers able to pore over every applicant’s high school transcript and other evidence of academic ability to keep recruiting the best and brightest students, even absent admissions tests. Actually, even with their “test-optional” policies, they will have access to most applicants’ SAT scores anyway, because academically strong applicants will continue to take the tests to keep all their collegiate options open. If one were inclined to take a conspiratorial view of these institutions’ motives, one might suspect that they were mounting this concerted campaign to assure that America’s public colleges and universities remain unselective, derailing the rising admissions aspirations of those ambitious public institutions that threaten to cut into their current monopoly of gifted high school graduates.

Continue reading Does The SAT Predict College Success?

Abandoning The SAT: Why?

Fewer and fewer high school students are taking the SAT exam these days—possibly because fewer colleges are requiring the submission of SAT scores as part of the admissions process. According to the National Center for Fair and Open Testing (FairTest), an organization that avowedly opposes standardized tests, only 46 percent of graduating seniors in the high school class of 2008 had taken the SAT even once. That compares with the 47.5 percent of graduating seniors in 2005 who had taken the test, according to FairTest.

FairTest’s numbers are corroborated by news reports about the colleges and universities, many with top rankings, that are abandoning the SAT and its rival test, the ACT, in droves. Just a few days ago, Wake Forest University in North Carolina and Smith College in Massachusetts announced that they would no longer require their applicants to submit their scores on either the SAT or the ACT. The two well-regarded institutions added their names to an estimated 750 four-year colleges and universities that now regard the submission of SAT/ACT scores as optional. They include an array of top liberal-arts colleges such as Bard, Bowdoin, Mount Holyoke, Middlebury, and Wheaton. Among the very most selective schools, Harvard and Yale still require applicants to submit SAT scores, but at Princeton the scores are optional.

And now the prestigious University of California system, whose 220,000 students come from the top 12.5 percent of their high school graduating classes by a measure that combines SAT scores and high-school grades, has announced a plan, approved by the UC faculty and awaiting ratification by UC President Mark G. Yudof, that would eliminate the current requirement that prospective UC freshmen take the SAT II, a subject-specific achievement exam in such fields as U.S. history that is taken in addition to the SAT’s core aptitude tests in math, verbal skills, and reasoning. A proposal to drop mandatory submission of SAT scores entirely has been floating around the UC system since 2001. Large state universities, in contrast to small liberal arts colleges, have generally held the line on mandatory score submission, but if California makes the scores optional, it is likely that many other public institutions will follow suit.

Continue reading Abandoning The SAT: Why?

Abandoning The SAT – Fraud or Folly?

What are we to make of the decision by a growing number of “highly selective” colleges to scrap the Scholastic Aptitude Test (SAT) as a criterion for college admission, something brought to our attention recently when another pair of semi-elite schools (Smith and Wake Forest) joined these ranks? The New York Times story of May 27 reporting on the Smith/Wake Forest developments explains the matter thus: “The number of colleges and universities where such tests are now optional … has been growing steadily as more institutions have become concerned about the validity of standardized tests in predicting academic success, and the degree to which test performance correlates with household income, parental education and race.” If this is really what is driving the SAT defectors, they are deceiving themselves and misleading the public.

Let’s begin with predictive validity. Among the countless studies done on this subject over the years, not a single one has failed to find a high correlation between SAT scores and academic performance in college, as measured by grades or persistence. On a personal note, during my ten years as Provost of SUNY, I had my institutional research staff repeatedly review the relationship between SAT scores and academic success among our 33 baccalaureate campuses and their 200,000+ students, and found – as all the national research has confirmed – a near-perfect correlation. SUNY schools and students with higher SAT profiles had higher grade point averages and markedly higher graduation rates.

The other claim of test critics is that high school grade point averages are equal to or better than SATs as predictors of college performance. This, too, is inaccurate. Looking at all U.S. high school graduates in any given year, we find the distribution of grade point averages (GPAs) is remarkably uniform – and invariably bell-shaped – across the nation despite enormous local and regional differences in high school quality or curricula. There is statistically no way that such similar high school GPA profiles could accurately reflect the highly variable academic abilities of the American high school graduating cohort. If there is any truth at all to the claims of SAT defectors in this regard, it is that among their own students – most of whom have graduated from academically superior public or private schools – SATs and high school GPAs are highly correlated. Analysts have pointed out, however, that if high school GPAs were to more generally replace SATs as the primary admissions criterion to get into top colleges, grade inflation would very likely erode the predictive validity of GPAs even at privileged public or private high schools.

Continue reading Abandoning The SAT – Fraud or Folly?