Testing education: an attack on the Scholastic Aptitude Test unwittingly reveals the failure of American schools

Paul Chance


THIS MONTH, thousands of American adolescents will undergo a rite of passage as grueling as anything an Australian aborigine could dream up. Armed only with a couple of number 2 pencils, they will do battle with a fearsome beast known as the SAT.

The SAT (short for Scholastic Aptitude Test) is said to measure a student’s verbal and mathematical reasoning and comprehension. Nearly two million people, most of them high school seniors, will pay $12 and sacrifice a Saturday morning for the privilege of agonizing over something like 200 multiple-choice questions. They submit to the three-hour ordeal because more than 1,500 colleges and universities in this country require that applicants submit scores from the test. Few schools accept or reject students solely on the basis of SAT scores, but most colleges do use the test, along with high school grades and other criteria, to decide who shall pass through their ivied gates.

The SAT and its publisher, the Educational Testing Service (ETS), have been the objects of considerable abuse in recent years. One example is The Case Against the SAT, by James Crouse and Dale Trusheim of the University of Delaware. Crouse is a professor of educational studies and sociology and Trusheim is assistant director of institutional research and strategic planning there.

Crouse and Trusheim do not like the SAT, but their attack is different from that of other critics. They don’t challenge ETS’s claim that the SAT measures important abilities that are related to success in college. “Rather,” they write, “we argue that despite its ability to predict educational success, the SAT is unnecessary.”

Their argument runs something like this: If colleges chose applicants on the basis of a coin flip (heads you’re in, tails you’re out), we could expect many mistakes to be made. That is, the coin toss would admit many weak students and reject many qualified applicants. The challenge admissions officers face is finding criteria that work better than flipping a coin.

The SAT, Crouse and Trusheim admit, is better than coin flipping. But, they add, so are other readily available criteria. Even ETS says that high school grades are the best single predictor of college grades. Senior-class rank also predicts college success. Requiring the SAT makes sense, Crouse and Trusheim argue, only if using the SAT means making fewer admissions errors than would be made if one relied solely on high school performance. Can colleges predict student success better with the SAT in combination with, say, class rank than they can with class rank alone? The answer is yes, say Crouse and Trusheim, but the improvement is so small it is meaningless.

Crouse and Trusheim base their answer on an analysis of various SAT studies. In one case, they examined data from nearly 2,800 high school seniors who had taken the SAT. They compared decisions based on class rank alone with those based on rank and SAT scores and found that the overlap between the decisions ranged from just under 84 percent to more than 98 percent. In other words, in over 83 percent of the cases, the same decision would have been made without SAT scores.

Even with this considerable overlap, the SAT will reduce the number of errors colleges make. But Crouse and Trusheim say that the reduction is very small. They argue that if freshman grades are used as a measure of success, using both class rank and SAT scores means only one to three fewer errors for every 100 applicants than would have been made using class rank alone. If graduation from college is used as the standard, adding the SAT makes even less difference.

ETS counters that even small improvements in admissions decisions can be important if they reduce the waste of resources on students who will not succeed. Crouse and Trusheim reply that the benefits of improved prediction from the SAT have to be weighed against the cost of requiring the SAT. Twelve dollars times 1.8 million students comes to more than $21 million a year.

ETS also claims that the value of other admissions criteria might deteriorate if the SAT were dropped. If high school grades were used alone, for example, there might be increased pressure on teachers to give students high grades. Such grade inflation might mean that grades would lose much of their value in predicting college performance. Crouse and Trusheim acknowledge that some sort of test may be necessary to maintain the integrity of high school grades, but they counter that the test need not be the SAT.

How, then, should colleges select students? Crouse and Trusheim favor replacing the SAT with a battery of standardized achievement tests. They admit that such tests may be no better at predicting college success than the SAT is. But Crouse and Trusheim say that achievement tests would have important advantages over the SAT. Among other things, they claim such tests would:

- certify a student's mastery of a particular content area, such as history, physics or literature. Colleges would know how much a student knows about a subject, rather than how well he or she might learn the subject.

- encourage high schools to offer more rigorous courses. Schools would have to offer chemistry, for example, if colleges required a chemistry test for premed students.

- encourage high school students to take more difficult courses and work harder at them. The idea is that if students know they will have to demonstrate a knowledge of biology and French, they will take these courses rather than shop and driver training, and they will try to get as much out of the courses as they can.

Unfortunately, Crouse and Trusheim provide no proof that their proposed tests would have these effects. But what makes their proposal interesting is that it is not aimed at improving admissions decisions but at improving education. The unstated assumption underlying their proposal is that learning is an intrinsically unpleasant activity that students must be forced to endure.

The validity of this assumption is unlikely to face serious challenge. “If you don’t learn this,” parents and teachers tell their charges, “you won’t do well on the test.” The test may be a weekly quiz, the SAT or one of Crouse and Trusheim’s achievement tests. We assume that few students will learn unless they are made to do so, and tests are a way of making students learn.

But is learning really so distasteful? It is generally acknowledged that more learning takes place in the first six years of life than at any other period. Yet, except in some overly zealous kindergartens, young children are rarely forced to learn. No one threatens to punish babies who do not demonstrate an acceptable level of imitative play by the age of 6 months. No one gives a 1-year-old a test on walking or warns that failure to walk will mean staying in for extra lessons. Yet until children start school they soak up knowledge like a desert soaks up rainwater.

There is probably nothing wrong with using tests as a way of finding out what students have learned. But tests are no longer merely the measure of learning; they have become the reason for learning. This should tell us that there is something very wrong with how we educate young people, but we don’t see it.

COPYRIGHT 1988 Sussex Publishers, Inc.

COPYRIGHT 2004 Gale Group