Op/Ed: Does high-stakes testing improve education?

By Michael T. Rock, Director, Unionville-Chadds Ford Board of Education


As the debate over No Child Left Behind and the Common Core heats up, and as Pennsylvania's school districts gear up to implement the Keystone exit exams, it is time to take stock of what we know about the impact of high-stakes testing. Supporters see tests as a way to close the performance gap between American students and their counterparts in other industrialized countries, better prepare students for the 21st century, and improve poorly performing schools. Detractors decry a narrowing of the curriculum, time spent teaching to the test, and a focus on the wrong issue. The former urge staying the course; the latter encourage students to opt out.

What’s so distressing about the public debate is how detached it is from a large body of rigorous empirical research on the impact of high-stakes testing. That research yields substantial evidence on the impact of testing on student achievement; on the persistent gap between American students’ test scores in math and reading and those of our major competitors; on graduation and dropout rates; and on SAT scores, college admission, college persistence, and labor market outcomes. While the literature on this topic is large and unwieldy, readers can get an excellent introduction to it by consulting Dee and Jacob (Brookings Papers on Economic Activity, 2010) and Holmes et al. (Review of Educational Research, 2010).

The impact of NCLB on student achievement has been mixed at best. Simple trend data suggest that NCLB probably increased fourth-grade math scores, particularly for minority students and students from low-income families. There is little evidence of trend improvement in reading. More rigorous statistical studies confirm that the benefits of NCLB are limited to fourth-grade math scores.

A similar story characterizes the impact of NCLB on the gap between U.S. students’ scores on standardized international math and science (TIMSS) and reading (PIRLS) tests and those of our major competitors. While there is some evidence that the gap is closing in math, the gain is quite modest (1.35% over pre-NCLB test scores). There is no evidence that NCLB improved reading scores.

There is very little evidence that exit exams improve student achievement. This carries over to student performance on standardized international tests. But there is a clear and convincing downside to these tests. They increase dropout rates, delay graduation, and increase rates of GED attainment, particularly for non-whites and those who live in high poverty areas.

The findings on the impact of exit exams on SAT scores, college enrollment, college completion, employment, and earnings are equally bleak. With one exception (students denied a diploma because they failed the exit exam have lower college enrollment rates), there is no consistent association between exit exams, or the rigor of those exams, and student SAT scores, college enrollment, college completion, employment, or earnings. There is also little evidence that employers attach any significance to high school diplomas earned in schools with exit exams.

Why has testing produced such meager results? One answer is that this approach places too much of a burden on testing alone while ignoring a broader literature on how the world’s best-performing schools come out on top (http://mckinseyonsociety.com/how-the-worlds-best-performing-schools-come-out-on-top/).

Michael T. Rock is a Unionville-Chadds Ford School District School Board Director and the Samuel and Etta Wexler Professor of Economic History at Bryn Mawr College.
