

What We Can (and Can't) Learn from the Early SIG Results

Release the data! The U.S. Department of Education has revealed some of the early findings from its research on the effectiveness of the School Improvement Grant (SIG) program, or rather, the one-time, $3 billion infusion to the SIG program included in the 2009 American Recovery and Reinvestment Act (ARRA). The program, which was re-tooled by the Obama administration, has supported intensive interventions (up to $2 million per school) in over 1,300 of the nation's chronically low-performing schools.

The sliver of data released this week includes 2009-10 and 2010-11 test data from about 730 of the 831 highest-priority SIG schools, those categorized into Tier I or Tier II.[1] Here are the highlights:

  • Two-thirds of schools showed gains in math, and two-thirds in reading in the first year of the SIG program (2010-11)
  • 25 percent of schools saw double-digit gains in math, and 15 percent in reading
  • 40 percent of schools saw single-digit gains in math, and 49 percent in reading
  • 28 percent of schools saw a single-digit decrease in math, and 29 percent in reading
  • 6 percent of schools saw a double-digit decrease in math, and 8 percent in reading
  • 26 percent of schools had posted math improvements the year prior to entering SIG, but declined once they received SIG funding; this happened for 28 percent of schools in reading
  • 28 percent of schools had posted math declines the year prior to entering SIG, but improved once they received SIG funding; this happened for 25 percent of schools in reading
  • A larger proportion of elementary schools posted gains in the first year of the SIG program, compared to middle and high schools, and they were less likely to see declines
  • Rural schools appear to fare as well as schools in suburban and urban areas

But can we say that "there's dramatic change happening in these schools," as Secretary Duncan claimed? Not so fast. Clearly, the Department didn't read Matt DiCarlo's explanation of when you can, and cannot, make policy claims based on test data.

First, the Department doesn't clarify whether any of these increases or decreases in test scores are statistically significant. Given the inherent measurement error in any assessment, and given that it is unclear whether the Department is using proficiency rates (less accurate) or actual test scores (more accurate) to calculate these gains and losses, statistical significance cannot be assumed.
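To see why significance matters, consider a rough two-proportion z-test on a hypothetical small school (the function and all numbers below are illustrative assumptions, not figures from the Department's data):

```python
import math

def proficiency_change_z(p1, n1, p2, n2):
    """Two-proportion z-statistic for a change in proficiency rate.

    p1, p2: share of students proficient in year 1 and year 2.
    n1, n2: number of students tested each year.
    """
    # Pooled proficiency rate across both years.
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    # Standard error of the difference under the pooled rate.
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical small school: 45% -> 55% proficient, 80 test-takers per year.
z = proficiency_change_z(0.45, 80, 0.55, 80)
# |z| is below the 1.96 threshold, so even this 10-point "double-digit gain"
# would not be statistically significant at the 5% level.
```

With so few test-takers per school, year-to-year swings of this size can easily be noise, which is why the gain/decline tallies above are hard to interpret on their own.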

Second, the Department doesn't clarify whether it is using cross-sectional or longitudinal data. In other words, were the gains or declines based on individual student growth (i.e., a student who took the 3rd grade math test improving on the 4th grade math test), or were they based on comparing this year's crop of 3rd graders in math to last year's 3rd graders? My money is on the latter, which limits how we can interpret the data, since the results aren't fully comparable from the pre-SIG year to year one of the turnaround program.
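The distinction can be made concrete with a toy sketch (the student IDs and scores below are made up for illustration, not drawn from SIG data):

```python
# Hypothetical records: (student_id, grade, year) -> test score.
scores = {
    ("s1", 3, 2010): 210, ("s1", 4, 2011): 224,
    ("s2", 3, 2010): 198, ("s2", 4, 2011): 205,
    ("s3", 3, 2011): 230,  # a new 3rd grader with no 2010 score
}

# Cross-sectional: compare this year's 3rd graders to last year's 3rd graders
# (different students in each group).
g3_2010 = [v for (sid, g, y), v in scores.items() if g == 3 and y == 2010]
g3_2011 = [v for (sid, g, y), v in scores.items() if g == 3 and y == 2011]
cross_sectional_gain = sum(g3_2011) / len(g3_2011) - sum(g3_2010) / len(g3_2010)

# Longitudinal: follow the same students from grade 3 (2010) to grade 4 (2011).
growth = [scores[(sid, 4, 2011)] - scores[(sid, 3, 2010)]
          for (sid, g, y) in list(scores)
          if g == 3 and y == 2010 and (sid, 4, 2011) in scores]
longitudinal_gain = sum(growth) / len(growth)
```

In this toy example the two methods give very different answers (a large cross-sectional "gain" driven by one strong incoming cohort versus modest individual growth), which is exactly why it matters which one the Department used.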

Third, the Department doesn't explain whether or how the researchers took into account non-school factors that could affect student achievement. Without at least addressing these issues, it is impossible to know whether changes in student performance were attributable to changes in school leadership or culture (i.e., the SIG program) rather than to conditions in the economy or in students' home lives. Nor does the Department explain how it controlled for other school-level policies that could influence test scores. Because these are chronically low-performing schools, the SIG interventions are unlikely to be the only improvement strategy or program at work.

These are huge caveats to the SIG data, but that's not to say that ED's findings aren't important. They are. But more details are sorely needed before anyone can make an accurate assessment of the program.

To begin with, the Department of Education must disaggregate the data by the four turnaround models. More significantly, changes in student proficiency rates on standardized tests are only one possible outcome of the SIG program, and perhaps not the most important one to track. The Department plans to release student and teacher attendance data, enrollment in advanced courses, and other "leading indicators" for the SIG schools next year, but what about data on school leadership, school culture, and parent involvement?

While more difficult to quantify, these areas are also essential components of school turnarounds. Secretary Duncan alluded to as much in releasing these early results: "What's clear already is that almost without exception, schools moving in the right direction have two things in common; a dynamic principal with a clear vision for establishing a culture of high expectations, and talented teachers who share that vision, with a relentless commitment to improving instruction." However, the data attached to Duncan's statement say nothing about the effects of leadership or teaching in SIG schools.

Predictably, some analysts, notably at Bellwether, have already interpreted the early results as a failure of the entire SIG effort. But without more convincing and complete data, it really is too early to make definitive judgments about the program. This nuanced, wait-and-see approach may not be as satisfying, but in an effort as important as improving our nation's worst schools, it is the right approach to take.



[1] To learn more about the SIG schools, including where they are located, how much money they received, and which improvement model (transformation, turnaround, restart, or closure) they selected, check out this handy-dandy resource.

More About the Authors

Anne Hyslop

Policy Analyst, Education Policy Program
