Brett van Zuiden

Measure schools on student growth

School accountability frameworks measure schools on the aggregate performance of current students on standardized tests rather than on the growth those students demonstrate from one year to the next. As a result, “school quality” measures can swing up or down based on the preparedness of incoming students, even when the actual instructional quality stays constant.

Every spring, students across the country take high-stakes, summative assessments to demonstrate what they’ve learned. These assessments are the single largest factor in how schools are measured, graded, and compared to one another, but there is a key structural flaw in how individual student results are aggregated into school-level results: school results vary based on incoming student preparedness.

This is not a theoretical concern - this exact situation is playing out in some of our schools at Summit. In California, we are able to request from the state the prior test scores of our current students, including tests taken before they enrolled with us. This allows us to better tailor the supports we give incoming students and to measure student growth over time; it has also revealed that the proficiency levels of some of our schools’ incoming classes are low and falling. Students are starting further behind when they first walk in our doors, so even if our schools do a great job helping them grow, it’s almost certain these schools’ grade-level aggregates will decline. As a result, these schools will be under greater scrutiny and face contentious charter renewals.

Here’s what this looks like in practice - let’s say that at a particular middle school, only 40 out of the 100 incoming sixth graders scored “proficient” on the tests they took just four months before. If that middle school does an amazing job, it can help 10 of the 60 below-proficient students (about 17%) close the gap, so by the end of sixth grade 50 out of 100 students test as proficient. I would point to this as a great school, even though half its students are “below proficient”!

But we can go a step further: each year a new batch of sixth graders rolls in, and say that in year two the number of incoming sixth graders who scored “proficient” in fifth grade drops to 25 out of 100. Seeing the greater need, the teachers at the school work extra hard and are able to help 15 of the 75 below-proficient students (20%) close the gap, so by the end of the second year 40 out of 100 students test as proficient. I would point to this as a school that is doing a commendable job serving high-need students and getting better over time.

Unfortunately, that perspective is the opposite of how this school would be seen by state accountability systems that look at simple aggregates of grade-level results. In California, for example, this school would be seen as a low-performing school (because only 40% of its students are proficient) that is getting worse (because last year 50% were proficient). This school would likely be put under strict oversight, the administration and staff would be told they were failing kids, and if it were a charter school it would likely be shut down - despite the fact that this middle school is exactly the type of school we need more of: one that’s not only helping students close the gap, but that’s actually getting better over time!
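To make the contrast concrete, here is a minimal sketch in Python that recomputes the two hypothetical cohorts above, reporting both the snapshot proficiency rate an accountability system would see and the share of below-proficient students who closed the gap (the data layout and variable names are purely illustrative):

```python
# Hypothetical cohorts from the example above: counts are out of 100 sixth
# graders each year, exactly as described in the scenario.
cohorts = {
    "year one": {"incoming_proficient": 40, "gap_closers": 10},
    "year two": {"incoming_proficient": 25, "gap_closers": 15},
}

TOTAL_STUDENTS = 100

for year, c in cohorts.items():
    below_proficient = TOTAL_STUDENTS - c["incoming_proficient"]
    end_of_year_proficient = c["incoming_proficient"] + c["gap_closers"]

    # Snapshot metric: the grade-level aggregate a state accountability system reports.
    snapshot_rate = end_of_year_proficient / TOTAL_STUDENTS

    # Growth-style metric: share of below-proficient students who closed the gap.
    gap_closure_rate = c["gap_closers"] / below_proficient

    print(f"{year}: {snapshot_rate:.0%} proficient (snapshot), "
          f"{gap_closure_rate:.0%} of below-proficient students closed the gap")
```

On exactly the same students, the two measures move in opposite directions: the snapshot falls from 50% to 40%, while the gap-closure rate rises from about 17% to 20%.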

The solution is straightforward and easy for states to implement: measure schools on individual student growth, not on single-year snapshots. Celebrate the schools where students make more than one year’s worth of growth, and put pressure on the schools where students fall behind in the first place. The current system harms the very schools that are doing the most to help.
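To illustrate what that could look like, here is a rough sketch, assuming matched student records that pair each student’s prior-year and current-year scale scores (the record layout and the one_year_of_growth threshold are hypothetical - each state would define its own notion of a year’s worth of growth):

```python
from dataclasses import dataclass


@dataclass
class StudentResult:
    """Hypothetical record pairing a student's prior and current spring test scores."""
    student_id: str
    prior_year_score: float    # scale score from the previous spring's test
    current_year_score: float  # scale score from this spring's test


def share_meeting_growth(results: list[StudentResult],
                         one_year_of_growth: float) -> float:
    """Fraction of students whose score gain is at least one year's worth of growth.

    The essential point: each student is compared to their own prior score,
    not to a fixed proficiency cut score.
    """
    met = sum(
        1
        for r in results
        if r.current_year_score - r.prior_year_score >= one_year_of_growth
    )
    return met / len(results)
```

Because each student is measured against their own starting point, an incoming class that arrives further behind no longer drags the school’s rating down by default.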

Putting aside the debate about whether high-stakes summative assessments are the right way to measure school quality, fixing this structural flaw in school accountability metrics would go a long way to making sure we’re measuring schools based on what we care about most: helping students grow.