Outcomes, Impacts and Processes


January 01, 2023

How a three-pronged measurement strategy can improve schools’ performance

Measuring school performance has been a key element of state and federal accountability systems for more than 20 years. But these measures typically have limited value for local leaders who actually have to manage school performance.

That’s because the measures often include a narrow range of student outcomes, fail to identify the school’s contribution to these outcomes and provide no diagnostic information about what is happening in the school that might explain the results.

Ninety-nine percent of superintendents in a June 2022 survey conducted by Data Quality Campaign and AASA said that state data could be more useful.

Meanwhile, many educators and administrators say they are drowning in data. School districts’ administrative systems produce rivers of data — much more than a superintendent would ever have time to examine carefully. In this torrent, identifying what is and isn’t important can be a real challenge, one that district leaders typically need to undertake with limited analytic support.

Brian Gill, director of the Mid-Atlantic Regional Educational Laboratory, points to the District of Columbia Public Schools as one of several districts that measure students’ social-emotional learning. PHOTO BY RICH CLEMENT/MATHEMATICA

The stakes are high. Data that are misinterpreted can be useless … or worse. They can mislead educators into identifying the wrong problems and implementing the wrong solutions, undermining efforts to promote achievement and equity. (See related story below.)

Complementary Factors

We can do better. School performance measures can be useful management tools if they provide information that is rich, fair and diagnostic. Indeed, to effectively manage schools, central offices should seek valid and reliable data on each of three complementary elements — outcomes, impacts and processes. This is because superintendents need:

  • Rich information on student outcomes in each school to assess which schools are serving the students with the greatest needs;
  • Fair information on how much each school impacts student outcomes to assess which schools might need intervention to improve their performance; and
  • Diagnostic information about internal processes to assess what intervention is needed in each school.

Let’s consider each of the three kinds of school performance measures in greater depth and look at some examples that show how each can be useful for different purposes, and how they complement each other.

Outcomes. Reading and math proficiency are critical, but a complete picture of a school’s performance requires a wider array of measures of how students are doing. As every educator knows, the purposes of schooling encompass much more than reading and math proficiency. Schools exist to promote a range of skills, knowledge and attitudes that will help students flourish over a lifetime and participate effectively in our democracy.

Not all of these things are easy to measure, but tools are increasingly available to measure many of them. For example, a U.S. Department of Education report identifies more than 180 survey and assessment scales for measuring students’ civic knowledge, skills and attitudes.

The District of Columbia Public Schools is one of many school districts that measure their students’ social-emotional learning, which is important for their well-being and predictive of their long-term success. Recognizing the importance of SEL, the district aims for all of its students to be “loved, challenged and prepared,” and staff worked with my colleagues at the Mid-Atlantic Regional Educational Laboratory to create survey-based indicators of progress toward that goal.

Any of the outcomes that schools seek to promote can be examined with an equity lens that focuses particular attention on students with the greatest needs or those from historically disadvantaged groups.

Impacts. Even if you have a broad and rich set of outcomes telling you how students are doing in each school, that doesn’t tell you if the school is effective at improving these outcomes. Student outcomes are affected by many factors outside of school as well as by the performance of the school itself. And the advantages and disadvantages that students have outside of school vary widely across schools.

A fair measure of school performance therefore requires statistical adjustments that can help to distinguish a school’s impact from the influence of factors outside the school’s control.

In recognition of this difference between outcomes and impacts, states have included measures of student growth, or value-added, alongside proficiency rates in their accountability systems. By accounting for students’ prior achievement, these measures help to identify a school’s impact on its students’ achievement. They level the playing field for schools serving students who enter with very different starting points, family resources and outside advantages. They also can help identify the schools that are doing especially well at promoting equity by improving the outcomes of disadvantaged students.

Even so, only a few states and districts have recognized that the statistical approaches used to measure value-added or student growth can be adapted and applied to other student outcomes as well. In other words, we can measure a school’s impact on the likelihood that its students will graduate or the social-emotional learning of its students, just as we can use student-growth percentiles or value-added models to measure a school’s impact on its students’ test scores.
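To make the adjustment concrete, here is a minimal sketch of the simplest version of the idea: predict each student’s current score from a prior score, then average each school’s residuals. It is an illustration under assumed column names, not any state’s actual model.

```python
# Minimal value-added-style sketch, for illustration only. Real state
# models are far more sophisticated (more controls, multiple years of
# data, shrinkage), and the column names here are hypothetical:
#   school_id     - which school the student attends
#   prior_score   - last year's test score
#   current_score - this year's test score
import pandas as pd
import statsmodels.formula.api as smf

def simple_value_added(students: pd.DataFrame) -> pd.Series:
    # Predict this year's score from last year's score.
    model = smf.ols("current_score ~ prior_score", data=students).fit()

    # A student's residual is how much better (or worse) they did than
    # predicted given where they started.
    students = students.assign(residual=model.resid)

    # A school's average residual is a rough estimate of its impact,
    # net of its students' starting points.
    return students.groupby("school_id")["residual"].mean().sort_values()
```

The same recipe extends to the other outcomes discussed above: swap in a graduation indicator or an SEL scale as the outcome (and a logistic model for a binary outcome), and the school-level averages become rough analogues of promotion power rather than test-score value-added.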

My colleagues and I have worked with state education agencies in Louisiana and the District of Columbia to measure the promotion power of high schools in increasing the likelihood that their students will graduate, the likelihood that they will enroll and persist in college and even their earnings in adulthood. New York City does something analogous to this with its School Quality Snapshots by reporting how the educational attainment (including graduation and postsecondary enrollment) of a school’s students compares to that of similar students elsewhere in the district.

Calculating a school’s impact is likely to be feasible only for states and large districts: the calculations require data from large numbers of schools, and the statistical methodology is complex.

If you work in a small district, you might ask your state agency to calculate the impact for all schools statewide. This is one of the ways that state data could become more useful to district leaders.

Processes. Even if you have a wide range of rich data on how the students in a school are doing, and you have student-growth percentiles, value-added results and promotion power data that identify a school’s impact, you still need diagnostic information about what is happening in the school to determine what supports or interventions the school might need.

Understanding processes in a school might begin with routine surveys of staff and students to assess school climate — as large numbers of districts already do. Climate surveys can shed light on school leadership, professional culture, student and staff relationships, and safety, giving the central office an important window into how staff and students view the school’s environment.

In the District of Columbia Public Schools and many other districts, the same surveys used to assess students’ social-emotional learning include questions about school climate. Maryland has deemed school climate so important that it now requires climate surveys in all schools statewide and includes the results in its accountability measures.

Student and staff surveys are not the only ways to measure processes in schools. In Britain, expert observers have long conducted direct observations of schools and classrooms. Similarly, New York City incorporates a School Quality Review with in-person inspection and observation to assess the quality of instruction, school culture and leadership. And Maryland is now developing “expert review teams” to do similar work in measuring school processes to support improvement.

Inspections, observations and interviews are labor-intensive, but a cheaper window on what is happening in schools might come from data in districts’ existing electronic systems. The sudden shift to remote learning during the pandemic turbocharged the movement of day-to-day classroom materials and student work into centralized learning management systems, which means that many districts now have access to vast amounts of classroom data that previously would have been isolated in paper records in individual classrooms. These data can address the “opportunity to learn” measures that are increasingly recognized as important.

In the Pittsburgh Public Schools, the learning management system allows the district to track whether students are logging in, opening their course materials and completing their assignments — measures that are strongly predictive of chronic absenteeism and course failure, but that can be monitored daily.
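As a rough illustration of how such monitoring might work (not a description of Pittsburgh’s actual system), a district analyst could turn a weekly export from a learning management system into a simple outreach flag; the field names and thresholds below are hypothetical.

```python
# Illustrative sketch of a weekly early-warning flag built from a
# learning-management-system export. Field names and thresholds are
# hypothetical; this is not a description of any district's actual system.
from dataclasses import dataclass

@dataclass
class WeeklyActivity:
    student_id: str
    days_logged_in: int          # days with at least one login (out of 5)
    materials_opened_pct: float  # share of posted materials opened
    assignments_done_pct: float  # share of assignments completed on time

def needs_outreach(activity: WeeklyActivity) -> bool:
    # Flag students whose engagement has dropped enough that a teacher or
    # counselor should check in before absences and course failures pile up.
    return (
        activity.days_logged_in <= 2
        or activity.materials_opened_pct < 0.5
        or activity.assignments_done_pct < 0.6
    )
```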

Putting It Together

Organizing the streams of school and district data into complementary groupings of outcomes, impacts and processes can help district leaders make decisions that will meet the specific needs of each school. That can make the difference between a district that is data-drowning and one that is data-driven.

Brian Gill is a senior fellow on K-12 education policy at Mathematica and director of the Mid-Atlantic Regional Educational Laboratory for the U.S. Department of Education. @BrianGill_edu


Don’t Let Statistics Become ‘Damned Lies’
 

Measures of performance in any field need to be reliable, valid and robust. Be wary of performance measures that bounce around randomly, provide biased information or are easily inflated. Four things to watch out for:

Beware of measures that change dramatically from year to year (or week to week).

Measures for very small numbers of students are especially susceptible to bouncing up and down at random (that is, they are unreliable). “Fishing expeditions” are likely to mislead: if you look at data on a lot of subgroups in a lot of schools, you will probably find some numbers that look bad, and they might be just a fluke. Be suspicious of outliers, which often are wrong. Before drawing strong conclusions, look for other data to verify what you’re seeing.
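A quick, purely illustrative simulation shows how much a small group’s results can swing by chance alone, even when nothing about the school has changed:

```python
# Purely illustrative: simulate a 12-student subgroup whose "true"
# proficiency rate is 60% and watch the observed rate swing from year
# to year by chance alone.
import random

random.seed(1)
TRUE_RATE, N_STUDENTS = 0.60, 12
for year in range(2019, 2024):
    proficient = sum(random.random() < TRUE_RATE for _ in range(N_STUDENTS))
    print(f"{year}: {proficient}/{N_STUDENTS} proficient "
          f"({proficient / N_STUDENTS:.0%})")
```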

Beware of measures with low participation rates, even if total numbers are large.

Results from pandemic-era assessments might not be trustworthy if substantial numbers of students didn’t take the tests. The number of test takers might be large enough to produce an average that is reliable but could paint a biased (invalid) picture of the school because test takers were quite different from students who didn’t take the tests.

Surveys of parents often are plagued by a similar problem because parents who respond are unlikely to be typical. The same kind of bias has famously derailed presidential polling: a magazine poll of millions of voters picked the wrong winner in 1936, and polling errors later produced the premature “Dewey Defeats Truman” headline in 1948. Big numbers alone cannot save a biased sample.

Beware of inferences that data can’t support.

Attributing student outcomes entirely to a school’s impact is one example of reading too much into the data. Outcomes and impacts are different (as discussed in the main article). Attributing a school’s impact to the principal is another mistake. Even though principals do have large effects, measuring an individual principal’s effects on student outcomes is well-nigh impossible because the principal’s effects are indirect (through teachers) and operate over long periods of time.

Climate surveys can provide much better information about a principal’s performance than student outcomes can.

Beware of fragile measures — and try not to break them.

Campbell’s Law says that attaching consequences to a measure undermines its validity by encouraging people to inflate it. Some measures are more easily inflated than others and are not well-suited to having high stakes attached.

For example, giving school principals bonuses for improving social-emotional learning might be unwise because students could be encouraged to inflate responses on SEL surveys. The diagnostic value of SEL measures might depend on making sure they do not have stakes attached.

—  Brian Gill

