Feature

Using Student Performance Data Humanely

The danger of losing perspective on teaching and learning and the value of test scores

by Carl K. Chafin
For as long as there have been standardized tests that provide “objective” data about student performance, there has been an understandable, though often misguided, inclination to use those data to judge the performance of schools, teachers and students. Paralleling the rise of high-stakes statewide achievement testing in recent years, that practice has taken on even greater importance as school leaders, the news media, parents and the community at large have become believers in the power of test data.

In this era of accountability, the pressures on principals and teachers to improve the scores of their students are overwhelming and at times debilitating. The fear of being identified publicly as a “poor performing” or “underperforming” school is real.

We work in an atmosphere where we are reminded of our inadequacies, be they real or imagined, on an almost daily basis. Accordingly, our best efforts sometimes go unrecognized amidst the prevailing themes of students who are not learning, teachers who are not teaching and schools that are malfunctioning or broken. In that context, it is not surprising that we find ourselves almost paralyzed in anticipation of receiving our latest set of scores.

Losing Perspective

The presumed solution to our basic malaise is to generate objective performance data through the mandated testing program and then to use it to weed out the incompetent and the ineffective. Getting those scores up thus becomes the key not only to improvement and success, but to professional and institutional survival as well. However, as with most things in life, it is not that simple.

We are in danger of losing perspective about how performance data can be used legitimately to help us do a better job with our students. If this happens, and one could argue that in many places it already has, students, teachers, administrators, indeed the entire enterprise of public education, will be seriously harmed.

The issues related to the use of student performance data are many and complex. In my experience, three aspects are most problematic and carry the greatest potential for causing harm.

Sharp Limits

No. 1: The failure to understand the true nature and limitations of the data generally produced by standardized assessments.

Despite all of the technical expertise, statistical analysis and stringent administration standards, these instruments are imprecise measures of student learning. They are misunderstood and misused because of the power of parsimony—that is, they reduce the complex to the simple. The performance of an individual student, a class or a grade level, a school and even a school division can be expressed and understood as a single number. That is powerful stuff indeed. In a sound-bite world, test data are the “cut to the chase” method of determining who is getting it and who is not, who is teaching effectively and who is not, which schools are performing well and which are not.

What more could we ask for? Test data are clean, straightforward and easily understandable by all; they fit in a relatively small space in the newspaper; and, best of all, they are objective.

Unfortunately, what is lost in the translation from complex to simple is a deeper understanding of the true strengths or weaknesses of our students, teachers, programs and schools. As a result, we are not able to effectively capitalize on the strengths or develop thoughtful strategies for addressing the weaknesses. The simplicity of the test-score message often results in a superficial, short-sighted response that is designed to appease the critics but not improve teaching and learning.

Kept in their proper place, the results of these assessments can provide useful insight and contribute to the improvement of instruction. However, in the complex world of teaching, learning and schooling they are just one piece of the puzzle. Their greatest value may be in providing clues about performance that help us ask the right questions and guide us toward potentially important and fruitful areas upon which to focus our improvement efforts. Assessments can best be appreciated as a starting point in the analysis, not the final answer.

Knowledgeable Use

No. 2: The failure on the part of administrators, both at the district and school levels, to truly get to know a particular set of data before attempting to use it.

This problem can manifest itself in several scenarios. The first follows from the discussion above: once we come to appreciate the limited value of standardized test results, we mistakenly dismiss them entirely after a cursory review that identifies areas where we did well and areas where we need to improve. We would do better to investigate and discover the meaningful trends and patterns that are generally present but not readily apparent than to skim the obvious off the top, present it to our teachers and community, emphasize the positive, downplay the negative, absorb the blows and quickly move on.

Another scenario reflecting the other end of the spectrum in the use of test results is the overzealous administrator who is going to get to the bottom of the matter, solve the crime and bring the non-performers to justice. With this approach the computer is at once our best friend and our worst enemy.

The data come to us like the prepared food upon which we have grown so dependent. Just pop it in the microwave for three minutes and you have a complete meal ready to eat. Our data come prepackaged, sorted and summarized by student, class, grade, school, subject, a variety of score types and statistics, charts and graphs—just warm it up. All we have to do is review the evidence, identify the strong and the weak, praise the strong for their efforts and push the weak to try harder.

Generally that review is done hastily as the community, the school board and the superintendent clamor for results and an answer to the question “How did we do?” As a result, the analysis is superficial, the story is written prematurely and the winners and losers are determined for another year. Given this scenario, the potential for harm is great.

The answers to questions like “What do these results mean?” or “What do these data tell us about how we are doing?” are not self-evident and generally not easily discovered. But the color graphs and pie charts and other summary results we receive make it appear as though the answer is staring right at us. The key to discovering the meaning in a set of test data is to tear it apart, to disaggregate it down to the finest level of detail that can be achieved—if possible, all the way down to individual student responses to individual items, examining which wrong answers students selected when they missed a question.

This detailed analysis produces the basic building blocks for making sense of the results. The idea is to look for patterns in the disaggregated data and use those patterns to put the results back together into a more comprehensive understanding of their meaning and the implications they hold for improving instruction.
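To make that disaggregation concrete, consider the minimal sketch below. It tallies two of the patterns described above: percent correct by skill area and, for each missed item, the wrong answer students most often chose (a single dominant distractor often signals a shared misconception). The file responses.csv and its column names are hypothetical stand-ins for whatever item-level export a testing program actually provides, not a reference to any vendor’s format.

```python
import csv
from collections import Counter, defaultdict

# Hypothetical item-level export: one row per student response.
# Assumed columns: student_id, item_id, skill, chosen, correct
skill_totals = defaultdict(lambda: [0, 0])   # skill -> [number right, number attempted]
wrong_choices = defaultdict(Counter)         # item_id -> tally of wrong answers chosen

with open("responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        right = row["chosen"] == row["correct"]
        skill_totals[row["skill"]][0] += right
        skill_totals[row["skill"]][1] += 1
        if not right:
            wrong_choices[row["item_id"]][row["chosen"]] += 1

# Pattern 1: which skill areas are weakest across all students tested?
for skill, (right, total) in sorted(skill_totals.items(),
                                    key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{skill}: {right}/{total} correct ({right / total:.0%})")

# Pattern 2: on each missed item, which wrong answer drew the most students?
for item, counter in wrong_choices.items():
    answer, count = counter.most_common(1)[0]
    print(f"item {item}: most-chosen wrong answer '{answer}' ({count} students)")
```

Even a rough tally like this turns a single summary score into questions worth asking teachers: why does one skill area lag, and why did so many students converge on the same wrong answer?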

Big-Picture View

No. 3: The failure to use the student performance data in a meaningful and professional way with teachers.

This results from the misunderstandings about the limitations of test data and the misguided efforts to understand them described above. Consequently, even the most genuine and well-intentioned efforts can create an atmosphere of fear and anxiety in a school or school division that is ultimately counterproductive to making legitimate use of the insight such data can provide. Efforts to realize the greater goals of more effective teaching and improved student learning can be derailed by short-sighted efforts to fix the blame.

Our analysis of the test data, and our use of the results of that analysis, should be guided by the assumption that the results, however good or bad, are not what they are because teachers and students are not trying or do not care. We should start by believing that the people we have entrusted to work with our students are doing their best, trying their hardest and want their students and their school to perform well.

We then have to break the habit of telling teachers what something means and consequently what they need to do about it. If we can not only assume that teachers want to improve but also appreciate that they are intelligent and insightful enough to participate in the analysis, then we set the stage for a collaborative process of analysis that leads to a collegial and professional dialogue with teachers, individually and collectively.

The greatest payback in terms of improved student performance comes when administrators genuinely ask teachers to help them understand why their students performed the way they did on a particular measure. For teachers, this process produces a better understanding of how their students are performing and how their teaching can be improved. For the administrator, it yields a big-picture understanding of how the school is performing, how students are achieving and how instruction can be improved.

Over the nearly 25 years that I have worked with school administrators on the use of test data, the ones who have dealt most effectively with the issues discussed above have almost without exception employed the following strategies in their approach:

  • They take it upon themselves to become their building’s test expert rather than delegating this responsibility to anyone else.

  • They call on the expertise of someone who knows the assessment program to help them become knowledgeable about the nature of the tests themselves.

  • They focus especially on specific skill areas tested, how the skills are tested, the relative importance of each of the areas tested, as well as the score reports and how to interpret them.

  • They study each individual student’s score report, make handwritten notes and prepare lists of strengths and weaknesses, looking for patterns by specific skill area across the students tested.

  • They facilitate discussions with teachers, mostly in small groups, where they teach teachers what they have learned about the tests and the results, and they engage the teachers in a collaborative process that addresses two questions: “Why do you think the results are what they are?” and “What do you think we should do about it?”

  • They follow up regularly to see that what was agreed should be done is actually being done.

A Fact of Life

These are certainly not all of the strategies administrators have used to positive effect, but they are a few I have seen directly address the meaningful and humane use of student performance data.

We teach and learn in a world of accountability where data drive decisions and standardized assessments of student performance are facts of life. Whether we think this is good or bad or some of both, the fact is it probably will not change significantly in the foreseeable future. Under the circumstances, if we can respect these tests for what they are and what they are not, examine the results thoroughly, carefully and in meticulous detail, and use them with teachers in a professional way, we will at least reduce the fear, minimize the harm done and maybe even learn something that can help us work more effectively with our students.

Carl Chafin is research and planning specialist with the educational planning firm Eperitus, 211 West Broad St., Richmond, VA 23220. E-mail: cchafin@eperitus.com. He previously was an assistant superintendent in Chesterfield County, Va.