President's Corner

Picture This


Sometimes it’s hard for school leaders to paint a clear picture for stakeholders about what is going on in education. We use professional jargon; our conversations about accountability are filled with terms like “formative assessment,” “summative assessment” and “benchmarks.” We then add acronyms like ACT, SAT, PISA and a plethora of others that reference individual state tests.


Patricia Neudecker

We may know the meaning of these terms and the purpose of each approach, but our stakeholders may not. In this era of increased accountability, how do we help our public understand the purpose and meaning of these education concepts?

You may have noticed from my previous columns that I like to use stories and analogies to illustrate a point. This column is no different.

As educators we know that attaching new learning to something that is commonly understood is an effective teaching strategy. I suggest we use the game of golf, then a conversation with a dentist, to bring clarity to the conundrum of testing, grading and accountability.

After a round of golf, ask a golfer, “How did you do?” The answer may vary depending on the performance of the other golfers in the party or the standard set for the course.

A golfer who scores 100 can compare his or her score to the other scores in the foursome. If the score of 100 is the lowest of all the scores, our golfer may answer, “I did great — best score in my group!” But if par for the course is 72 and our golfer compares the score of 100 to the expected score, he or she may answer the question in a very different way, such as, “Better than my last game, but I have a way to go.”

The same is true in education. To understand individual performance, students and parents must appreciate the difference between using a student’s grade to compare him or her to other students and using a grade as a comparison to an expected standard. Parents may be more accustomed to comparing their students’ grades to those of their classmates, yet we all know that a comparison to an expected standard is a better indicator of real performance, just as in the game of golf.

We can also use a conversation with a dentist to illustrate the importance of using multiple factors to determine a school’s performance.

Imagine the conversations you could have if you announced you were going to assess the success rate of all the dentists in your community based on the total number of cavities of their patients, and that the results would be published.

Most likely, dentists would respond with statements such as “Well, some patients had lots of cavities before they came here!” or “I can only do what I can do in my practice. I can’t control what patients do outside of my office,” or “A quality dental practice is based on a lot more than just the number of cavities.”

I hope you would hear “Well, what about the patients who are doing better now than when they first came to me? Their cavities are still in the count, but they haven’t had any new ones.”

The same analogy can be applied to education. Accountability for school performance is an accepted reality for all educators, and that responsibility is not disputed. The indicators used to measure performance and to determine the quality of each school, however, must be carefully constructed. Single indicators, while valid, convey an incomplete story. We must question a methodology that does not capture the total effectiveness of a school.

As superintendents and educational leaders, I believe we can tell our story better and create a clearer picture for our publics. I also believe it is our responsibility to write a new story. Get the picture?

Patricia Neudecker is AASA president for 2011-12. E-mail: