In August, Phi Delta Kappa released the 40th annual PDK/Gallup Poll of the Public’s Attitudes Toward the Public Schools. The results were clear: The public agrees with school system leaders and education scholars on how to move forward on education reform to best serve our nation’s students.
Additionally, the survey reports that public schools are connecting well with their local communities. In the highest rating in 15 years, more than 7 in 10 parents gave a grade of A or B to the school attended by their oldest child.
On the same day, Education Next, a publication of the Hoover Institution on War, Revolution and Peace at Stanford University, released the Education Next-PEPG Survey of Public Opinion. This survey painted a more negative picture of the public’s opinion of our schools, prominently featuring a finding that Americans think less of their schools than of their police departments and post offices.
Survey Disparities
How could two surveys with similar questions and audiences reach such different conclusions? The answer lies in the basic structure of the surveys themselves. Anyone familiar with the social sciences and the basics of survey construction knows that how a question is worded, where it falls in the question order and what response options are provided, among other factors, can affect the answers.
Consider this question: McDonald’s is looking to gauge customer satisfaction with the healthy options on its menu. On a scale of 1 to 5, with 1 being excellent and 5 being poor, how would you rate the healthy dining options at McDonald’s?
How might your response change if this question were preceded by one referencing Subway or a local salad bar? Might it lead you to rate McDonald’s less favorably, given the healthier reputation usually attached to Subway and salad bars? Would your response change if the introduction to the question mentioned a new McDonald’s initiative to encourage aerobic activity?
How might the findings change if the list of potential responses included a “don’t know” option? What if the survey only provided a scale of 1 through 3? Would those who originally responded with a 4 be inclined to take the new middle response of 2 or drop to the less favorable rating of 3?
Looking further at how the context and order of questions can affect responses, consider the following question: Please rank your policy priorities from highest to lowest: universal preK, affordable higher education and teacher licensing.
The context of the surrounding questions could influence your response. If the question appears on a survey devoted to universal preK, odds are you will rank universal preK higher than if it appeared on a more general survey touching on a variety of subjects. What if the question were preceded by another question detailing the social benefits of universal preK? The simple fact is that the order and context of questions can create a train of thought, or momentum, that influences your response.
Both surveys asked respondents how reauthorization of the Elementary and Secondary Education Act should proceed. The available responses: renew as is or with minimal changes; renew with major changes; or don’t renew/let it expire. The PDK survey also included a “don’t know” response.
Both surveys reported almost identical levels, roughly 25 percent, of “do not renew” responses. But 77 percent of Education Next respondents supported some level of ESEA reauthorization, compared with 58 percent of PDK respondents. Another 17 percent of PDK respondents replied “don’t know.”
How were the “don’t know” responses captured in the Education Next survey? The math is telling: With “do not renew” responses nearly identical, the 19-point gap in renewal support closely matches the 17 percent of PDK respondents who answered “don’t know.” In other words, respondents who would have answered “don’t know” were effectively folded into the “renew the legislation” category on the Education Next survey. Such a finding is a false positive. Undecided respondents do not necessarily prefer a renewal. The “don’t know” option helps avoid the incorrect conclusion that those who don’t know would prefer a renewal over letting the legislation expire.
Skewed Responses
A similar pattern emerged when respondents were asked to grade public schools. A side-by-side comparison suggests that the lack of a “don’t know” option drives those who would otherwise reply “don’t know” into the lower-grade categories.
Roughly 20 percent of respondents on both surveys gave America’s public schools an A or B. But 80 percent of Education Next respondents awarded schools a C or lower, compared with 62 percent of PDK respondents. The difference is largely the 16 percent of PDK respondents who chose “don’t know.” Without such an option, Education Next respondents had to choose an actual grade, and it was almost always a C, D or F. The result skews the findings toward a more negative view of public education.
I’m not discrediting the Education Next survey, which appeared in the quarterly journal’s Fall 2008 issue. Rather, I’m illustrating how the structure and organization of a survey can shape the results we read and analyze.
A survey’s results are inseparable from its inputs, including the order, structure and wording of both the questions and the response options. In this case, the simultaneous release of two surveys that asked similar questions of similar audiences yet yielded disparate results makes the comparison all the more eye-opening.
Noelle Ellerson is a policy analyst with AASA. E-mail: nellerson@aasa.org