Punchback: Answering Critics

Beware of Advocates Bearing Polls


Public opinion polling has a spotty history, particularly when advocates are involved. Back in 1948, the Chicago Daily Tribune, which was a strong supporter of Thomas Dewey and had called Harry Truman a “nincompoop,” ran its infamous “Dewey Defeats Truman” headline. Among many mistakes, the polling supporting that prediction was conducted a week before the election. But in the intervening time, two independent candidates’ popularity faded and the bulk of their votes went to Truman.

William J. Mathis

In more recent times, pollsters got it wrong in the 2008 New Hampshire primary, proclaiming Barack Obama the winner over Hillary Clinton. Sampling errors and a flood of first-time voters were tagged as causes for the miss. Just eight years earlier, pollsters called Albert Gore the presidential winner, then retracted the call, and the Florida vote count is now part of American history.

The quality and usefulness of public opinion polling varies widely. Within education, polling can be a great help to superintendents, boards of education and legislators in gauging public sentiment on local, state and national issues. But school leaders should be cautious when either reading or conducting a poll.

Vested Interests
Organizations with vested interests frequently attempt to advance their agenda through polling. Political candidates may release poll results showing them doing well to give the appearance of momentum or a late surge. In education policy, the phenomenon is well illustrated by a series of state-level surveys about school vouchers sponsored by the Friedman Foundation for Educational Choice (now known as the Foundation for Educational Choice) based in Indianapolis, Ind. Each state’s survey results were published by the Friedman Foundation in a state-specific report that claimed, among other things, that voters in that state were more likely to elect candidates who supported vouchers.

In December 2008, two professors from the University of Houston, Jon Lorence and A. Gary Dworkin, independently analyzed the first 10 Friedman survey reports. (Five additional state reports have since been published.) The results, published as the “Think Tank Review Project,” found the pro-voucher Friedman results suspect in several major ways:

•  Representative samples. Seven of the 10 Friedman reports did not provide enough information to determine whether the sample validly represented the population. In Montana (one of the three states in which sampling information was provided), only 25 percent of the likely voters completed the telephone survey. When nonrespondents differ in important ways (e.g., support for public education) from those who agree to complete the survey, the results suffer from bias. This is likely the case with many of the Friedman surveys.

•  Slanted questions. The Friedman surveys included questions with loaded phrasing such as the following: “If a private school offered the best education for a particular child, would you favor allowing parents the option of using public funds to send their children to private schools?”

Compare this to the neutral language used in the annual, independent Phi Delta Kappa/Gallup poll, which tends to show the public opposing vouchers: “Do you favor or oppose allowing students and parents to choose a private school at public expense?”

Should we be surprised the Friedman wording generated a more pro-voucher response?

•  Differences between the results and the write-up. Even with the questions’ slanted wording and the possible sampling bias, only one of the 10 states had a clearly “strongly favorable” response (Georgia at 41 percent), but this result was largely hidden because the Friedman reports combined “strongly favorable” and “somewhat favorable” responses to present a more positive picture.

On the other hand, 58 percent of the Maryland interviewees were either strongly or somewhat opposed to choice, but this finding was overlooked in the write-up.

•  Knowledge and coaching. Knowledge of the survey issues also is a problem for education polling. Terms such as vouchers and tax credits may not be fully understood by the average voter reached by telephone. If the poll-taker gives a short explanation to help the respondent, this can influence the results. For instance, consider the bias if a poll on ability grouping offered either of the following two definitions: “Targeting appropriate curriculum to students with different skill levels” versus “Providing a less-challenging curriculum to students believed to be less able.”

Selective Usage
Sound polling can be valuable to school administrators and other educational policymakers. Local polls, for instance, can help gauge community satisfaction and hone school goals, and the annual Phi Delta Kappa/Gallup poll can yield rich conversations at the board table.

But advocacy polls should be examined carefully and used sparingly, if at all. Otherwise, you may find yourself the subject of an ill-founded policy decision or an embarrassing headline.

William Mathis, a retired superintendent, is managing director of the Education and the Public Interest Center at the University of Colorado, Boulder. E-mail: wmathis@sover.net