It may sound preposterous, but school systems are soliciting feedback from students at all grade levels using formal protocols
BY SCOTT LaFEE
Ron Ferguson, a professor with the Harvard Graduate School of Education, is a leading authority on surveying students to evaluate teachers.
No one spends more time watching teachers at work than their students, so it logically follows no one is in a better position to evaluate their performance.
“Students know what’s working and not working for them in terms of learning at school,” says Dayna Scott, executive director of Project Voyce, an education advocacy group working in Denver’s inner-city schools. “It seems so simple and intuitive. Students are the clients. They are the reason that the education system exists. Why not ask them for input?”
A dozen years ago, Harvard University economist Ronald Ferguson did just that, launching early, seminal research using classroom-level surveys to measure student engagement and classroom learning conditions. Ferguson was focused on narrowing racial achievement gaps and improving instruction and did not envision the use of the surveys in the context of teacher evaluation. His research became the basis of the Tripod Project, which uses surveys for professional assessment.
At the time, the idea was considered extraordinary. Skeptics said students were too young, immature or erratic to fairly and reliably evaluate their instructors, especially if high-stakes issues such as pay and promotion were on the line. What possible actionable insights could a 3rd-grader offer about effective teaching? Yet Ferguson’s surveys proved effective at capturing differences between classrooms in student engagement and learning conditions.
As it turns out, the answer is quite a bit, according to Ferguson and others. His research and the subsequent development of a finely tuned student perception survey have helped lay the foundation for a rising movement to systematically incorporate student feedback into the formal teacher evaluation process.
“A really good student survey can measure what you want to measure,” says Ferguson, a senior lecturer in education and public policy at the Harvard Graduate School of Education and the Harvard Kennedy School. “It can reveal what’s happening inside classrooms. I’m not sure there’s a better way to calibrate the effectiveness of teachers.”
Accountability is a mantra among education reformers, nowhere more so than in assessing teacher quality, where traditional tools like classroom visits by principals and informal hallway conversations have lately been combined with standardized test scores and other quantifiable measures of student growth.
No one thinks these tools are wholly sufficient, least of all teachers who say the nature and complexity of what they do cannot be fully captured by occasional observers or accrued test-score data.
Ferguson began his research in the early 2000s at a small Ohio school district confounded by uneven student achievement. The usual assessment tools hadn’t resolved the puzzle. Ferguson decided to ask students through anonymized surveys about what was happening in their classrooms.
His findings, described in scholarly papers, lectures and conferences, were revealing. Students took note of teachers who seemed to care, who made them work hard but were smart and fair. They castigated teachers who couldn’t explain lessons, who were arbitrary with rules or who appeared to wish they were somewhere else altogether.
Regardless of race, socioeconomic status and other divergent demographics, Ferguson found the students’ answers to be serious and remarkably consistent. They recognized good – and lousy – teaching, and they responded accordingly.
“Project Voyce conducted its first student perception surveys four years ago in an 800-student high school with a 98 percent free and reduced lunch Hispanic student body,” says Brian Barbaugh, the group’s co-founder. “There were more than 1,200 open-ended questions in which students were free to ‘talk trash’ about their teachers. Out of that total, there were exactly zero trash talk responses. Lesson learned: If you give students buy-in, give them the respect that is often missing, they'll respond with respect.”
In 2009, the Bill & Melinda Gates Foundation funded a massive project called the Measures of Effective Teaching, or MET, which studied 3,000 volunteer teachers in seven cities. Among the project elements was a survey of tens of thousands of students asking them about their educational experiences, comparing those answers to test scores and other measures of teacher effectiveness.
The MET researchers concluded that students were better than trained adult observers at evaluating teachers. Their perceptions clearly identified teacher strengths and areas for improvement, and reflected the values of the teacher. Equally important, student perceptions had predictive validity. They forecast with reliable consistency how students would fare on standardized tests and other measures of achievement.
Published in September 2012, the MET study has strongly buttressed the push for student feedback.
Scores of school districts, mostly large, have launched high-profile pilots or programs to survey student perceptions, albeit with limited application and aspiration. No one yet is making student feedback a significant factor in determining high-stakes issues.
“I think it’s something that we have to introduce into the process, initially with low stakes, so that teachers can see what the data looks like and see what they think of it and begin to trust it,” said Shael Polakow-Suransky, chief academic officer of the New York City Department of Education, in an interview with gothamschools.org, echoing a broad sense of caution among administrators.
New York City’s interest is driven, at least in part, by a 2010 law overhauling teacher evaluation procedures. State advisory groups also are driving change in places like Rhode Island and Colorado while the push elsewhere is coming from local district groups (in Utah and New Jersey) or activists (in Massachusetts).
A question all parties to the process want answered: What guarantees that student feedback will be meaningful and useful? The MET study (accessible at www.metproject.org) identified four basic requirements:
• Measure what matters. That is, the survey questions should reflect the theory of instruction that defines expectations for teachers in that school and district;
• Have the tools and resources to ensure accuracy of results;
• Develop methods that ensure reasonably consistent results; and
• Support improvement of both teaching quality and student achievement based upon those results.
The MET study focused on Ferguson’s research and his product, the Tripod survey, which is perhaps the most widely used student sampling tool. Others have since emerged, among them YouthTruth, My Student Survey and iKnowMyClass.
While some districts have created their own surveys or adapted existing ones to their needs, Ferguson believes most will ultimately opt for an experienced commercial or university-based provider.
“It takes years to develop a really effective survey,” he says, noting the current Tripod survey is in its 16th iteration. “It’s a lot more complicated than it looks, more difficult than doing something like administering state exams. You have to match teachers and students. You have to make sure surveys are done in a way that’s not totally disruptive, that’s conducted confidentially and effectively, in the right classrooms at the right time with the right kids.”
School Administrator asked administrators in three school districts that use or plan to use student surveys to talk about their experiences.
Boston Public Schools
Ross Wilson, assistant superintendent for teacher and leadership effectiveness in the Boston Public Schools, meets with staff to discuss the district's student evaluation survey.
Student surveying began in Boston in 2006, initiated by a citywide group of elected high school student leaders called the Boston Student Advisory Council. They wanted a voice in teacher evaluations.
Strong resistance by teachers and others soon prompted the student council to switch to a more modest, voluntary process in which teachers could tweak their work based on student observations only they saw.
At the same time, the student leaders waged a campaign of consultation and collaboration with key stakeholders, including the teachers’ union, to ultimately implement mandatory student feedback. That effort was slow and deliberate, according to Ross Wilson, Boston’s assistant superintendent for teacher and leadership effectiveness.
“We all worked on the policy, with multiple committees, for several years,” he explains. “We designed our own survey so that everybody, even opponents, would be engaged and challenged. There was a lot of constructive feedback.”
In 2011, a state task force adopted a new teacher evaluation framework. Among the multiple metrics were mandatory student and parent feedback. All school districts were expected to implement the new teacher performance standards by the 2013-14 school year.
While the state has promised assistance to all of Massachusetts’ 350 school districts, it’s not clear how well that effort is going. Wilson says no one-size-fits-all approach exists. Progress is being made in his 58,300-student district. Two years ago, with plenty of informal feedback experience in their back pocket, the Boston Public Schools conducted a pilot program using the Tripod survey to evaluate some high school teachers.
“We started with just high school students and the information just going to the teachers,” Wilson says. “Now we’re moving to K-8. We’ve gotten some good results, but there are a lot of logistical and practical challenges, such as how we deliver the survey fairly and effectively. We hope to do it twice a year, in October or November and then again in May. But that means taking time away from other classroom activities. There is always some give and take.”
The oft-voiced concern that students, particularly the very young, will not understand the survey or take it seriously is less worrisome. Age-appropriate questions and smart analyses can overcome those issues, Wilson says.
“I was surprised last year by the pilot results. Kids from all grades took the survey and they all took it seriously. Some critics believe some students will be vindictive, but we found quite the opposite,” he adds. “We learned a lot about what was happening in classrooms and, more broadly, in schools. We learned how teachers engage their students and how students themselves think about the learning process and what it means to them.”
Anchorage School District
Almost two decades ago, the Alaska legislature passed a law requiring all school districts to “provide an opportunity” for students and parents to participate in teacher evaluations. The mandate was embedded in the state’s school performance standards but with wide latitude for compliance.
The 50,000-student Anchorage School District, the largest in the state, opted to use surveys. Once a year, on a schedule established by the district, students in grades 3 through 12 are offered the chance to take a classroom survey. Participation is voluntary. It can be done with paper-and-pencil or completed online.
The choice of format rests with either a school administrator or an individual teacher. When a class completes a survey, a student collects the paper forms from classmates and delivers them to the principal’s office. The survey is confidential and not personally identifiable. The data are processed by the district and a report is given to the teacher’s principal or supervisor for review.
The same survey form is available online to students, parents, community members and others for most of the year. The survey link is opened in October and closed in May. The district promotes it in e-mails and school newsletters and at public meetings.
“We make it widely available, but there’s no organized, determined effort to ensure every student fills out a survey for every teacher,” says Todd Hess, the district’s chief human resources officer. “We went with the methodology we have, based on limited results with other methods. We want it to be convenient and available. We remind people that it’s there.”
Todd Hess (right), Anchorage School District's chief human resources officer, meets with Glen Nelson, executive director of elementary education, to discuss students' responses on a formal survey.
Perhaps not surprisingly, the response rate is low, Hess says. The survey is not a formal, weighted part of the district’s teacher evaluation process, which primarily emphasizes standardized measurements of student achievement and growth.
“Principals are expected to review the survey information and factor it into their conversations with teachers and in the annual evaluation process,” says Heidi Embley, executive director of communications for the Anchorage district.
In fact, the district uses the student surveys more as a measure of overall “climate and connectedness.” In grades 3-4, for example, questions focus on student feelings about relationships to others in school and measure their empathy and ability to think about the consequences of their actions. In older grades, they ask about respect for diversity, school involvement and leadership, educational expectations and school safety.
Hess believes the survey has proved its utility. Administrators do occasionally share findings with individual teachers when appropriate, but more often the survey is employed as a broader measure of a school’s or the district’s general atmosphere: Where do students say they are happiest or learning the most? Where do they worry about drugs, bullying or violence?
“We’re all aware of the importance of self-improvement – students, teachers, administrators,” Hess says. “We do a lot of things in a lot of different ways to promote that idea. Student input is one component. It needs to be considered. It’s another way to identify excellence and underperformance.”
Hess doesn’t expect the district or state to significantly change its current approach. The district will continue to solicit student feedback, but only as a request – and only in a general sort of way. “What we do works for us,” he says.
Pittsburgh Public Schools
Pennsylvania legislators in 2012 created a new rating system for teachers, to be implemented in the 2013-14 school year. A new system for rating principals and other non-teaching professionals will follow in 2014-15.
Under the new system, teacher evaluations will comprise three components: 50 percent classroom observation and feedback, 35 percent student growth measures and 15 percent student feedback. The student feedback element will be a K-12 survey, grade- and age-specific.
Samuel Franklin, executive director of the Office of Teacher Effectiveness in the 26,000-student Pittsburgh Public Schools, says the new approach supplants a poorly functioning teacher evaluation system. “Teachers worried about an over-reliance on infrequent classroom visits by administrators,” he says.
Pittsburgh decided to be proactive. Years before the state legislation, the district incorporated Ferguson’s Tripod survey into a new evaluation system with funding from the Gates Foundation. The result, called the Research-based Inclusive System of Evaluation, or RISE, has been tested in pilot programs since 2009 and repeatedly tweaked.
“Teachers made it clear that their impact in the classroom is much broader than just academic growth measures,” says Franklin, “so we’ve very carefully and slowly refined the survey to make it valid, useful and meaningful. We want a student survey that teachers want to use to become better teachers.”
He believes the effort is paying off. “The data is proving useful and stable,” he adds. Student feedback will be part of the official evaluation process in 2014, but its weight and influence have yet to be determined.
“I think a mistake districts make is in deciding how much of a percentage to give different components of the evaluation,” he said. “Nobody has any real experience with assigning a value to student feedback. It’s clear that it’s important. Studies prove it. But it will take time to get a feel for how to use it and how to more smartly use it.”
Scott LaFee is a health sciences writer at the University of California, San Diego. E-mail: firstname.lastname@example.org.