Feature

Feedback From 360 Degrees: Client-Driven Evaluation of School Personnel

by Richard P. Manatt


Team evaluation, or 360-degree feedback, is so well established in American business and industry that it has become a recurring theme in Dilbert cartoons. Team evaluation means that an employee is evaluated by everyone who has contact with his or her work: supervisors, peers, clients, and the public.

This approach is attractive to schools for two reasons. First, student achievement is not improving under single-evaluator systems, and the data never seem adequate to hold anyone accountable. Second, conventional top-down evaluation results in every employee in a given job-title group being rated similarly. Stated another way, traditional evaluation of educators lacks the ability to sort; everyone ends up with high ratings.

The overarching purpose of performance evaluation is to improve performance year after year. It just doesn’t happen using the old, almost ceremonial approach.

Some public school districts have taken on this challenge by creating full-blown, 360-degree feedback for educators that is accurate and effective and requires little work on the part of the evaluatee. Done right, 360-degree feedback can be the keystone to school transformation efforts.

A five-year study of 360-degree feedback in the Hot Springs County School District in Thermopolis, Wyo., identified a 15 percent increase in achievement across all subjects measured by the SRA standardized tests. These gains occurred with no decline in teacher or principal morale, despite the ironclad accountability of 360-degree feedback.

Superintendents who have tried 360-degree feedback for their own performance evaluation have found that multiple data sets provide school board members with a more valid and reliable means of judging performance. The typical lament of board members who lack such information is "How can I rate this behavior when I don't have any information?"

Marred Evaluation

Feedback to teachers is especially important. Despite a decade of feverish activity during the 1980s to evaluate teachers with more precision, principals, working solo, could not do it with any real discrimination. Principals using feedback from parents and students consider it a powerful tool in their evaluations of teacher performance. Dissertations by Peter Price in 1990 and Joan Wilcox in 1995 at Iowa State University demonstrated that student feedback (1) serves as a proxy measure for students’ achievement gains and (2) stiffens the principals' resolve to do a more discriminating job of teacher evaluation.

The use of clinical supervision as teacher evaluation, with its implication of remote control of teaching, lost its allure in the 1980s. Teaching is so complex, interactive, and contingent that script tapes and time lines, with constructs developed by scholars, did not yield valid measures of practitioners' performance. Lee Shulman, in his final report for the Carnegie Corporation’s Board Certification Project in 1991, concluded that every method one can imagine for teacher performance evaluation is marred in a fundamental way. The solution, he argued, would be a judicious blend of assessment methods.

The School Improvement Model research team at Iowa State University's College of Education reached the same conclusion in the early 1980s. When conducting process/product research for large-scale projects in Minnesota, Iowa, and Texas, researchers clearly noted that most teacher evaluation models ignored the most important question: Do students learn? We concluded that multiple sources of data and different approaches were needed for different classes of employees. We also determined that producing feedback from all directions for teachers provided the same 360-degree feedback for principals and superintendents simply by aggregating the data.

The data sets include feedback from principals, peers, parents, and students, as well as self-reflection and student achievement gains. (See list.) When providing feedback for principals, building climate and teacher expectations are added to the mix. Principals’ feedback sets, in turn, provide feedback for superintendents and their cabinets.
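The roll-up described above, in which teacher-level feedback aggregates into building-level feedback for principals, amounts to simple averaging. The sketch below is an illustration only, not the School Improvement Model's actual software; the building names, scores, and 80-point scale are invented for the example.

```python
def roll_up(teacher_scores):
    """Aggregate per-teacher feedback averages into a building-level
    mean for the principal. teacher_scores maps a building name to a
    list of per-teacher average scores (hypothetical 80-point scale)."""
    return {building: sum(scores) / len(scores)
            for building, scores in teacher_scores.items()}

# Hypothetical per-teacher student-feedback averages, by building
scores = {"Lincoln Elementary": [68.0, 72.0, 70.0],
          "Jefferson Middle": [64.0, 66.0]}
print(roll_up(scores))  # {'Lincoln Elementary': 70.0, 'Jefferson Middle': 65.0}
```

The same averaging step, applied one level higher to building-level results, would yield a district-wide view for the superintendent's evaluation.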

Three Tracks

This new approach to teacher evaluation represents a sharp break with the "treat 'em all alike" custom of the past. The contemporary approach envisions three tracks in which all teachers are evaluated by 360-degree feedback.

* Track 1 is for rookies and consists of basic training with extraordinary resources devoted to working with beginning teachers.

* Track 2 is for a small number of experienced staff members who cannot or will not meet the school's performance standards. These teachers are placed on an assistance track.

* Track 3 contains the majority of teachers under the new scheme. These teachers are encouraged to set and pursue individual and group goals as members of professional development teams. When working in groups, the new model is intended to stimulate professional conversation. First, however, team members need feedback data to discuss. Ron Brandt, in his editor’s column, "Coaching and Collegiality," in the March 1996 issue of Educational Leadership, says that for most aspects of performance, helpful feedback is probably essential. Just ask the school’s basketball coach and choir director.

Data Sources

To use quality improvement language, appraisal of instruction requires that teachers listen to their customers, namely parents, students, and other teachers. Too often they listen only to their principals. Moreover, to set useful goals and measure progress, 360-degree feedback requires addressing the unspoken criterion of teacher performance: Do the students learn?

The School Improvement Model Team has answered this question for selected districts nationwide, from Monroe County, Fla., to Forest Grove, Ore. Not all districts use all data sources as they embark on 360-degree evaluation: some leave out peer feedback, while others decide against using parents’ feedback.

Three districts are applying 360-degree feedback to administrators’ evaluation, as well. They are Cave Creek, Ariz., Unified School District, Lincoln County School District in Kemmerer, Wyo., and Mesa, Ariz., Public Schools. Mesa and Cave Creek have career ladder programs supported by the state statutes. Lincoln County has 360-degree feedback because of its forward-looking staff, administration, and school board.

* Student feedback. Each year, teachers survey their classes with 20-question instruments that are designed to be age-grade specific (K-2, 3-5, 6-8 and 9-12). The questions center on preparation for teaching, instructional delivery, and student interest. The four instruments are articulated with each other to provide a common score of 80 points maximum. A special response mode was created for the K-2 pupils. A byproduct of scoring these student ratings has been the creation of a norm set with levels by grade and subject taught.
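As a concrete illustration of the arithmetic behind the 80-point common score, a 20-question instrument scored on a 0-4 scale per question sums to a maximum of 80. The 0-4 scale and the sample responses below are assumptions for illustration, not the actual SIM instrument.

```python
def score_survey(responses):
    """Sum one student's 20 responses (assumed 0-4 each) into a 0-80 score."""
    assert len(responses) == 20, "instrument has 20 questions"
    return sum(responses)

def class_mean(all_responses):
    """Average the per-student scores for one class, for comparison
    against a hypothetical grade/subject norm."""
    scores = [score_survey(r) for r in all_responses]
    return sum(scores) / len(scores)

# Hypothetical class of three students with uniform response patterns
responses = [[3] * 20, [4] * 20, [2] * 20]
print(class_mean(responses))  # 60.0
```

A class mean computed this way can then be placed against the norm set by grade and subject that the article describes.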

* Peer feedback. Teachers select an appropriate colleague to visit their classes and provide feedback on the same criteria that students and principals use. Peers are not expected to rate their colleagues on employee rules such as promptness of reports or punctuality.

* Self-evaluation. In order to stimulate self-reflection, teachers complete the 20 questions used for students, only in this case the questions are couched in an "I do this" format. Several research studies have suggested that teachers' self-ratings, while generally higher than other raters' scores, correlate more closely with student ratings than principals' ratings on the same questions do.

* Principal feedback. Principals' ratings of teachers are based on observations, interviews, work samples, and examination of progress toward goals set by the teacher. After a year of familiarization by teachers to build confidence, all of the 360-degree feedback shared with the teacher is provided to the principal for use in a "consideration" folder. Teachers give feedback to their principals through a school climate survey. These instruments are also normed for elementary, middle, and high schools.

Mary Ann Rogers, assistant superintendent in Lincoln County, Wyo., District No. 1, reports that this multiple data set methodology has a positive effect on the reliability and validity of principals' ratings.

* Parent feedback. At each parent-teacher conference session, parents are provided with a five-question report card to complete. Questions apply to the performance of the teacher and the entire school. The opportunity to submit their own evaluations has encouraged high parental attendance at such events, in some cases as high as 95 percent. Teachers using the report card are pleasantly surprised by the positive and supportive feedback from parents.

* Student achievement. The major component of the system is the report of student achievement gains for each class, subject, and section taught by the teacher. Criterion-referenced tests and authentic assessment are used in a pre-test/post-test format. The results are provided to teachers in a percentage-of-mastery report.

This data set required several years of curriculum renewal, alignment, and assessment to develop. Once completed, it's the ultimate 360-degree source.
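The percentage-of-mastery report described above can be computed directly from pre-test and post-test results. In this sketch, the 80-percent mastery cutoff and the score data are assumed values for illustration only.

```python
def percent_mastery(scores, cutoff=0.8):
    """Percentage of students at or above the mastery cutoff.
    The cutoff of 80 percent correct is an assumed value."""
    at_mastery = sum(1 for s in scores if s >= cutoff)
    return 100.0 * at_mastery / len(scores)

# Hypothetical proportion-correct scores for one class section
pre = [0.55, 0.70, 0.82, 0.60, 0.90]
post = [0.78, 0.85, 0.92, 0.81, 0.95]
gain = percent_mastery(post) - percent_mastery(pre)
print(percent_mastery(pre), percent_mastery(post), gain)  # 40.0 80.0 40.0
```

Reporting the pre/post pair for each class, subject, and section is what ties the feedback system to the question of whether students actually learn.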

* Professional growth goals. Each year teachers and administrators are asked to examine all of their 360-degree data sets and determine one or more professional goals that will contribute to improved performance (for themselves and/or their students) in the next cycle. Common sense dictates that the valleys of performance, not the peaks, be used for goal setting. Teachers' progress toward their goals is assessed by the principal; principals' progress is assessed by the superintendent, even in large school systems.

Improvement Plans

The data analysis and planning that precede setting improvement goals are the most important link in the team evaluation process. The cognitive dissonance between expected performance and actual performance creates the targets for improvement. The evaluator in charge of helping the employee set the professional growth plans must combine and assess the types of feedback information and compare this information to the intended outcomes.

Typically, an organization using 360-degree feedback will have desired outcomes embedded in its strategic planning goals and its site improvement plans. A common expectation for teachers is that student achievement will improve continuously over time.

Using templates called management action plans for administrators or project action plans for instructional staff, the coaching evaluator and the evaluatee will set one to three goals. The action plan will ask the following questions:

* What is to be accomplished? (the goal)

* How is it to be accomplished? (a series of short-range objectives)

* What resources are needed? (funds, materials, staff)

* When must the goal be completed? (a specific date usually within a year)

* How will accomplishment of the goal be measured? (via achievement results, client satisfaction, improved feedback, lower costs, etc.)

The School Improvement Model team that I direct studied 3,000 performance goals set by employees of five K-12 school organizations. We found that good accomplishment is more likely to occur when (1) the evaluatee truly believes that the goal is hooked to feedback data; (2) enough time is allowed (evaluators tend to be impatient); (3) a specific measurement of success is used; (4) deadlines are reasonable but enforced; and (5) goals are announced publicly.

The last item is likely to cause a problem if the school board or superintendent has set challenge goals in strategic planning that are unrealistic. Still, going public usually results in better goals. A more cautious approach is to use a validating committee that has oversight of goals and their accomplishment.

Skeptical Staff

When a school system undertakes such a deep and sweeping restructuring of its performance evaluation system, district leaders should expect to face many questions, some doubts, and a certain amount of skepticism.

Excellence comes from ever-improving quality, and quality means conforming to specifications. As many schools operate now, specifications are so broad that everyone gets to do their own thing. To provide accountability and shrink the wide achievement gaps between majority and minority students, teachers and administrators must do the "school's thing," i.e., teach the written curriculum.

Administrators in districts using 360-degree feedback are compassionate. They know that teachers must be convinced and supported. This understanding also extends to students, many of whom are not well-served by the present instructional delivery system. Seeking equity and excellence at the same time is likened sometimes to playing a piano while carrying it upstairs!

Needless to say, those of us involved in the School Improvement Model have heard objections in 20 years of working toward 360-degree feedback. Most teacher concerns center on student feedback and measuring student achievement, while principals typically worry about feedback from their staff. Their usual complaints are these:

* "Students will be unfair."

In projects spanning from 1973 to 1996, we have worked out the bugs. We have controlled for "I expect an A—I expect a C." "This is a required course—this is an elective course." "I like my teacher—I hate my teacher." "This is a morning class—this is an afternoon class." "I'm a kindergarten student—I'm a high school student." With thousands of salary dollars involved in career ladder districts, we never have had a grievance filed as a result of student feedback.

* "It's too much work!"

Existing systems based on bastardized versions of clinical supervision are too much work and yield too few results.

* "It's too much paperwork!"

Feedback for 360-degree evaluation is relatively quick and done once a year. Consider the many hours superintendents now spend in "shadow" visits and principals spend visiting classes with yellow legal pad in hand. The multiple data sets are byproducts of doing the various jobs well.

* "Teachers won't like this!"

Teachers and their unions are well aware of the dirty little secret of the clinical supervision approach to teacher evaluation. Lacking enough data to make accurate summative evaluations, principals rate everyone high. After Monroe County, Fla., established pre-tests and post-tests, the School Improvement Model consultant suggested that matrix sampling would be adequate for program assessment. The teachers insisted on a continuous testing of all students at all grade levels. That approach kept everyone accountable, including the students.

* "It costs too much!"

Up-to-date, client-driven feedback with sophisticated assessments and meaningful reporting to all stakeholders doesn't cost; it pays off in better performance among staff and students. What’s truly costly are students who continue to do poorly, angry parents, and a lack of accountability, with no one knowing how to better satisfy clients, what to teach better, what to teach again, or what never to teach at all. That is expensive!

Richard P. Manatt is Director, School Improvement Model Projects Office, Iowa State University, Ames, Iowa.

Where to Learn More

For additional information about moving from clinical supervision to a client-driven evaluation system, readers may want to consult the following:

* Educational Leadership, March 1996. This was a theme issue on "Improving Professional Performance."

* "The Changing Paradigm of Outcomes and Assessments" by Richard P. Manatt, International Journal of Educational Reform, January 1993.

* "Removing Barriers to Professional Growth" by Daniel Duke, Phi Delta Kappan, May 1993.

For sample instruments, norm group data, and assistance in data processing, contact School Improvement Model Projects Office, Attn: 360-degree consultant, College of Education, N225 Lagomarcino Hall, Iowa State University, Ames, Iowa 50011-3195, or call 515-294-5521.