Feature

Seeing the Whole

Seven decision points when you plan to evaluate the effectiveness of a program in your district

BY SHARON S. SKEANS AND ROBERT G. SMITH

Remember the ancient parable from India about the blind men and the elephant? Each man, using his dominant sense of touch, examines different parts of the animal, such as the trunk, the tail, the leg and the ear. One is certain the object is a tree branch, another calls it a rope, and still others guess a pillar or a fan.

When they subsequently gather to compare perceptions, they are surprised to learn they are in total disagreement. The moral: Knowing in part may make a fine tale, but wisdom comes from seeing the whole.

Education communities are, at times, similar to these blind men. While school district leaders keep close watch on the areas, departments or buildings listed in their job descriptions, they rarely have time to pay attention to other components of the institution, nor are they asked to reflect on how their domain affects the domains of others.

 

A Whole View
A case in point was the use of computer labs in elementary schools in the Texas school district where we both worked. The principal and the on-site technology specialist methodically designed a schedule and monitored teachers' and students' use of the labs. From their perspective, rotating classes through the computer lab once a week appeared generous, considering the district’s limited technology resources.

Sharon Skeans, former language arts director in Spring, Texas, works as a staff development consultant, transferring research into practical classroom instruction.

When English language arts classes frequently canceled their lab time and ultimately withdrew from the rotation, neither the principal nor the specialist thought to inquire why. Fellow teachers of mathematics and science failed to alert the principal to these cancellations, as they now had additional opportunities to schedule hands-on computer time for their students.

Indeed, wisdom comes from seeing the whole. In the above scenario, our districtwide English language arts program evaluation uncovered why the teachers had given up their lab time: Their programmatic expectations for writing instruction, a multiple-day process, took precedence over the mechanical administration of once-a-week lab usage.

Of course, language arts teachers were eager for students to learn word processing skills to facilitate the art of writing. Yet they soon realized that time in the computer lab allowed students only to do initial prewriting or drafting. And if those stages were completed in the classroom before the scheduled lab time, student writers had time only to type in their drafts and still had to return to their classrooms for revision and editing. Who would enjoy writing if the process were so tedious?

Our program evaluation committee thus recommended to the administration that either the language arts classes be scheduled into labs for five consecutive days, even if only once every month, or computer workstations be added to classroom areas to provide sufficient time for students to write and publish electronically produced compositions.

The second suggestion was accepted. Building principals, technology specialists and language arts teachers were delighted. Equally important, the board of education appreciated a concrete example related to student learning to support the need for a bond election to upgrade technology.

 

Systematic Use
Educational evaluation in too many states and school districts, if done at all across an institution, has been reduced to crunching test scores in only the subject areas and grade levels tested. While measuring student achievement in reliable and valid ways is an important part of the mission of education, it is not enough. What’s also needed is a systematic process for evaluating the effectiveness of education programs, including how those programs affect other subject areas and the local resources used in instructional delivery. Analyzing test scores alone is like examining only the elephant’s trunk!

Robert Smith spent a dozen years as superintendent in Arlington, Va., before moving in 2009 to George Mason University as an associate professor of education.

In the past 20 years, we have engaged in systematic program evaluations in three different school systems (two in Texas and one in Virginia), from the perspective of various roles (director of an instructional program, assistant superintendent for curriculum and instruction, third-party consultant and superintendent). Our evaluations have been directed toward assessing programs across the curriculum. In all cases, the general purpose has been the systematic application of evaluation principles and procedures to allow program managers and governing boards to make better-informed instructional program decisions.

We describe below the purposes of school district program evaluation; the assumptions undergirding its application; and the steps followed in planning to conduct, analyze and report results. In doing so, we do not suggest they represent a cookbook of steps to be followed slavishly. Instead, we believe they portray principles of evaluation that school district personnel might find helpful if applied in ways that are consonant with local conditions.

Multiple Purposes
Program evaluation is ongoing in any healthy education institution. Informal, formative evaluation occurs at systemwide meetings, through classroom visitations and among building administrators and faculty during discussions to analyze achievement data and to implement curricula.

However, a more systematic and comprehensive review, or summative evaluation, is recommended for each subject area once every five to seven years. Scheduling all programs for review on a set cycle communicates to all stakeholders the seriousness of evaluation for the school system. It also tends to lessen the tendency to evaluate only those programs with mandated external testing and those around which political issues surface.

A formal, summative evaluation of an instructional program may be defined as a process by which data are periodically and systematically collected, analyzed and interpreted to draw conclusions and make judgments about current practices, as well as to frame recommendations for improvement.

Evaluating programs in this way achieves the following goals: (1) ensures that the school board’s policies and objectives are being addressed; (2) assesses the fidelity of implementation of district curricula, research-based instructional methodology and approved materials/resources; (3) reveals staff and student satisfaction and concerns; (4) facilitates sound decision making for future goals, activities and expenditures; and (5) most importantly, improves student learning.

Seven-Step Process
To complete a program evaluation, certain decision points before, during and after data collection are required. We focus here on a seven-step process in planning the evaluation:

Step 1: Complete a program discrepancy analysis. A program discrepancy analysis compiles written reflections in which the program director describes, first, the present status of the program; next, an ideal program; and finally, the discrepancy between the two. This document, written initially from the perspective of the current program director, becomes the starting point for discussion.

Sections about the current status might include: (1) a description of the program’s grade levels, course titles and ability levels; (2) external factors affecting the program’s implementation (e.g., federal and state mandates, changes in positions responsible for the program’s success, available instructional facilities and resources); (3) level of expertise of educators based on experience and previous staff development/training; (4) current curricula and resources reflecting research-based pedagogy; (5) available student achievement data from the past three years; and (6) a summary of ongoing topics under review by task forces, study teams and professional learning communities.

Next, a vision for an ideal program should be articulated. Using the above sections, describe what an ideal program would look like, considering those areas in which you can effect change. Last, summarize the current program’s strengths and areas needing improvement.

Step 2: Determine the extent and dimensions of the program evaluation. Based on identification of the program’s discrepancies, decide on the extent of the program evaluation, ranging from the entire program to particular grade spans or even specific curricular strands (e.g., mathematical problem solving or writing). The scope of the evaluation might be based on (1) the number of staff members involved in the program, (2) the grade levels or curricular strands in need of immediate attention and (3) whether a concurrent program evaluation affecting similar grade levels or stakeholders is under way or scheduled.

Step 3: Revisit, revise or draft a philosophy statement for the program to be evaluated. If there is no written philosophy statement for the instructional program, write one. If there is one and it hasn’t been reviewed in a while with the educators implementing and monitoring the program, dust it off and share it now. All teachers involved in delivering the program must agree on the program’s methodological and pedagogical bases, or the evaluation is hindered before data collection even begins.

Step 4: Determine the questions to be answered by the evaluation. The basic categories of questions to be answered might include:

  • What are the essential characteristics or attributes of the program to be evaluated? It is essential to know and to specify what defines the program.
  • To what degree are those essential characteristics or attributes carried out in the program? In the absence of this information, conclusions could be drawn about a program that was never implemented or implemented in ways that vitiated its effectiveness. 
        For example, one middle school program evaluation identified teacher think-alouds as an ideal way to model cognitive strategies (e.g., making connections, summarizing). Random classroom observations, while revealing evidence of think-alouds, indicated they were used to clarify classroom management procedures rather than to model critical and evaluative thinking.
        Additionally, having well-articulated program characteristics provides a gold mine of data helpful to assess the needs for program revision and for planning professional development.
  • To what degree are the goals and objectives of the program being achieved? The answers to this question should constitute the meat of the evaluation and should encompass, at the very least, results of measured student learning.
  • What are other results of the program, intended and unintended, positive and negative? Programs often exert effects other than those anticipated by program developers. 
        For example, a program evaluation in one school district discovered that a renewed emphasis on mathematics computation had paid off in gains in computation test scores but had been accompanied by a significant decline in conceptual understanding and application scores.

Step 5: Determine the types of data and the number and types of data-collection instruments to be used. Specific questions to be answered by the evaluation should guide your selection of data-collection instruments. Surveys are commonly used to report the perceptions of respondents. Multiple respondent groups (e.g., teachers, students, administrators, parents) yield more reliable results than sampling a single group of stakeholders and paint a clearer picture of the program’s strengths and areas needing improvement. 

Classroom observations represent another source of data that informs program reformation, particularly the fidelity of program implementation. Other data that may be considered include extant achievement data, achievement data acquired or reanalyzed for specific purposes of the evaluation, and results from focus group deliberations designed to solicit perceptions of stakeholders as an aid to understanding survey responses or as a vehicle to help develop initial survey questions.

Step 6: Decide who will be involved in the program evaluation process. List all the stakeholders of the program and who might aid in facilitating its evaluation — the committee charged with this evaluation task and noncommittee members who can assist in the design and production of data-collection instruments, in data gathering, in reviewing analysis for reporting and in producing the final evaluation document.

The size of the program evaluation committee depends on the size and complexity of the program to be evaluated and a considered judgment regarding the interplay among the involvement of stakeholders, the likely use of results, the integrity of the evaluation, and the resources available to fund involvement. Select members from all stakeholder groups. Consider using established advisory councils in the school system to serve as groups for reviewing and revising data-collection instruments and analysis reports.

Outside consultants might be used to facilitate the entire evaluation process or to conduct specific steps (focus groups, surveys, classroom observations, report writing). A third-party consultant also can suggest dramatic changes in programs. For example, in one district a long-used but ineffective practice, a phonics program for middle school students that was implemented before current staff members came aboard and that data suggested had no positive impact on student achievement, needed to be eliminated. That recommendation appeared in the final report.

Whoever is asked to participate in the evaluation process, remember to keep control of the interpretation of data and the recommendations for improvement in the hands of staff members, the people leading, implementing and thus charged with improving the program.

Step 7: Develop a program evaluation timeline. Program evaluations can be efficiently conducted during two to four semesters, depending upon the size of the program, the number of grade levels evaluated and the number of data-collection instruments used.

Modified Perceptions
Paradoxically, program evaluations take time, yet they save time and money when well planned and conducted. Findings identify what is working well, what needs revision and even what should be dropped. Subject-area leaders, administrators and teachers alike reshape their perceptions of the program by reflecting on its systemwide impact on student achievement.

Sharon Skeans is director of Houghton Mifflin Harcourt’s Texas professional development team. E-mail: skeans@consolidated.net. Robert Smith, a former superintendent, is a professor of education leadership at George Mason University in Fairfax, Va. He is the author of Gaining on the Gap: Changing Hearts, Minds, and Practice (co-published by Rowman & Littlefield Education and AASA).