Balancing State and Local Assessments

A district's duty to meet the needs of all students through testing

By Stanley Rabinowitz

Paul Koehler has served on, and survived, two fronts of the "assessment wars."

As an associate superintendent in the Arizona Department of Education, Koehler presided over the development and implementation of a far-reaching statewide student assessment program. In his next job, as superintendent of the Peoria, Ariz., Unified School District, he not only had to administer the state tests he had helped create, he also had to build a local assessment system to serve the more specific needs of his own district. That experience seeded his understanding of the ideal relationship between state and local assessment.

From the vantage point of his current post as policy director for WestEd, an education research organization, Koehler observes that when it comes to testing, "neither the state nor the local district fully appreciates the pressures and responsibilities the other faces.

"But when the roles and limitations of each are fully understood," he adds, "it becomes clear that the state and local assessment systems don't have to be at odds with each other. Nor should they try to replicate each other. Instead, we should begin to think about building systems that complement each other."

Never has this concept been more important. Students across America are being tested at unprecedented rates, due in large part to a proliferation of state-developed and administered assessments. Forty-eight of the 50 states have adopted some form of statewide assessment program. Unprecedented numbers of students now must pass tests for promotion or high school graduation. For teachers, principals and district administrators, state assessment results can trigger accountability-based rewards and sanctions.

A Local Rationale

The recent proliferation and expansion of statewide assessment programs raises an important question about the role of, and need for, local assessment programs: Are they still relevant?

The answer is clearly "yes" because effective local assessment is essential to improved student learning.

Our belief at WestEd, based on both research and experience working with states and local districts nationwide, is that locally developed and administered assessment programs have a unique capacity to provide diagnostic information that, when understood and used effectively, has an immediate impact on classroom practice.

"Our local assessment program allowed us to target instruction in a more timely and detailed manner than the state test," says Janice Florey, test director for the Douglas County, Nev., School District. "Principals and teachers began using these data to track student growth, identify difficult content strands and pinpoint instructional strengths and weaknesses."

So why don't state tests provide similar information? Simply put, they are not designed to do so. As statewide assessment programs focus increasingly on high-stakes student and school accountability concerns, they must rely on more narrow and conservative assessment methods, primarily multiple-choice tests. The strong suit of these instruments is their ability, in a valid, reliable and efficient manner, to reveal broad patterns of relative strengths and weaknesses across large groups of students. Such information can serve as an early warning system, pointing to content areas, schools, student groups and even individual students warranting greater attention.

What such statewide tests cannot yield is the detailed information necessary to target instruction for individual students. This leaves a clear and essential role for local assessment: developing diagnostic information about what students do well, where they are having difficulty and how the instructional program might be adjusted to address their specific needs.

Local assessment programs have greater potential for generating this kind of complex information largely because they are not bound by the same technical and political constraints as state-level programs. As a result, they can more realistically incorporate innovative assessment methods, such as teacher observations, portfolios and performance events, which are able to generate more specific and classroom-relevant information about the strengths and weaknesses of individual students.

Equally important, they can be maximally aligned to local content standards and to the community's values. Cheryl Milner, former director of assessment and staff development for the San Mateo/Foster City, Calif., schools, describes a process in which the entire community was invited to review results both from state standardized tests (Stanford 9) and from locally developed assessments.

"We were able to educate parents about the strengths and weaknesses of each type of assessment-how one instrument might be able to give you this kind of information about student and school performance but not provide other key data," Milner explains.

Despite the state's growing assessment and accountability role, community members ended up fighting for continuation of the complementary local assessment program.

A Model's Attributes

Effective and efficient local assessment programs don't duplicate statewide efforts. Instead, they are designed to yield information that the state system can't provide. In doing so, they are responsive to the needs of local constituencies, including students, parents, teachers, administrators and the community at large. In building or revising a local assessment program, local administrators and teachers, working together, should ensure that the system has the following attributes:

Link to state and local content standards.

Florey, the testing director in Douglas County, Nev., reports that a major selling point for teachers and principals in her district has been the ability of the local assessment program to predict performance on state-mandated tests. Many local communities also value content or skills that are missing from state standards and therefore missing on the state test.

For instance, a local district might want to develop and address standards identified as important to local industry, such as a particular writing style or approach to teamwork. Thus, district-developed curriculum and assessments might reflect different content, emphases or performance levels than are embodied in the state standards and assessments.

In its recently developed state content standards, Nevada actually designates those standards most appropriately assessed at the state level-in the state graduation test and other required statewide tests-and those best addressed locally because they must be assessed by more innovative methods.

Provide information valued at the local level.

Local assessments should yield detailed diagnostic information for each child. Here, districts or schools might choose one of two approaches. One option is to administer more-detailed assessments to all students with the intent of building a tailored educational plan for each one.

The other option is to focus diagnostic attention on students whose performance has been identified as below standard by the state test and/or by local indicators, concentrating both assessment and subsequent intervention resources on this smaller pool.

The latter approach is more cost effective because it takes advantage of efficient, global measures to identify the most pressing needs, allowing schools to focus their own, more limited resources on the student population most in need of them.

Support teaching and learning.

For reasons noted above, large-scale, system-monitoring assessments at the state level provide limited information to classroom teachers. In fact, they can result in a narrowing of instruction if teachers focus too much on raising test scores. And given that the format of most state tests is largely multiple choice, teachers, in their attempt to prepare students for the test, may give less instructional attention to certain higher-order skills, such as conceptual understanding.

Free from some of the constraints that necessarily narrow a state-level program, a local assessment program has greater potential to promote more effective teaching and learning. It can do so by using performance-based assessment tools, such as projects, demonstrations, journals, students' self-evaluations and/or portfolios. These alternative assessment tools support greater development of students' metacognitive abilities (problem-solving, critical reasoning, application of knowledge in real-world contexts).

As protests against high-stakes tests grow, from California to Massachusetts and at many points in between, the ability of schools to augment limited state assessment data will become increasingly important.

Devising Local Systems

What follows is a brief overview of the steps that ensure the most efficient and effective development and implementation of local assessments. These steps are necessary regardless of the instruments chosen and regardless of whether the school or district decides to develop its own assessments or use or adapt existing tests.

As a general rule, the more innovative the program, the more time and effort must be committed to the process. Schools should plan on 12 to 18 months to develop and pilot potential assessments before fully implementing them. In many cases, it might be best to stagger the development of program components across several content areas and school years rather than attempt to implement all components at once.

• No. 1: Identify and prioritize needs and goals.

The needs that the local assessment system is intended to address and the expected outcomes should be identified as early as possible. Only then can staff decide what combination of assessment instruments is appropriate.

In making that decision, it's important to consider the concept of value added: Is the proposed assessment worth the time and effort of students and teachers? Is there another less costly way of getting the same information? How would this assessment work contribute to raising the achievement of all students, particularly those most at risk?

Having considered these questions, policymakers then need to meet with and gather the support of constituencies within and beyond the school walls (teachers, parents, business leaders). Most important at this point is the development of a process by which decisions will be made and resources found and allocated. Lead staff must be identified, trained and empowered.

• No. 2: Meet with state assessment officials.

Before investing in a new assessment system, local staff should meet with their counterparts at the state level who deal with assessment policy and technical issues. This commonly overlooked step can yield several advantages.

First, a thorough understanding of the state program, including its future directions, can ensure that the local program is complementary, not duplicative. Next, state officials might be able to point to other local agencies that have embarked on similar development activities. Finally, the state might be able to allocate technical staff and other resources to assist in the local effort.

Budget Constraints

• No. 3: Identify costs and resources.

Local development takes time and money. Budgets need to be developed, and outside sources of funding, such as businesses and foundations, identified in case additional monies are required. Include in your anticipated costs those associated with shifting staff from other activities. External technical consultants also might be needed. Adopting or adapting existing testing instruments can yield substantial savings.

In some instances, policymakers must decide whether ongoing, repetitive tasks, such as scoring and reporting, should be an internal function or contracted out. If other schools or districts have similar goals and plans, they can become excellent resources. Creating a consortium of like-minded districts can add to savings and efficiencies by allowing them to pool talent and resources.

A well-developed plan will offer a realistic estimate of the human and fiscal resources needed and how they should be allocated.

• No. 4: Convene development teams.

While existing instruments may be available, chances are that some additional development will be necessary. In almost every case, the use of development teams, provided with a proper charge and training, will improve the final product, as compared to the results of an individual working alone.

The use of consultants familiar with the test-development process can be invaluable at this point. Teams should consist of teachers, administrators and, when appropriate, parents and other community members. This makeup will result in more valid tasks and in broader support for their implementation.

• No. 5: Provide necessary professional development.

The professional development needs of teachers expected to implement the new system must be considered in the initial planning. A complex system that no one can implement is doomed.

Professional development activities fall into four general categories: (1) philosophy and goals of the local assessment system; (2) how to teach consistent with that philosophy; (3) how to administer the actual assessments, including scoring; and (4) how to interpret results for teachers, students, parents and administrators.

Training might need to be repeated over time to reach newly hired staff and to refresh the knowledge of existing teachers and administrators.

Trial Runs

• No. 6: Pilot tasks and reports.

All new assessment tasks must undergo pilot testing. A dress rehearsal will ensure that tasks work as expected and that teachers, students and support staff are ready for the new expectations.

Piloting also can identify specific content that teachers might have thought they were teaching well but for which initial assessment results show otherwise. This information can lead to changes in curriculum and/or instruction. Sample reports of assessment results need to be tried out to ensure that teachers, students, parents, administrators and the public understand all the information the local assessment can provide.

• No. 7: Revise tasks based on pilot results.

Invariably, glitches occur. Some assessment tasks may take longer to administer than expected or not work at all. Others may not be equally suitable for all segments of the student population, such as low-performing students.

Revision time, often substantial, must be built into the implementation schedule.

• No. 8: Implement and monitor.

Over time, the new system should run increasingly smoothly. Indicators of success should be developed and regularly monitored throughout development and implementation.

Complementary Systems

The above process can be complex. But careful adherence to it can help ensure that a local assessment program complements its state counterpart in goals, focus, approach and, most importantly, results.

Properly developed and implemented, a local system can yield truly valuable information about student learning: information that can guide instruction and program development, ultimately resulting in higher achievement. And when it complements the state system, a local assessment program can yield data that support reform efforts without overburdening students, teachers and the education system in which they operate.

Stanley Rabinowitz is co-director of assessment and standards development services at WestEd, 730 Harrison St., San Francisco, CA 94107. E-mail: srabino@WestEd.org