Features

Technology Solutions for Testing

The computer’s greatest use is in measuring student progress quickly

By Allan Olson

The school board at the East Side Union High School District in San Jose, Calif., has a narrow focus: growth. And with a high percentage of its 24,000 students at risk or transient, it doesn’t have time to wait for test results.

Dan Ordaz, the assistant superintendent for instructional services at the district, says, “Our board’s insistence on growth was the driving force behind our move toward a new technology-based assessment tool.”

East Side is emphasizing academic growth as the basis for major decisions, such as whether to implement a computer-based education program or assessment system. Technology is merely a tool to gather and evaluate student progress more quickly. Computers offer several benefits, but what’s more important is whether the tool provides what you need to inform your decisions.

Peter Hendrickson, a central-office administrator with the Evergreen Public Schools in Vancouver, Wash., agrees. Like many in his role, he feels technology should not be the sole basis for selecting a test. “A need for information should drive the pursuit for applications,” he says.

As assessment director at Evergreen and formerly at the Centralia, Wash., district, Hendrickson has presided over many exams. “Students take so many tests that we don’t like to add a new one unless we know it will provide high-quality data that will supplement or replace our existing tests,” he says.

What Needs?

Technology solutions for testing run the gamut from online coursework with built-in assessment to electronic systems for scoring written essays. There are standalone systems, designed for one or two students at a time, and network solutions, where entire classes can take the same test at the same time. In addition, there are Internet-delivered tests, which students take directly online, and Internet-enabled tests, which students take on network-connected computers while student data, test items and scoring information are transmitted online from the testing organization.

With all the options, selecting a computerized test can be confusing. But when you focus on what you need and how quickly you need it instead of trying to fit the latest technology into your assessment program, the decision process is simplified.

The need in the East Side Union High School District was clearly to measure growth. Many students there are recent immigrants and have poor English language skills. Many have been promoted despite significant academic shortfalls. “It’s easy to see that our students need extra guidance to make progress,” Ordaz says, “so we needed to be able to find out exactly where to focus our efforts.”

That need is common among educators. A measure that identifies only the status of a child, or the average status of a group, cannot tell you whether a program, school or district is effective with those children. More valuable is an assessment tool that provides information about what children need in order to progress. Also important is information about which instructional methods are working.

Growth measures are the only type of test that provides that level of detail. A growth measure is a test, administered at least annually and often more frequently, that gauges effectiveness: the amount of change we produce in individual students or classes.

The assessments included in many computerized learning programs are similar to growth measures. They monitor improvement. As the student moves through the material, the computer evaluates and keeps track of that student’s progress for that subject.

That type of test is good in certain curricular areas or grade levels, but to improve learning across the board, a more global measure is needed. A custom-built test developed by a school district is one way to obtain this information. Another is to use a test such as the Measures of Academic Progress, which the Northwest Evaluation Association has designed.

Selecting the Test

Ultimately, your goal should be to pick a test that aligns with the curriculum or local and state content standards so you can see growth trends toward these benchmarks. Ordaz says the East Side Union High School District uses its growth measure to gauge progress toward its goals and expects to use it to predict how students will do on the California high school exit exam.

“Now we have something much more objective than teacher hunches to help us prepare our students for graduation,” he says.

Traditional standardized tests can be used to some extent for this purpose, provided a district uses the scale scores and administers the tests annually. However, using them in that way can be very time consuming, especially considering there are better options.

Finding a good growth measure, therefore, should be the first priority. But a computerized growth measure will provide even more benefits. Computerized tests offer several advantages: they are relatively simple to install and use, easy to administer and score, and they reduce test-taking time, speed reporting and provide built-in security.

On-Demand Testing

Another important benefit available through technology is the computerized adaptive test. This test adjusts items based on a student’s answer in “real time.” For example, when a student is presented with a difficult item and gets the answer wrong, the next item presented is easier. When a student answers correctly, more difficult items are presented. The test adjusts to keep the student appropriately challenged.
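The up-or-down adjustment described above can be sketched in a few lines of code. This is only an illustration: the item pool, difficulty values, step size and selection rule below are invented for the example and are not any testing organization’s actual algorithm.

```python
# Illustrative sketch of an adaptive item-selection loop.
# All difficulty values and the step size are hypothetical.

def run_adaptive_test(items, answer_fn, start_difficulty=50, step=5, length=5):
    """Present `length` items, moving up in difficulty after a correct
    answer and down after an incorrect one."""
    difficulty = start_difficulty
    asked = []
    for _ in range(length):
        # Pick the unused item closest to the current target difficulty.
        item = min((i for i in items if i not in asked),
                   key=lambda i: abs(i["difficulty"] - difficulty))
        asked.append(item)
        if answer_fn(item):      # correct -> harder next item
            difficulty += step
        else:                    # incorrect -> easier next item
            difficulty -= step
    return difficulty            # final level approximates achievement

items = [{"id": n, "difficulty": d} for n, d in enumerate(range(30, 75, 5))]
# A simulated student who answers anything at difficulty 55 or below:
final = run_adaptive_test(items, lambda item: item["difficulty"] <= 55)
```

Run on this simulated student, the loop oscillates around the difficulty level the student can handle, which is exactly the “appropriately challenged” behavior the paragraph above describes.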

This solves one of the biggest drawbacks to traditional standardized tests—inaccuracy. Students who aren’t challenged may become bored, and those unable to do well on the exam may simply be frustrated and give up. In both cases, the results aren’t an accurate measurement of their achievement because students aren’t engaged. Also, the more individualized the test items, the greater the degree of accuracy in scoring results.

Virgil Mabrey, a veteran school board member at the 3,500-student South Madison Community School Corp. in Pendleton, Ind., says the adaptability of the district’s computerized test is a real step forward in its ongoing improvement process.

Computerized tests also enable on-demand testing. Ordaz says East Side tests students within hours of their arrival to show teachers their areas of strength and need.

On-demand testing is valuable primarily because computerized tests provide almost instant scoring. In most cases, results are available immediately and reports are generated within two days. Typical standardized tests take six weeks or longer for scores to be returned.

Quick scoring enables educators to make immediate instructional adjustments. Teachers can review the scores with students while they still remember the relationship between the score and the test.

Another benefit computers offer is the capacity to disaggregate data locally when needed. The higher the quality of the data, the greater this benefit. Even when paper-and-pencil tests are used, high-quality data can be entered into computers and evaluated through statistical and other programs.
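As a simple illustration of local disaggregation, scores entered into a computer can be grouped and summarized by any student characteristic. The records and field names below are hypothetical, made up for the example.

```python
# Hypothetical sketch: disaggregating locally stored test scores
# by a student characteristic such as gender or grade level.
from collections import defaultdict

records = [
    {"student": "A", "grade": 3, "gender": "F", "score": 212},
    {"student": "B", "grade": 3, "gender": "M", "score": 198},
    {"student": "C", "grade": 3, "gender": "F", "score": 205},
    {"student": "D", "grade": 3, "gender": "M", "score": 214},
]

def disaggregate(records, factor):
    """Average scores within each subgroup defined by `factor`."""
    groups = defaultdict(list)
    for r in records:
        groups[r[factor]].append(r["score"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

by_gender = disaggregate(records, "gender")
```

The same function works for any factor present in the records—age, socioeconomic background or other delineations—which is the flexibility the paragraph above points to.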

In more than 15 years of experience with computer-based assessment, I’ve heard few complaints. Perhaps the biggest downside with adaptive testing, in particular, is that students can’t return to questions they already answered. Another is that not all students are comfortable using computers. However, our research has shown that computer-based testing does not adversely affect test scores.

In fact, since the majority of students today are comfortable with technology, computer-based testing is providing one of the best benefits of all: student satisfaction. A high school freshman in Indiana who recently took electronic NWEA growth tests in reading, mathematics and language usage says the tests were “more effective and efficient ... and were rather fun, more relaxed and less stressful than other ... tests.”

Data Is Key

While computerized tests all share some common benefits, they do have a key differentiator: data quality. Many educators today think most tests are not very valuable for gaining information about students. That’s because many tests don’t align with local content, don’t have high-quality or longitudinal growth scales, and don’t lead to useful reports for teachers, schools and districts, regardless of whether the tests are technology-based.

These educators might be tempted to select a computerized test solely on the basis of the benefits technology adds. That would be a mistake, because computerized tests are available today (outside of those included with electronic learning programs) that provide valuable data. The key is matching the test with the assessment purposes.

“At its most basic level, assessment is a collection of different measures used for collecting data to gauge student progress toward predetermined goals,” Jim Tilghman, principal at Wayne Center Elementary School in Kendallville, Ind., says. “The better matched an assessment tool is toward meeting those goals, the better the data we will get from it.”

When the Kendallville district evaluated an assessment system, Tilghman says, “our goal wasn’t to eliminate all our tests and replace them with one ultimate solution. It was to find a tool that met our instructional needs.”

The district chose a supplemental computerized test that aligns with state and local goals and provides specifics on individual student learning to guide improvement efforts. It is also a reliable predictor of results on Indiana’s state test.

“Our data enable us to diagnose achievement trends by subject, across the district and the state,” Tilghman says. “They are specific enough to guide instructional changes at the classroom level or provide individual student help and broad enough to be used for widespread curriculum reform or accountability.”

The best quality system will provide this level of detail. Reports should detail normative data, as well as normal growth for children with different characteristics. Schools should have the capacity to disaggregate data based on factors such as age, gender, socioeconomic background or other delineations that could have an impact on learning.

Measures, whether computerized or standard paper-and-pencil systems, should provide data that ensure we are focused on children, not grade-level content. They should help us answer the important questions in education. These questions include: Are the top 20 percent of a school’s high-performing students growing rapidly, and are the lowest-achieving students growing rapidly enough to meet exit requirements? Where on the growth scale should a student entering 3rd grade be? What do we have to do to help that student succeed each year?

Testing of the Future

Having the right data is just the beginning. Once we can measure individual growth across time, we can ask the tough questions about what programs will provide the greatest learning potential. We will be able to assess why some students do better and then create the best possible learning environment for each one.

Technology also opens the door to higher-quality, more complex tests, such as those with constructed-response questions or tests that rely on multimedia capabilities to present questions in nontraditional formats.

With those tests and better data, we can objectively study different teaching strategies such as what combination of whole language and phonics instruction is most effective. In addition, technology can link that information to a national electronic database of assessment and other data. When we can examine a wide array of student growth and learning questions, the possibilities for improvement will be virtually endless.

Imagine a world where schools can run the numbers just like a well-run business before deploying a new reading system or hiring new teachers. The very culture of our schools could change.

I’ve seen small examples of this happening already as educators examine the data. For example, an Idaho district evaluated three different mathematics programs to see which was the best for them. After two years of using them, the district leaders looked at the data. Two produced growth. One didn’t. Was the lack of growth for the third program due to an inadequate curriculum or poor staff development? The data gave them the answer.

Every time educators test a new program, there is the potential that students will miss an opportunity for learning. The more data we have, the fewer opportunities we will miss. Quality assessment data will take us there. Computers will make it possible faster.

Allan Olson is the president and executive director of the Northwest Evaluation Association, 12909 S.W. 68th Parkway, Ste. 400, Portland, OR 97223. E-mail: allan@nwea.org