Feature | Pages 18-23

A School Test Globally Administered

High schools now can measure themselves against global standards in math, science and reading, as well as on school-based influences of student success

BY JACK D. DALE

Since 2000, the Organisation for Economic Co-operation and Development has made the Program for International Student Assessment available to countries around the world. In 2012, 65 countries and economies participated in the triennial cycle of PISA. Each three-year cycle assesses a sample of students in math, science and reading.

OECD describes the purpose of PISA in this manner: “The Programme for International Student Assessment reviews the extent to which students near the end of compulsory education have acquired some of the knowledge and skills that are essential for full participation in modern society, particularly in mathematics, reading and science.”

Every three years, the PISA results are released, and every three years, educators across the world analyze those results and reflect on policy implications for their countries. Until now, the analysis and policy discussions have taken place only at the national level, not at the school or district level. But that is changing.

For the first time ever, individual schools across the world were able during the 2013-14 school year to participate in a school-based version of PISA, known alternatively as the OECD Test for Schools, the PISA-Based Test for Schools and PISA for Schools. In the United States, nearly 300 schools across 26 states and the District of Columbia volunteered to participate in the OECD Test for Schools in the past year.

At each participating school, a random sample of 15-year-olds was selected to take part in a matrix sampling of test questions covering math, science and reading. In addition, students participated in an OECD-designed student survey covering student perceptions of engagement, drive, self-beliefs, classroom management, relationships with teachers, attendance practices and school morale — all influencers of student success.

This summer, each school received results in the form of a 100-plus-page booklet that probably is more extensive than anything any school receives about student testing results from its state education department, national college preparation test or any other assessment. So why did these schools participate? What actionable information did schools receive, and what do they plan to do with the volume of comparative information? Will they choose to participate again? And can other schools participate in the future?

Why Participate?

While a great deal of time, energy and political capital has been expended on developing and now implementing the Common Core standards, interest is growing in how well schools and students compare internationally. Over the past decade, a frequent global comparison point has been PISA and how our schools in the United States compare to other nations in math, science and reading. For the first time ever, individual schools can participate in those global comparisons and are able to respond to community and business leaders’ questions about whether an individual school is globally competitive. After all, that is where our students will be participating — in the global marketplace.

As the superintendent of the Fairfax County, Va., Public Schools at the time, I knew we had to ensure our students were prepared for the global marketplace and that our community expected that level of performance. So when given the opportunity to determine how we stack up against top countries in other parts of the world, we jumped at the chance.

The results were revealing and, in some areas, surprising. So let’s look at some of those results and the lessons we gleaned from participating in the OECD Test for Schools.

Actionable Findings

As mentioned, each school’s results come in a 100-plus-page document bearing extensive details.

The report includes two major categories of results. The first has to do with the obvious student results in the content areas of math, science and reading. But even in this area, results are multifaceted and quite rich.

The second area of results cuts across multiple perspectives of the learning environment at your school. As all school leaders recognize, the quality of the learning environment plays a significant role in students’ achievement and problem-solving capacity. Yet this is not a typical area of test-score analysis.

Results of the learning environment survey are reported for each of the three content areas as well as for the school as a whole. Let’s look at each of these two areas in greater detail.

Jack Dale, while serving as superintendent in Fairfax County, Va., had students participate in an international exam that enabled easy comparisons.
Content Data

The first view of results describes what your students know and can do in math, science and reading, though from multiple perspectives. The first (and most commonly referenced) perspective is the school’s mean scores in the three subjects. These scores are readily compared to other countries’ mean scores on the most recent PISA assessment. Comparisons can be made to the distribution of mean scores of other schools in the United States, as well as distributions of mean scores from the top- and bottom-performing countries in the world. All of these comparisons are available for all three content areas.

A second mode of analysis is to compare your school to similar schools based on an OECD socioeconomic index that is internationally applicable and relatively similar to the U.S. measure of free and reduced-price lunch eligibility. (OECD likewise has found a significant correlation between performance and wealth within countries.) Each school then can see how well students perform on the assessment compared to schools with like levels of poverty and do so in each of the three content areas. Immediately, school administrators can discern whether their school outperformed expectations that would be predicted with a given level of poverty.

Another level of analysis of student performance is to review the distribution of student results across a rigorous 6-point rubric, again in each of the three content areas. Descriptions of these levels and the percentage of students at each level add to a rich analysis of the assessment results that goes well beyond mean scores. OECD spends some time discussing the percentage of students at risk of not being successful in the global marketplace — those scoring below Level 2 on the rubric — and the implications of such an outcome.

A second subset of rubric analysis examines the percentage of students achieving at the highest levels — a score of 5 or 6 on the rubric — and each school will see how it compares with schools globally. Schools have found a rich set of discussions emerging from how their student body is distributed across the 6-point rubric.

In some instances, schools with identical mean scores yet vastly different distributions across the rubric will engage in a rich analysis of the curricular offerings and instructional approaches that might influence that distribution. It also is in this distribution that the expectation of application of knowledge and skills in new situations is starkly revealed — an area where U.S. schools typically fall short in global comparisons.

The analysis of the content results also leads school leaders to examine the rigor of the taught content standards, the level of critical thinking and application that is expected of students, the role of interdisciplinary instruction, and the level of access to all these expectations. Are expectations intended for all students or just some of the 15-year-olds in your school? All these analyses are rich discussion points from which any school will benefit.

Environmental Data

A second set of results examines the factors influencing the learning environment for students in your school and the impact of those factors on learning outcomes. Environmental information is collected via a student survey as well as a brief principal survey.

OECD then reports on these factors and their influence on student outcomes across the three content areas. The major learning-environment data points include disciplinary climate, student-teacher relations, student reading habits, student motivation and self-efficacy in math and science, truancy and tardiness, teacher preparedness and school morale.

Your school’s data again are presented in comparison to other schools in the United States as well as other countries, including the top and bottom performers. What some have found to be interesting is the variation of the environmental factors across math, science and reading (English) classrooms.

When our first 10 high schools reviewed the learning-environment data, we quickly learned our student-teacher relations were, at best, middle of the road and, in some cases, well below national and international averages.

The analysis led us to conclude we had taken to heart only two of the three legs of Ronald Ferguson’s excellent work identifying the importance of rigor, relevance and relationships. We had hit the first two quite hard but hadn’t intentionally focused on the third leg, relationships. So we began reviewing our staff development curriculum to ensure we emphasized all three legs of our instructional stool in our discussions and training.

Another result, somewhat consistent throughout the United States, is that our students’ self-efficacy in math is quite high, yet their performance is not. Our students have great confidence in their knowledge, but their skills in applying that knowledge, especially in new situations, are less impressive. Again, these findings form the basis of valuable conversations with professional educators as we all seek to ensure our students will be competitive in the global arena.

READ MORE:

Joining a global network

Signing up for the global test

Jack Bierwirth: Merits of international benchmarks

Implications for Districts

Instructional leaders in our schools found the information in the document to be far richer and deeper than any other assessment results they had received. The most obvious feedback is about the rigor of student performance in all three content areas. One immediately begins to ask questions about the level of critical thinking and application of knowledge and skills demanded in each of our classrooms.

Equity of access to rigor is another area revealed by the analysis of results. The percentage of students performing at the higher rubric levels versus the lower rubric levels gives insight into how well our schools and districts ensure equitable access to the rich curriculum for all students.

Finally, a wealth of new information exists in the analysis of environmental factors influencing student performance on the assessment. In our test-laden environment in the United States and our data-focused analysis of those test results, we may forget the soft side of educating students. Examining our learning environment can be as important as examining performance in math, science and reading. The OECD Test for Schools illuminates the necessary analysis.

Many other policy implications come from a thorough analysis of the results, making this a powerful tool for leading and learning. In Fairfax County, Va., we examined how to create supportive relationships between students and teachers, how to alter instructional practices to enhance interdisciplinary problem solving and critical thinking, and how to promote the importance of reading for pleasure. Each of these areas for improvement arose from the analysis of our results.

Education of youth worldwide is complex and ought to be driven by data-supported policy decisions. Participation in the OECD Test for Schools moves us in that direction.

Jack Dale, retired superintendent in Fairfax County, Va., is a consultant with America Achieves in Washington, D.C. E-mail: jackdale01@gmail.com
