Feature

Testing 'What If?' Scenarios

A process known as system dynamics allows leaders to try out an initiative before actually implementing it

By Ralph A. Brauer
One challenge of system leadership always has been that even with the best data and research in place, with all the programmatic and political dimensions under control and with years of experience handling similar decisions, you never really know whether a policy initiative will work until you implement it. That is why anyone who has been a school administrator tends to firmly believe in the old truism that leadership is part art and part science.

Still, a tantalizing possibility continues to entice decision makers: What if you could try out an initiative before you had to put it in place? What if you had a process that went beyond focus groups, committees and even pilot projects and could simulate the possible effect of an initiative on your system? Curiously, such a process does exist and has been used by a who's who of major international corporations. The process is called system dynamics and has been in use for several decades since its creation by Jay Forrester, professor emeritus of management at the Massachusetts Institute of Technology.

After designing simulators during World War II and helping to build one of the world's first computers, Forrester turned his talents to creating a method of better understanding systemic functioning in organizations.

Popularized by Peter Senge in The Fifth Discipline, The Fifth Discipline Fieldbook and Schools That Learn, system dynamics mathematically models the components of a system to better understand how they interact. It allows you to take a system apart and rebuild it as if it were a Lego creation, relieving you of having to experiment on real students or teachers. It parallels the simulators pilots and others use to train for potentially life-threatening situations.

Possible Scenarios

Five years ago a group of Minnesota school administrators, teachers, researchers and others decided to build a system dynamics model of a school system that could enhance system leadership. We were guided by the question that has plagued educational improvement through much of the systemic change period of the last decade: Why did some initiatives to improve student achievement succeed while others failed?

In simple terms, what we created was a functioning flight simulator of a school district, one that allows schools to input their own data and try out various "what-if" scenarios. This model is, of course, anything but simple, containing more than a thousand equations and what systems thinkers refer to as sixth-order feedbacks, the most complex that can be modeled.

The process used to create the model involved several discrete steps. First, a design group of district-level and school administrators, teachers, university researchers and others, guided by two professional modelers, identified the key components of the model and ensured they reflected current research and best practices. During this phase the team received a crash course in system dynamics, learning the language and structure of dynamic modeling.

Among the most important insights was that feedback is not a one-way street (I receive feedback from you) but a loop (I receive feedback from you, and my reaction in turn influences you). Soon we began recognizing a variety of school system functions in terms of these loops.
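To make the loop idea concrete, here is a minimal sketch in Python. The quantities and coefficients are illustrative inventions of ours, not equations from the actual model:

```python
# A minimal sketch of a feedback loop; the variables and coefficients
# are illustrative inventions, not equations from the actual model.

def simulate_feedback_loop(terms: int = 10) -> None:
    teacher_support = 1.0   # hypothetical index of support a teacher gives
    student_progress = 1.0  # hypothetical index of student progress

    for term in range(1, terms + 1):
        # One-way street: support influences progress...
        student_progress += 0.10 * teacher_support
        # ...but in a loop, progress also feeds back into support,
        # so the influence comes back around to its source.
        teacher_support += 0.05 * student_progress
        print(f"term {term}: support={teacher_support:.2f}, "
              f"progress={student_progress:.2f}")

simulate_feedback_loop()
```

Delete the second update and the loop collapses back into a one-way street; the two runs diverge quickly, which is exactly the behavioral difference the team had to learn to see.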

The challenge of this initial phase was to capture the key school system feedbacks affecting student achievement and to model their interrelationships in a way that would make the model useful to any school or district. We used several facilitation techniques to do this, all guided by the ground rule that everyone had to agree. Discussions were spirited, teaching us that one of the most powerful appeals of modeling is that it considerably lessens the personality conflicts that inevitably enter discussions: enlisting people to collaborate on a common task moves them beyond the personal. In a way we were all like house builders who knew we had to construct something we all could live with.

Time’s Importance

The two key decisions made during this early phase focused on resources and demand. On a hot July afternoon in a deserted classroom, researcher Mark Davison explained over the hum of the fans how, in much of the data he had collected as head of the Minnesota State Office of Educational Accountability at the University of Minnesota, time appeared to be a crucial factor in boosting student achievement. Davison's observation about time as the common currency of educational resources and educational demands opened up and redefined our possibilities, becoming the foundation for the entire project.

As we discussed the role of time in the system, we found the idea immediately resonated with teachers and administrators. Juan or Juanita takes a certain amount of time to learn a lesson. On the resource side of the equation, each teacher has a certain amount of time to give each student, with experience and talent governing how efficiently the teacher uses it. Administrators, aides, staff, supplies and facilities influence this efficiency.

Teachers and administrators on the team provided the second crucial insight: demand. Starting from the idea that each student takes different amounts of time, we broke this into what we call academic demand (based on previous assessments) and behavioral demand (for which we constructed a new rubric for teachers). As all teachers know, a student may have a low academic demand, breezing through assignments, but may be a handful in the classroom because she or he cannot sit still for long.

With the help of University of Minnesota researchers we used statistical data for all the teachers in Minnesota to calculate how much demand teachers with different levels of experience can handle. Education then can be seen as a kind of supply-and-demand equation: How much demand do the students bring, and how much in the way of resources does the system have to meet these time demands? In this sense everything in the system contributes to the time available for students.
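The supply-and-demand arithmetic can be sketched in a few lines. The function names and numbers below are hypothetical stand-ins we are assuming for illustration, not the calculations derived from the Minnesota teacher data:

```python
# Illustrative sketch of the time-based supply-and-demand view.
# All names and numbers here are hypothetical, not the model's equations.

def teacher_time_supply(contact_hours: float, efficiency: float) -> float:
    """Hours of instructional time a teacher can effectively deliver.

    `efficiency` (0 to 1) stands in for experience and talent plus the
    support of administrators, aides, staff, supplies and facilities.
    """
    return contact_hours * efficiency

def student_time_demand(academic: float, behavioral: float) -> float:
    """Hours of teacher time a student needs, split into the two kinds
    of demand the design team identified."""
    return academic + behavioral

# A toy classroom: demand the students bring vs. time the teacher supplies.
students = [(2.0, 0.5), (1.0, 1.5), (3.0, 0.2)]  # (academic, behavioral) hrs/week
total_demand = sum(student_time_demand(a, b) for a, b in students)
supply = teacher_time_supply(contact_hours=10.0, efficiency=0.8)

print(f"demand={total_demand:.1f} h/week, supply={supply:.1f} h/week, "
      f"gap={total_demand - supply:+.1f}")
```

When total demand exceeds supply, some students simply do not get the time they need; that gap is what the model makes visible.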

When we felt we had something all of us were reasonably comfortable with, we subjected it to a variety of tests, just as you would for any structure. First, we conducted an intensive statistical analysis of the model and its variables, generating more than 127 output files from more than 500 data runs, which researchers then reviewed. Then sample data, some representing extreme cases, were fed into the model to see whether they would generate reasonable results.

Finally, the model was tested in beta sites in various school districts using historical data. In these tests, district data from several years ago were used to initialize the model. The results generated were compared with what really happened. In all cases the model's results were congruent with the history of the district.
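The two checks work roughly like this sketch, which substitutes a one-equation stand-in for the actual thousand-equation model and uses invented numbers:

```python
# A hedged sketch of the two sanity checks described above, with a
# stand-in model; the real model has over a thousand equations.

def toy_model(resources: float, demand: float) -> float:
    """Hypothetical stand-in: achievement rises with resources per unit of demand."""
    return min(100.0, 100.0 * resources / max(demand, 1e-9))

# 1. Extreme-case tests: do implausible inputs still yield sane outputs?
for resources, demand in [(0.0, 50.0), (1e6, 1.0), (10.0, 1e6)]:
    score = toy_model(resources, demand)
    assert 0.0 <= score <= 100.0, "model produced an unreasonable result"

# 2. Historical backtest: initialize with old district data and compare
# the run against what actually happened (numbers here are invented).
history = [(40.0, 60.0, 66.7), (38.0, 62.0, 61.3)]  # (resources, demand, actual)
for resources, demand, actual in history:
    predicted = toy_model(resources, demand)
    print(f"predicted={predicted:.1f}, actual={actual:.1f}, "
          f"error={abs(predicted - actual):.1f}")
```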

An example of the model's ability to force shifts in mental models came during a test we ran for a school district that could not understand why its test scores had stayed relatively stable even though budgets had declined, teachers had been laid off and the number of non-English speakers and special-needs students had increased, a scenario familiar to a lot of school districts.

What the model told us was that the district's performance largely stemmed from a policy implemented several years before, when the district had hired an especially large number of new teachers and put them through an induction program designed to jump-start their classroom experience. The model also showed that over the next few years the influence of these teachers would level off and even decline, especially if the current budget cutting continued.

Goal Analysis

With the positive results generated by the testing, we felt we were ready to move the model into a larger arena. We took it to various conferences, letting people see how it ran and allowing them to try some simulations. Senge and Forrester graciously offered time to review and critique our efforts. The suggestions we received at these presentations were invaluable in helping to redesign many facets of the model, especially its interface.

With these improvements in place it was time to see whether the model could truly enhance system leadership. The vehicle for this was the Blandin Education Leadership Program, an initiative of the Minnesota-based Blandin Foundation. Blandin wanted to use the model to help facilitate goal setting, allowing participants to see whether the goals they formed were feasible and what possible impact these goals would have on the system. In a way, the model provided the ability to do a cost-benefit analysis of each goal at a sophisticated level.

District teams consisting of up to 24 people, including the superintendent, key administrators, teachers, staff and community members, first worked on teasing out their mental models of school change. Based on a suggestion from Senge's books, participants were shown a series of graphs representing various mental models, from the straight-line ramp (change gets better each year in defined increments) to a series of curves including the learning curve familiar to most educators. They then had to identify which stakeholders in the system held each model and what the policy implications of this dissonance might be. For example, what if teachers favor the learning curve while parents see their child's progress in terms of the linear ramp? Needless to say, the debate over No Child Left Behind became much clearer.
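The contrast between those two mental models can be sketched numerically. The curve shapes below are generic illustrations we are assuming, not the graphs used in the program:

```python
# Sketch of two of the mental-model trajectories participants compared.
# The formulas are generic illustrations, not the program's graphs.
import math

def linear_ramp(year: int, years: int = 10) -> float:
    """Improvement arrives in equal increments every year."""
    return 100.0 * year / years

def learning_curve(year: int, years: int = 10) -> float:
    """S-shaped curve: slow start, rapid middle, leveling off."""
    return 100.0 / (1.0 + math.exp(-(year - years / 2)))

for year in range(0, 11):
    print(f"year {year:2d}: ramp={linear_ramp(year):5.1f}  "
          f"curve={learning_curve(year):5.1f}")
```

Both trajectories reach the same endpoint, but they disagree about nearly every year in between, which is precisely the dissonance that fuels arguments over annual progress targets.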

From this the teams moved to learning to run the model themselves, using a zero-sum game where they had to improve student performance in an imaginary school district by reallocating resources among four achievement levels. All eventually mastered this task, but the real payoff was the systemic insights it generated.
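A stripped-down version of that zero-sum exercise might look like the following sketch; the demand figures and the size of the resource pool are invented for illustration:

```python
# A hedged sketch of the zero-sum exercise: a fixed pool of teacher time
# is reallocated among four achievement levels. The hours-per-point
# figures are invented; the workshop used an imaginary district's data.

TOTAL_HOURS = 1000.0  # the fixed resource pool: nothing can be added

# Hypothetical cost (hours needed per point of gain) at each level.
levels = {"level 1": 12.0, "level 2": 8.0, "level 3": 6.0, "level 4": 5.0}

def performance(allocation: dict) -> float:
    """Total score gain under an allocation; zero-sum means the hours
    must all come out of the same fixed pool."""
    assert abs(sum(allocation.values()) - TOTAL_HOURS) < 1e-6
    return sum(hours / levels[name] for name, hours in allocation.items())

even_split = {name: TOTAL_HOURS / 4 for name in levels}
tilted = {"level 1": 400.0, "level 2": 300.0, "level 3": 200.0, "level 4": 100.0}

print(f"even split:  {performance(even_split):.1f} points")
print(f"tilted plan: {performance(tilted):.1f} points")
```

Because the pool is fixed, every hour given to one level is an hour taken from another; participants quickly discovered that every improvement has a price somewhere else in the system.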

Community members came to see that school leadership was a good deal more complex than they had imagined. Everyone was more appreciative of the tradeoffs in any decision. Mental models about various policies were reassessed.

We then moved to testing the goals using data from their own districts. In effect, the model had now become their district and they had become the decision makers. Participants first used the model to see whether various goals were feasible using existing resources, then moved to scenarios with budget cuts and increases to see what impact they might have.

The discussions were incredibly rich, touching on everything from how hard to push students with high goals (but not so high they are unattainable) to what makes a goal cost-effective. The real value, though, was that a lot of "what-ifs" were explored without actually having to put them into action to see what might happen. Appreciation for the tasks facing school leaders increased even more as participants came to understand both the art and the science administrators need.

Of course not everyone experienced the same degree of "Aha!" moments and some were not moved from their positions. But overall reactions were quite positive. In the Blandin program, perhaps the most gratifying evaluation came from a staff member from a Native American school district who said the model was the first process she had seen that was congruent with the decision-making traditions of her culture.

System Flaws

Since the Blandin experience we have used the model to help better guide system leadership. For example, we recently did some runs with data from a school district that is notorious for its low test scores and high per-pupil expenses. This district has baffled reformers, administrators, state officials and foundations. The model, with its emphasis on demand, showed us that the district's main problem concerned its teaching staff. It wasn't that the teachers weren't trying or weren't doing a good job, but collectively they had the lowest average years of experience in the state and a 30 percent turnover rate. In essence, a lot of young, new teachers were placed in a high-demand situation, an equation with predictable results.
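Why turnover alone caps experience is easy to see in a small simulation. The faculty size and time horizon below are invented; only the 30 percent turnover rate reflects the district described above:

```python
# Sketch of why 30 percent turnover caps experience: simulate a faculty
# where roughly 3 of every 10 teachers leave each year and are replaced
# by first-year teachers. Faculty size and horizon are illustrative.
import random

random.seed(1)
faculty = [0] * 100  # years of experience; everyone starts new

for year in range(30):
    faculty = [y + 1 for y in faculty]                  # everyone gains a year
    leavers = random.sample(range(len(faculty)), k=30)  # 30% leave...
    for i in leavers:
        faculty[i] = 0                                  # ...replaced by novices

print(f"average experience after 30 years: {sum(faculty)/len(faculty):.1f} years")
# The average settles in the low single digits no matter how long the run:
# with high turnover, a veteran faculty is mathematically out of reach.
```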

This is a classic case of how the problem rests with the system, not the people. If it is the system, what could the district do differently? One option would be to find ways to better retain teachers and build a more stable and experienced faculty, but that move would have to be examined for its possible consequences elsewhere in the system. For example, if the district paid teachers to stay, would it ratchet up all district salaries? What would the impact be on neighboring districts? How would the community react?

In another case we did some exploratory model runs investigating the ongoing public/private voucher issue. While these findings are preliminary, in comparing the performance of private and public schools we noticed private schools tended to have a lower average demand, perhaps because they tend not to accept students with high behavioral demands and can more easily dismiss those who cause problems. Charter schools actually may suffer from the opposite problem because many of them have high demands and few resources.

As a result of such insights, we have begun to rethink system leadership in terms of the supply, demand and time terms the model has shown to be so fruitful. Certainly, without understanding and identifying demand, initiatives such as No Child Left Behind or various grant programs may follow what Jay Forrester refers to as the “Law of Unintended Consequences”—making things worse rather than better. With NCLB, a school with low test scores may be resource poor (remember resources are time-based), so that cutting funding or allowing students to transfer may result in the academic equivalent of bankruptcy rather than the intended consequence of improvement.
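The "academic bankruptcy" dynamic is a reinforcing loop, and a toy version of it takes only a few lines. Every number here is invented; the point is the shape of the spiral, not the values:

```python
# Sketch of the reinforcing loop the paragraph warns about: low scores
# trigger sanctions that shrink time-based resources, which lowers
# scores further. All coefficients are invented for illustration.

score, funding = 55.0, 100.0   # hypothetical starting point; 60 is passing

for year in range(1, 6):
    if score < 60.0:
        funding *= 0.95        # sanction: funding (and thus time) shrinks
    score = funding * 0.55     # achievement tracks time-based resources
    print(f"year {year}: funding={funding:.1f}, score={score:.1f}")
# Each cut lowers scores, which triggers the next cut: the loop feeds itself.
```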

As for grants, one foundation has committed to creating smaller high schools with 500 or fewer students, a departure from the large schools common to so many states. While this effort is laudable, without calculating the demand of these new schools versus that of the regular schools, the findings may have less value. Even more critically, taking 500 students and their teachers out of the system also has a demand-and-supply impact on the larger school system.

Deeper Understanding

While our model is still being refined, we feel our experience has shown that system dynamics can provide system leaders with an important process that enhances their ability to do better strategic planning and make more informed decisions. The experience was best summarized in an evaluation given by a school district administrator during the beta-testing phase. He said, "Our bottom line is student achievement. This [process] respects that. It puts finances in service of our mission rather than the other way around."

For us the experience with system dynamics has added space-age tools and processes to our arsenal, allowing us to look at larger systemic causes. Most of all, it promises to end the finger pointing that plagues education. For many of us the current scene is one of passing the blame as teachers criticize parents, parents criticize teachers, both criticize administrators and school boards, school boards and administrators criticize state education officials and all of them criticize the federal government.

With system dynamics modeling the focus shifts from blaming to a deeper understanding. Affirming Total Quality Management creator W. Edwards Deming's oft-quoted aphorism, people see it is not a specific person who causes a problem but facets of the system that we need to understand better. Leadership, like teaching, becomes better facilitation rather than dictating what must be done.

Ralph Brauer is executive director of the Transforming Schools Consortium, 14419 Waco St., Ramsey, MN 55303. E-mail: tsc@mtn.org