All Value-Added Models (VAMs) Are Wrong, but Sometimes They May Be Useful
January 01, 2020
Appears in the Spring 2020 issue of the Journal of Scholarship and Practice.
In this study, researchers compared the concordance of teacher-level effectiveness ratings derived via six common generalized value-added model (VAM) approaches:
- a student growth percentile (SGP) model,
- a value-added linear regression model (VALRM),
- a value-added hierarchical linear model (VAHLM),
- a simple difference (gain) score model,
- a rubric-based performance level (growth) model, and
- a simple criterion (percent passing) model.
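To make the contrast among these approaches concrete, here is a minimal sketch (not the study's actual models or data) of how two of the simpler approaches, a simple difference (gain) score model and a regression-based value-added model, can rank the same teachers differently. All scores below are fabricated, and the pooled one-predictor regression stands in for the far richer covariate-adjusted models the study examined.

```python
# Illustrative sketch only: fabricated pretest/posttest scores for two
# hypothetical teachers, scored under two of the six VAM-style approaches.

def ols_fit(x, y):
    """One-predictor ordinary least squares: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return my - slope * mx, slope

# Fabricated (pretest, posttest) scores keyed by teacher.
classes = {
    "Teacher A": ([40, 45, 50], [52, 56, 60]),  # low pretest, larger raw gains
    "Teacher B": ([70, 75, 80], [79, 84, 89]),  # high pretest, smaller raw gains
}

# Pool all students to fit the pretest -> posttest regression.
all_pre = [p for pre, _ in classes.values() for p in pre]
all_post = [q for _, post in classes.values() for q in post]
intercept, slope = ols_fit(all_pre, all_post)

results = {}
for teacher, (pre, post) in classes.items():
    # Simple difference (gain) score model: mean raw gain.
    gain = sum(q - p for p, q in zip(pre, post)) / len(pre)
    # Regression-based value-added estimate: mean residual from the
    # pooled regression (actual minus predicted posttest).
    resid = sum(q - (intercept + slope * p) for p, q in zip(pre, post)) / len(pre)
    results[teacher] = {"gain": gain, "resid": resid}
    print(f"{teacher}: mean gain = {gain:.1f}, mean residual = {resid:+.3f}")
```

With these fabricated numbers, Teacher A ranks higher on raw gains while Teacher B ranks higher on regression residuals, a small-scale version of the rating discordance the study documents across methods.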
The study sample included fourth- to sixth-grade teachers employed in a large, suburban school district who taught the same sets of students, at the same time, and for whom a consistent set of achievement measures and background variables was available.
Findings indicate that ratings differed significantly and substantively depending upon the methodological approach used. These findings accordingly call into question the validity of inferences based on such estimates, especially when high-stakes decisions about teachers rest on estimates produced by different, albeit popular, methods across school districts and states.
Authors
Audrey Amrein-Beardsley, PhD
Professor
Mary Lou Fulton Teachers College, Arizona State University
Tempe, AZ
Edward Sloat, EdD
Faculty Associate
Mary Lou Fulton Teachers College, Arizona State University
Tempe, AZ
Jessica Holloway, PhD
Australian Research Council DECRA Fellow
Deakin University
Melbourne, Australia