An Art of Teaching Science Inquiry
[restabs alignment="osc-tabs-center" pills="nav-pills" responsive="true" icon="true" text="More" tabcolor="#246820" seltabcolor="#3d52c6"]
[restab title=”Research” active=”active”]You might want to visit this site to see the research on value-added modeling.[/restab]
[restab title=”A Teacher Speaks Out”]John Spencer, an Arizona middle school teacher, wrote a post describing his experience with the value-added model. He reports that one of his students said to him, “You look stressed.” You might want to read his post, in which he explains how his stress derived from Arizona’s use of VAM scores to rate teachers.[/restab]
[restab title=”Key Studies”]Two studies were recently published that spell bad news for advocates of the current models used to rate teachers.[/restab][/restabs]
Teacher bashing has become a contact sport played out by many U.S. governors. The rules of the game are stacked against teachers through the use of measures that have not been substantiated scientifically. For many governors and mayors, it is fair play to release the name of every teacher in a city alongside a value-added score computed from student achievement test scores. None of the published data has been scientifically validated; in fact, the scores are uneven and unreliable from one year to the next.
A VAM score is a number derived from a covariate adjustment equation (Figure 1). The idea is to rate teachers using student test scores. For example, in the Florida VAM big data release, VAM scores were reported both for teachers who taught math and reading and for those who didn’t. Next to each teacher’s name, they reported a score indicating the learning gains students made above or below what they were expected to learn (based on earlier performance, with OTHER teachers).
Here is the equation used to figure teachers’ “value-added effect.”
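The equation itself appeared as an image in the original post. As a rough, hedged sketch, a generic covariate-adjustment model of the kind used in VAM systems takes a form like the following (this is a textbook-style specification, not necessarily Florida’s exact model):

$$ y_{it} = \beta_0 + \beta_1\, y_{i,t-1} + \mathbf{X}_{it}\boldsymbol{\gamma} + \tau_{j(i)} + \varepsilon_{it} $$

Here \(y_{it}\) is student \(i\)’s current-year test score, \(y_{i,t-1}\) the prior-year score, \(\mathbf{X}_{it}\) a vector of student and classroom covariates, \(\tau_{j(i)}\) the effect attributed to the student’s teacher \(j\), and \(\varepsilon_{it}\) an error term. The estimated \(\hat{\tau}_j\) becomes the teacher’s “value-added” score.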
Using student achievement scores to compute a number that claims to measure what a teacher “adds” to student learning simply doesn’t add up. That is what this inquiry is about.
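To make the computation above concrete, here is a deliberately naive sketch of the arithmetic. Everything in it is invented for illustration: the student records are made up, and the “expected score” is simply the prior-year score (zero growth), whereas a real covariate-adjustment model predicts the expected score with a regression over demographic and classroom variables.

```python
from collections import defaultdict
from statistics import mean

# Invented records for illustration: (teacher, prior-year score, current-year score).
records = [
    ("Ms. A", 60, 68), ("Ms. A", 72, 75), ("Ms. A", 55, 66),
    ("Mr. B", 80, 78), ("Mr. B", 65, 64), ("Mr. B", 70, 73),
]

def naive_vam(records):
    """Average gain of each teacher's students above the expected score.

    Here 'expected' is just the prior-year score (zero growth); actual
    covariate-adjustment models estimate it with a regression instead.
    """
    gains = defaultdict(list)
    for teacher, prior, current in records:
        gains[teacher].append(current - prior)
    # The teacher's "value-added" number is the mean residual gain.
    return {teacher: round(mean(g), 2) for teacher, g in gains.items()}

print(naive_vam(records))  # → {'Ms. A': 7.33, 'Mr. B': 0.0}
```

Even this toy version exposes the core problem the post raises: the score attributes every deviation from “expected” entirely to the teacher, when test-score gains also reflect prior teachers, home circumstances, and plain year-to-year noise.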
For years now, I’ve written about the nonsense of using student achievement scores to assess teachers. But others have written more powerfully about it. I want to direct you to two websites where you can find important information about why using student test scores to evaluate teachers doesn’t add up.
Dr. Cathy O’Neil’s Blog (Mathbabe). Cathy O’Neil is the Program Director of The Lede Program. Prior to Columbia, she was a data scientist in the New York startup scene and co-authored the book Doing Data Science. She blogs daily at mathbabe.org, appears weekly on Slate’s The Big Money podcast, and is active in Occupy Wall Street’s Alternative Banking group. She has written extensively on education, in particular on the use of value-added modeling to evaluate teaching. You can read her value-added posts here.
Audrey Amrein-Beardsley at Vamboozled! This blog, founded by Dr. Amrein-Beardsley, who is its lead blogger, focuses on research-based analysis of teacher evaluation, teacher accountability, and the value-added models used in the nation’s public schools. She is an Associate Professor of education at Arizona State University, and as a university professor she has taken the lead in contributing to public debates about education, especially the use of value-added models to rate teachers. In addition to the Vamboozled blog, I recommend Dr. Amrein-Beardsley’s book, Rethinking Value-Added Models in Education: Critical Perspectives on Tests and Assessment-Based Accountability (Library Copy).
E-valuating Teaching: It Doesn’t Add Up
In this inquiry, we ask whether using student achievement test scores as a measure of teacher effectiveness is a practical system. Is the system viable, and will it have a detrimental effect on student learning?