Stanford professor finds Michelle Rhee's teacher evaluation system was effective
October 17, 2013 | By John Fensterwald
Score one for Michelle Rhee and performance pay.
A study released Wednesday of the controversial teacher evaluation system that Rhee initiated when she was chancellor of the District of Columbia Public Schools has found that both its threats of dismissal and big pay incentives worked as intended. Within its first three years, the system led to increases in the retention and the performance of effective teachers while encouraging ineffective teachers either to quit or improve.
The research, co-authored by Professors Thomas Dee of the Stanford Graduate School of Education and James Wyckoff of the University of Virginia, is one of the first studies to show a positive impact of offering more money to teachers who perform better. As they acknowledge, most of the research “raises considerable doubt about the promise of teachers’ compensation-based incentives as a lever for driving improvements in teacher performance.” Especially when the pay incentives were linked to increasing test scores alone, “it may be that teachers generally lack the willingness (or, possibly, the capacity) to respond to incentives that are linked narrowly and exclusively to test scores,” Dee and Wyckoff wrote.
Washington, D.C.’s IMPACT system also gave heavy weight – 50 percent of a teacher’s evaluation score – to value-added test scores of students who took the district’s standardized tests. But only 17 percent of the district’s teachers taught subjects that were tested. The evaluations of the other 83 percent of teachers were based on multiple factors, including five observations by principals and master teachers of classroom management and instruction, a teacher’s impact on the school community and other measures of student achievement not involving standardized tests. Three other elements distinguished IMPACT from other pay-for-performance programs, the researchers said:
- Strong incentives of dismissals after two straight “minimally effective” reviews and substantial monetary rewards, including first-year bonuses of up to $27,000 and permanent raises of as much as $25,000 for two consecutive “highly effective” ratings;
- Instructional coaches to assist teachers in improving performance;
- Recognition that the system would be neither small-scale nor temporary.
“IMPACT was not cash for test scores,” Dee said Wednesday. “It was based on multiple measurements and powerful incentives – a jump of five years on the salary schedule for those twice rated highly effective – that dwarf other programs.”
The research concentrated on the teachers most likely to be motivated by IMPACT: those facing the prospect of dismissal for a second consecutive “minimally effective” rating and borderline “effective” teachers motivated to become “highly effective.” The study attributed the higher rates of attrition of minimally effective teachers and higher rates of retention of highly effective teachers to IMPACT. A significant percentage of minimally effective teachers who didn’t quit saw improved evaluation scores.
“We think there should be new and different ways of assessing teachers,” Dee said. “IMPACT may not necessarily be the best way and the final word, but it does provide early important evidence.”
Dee acknowledged that the study didn’t focus on the vast majority of teachers rated effective whose jobs were not in jeopardy and may not have been motivated by IMPACT’s incentives. “For teachers square in the middle, not close to minimally or highly effective, reform might have washed over them,” he said.
Rhee resigned as chancellor of D.C. Public Schools in the fall of 2010, in IMPACT’s second year, to head StudentsFirst, a Sacramento-based education nonprofit. Since 2010, 500 teachers with “ineffective” ratings on evaluations have been dismissed. There were investigations into allegations that teachers in some schools, motivated by the possibility of big bonuses or fear of dismissal, changed students’ test scores.
Dee said that most of the cheating charges surfaced a year before IMPACT appeared and that the small number of teachers identified with the potential violations were excluded from the study.
Dee and Wyckoff published “Incentives, Selection, and Teacher Performance: Evidence from IMPACT” as a working paper for the National Bureau of Economic Research.