Mistakes fascinate me because they hold rich opportunities to learn.
The stories behind mistakes made by teachers and by those who lead and govern schools, and the lessons those mistakes offer, became the heart of a book I wrote during the pandemic with my friend Jill Wynns: “Mismeasuring Schools’ Vital Signs.” Her 24 years as a school board leader in San Francisco, and my 23 years working with more than 240 California districts, gave us a wealth of stories to draw from.
Some of these stories come from my reading of dozens of California school districts’ Local Control and Accountability Plans, or LCAPs, and school site annual plans. Others came from my co-author’s experience as a school board trustee governing San Francisco Unified School District. Still more emerged from the research papers I reviewed for the book. Consider one example: Jenny Rankin’s 2013 study of the errors California teachers make when interpreting test results. Even under optimal conditions, only 48% of teachers interpreted results correctly. Under less-than-optimal conditions, just 11% did so.
When teachers misinterpret test results, many wrong decisions may follow. For example, they may place a kindergartner who is an emerging bilingual 5-year-old in the English learner category, even if that student’s command of English is equal to that of her English-only peers. Or they may fail to reclassify a fifth-grade English learner as proficient in English, even though his results on both the state’s math and English language arts tests are better than those of most of his English-only classmates.
Evidence of two kinds could make those errors visible: (1) students’ own assertions that they are ready, willing and able to do without English language development supports; and (2) results from two tests viewed at the same time, evaluated with regard for the uncertainty and imprecision they contain.
I’ve led a team that has helped districts build evidence like this and then taught district and site leaders how to interpret it. Leaders in Morgan Hill Unified in Santa Clara County were exploring why one-third of their third-grade students were lagging in reading by a year or more. Some believed it reflected weak delivery of sound reading instruction. Others believed it might be the result of a fundamentally flawed instructional program.
Both sides had evidence in hand. Defenders of the current instructional program relied on the Fountas & Pinnell running record, effectively a score of a student’s accuracy, speed and fluency when reading a passage. The other side brought results from the Northwest Evaluation Association’s Measures of Academic Progress, or MAP. When viewed separately, the two tests often led to different conclusions about reading mastery. But when results from both tests were viewed together, with each student plotted as a single dot on a scatter plot, the mixed signals about each student’s mastery became visible. When more evidence was added from KeyPhonics, measuring students’ understanding of the relationships between the letters of written language and the sounds of spoken language, the problems with the Fountas & Pinnell results became even more apparent. (For more on this, see this blog post or this video of our presentation to a conference of the National Center for Education Statistics.)
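One way to make those “mixed signals” concrete is to cross-tabulate each student’s two scores against each test’s proficiency cutoff: students above both cutoffs or below both get a consistent signal, while students above one and below the other land in the off-diagonal quadrants of the scatter plot. The sketch below illustrates that logic; the student scores, F&P levels, MAP RIT values and cutoffs are all invented for illustration, not Morgan Hill’s actual data or benchmarks.

```python
# Illustrative sketch: classifying students by agreement between two reading
# measures. All scores and cutoffs below are hypothetical assumptions.

# (student, F&P running-record level, MAP RIT score)
students = [
    ("A", 38, 170),  # high F&P, low MAP
    ("B", 40, 205),  # high on both
    ("C", 20, 200),  # low F&P, high MAP
    ("D", 18, 165),  # low on both
]

FP_CUTOFF = 30    # assumed grade-level F&P threshold (illustrative)
MAP_CUTOFF = 190  # assumed grade-level MAP RIT benchmark (illustrative)

def classify(fp_level, rit_score):
    """Return which scatter-plot quadrant a student falls into."""
    fp_ok = fp_level >= FP_CUTOFF
    map_ok = rit_score >= MAP_CUTOFF
    if fp_ok and map_ok:
        return "on track (both tests agree)"
    if not fp_ok and not map_ok:
        return "needs support (both tests agree)"
    return "mixed signal (tests disagree)"

for name, fp, rit in students:
    print(f"Student {name}: {classify(fp, rit)}")
```

In this toy example, students A and C sit in the off-diagonal quadrants: exactly the cases where relying on either test alone would have produced a confident but possibly wrong referral decision.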
Teachers had to decide which students to refer for more intensive reading support. Twice as many students needed extra support as the staff could handle. Deciding which students most needed that support required better evidence than teachers’ favored test, the F&P running record, could provide.
Guided by their respected, diplomatic director of curriculum and instruction, principals and reading specialists on both sides of this debate were able to dig deeper into this new evidence together. Their civilized discussion avoided the hazard of contentious curricular debates about reading.
The guidance leaders received about how to interpret scatter plots, how to allow for uncertainty and imprecision in test results, how to evaluate the quality of evidence, and how to make better sense of each student’s result in the comparative context of their peers, led to other insights. Principals could see when teachers’ past referrals of students to additional reading support were not supported by either test. They could see when referrals were supported only by the lower-quality F&P running record. Just reviewing teacher judgment in light of evidence was a sign of notable progress.
In hospitals, when patients are harmed by a surgical team’s error, or when nurses err in giving patients their medication, the error is documented and may be the subject of an internal review to examine how to avert similar mistakes in the future. If educators identified errors and acted on them as seriously as those in the medical profession, it would be the start of a glorious new era.
Districts, if they are going to learn from their teachers’ and leaders’ mistakes, first need to notice when those mistakes occur. This requires much more than another laundry list of standards and a pledge to follow “best practices.” It requires enlightened management, monitoring of teacher judgment and assessment-literate educators backed up by analysts in the front office. But districts aren’t staffed to do this, and leaders aren’t yet taught how it can be done.
In the meantime, students, parents, advocates and lawyers take note when educators make mistakes. I’m grateful for people like Kareem Weaver and his organization, FULCRUM, and for Todd Collins and his group, the California Reading Coalition, who are pressing districts one at a time to do the right thing. I’m grateful for public interest lawyers like Mark Rosenbaum and the ACLU of Southern California suing California school districts like Berkeley Unified to get K-12 leaders and school board trustees to recognize they have failed to teach all students to read.
But I’d rather see district and school leaders have the ability to see their teams’ errors, the wisdom to discover how and why they occurred, and the courage to admit their mistakes.
Steve Rees is co-author of “Mismeasuring Schools’ Vital Signs” and is the founder of School Wise Press, a company that helps school districts analyze and share education data.
The opinions expressed in this commentary represent those of the author.