With baseball season starting, I’ve been reading scouting reports on my favorite team. Because my immigration journey started in Philadelphia, I am cursed to spend my life as a Phillies fan. The scouting reports all say that the Phillies will be terrible. But after spending countless hours scanning baseball stats for a glimmer of hope, I gave up and started thinking about the future of statistical analysis in education.
For most of the last century, baseball and education had a lot in common. Most folks thought that baseball was all about talent. Playing the game well was characterized as an art. Managers trusted their hunches and used their gut feelings to guide their decision making. The important thing was playing the “right way” and honoring tradition.
Over the last two decades, baseball changed. A new generation of leaders, starting with Billy Beane, the general manager of the Oakland A’s, questioned the sport’s basic assumptions (see Moneyball). They reframed the assessment of talent. They started using statistical analysis instead of gut feelings to make decisions. To everyone’s surprise, their teams started winning games even though they spent far less money than the opposition. Along the way, they transformed one of the most traditional and hidebound sectors of our society into one of the most innovative and interesting.
Despite all the scientific and technological advances that have altered our world, education hasn’t changed very much. Instead of thinking about data and statistics as a way to analyze and solve systemic problems, we look at aggregate student results and extrapolate their sources and solutions based on our political leanings, gut feelings or individual experience.
Is it any wonder that we fight the same battles over and over again? Traditionalists vs. reformers. Districts vs. charters. Test haters vs. supporters. More money vs. money doesn’t matter. In-school vs. out-of-school factors. Etc. Etc. Despite all the energy and all the shouting, nothing actually changes except for the names of the latest initiatives and the people doing the fighting. Large numbers of kids never learn to read at grade level. They lose interest in science and math. They drop out of high school. They fail to complete college.
These results are not inevitable. But to change them, we need to stop throwing initiatives at a wall and praying something sticks. It doesn’t make sense to start with a solution before identifying the actual problem and its underlying sources. It makes even less sense to implement a “solution” at scale without assessing whether it has any impact on the underlying sources of the problem. This goes for both new initiatives like blended learning and the things we’ve been doing for years like class-size reduction. Regardless of an initiative’s actual merits, its proponents inevitably tend to extrapolate its benefits to multiple problems – grade-level reading, dropout rates, college readiness, etc. – without clearly identifying their sources. As a result, our public investments are often based less on the merits of any individual reform than on the political power of its supporters and/or the strength of their public relations campaign.
This isn’t all the fault of the proponents of these reforms. For much of the history of education, longitudinal assessments or evaluations of anything at scale cost a lot of money and took a lot of time. As a result, proponents of any current reform will often cite decades-old research or the evaluations of recent pilots to support their approach. But we are reaching a point where we can identify the sources of problems and promising solutions without spending bazillions of dollars on multi-year mixed methodology research studies. There is an incredible amount of longitudinal data emerging from multiple sources inside our education system. Instead of rejecting the potential power of this data in the name of local control, we simply have to commit at a state level to collecting that data, questioning our conventional wisdom, and seeing what the data says.
Baseball analytics experts have used this approach over and over again to transform their sport. For years, managers used the sacrifice bunt as a way to move runners into scoring position, thinking it always gave them a better chance to win. After crunching the data, it became clear that bunting was hugely counterproductive because you were more likely to lose runs than gain them. Armed with this knowledge, managers started telling their hitters to swing away and began using the sacrifice much more judiciously.
This change isn’t limited to baseball. In medicine, Kaiser Permanente created a massive “integrated electronic health records system” that incorporates information from multiple data systems. By synthesizing and analyzing this data, Kaiser could provide its physicians with better information on effective interventions for their patients. By looking at the data at scale, they are also able to identify the underlying sources of health problems. For example, when their data crunchers found that obesity rates in Oakland were correlated with patient access to parks, Kaiser started investing in parks and building relationships with schools and YMCAs.
Now, I know sports and medicine are not perfect analogies to K-12 education. But that doesn’t mean they don’t have something to teach us about using data to solve big problems. Think about how much we could learn about the real impact of areas where we spend billions, like academic remediation and special education. With the right data and analysis, we could figure out why, for example, some kids don’t learn to read by third grade, assess the relative benefits of our various early intervention strategies and consider unlikely alternatives.
Move beyond past mistakes
At this point, the biggest barrier to asking those questions isn’t the collection and analysis of data but the corrosive nature of education politics. If we could just step outside the warring houses of pro- and anti-reform, we could have a real conversation about the possibilities presented by data analytics. We could begin by acknowledging that both sides have made important points about the use and misuse of data in education accountability over the past decade.
Critics of education reform are right when they say that decision makers misused data over the past decade to curtail the ability of education leaders to use their professional judgment when making complex accountability decisions. But the simplistic and erroneous use of data in high-stakes decision-making (such as the Program Improvement model in NCLB) does not render the data itself useless or make poor educational results any less troubling. Nor should it undermine the desire to collect and analyze data to answer critical questions about beneficial strategies and investments to correct our most difficult problems.
Supporters of education reform are right to point out that 20 years ago data on educational outcomes was either nonexistent or highly variable and assessments of educational quality depended solely on individual judgment. But the poor educational results that emerged from the first systematic collections of education data, particularly for low-income students and students of color, were not by themselves a sufficient rationale for stripping away the ability of local education leaders to make complex accountability decisions. Nor should the trends and lessons that emerge from future analyses serve as the rationale for the imposition of a similarly rigid accountability structure.
There is nothing inherently good or bad about data analytics. It is simply a powerful new tool that can help open-minded leaders make better decisions. That’s why it is so frustrating to hear Governor Brown bash data collection and analysis. The governor was mayor of Oakland at a time when our local innovators, the A’s and Kaiser Permanente, were transforming sports and medicine with analytics. It would be an extraordinary act of leadership if he took a page from their books, became a supporter of data analytics and committed to building the nation’s best statewide education data systems here in California.
There is no better time to do this than now. We have just engaged in a burst of unprecedented education policymaking and spending. Yet, from Common Core assessments to work-based learning to LCFF, our state has absolutely no plans to collect any data to assess the impact of these multibillion-dollar investments in public money. In this day and age, that makes no sense.
The first year of the implementation of the Smarter Balanced assessments could yield a treasure trove of information on the readiness of our state’s digital infrastructure, student preparedness for online assessment and so much more. The investments in work-based learning should be connected to post-secondary and work placement data systems to reveal whether these programs actually provide graduates with real college and career opportunities. The Local Control and Accountability Plans (LCAPs) from more than 1,000 districts and charter schools will provide an unprecedented amount of information on goals, outcomes, strategies and expenditures in eight state priority areas. However, the state currently has no plan to collect the information in these LCAPs or to evaluate what kind of progress districts are making toward their goals. Without collecting this information, our state leaders will know absolutely nothing about statewide progress on their own priorities! Meanwhile, local stakeholders will have no way to assess the comparative impact of their strategies and learn about better ones.
The mantra of local control and abdication of any state role may be good politics, but it is not good policy in the 21st century. By taking the time to collect and analyze the myriad data emerging from these multiple initiatives, we might actually learn something. In fact, a few years from now, we might be able to figure out which of these reforms were sacrifice bunts and which were home runs.
Arun Ramanathan is executive director of The Education Trust–West, a statewide education advocacy organization. He has served as a district administrator, research director, teacher, paraprofessional and VISTA volunteer in California, New England and Appalachia. He has a doctorate in educational administration and policy from the Harvard Graduate School of Education. His wife is a teacher and reading specialist and they have a child in preschool and another in a Spanish immersion elementary school in Oakland Unified.