California’s Smarter Balanced test results: Use with caution

September 25, 2017

When the results of the third year of the Common-Core-aligned Smarter Balanced tests are released in California this week, they will almost certainly receive a great deal of attention, in the media and elsewhere. There will be an outpouring of analysis, and another round of back slapping or hand wringing, depending on what the scores show.

The results will inevitably be held up as evidence of whether California is on the right or wrong track with the series of education reforms introduced during Gov. Jerry Brown’s tenure. These include the Local Control Funding Formula, approved by the Legislature in 2013, which devolved more decision-making to local school districts; a new accountability system; and the Common Core standards in math and English.

But it is worth asking some basic questions. What will the test results tell us? How much should we care about them? Do they matter?

The reality is that school districts have already received student results, as have parents who were mailed individual notices about how their children did. “We were able to start the school year with good data on how students performed,” said Ting Sun, the executive director of the Natomas Charter School in Sacramento, echoing remarks from other school officials interviewed by EdSource. Sun is also a Brown appointee to the State Board of Education. “We never had that under the old system,” she reflected at the board’s meeting two weeks ago.

One of the main reasons California administered the Smarter Balanced tests is that the results are supposed to be more useful to teachers in improving instruction in their classrooms and with individual children. They are also supposed to give parents a better sense of the academic strengths and weaknesses of their children. So in theory at least, it is up to local school districts to use the information they have already received, not to wait for the state’s official release to do so.

In addition, the state has broadened the way school outcomes are measured to go beyond mainly looking at results on standardized tests students take once a year. Instead, it has developed a color-coded “dashboard” of multiple measures, which includes not only test scores but other factors, such as graduation rates and suspension and expulsion rates. These are intended to give a more three-dimensional view of school success or failure than scores on standardized tests alone. A variation of this “multiple measures” approach has since been incorporated into the federal Every Student Succeeds Act now being implemented in all 50 states.

State Board of Education member Sue Burr, a close advisor to Gov. Brown who was involved in drawing up the new accountability system, noted at the board’s recent meeting that under California’s old system “test results were the be-all and end-all” in assessing whether the state’s students were making sufficient progress.

She said that in the future it may make more sense for California to release results on performance on all measures simultaneously, not just test scores. That way, Californians would have a better idea of “what the whole picture looks like,” instead of making “a big whoop-de-do about test results.”

It is also the case that in a state with close to 1,000 school districts and 10,000 schools, test results are a blunt instrument for telling us what is going on. The statewide averages are just that: averages. They mask how well individual schools and students are doing, and how poorly others are doing.

“There is a risk that people will pay too much attention to the magic numbers, because they are easy to understand and to compare across the system,” said John Affeldt, managing attorney for Public Advocates, a public interest law firm that has been heavily involved in promoting better education outcomes in California’s schools. “It is incumbent on policy makers and educators to communicate that California education is about a lot more than the numbers of students who score at a certain level on a test.”

There is also a danger that parents, advocates and others who are understandably impatient to see rapid improvements will be tempted to declare the current reforms a failure if the test scores don’t improve over last year’s.

That, said Affeldt, would be a mistake. “We haven’t really had enough time for the Local Control Funding Formula to work yet,” he said. “The state is also still just putting into place the new accountability system based on multiple measures, as well as a system of support for schools that persistently lag behind.”

“The test of the new system is how effectively it motivates schools and districts to improve,” Affeldt said. “We haven’t seen that yet, because we just adopted the new system. We are still designing the assistance and support system, and we have to see how schools respond to the spotlight instead of the hammer.”

At the same time, the tests do have some value in showing how well students overall are doing compared to those in earlier years, said Ryan Smith, executive director of the Education Trust-West, a nonprofit organization dedicated to closing academic achievement gaps. “They are absolutely essential for goal setting for Latino students, students of color and low-income students. If we believe we can narrow and close the opportunity and achievement gaps in the state, then we need to look at the data and make decisions based on that. The scores absolutely help us do that.”

Smith acknowledged the value of looking at multiple measures of success, “but that doesn’t mean we shouldn’t look at achievement levels of kids” as indicated by their test scores.

The scores are also essential for research purposes, said Laura Hill, a senior fellow at the Public Policy Institute of California. “It is really important to have that database and results on the state level,” she said. Because the database includes results for every school and district in the state, it allows educators to see how they are doing relative to others. “It is a really important tool, not only for researchers like myself, but for researchers at schools and districts to evaluate how things are going at their district in comparison to others,” Hill said.

For better or worse, the release of the test results represents “one of a very few moments when the public pays attention to the education system,” said David Plank, executive director of Policy Analysis for California Education, or PACE, a joint project of Stanford University, UC Berkeley and the University of Southern California. In that sense, he said, “politically they (the test results) are extremely important.”

He said the state also has to take some responsibility that the range of reforms now in place “were all to some extent sold on the basis that they would lead to perceptible changes in the education system.”

Because the state is now in year three of the new academic standards along with new tests designed to measure them, “we are getting to the point where we have to see some signs of progress,” Plank said.

“It was easy in year one to say it doesn’t mean anything,” he said, as scores were intended to be a baseline against which to compare results in later years. In year two — last year — there were at least some improvements in test scores, he said. “It may not have been as fast as we wanted, and the gaps weren’t closing as much as we would have liked, but it was OK.”

At a minimum this year, he said, “we have to see that (improvement) again, otherwise even people who in principle support the Local Control Funding Formula will say it is taking too long.”

Despite the dangers of over-relying on test scores, Hill said, “as a researcher I am a believer in having as much information as possible. Schools and children invest time in administering and taking this test, so why not use as much information as we can?”

Even if the test score results are not as positive as policy makers and educators hope they will be, that does not mean the state should abandon its current reform regimen, PPIC’s Hill said. Instead, educators should look to the results as a guide for what to do next. “Even if we are not seeing the gains that we hoped for, that does not mean that the whole premise is wrong,” she said. “It is useful information about the direction of the work that we need to do.”

Theresa Harrington contributed to this report.

