Credit: Alison Yin / EdSource (2014)
The California Department of Education wants to begin phasing in online tests for new science standards this spring, in lieu of paper-and-pencil tests based on old standards.

Correction: The article was updated on Nov. 4 to delete an inaccurate reference to the former Academic Performance Index. 

In an unexpected move, the State Board of Education postponed approving the method for determining a key element of its new school accountability system on Wednesday, potentially delaying by weeks or longer the release of the first district and school “report card” that it had promised for early 2017.

Board members said they’d prefer a better methodology instead of producing a report using flawed criteria and revising it next year. They want to take another look in January, at their next meeting.

At issue is how to measure student performance on the Smarter Balanced tests in math and English language arts, one of a half-dozen statewide metrics that will make up the new report card. The state board objects to the approach that for years the state has used – and the federal government has required – but staff at the state Department of Education haven’t yet presented an alternative.

The current method measures school and district performance by the percentage of students who meet or exceed a score defined as proficient on the Smarter Balanced tests. In the second-year results announced in August, 49 percent of students in grades 3-8 and grade 11 taking the test met or exceeded the minimum proficient score in English language arts and 37 percent scored at that level in math.

The percent-proficient method is simple and easily communicated to parents, but critics cite several problems, particularly when tracking growth in scores from year to year. It gives no credit for the progress of students whose scores started well below proficiency and improved significantly without yet reaching the proficient level. Nor does it credit students whose scores continue to grow to advanced levels. State board members say a more sophisticated method would better reveal achievement gaps among lower-scoring student groups, including English learners and special education students, and changes over time.
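The critique can be illustrated with a small sketch. The scores and proficiency cut point below are invented for illustration, not actual Smarter Balanced values: a student who improves substantially but stays below the cut leaves the percent-proficient figure unchanged, while an average-score view registers the gain.

```python
# Illustration of the percent-proficient critique.
# The scores and the proficiency cut are hypothetical, not the
# actual Smarter Balanced scale.

PROFICIENT_CUT = 2500

def percent_proficient(scores):
    """Share of students at or above the proficiency cut."""
    return 100 * sum(s >= PROFICIENT_CUT for s in scores) / len(scores)

# Year 1: two students far below the cut, two above it.
year1 = [2300, 2350, 2550, 2600]
# Year 2: the low scorers improve substantially but stay below the
# cut, and one proficient student advances further.
year2 = [2450, 2480, 2550, 2700]

print(percent_proficient(year1))  # 50.0
print(percent_proficient(year2))  # 50.0 -- growth below the cut is invisible
print(sum(year2) / len(year2) - sum(year1) / len(year1))  # mean gain: 95.0
```

Both years report 50 percent proficient even though every measure of average performance improved, which is the blind spot the state board wants a new methodology to address.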

Under the No Child Left Behind law, the federal government sanctioned schools that failed to meet percent-proficient targets. In July, 40 academics, led by Morgan Polikoff, an associate professor at USC’s Rossier School of Education, wrote U.S. Secretary of Education John King Jr. urging the federal government to drop that methodology.

This week, the California Charter Schools Association released a school ranking system using an approach that may be closer to what the state is contemplating. The districts in the California Office to Reform Education, or CORE, also have developed a school quality index that takes a more sophisticated look at students’ scores.

State board President Michael Kirst and state Superintendent of Public Instruction Tom Torlakson also wrote King last summer asking the federal government to abandon percent-proficient reporting, but at the September state board meeting, Deputy State Superintendent Keric Ashley said the department had lacked the staff time and budget to investigate alternatives. The department had proposed using the percent-proficient method in the first accountability report cards and switching methods next year.

The state board has pushed up the timetable to its next meeting. Ashley said Wednesday that staff, working with the nonprofit Learning Policy Institute and research agency WestEd, will return in eight weeks with recommendations for a different approach.

The state board is under pressure to get the accountability report cards out soon. The new version of the Local Control and Accountability Plan that the board approved Wednesday requires districts to use data from the report cards in setting their goals and spending priorities, starting next year.

But Kirst reiterated at the meeting that a better way of analyzing test results is important. “We want to get it right from the beginning, instead of using an outmoded approach,” he said.

If the board doesn’t settle on an alternative in January, then, under state law, it may have to wait until 2018-19 to make the switch, said David Sapp, deputy policy director and assistant legal counsel for the board.

Comments (1)


  1. Doug McRae, 2 years ago

    This is much ado about not much.

    First, the complaint that the percent-met-and-above metric causes ill-advised emphasis on teaching the “bubble” kids scoring just below the cut is a ghost that doesn’t exist. Accountability uses of percent metrics give credit for advancing kids into the next highest achievement category for ALL performance categories, not just the percent-met category. The API gave more credit for advancing students from Far Below to Below than for Below to Proficient, and likewise for Proficient to Advanced: a progressive weighting that put more weight on the low-achieving students. The new 5×5 color grids now being used for CA accountability give equal credit for increases in all achievement categories. The “bubble kid” criticism is a figment of the collective imagination of folks in the trenches, not based on the facts of how the percent metrics from tests are actually used.
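A sketch of the progressive weighting described above, in which moving a student up any achievement band earns credit and the lower bands carry more weight. The band names and weights here are illustrative assumptions, not the actual API formula.

```python
# Sketch of a progressively weighted accountability index in the
# spirit of the old API. Band weights are illustrative assumptions,
# not the actual API formula.

WEIGHTS = {
    "Far Below Basic": 200,
    "Below Basic": 500,
    "Basic": 700,
    "Proficient": 875,
    "Advanced": 1000,
}

def weighted_index(band_counts):
    """Average per-student weight across achievement bands."""
    total = sum(band_counts.values())
    return sum(WEIGHTS[b] * n for b, n in band_counts.items()) / total

# Moving one of ten students from Far Below Basic to Below Basic...
before = {"Far Below Basic": 10, "Below Basic": 0, "Basic": 0,
          "Proficient": 0, "Advanced": 0}
after = {"Far Below Basic": 9, "Below Basic": 1, "Basic": 0,
         "Proficient": 0, "Advanced": 0}
low_gain = weighted_index(after) - weighted_index(before)

# ...raises the index more than moving one from Basic to Proficient.
before2 = {"Far Below Basic": 0, "Below Basic": 0, "Basic": 10,
           "Proficient": 0, "Advanced": 0}
after2 = {"Far Below Basic": 0, "Below Basic": 0, "Basic": 9,
          "Proficient": 1, "Advanced": 0}
mid_gain = weighted_index(after2) - weighted_index(before2)

print(low_gain, mid_gain)  # 30.0 17.5
```

Under this weighting, movement anywhere in the distribution changes the index, so schools have no incentive to focus only on students just below a single cut point.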

    Second, there have always been pros and cons for the differing metrics used to analyze and report K-12 test scores. There has been continual debate over how to use metrics such as grade equivalent scores, percentile scores, stanines, normal curve equivalents, and scale scores over the past 50 years. So, the metric “debate” is not new; the percent metrics from standards-based tests are simply the latest widely used metric to join the fray. Over the last 45 years that I have been involved, a relatively common answer folks have arrived at is that scale scores are in most cases the best metric for computation and analysis purposes, for example for growth model analysis work. But scale scores are a miserable metric for reporting purposes; nobody, not even the wisest, most experienced psychometric gurus, advises the use of scale scores for reporting to the K-12 trenches or to policymakers or the media or the public. So, the common solution for the past 45 years has been to use scale scores for computing and analysis work, but to convert those results to friendlier metrics for reporting purposes, which for the current standards-based tests is a percent metric. The percent-met-and-above metric is the easiest percent metric for reporting and communication of CA’s statewide test results; reporting percents for all four performance categories is less friendly.
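The division of labor described above, analyzing with scale scores but converting to a friendlier metric for reporting, can be sketched as follows. The cut scores are hypothetical, not the actual Smarter Balanced cuts.

```python
# Sketch of the common practice the comment describes: analyze with
# scale scores, then convert to a percent metric for reporting.
# Cut scores below are hypothetical, not actual Smarter Balanced cuts.
from bisect import bisect_right

CUTS = [2400, 2500, 2600]  # boundaries between the four performance levels
LEVELS = ["Standard Not Met", "Standard Nearly Met",
          "Standard Met", "Standard Exceeded"]

def to_level(scale_score):
    """Map a raw scale score to a reportable performance level."""
    return LEVELS[bisect_right(CUTS, scale_score)]

scores = [2350, 2450, 2520, 2610]

# Analysis (e.g., growth modeling) happens on the scale scores...
mean_score = sum(scores) / len(scores)

# ...but public reporting collapses them to "percent met and above".
pct_met = 100 * sum(s >= CUTS[1] for s in scores) / len(scores)

print(mean_score, pct_met)  # 2482.5 50.0
```

The scale scores carry the precision the analysts need, while the single percent figure is what reaches parents, policymakers, and the media.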

    Finally, it should be said that even explaining this relatively simple solution to the “better way to measure test scores” so-called dilemma takes effort to combat misinterpretations in the K-12 trenches as well as in policymaker circles, media circles, and the public. Combating and reversing the misinterpretation that, under “percent met and above,” the easiest way to move the needle is to focus on the bubble kids won’t happen without a concerted effort from those in charge of reporting and communicating K-12 test scores. But, from a system design perspective, the solution to the “better way” dilemma doesn’t take new rocket science. The solution is available by taking a look in the rear-view mirror at past practice.