Overusing tests for special ed students inflates API scores

October 11, 2012
Doug McRae

California’s 2012 Academic Performance Index (API) results, released today, in general show small but steady gains, similar to those of the last four years. But a deeper look at the results shows not only inflation contributing to the gains but also a substantial policy shift toward lower expectations for special education students in California.

The API trend data inflation is due to the introduction of a new test for special education students over the past five years: the California Modified Assessments, or CMAs. These tests were introduced to give selected students greater “access” to the statewide testing system by making the tests easier than the regular California Standards Tests (CSTs) given to all other students. When the CMAs were approved in 2007, the plan was that roughly 2 percent of total enrollment (or about 20 percent of special education enrollment) would qualify to take CMAs instead of CSTs. A major criterion for taking a CMA rather than a CST was that a special education student had to score Far Below Basic or Below Basic on a CST the previous year; the decision whether a student should take a CMA or a CST was left to each student’s Individualized Education Program (IEP) team.
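For readers who want the screening rule stated concretely, here is a minimal sketch in code. The function name and simplified inputs are my own illustration, not anything drawn from state regulations, and the real decision involves factors an IEP team weighs that are not modeled here.

```python
# Simplified sketch of the CMA screening criterion described above.
# Illustrative only: the actual decision rests with each student's IEP
# team and weighs factors beyond the prior-year score modeled here.

ELIGIBLE_PRIOR_LEVELS = {"far below basic", "below basic"}

def may_consider_cma(has_iep: bool, prior_cst_level: str) -> bool:
    """A student may be considered for the CMA only if he or she has an
    IEP and scored Far Below Basic or Below Basic on last year's CST."""
    return has_iep and prior_cst_level.lower() in ELIGIBLE_PRIOR_LEVELS

print(may_consider_cma(True, "Below Basic"))   # True: IEP team decides
print(may_consider_cma(True, "Basic"))         # False: stays on the CST
print(may_consider_cma(False, "Below Basic"))  # False: not in special ed
```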

Over time, however, the implementation of the CMA program has resulted in almost 5 percent of total enrollment (or close to 50 percent of special education enrollment) taking the easier CMAs. In addition, CMA scores count the same as CST scores for API calculations, even though the state Department of Education acknowledges that the CMA is an easier test. The result has been to inflate reporting of API trend data over the past few years, and more importantly to cause a subtle but substantial lowering of academic standards that we expect for our students with disabilities in California.

Alice Parker, a Sacramento-based national consultant on special education issues, comments: “California has more than 600,000 students identified for one of 13 disability categories. Of that number, more than 70 percent are students in disability categories who have average or above average intellectual capabilities, such as a specific learning disability, an emotional disability, or an orthopedic impairment. These students should be held to high academic standards and should be tested as any other student with average or above average intellectual ability. To assign these students to easier tests or to opt these students out of an accountability system based on high expectations does major harm to each and every student capable of achieving the higher standards.” Indeed, students who receive “higher” scores due to easier tests will likely not receive the appropriate instruction needed to maximize their learning capabilities.

To dig deeper into this, let’s first look at the data from our statewide testing system for the time period since CMAs were initiated, and then talk about the policy shift in standards for California special education students.

The data

Table 1: Students taking CMA by year & grade span

First, we can look at the CMA participation rates from 2008 through 2012 (see Tables 1, 2, and 3). These data show CMA participation climbing steadily over the five years, well past the original 2 percent target, to almost 5 percent of total enrollment.

Table 2: 2012 CST, CMA, and CAPA participation rates as a percentage of total enrollment

Second, we can look at how the CMA program has affected the reporting of statewide assessment program results (see Table 4). These results involve the percentages of students scoring Proficient and Above on the CSTs. When more than 200,000 special education students with low CST scores are removed from the calculations, the reported percentages increase artificially.

This factor is easily understood as simply taking low-scoring students out of the calculations, and – bingo! – the averages for the remaining students go up.
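The mechanics are easy to reproduce with made-up numbers. The counts below are purely illustrative (they are not the actual statewide figures), but they show how removing low scorers manufactures an apparent gain:

```python
# Illustrative counts only, not actual California data.
total_tested = 1_000_000   # students taking the CST in the base year
proficient = 450_000       # of whom this many score Proficient or above

moved_to_cma = 40_000      # low scorers later shifted to the easier CMA,
                           # essentially none of them Proficient on the CST

before = proficient / total_tested
after = proficient / (total_tested - moved_to_cma)

print(f"Percent Proficient, all students tested: {before:.1%}")  # 45.0%
print(f"Percent Proficient, low scorers removed: {after:.1%}")   # 46.9%
# The apparent two-point gain reflects who was tested, not what was learned.
```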

Failing to identify this contributing cause of the rising results is disingenuous. The data show that reported statewide assessment program results are inflated by about 25 percent over time.

Table 3: 2012 CST, CMA, and CAPA participation rates as a percentage of enrollment of students with disabilities

Third, we can look at how the CMA program has affected the reporting of API results (see Table 5). For this analysis, we only have data from the elementary and middle school grades; while CMAs were finalized for the high school grades in 2011, their use at the high school level has not yet fully matured. These data show that the gains in API scores reported over the past five years have been inflated by 15 points, or 39 percent, for elementary schools and by 12 points, or 27 percent, for middle schools.
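As a back-of-the-envelope check on those figures (the reported five-year gains of roughly 38 and 44 points are back-calculated from the percentages above, not read directly from Table 5):

```python
# Back-of-the-envelope check of the inflation percentages quoted above.
# The reported-gain values are implied by the article's figures, not
# taken directly from Table 5.

def inflation_share(inflated_points: float, reported_gain: float) -> float:
    """Fraction of a reported API gain attributable to CMA inflation."""
    return inflated_points / reported_gain

print(f"Elementary: {inflation_share(15, 38):.0%}")     # ~39%
print(f"Middle school: {inflation_share(12, 44):.0%}")  # ~27%
```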

Again, the reporting of inflated API trend data is disingenuous. Also, by giving CMA and CST scores the same weight in API calculations, the accountability system gives districts an incentive to administer more of the easier CMAs to artificially boost API results. This incentive may explain, at least in part, the overuse of CMAs at the district and school level.

Finally, it is interesting to look at CMA participation rates in local districts. (Click here for a breakdown of the CMA participation rates of 412 districts in 19 counties.)

Table 4: CST reported gains vs. CST gains adjusted for inflation

These data show some extreme cases of very high use of CMAs by some large local districts, as well as cases of moderate use by other large districts.

Table 5: CMA inflation effect on API scores in 2012. This table compares gains in statewide API scores as reported by the state superintendent with APIs re-calculated to adjust for the introduction of CMA scores, and shows the API inflation attributable to the introduction of CMAs over the past five years.

These local district CMA participation rates provide compelling evidence that something is haywire. The target is to have 20 percent of special education students take CMAs; for local districts to test more than 75 percent indicates that something other than a simple judgment of what is best for each student is at work. My suspicion is that two factors contribute to these data: First, when given a choice between an easier test and a more rigorous one, human nature gravitates toward the easier test; second, when given an opportunity to boost a district accountability score, adult administrators find a way to tilt individual IEP team decisions in that direction.

Policy shift for students with disabilities

How did California quietly lower expectations for half the special education students in the state? And what can California do to address this “under the surface” change in our expectations for special education students?

When California’s current statewide assessment system was initially designed, little attention was paid to separate tests or provisions for special education students. The first discussions for special education students involved defining and implementing various accommodations (alterations in testing format that do not affect the validity of scores) and modifications (alterations in testing format that do affect the validity of scores, such as reading an English language arts test aloud to a student). Then, in the early 2000s, experts agreed that it would be inappropriate to give the more rigorous CSTs to about 10 percent of special education enrollment, or 1 percent of total enrollment – those students with severe cognitive challenges. As a result, a so-called 1 percent test, the California Alternate Performance Assessment (CAPA), was developed and targeted at these students. During this period, policy discussions clearly supported the notion that special education students needed to meet the same academic standards as non-special education students in order to maximize their achievement.

When the federal government changed its assessment policy for students with disabilities in 2006, it allowed for so-called 2 percent tests, which measured the same academic content standards as the mainstream standards-based tests required for No Child Left Behind (NCLB) but had modified achievement standards. This is technical jargon for lower actual achievement levels – in effect, easier tests. The feds indicated these tests should be targeted at only 2 percent of the total enrollment in a state: the next-lowest 2 percent above the 1 percent of students with severe cognitive disabilities targeted for CAPA. About half of the states then set out to design such modified tests for selected special education students. California was among them, and it took from 2007 to 2011 for the CMAs to be developed and phased in. Unfortunately, California used the same performance category labels for the new CMAs as for the more rigorous CSTs, and counted CMA scores the same as CST scores in API calculations. These assessment and accountability decisions have resulted in overuse of CMAs as well as inflated API results.

Other states have handled the introduction of tests for students with disabilities in a better fashion. Massachusetts, for example, uses different performance-level labels for its different tests. Different performance category labels would signal to personnel in districts and schools, as well as to students and parents, that CMA scores are different from CST scores.

Tennessee uses the same labels for modified assessments as for mainstream assessments, but uses a different scale score system for the two tests. This is the same strategy California used when CAPAs were introduced in the early 2000s – CAPA has a two-digit scale score system, while our CSTs use a three-digit scale score system. The scale scores appear on individual student reports, alerting parents, students, teachers, and administrators that the two tests are indeed different.

If California wants to encourage better use of CMAs in schools, then clearly we should change the performance level labels, change the scale score metrics, or perhaps both, in order to better communicate the meaning of the results of these assessments. It would also help IEP teams immensely if they had information on the “impact” of assigning a student to an easier CMA. For instance, if an IEP team knew that assigning a CMA meant the student had, say, only a 20 percent chance of earning a high school diploma, while continuing to strive for the higher standard represented by a CST gave the student, say, a 70 or 80 percent chance, then IEP teams would be less likely to assign CMAs at the rates they do now. This information would simply be truth in advertising for CMAs.

If California wants to address the CMA inflation factor in the assessment and accountability results reported each year, it can be done relatively easily. For assessments, it is a matter of acknowledging the inflation in CST data that occurs when scores from lower-scoring students are removed from the base CST results being reported. For accountability, it is a matter of assigning lower API weights to CMA scores than to CST scores, an adjustment reflecting the fact that CMA scores represent lower achievement levels than counterpart CST scores.
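A minimal sketch of that re-weighting idea follows. The performance-level point values approximate the API’s familiar 200-to-1000 scale, and the 0.8 discount factor is a hypothetical figure of my own choosing, not a proposed policy number:

```python
# Sketch of down-weighting CMA scores in an API-style calculation.
# Point values approximate the API's 200-1000 performance-level scale;
# the 0.8 discount factor is purely hypothetical.

LEVEL_POINTS = {
    "far below basic": 200, "below basic": 500, "basic": 700,
    "proficient": 875, "advanced": 1000,
}

CMA_DISCOUNT = 0.8  # a CMA result earns 80% of the CST point value

def api(results):
    """results: a list of (test, performance_level) pairs for one school."""
    total = 0.0
    for test, level in results:
        points = LEVEL_POINTS[level]
        if test == "CMA":
            points *= CMA_DISCOUNT  # reflect the easier test
        total += points
    return total / len(results)

# 100 hypothetical students: 70 Basic and 20 Proficient on the CST,
# plus 10 Proficient on the CMA.
school = ([("CST", "basic")] * 70 + [("CST", "proficient")] * 20
          + [("CMA", "proficient")] * 10)
print(round(api(school)))  # 735, versus about 752 if CMA counted the same
```

Under such a scheme, moving a student from the CST to the easier CMA would no longer buy a school free API points.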

While critical of the implementation of CMAs over the past five years, I am not opposed to the CMA as a strategy to get more meaningful individual student test scores for selected students with disabilities. Rather, I am critical of the assessment and accountability practices that have allowed for inflated reporting of annual assessment and accountability results, and fostered gross overuse of CMAs by local districts. Appropriately implemented, the CMA strategy may well be better than the computer-adaptive tests now being proposed to replace CMAs.

In 2011, U.S. Secretary of Education Arne Duncan weighed in on the issue of modified or 2 percent tests for students with disabilities when he declared he would not support modified tests “that obscure an accurate portrait of America’s students with disabilities.” Rather, he said, “Students with disabilities should be judged with the same accountability system as everyone else.” With these statements, Duncan joined others opposing the “soft bigotry of low expectations” that silently plagues many otherwise well-intentioned education initiatives.

We should pay attention to viewpoints from those individuals most affected by our statewide policies for students with disabilities. A year ago, when CMA issues were discussed by the Advisory Commission on Special Education, student member Matthew Stacy listened to the discussion and made several powerful statements on behalf of special education students on the topic of statewide tests. “It is unfair not to hold students with disabilities to the same standards as students without disabilities,” he said, adding, “students with disabilities resent being held to lower standards.”

“All that is needed is to make sure students with disabilities have all the necessary accommodations and modifications specified in their IEPs and then hold those students to the same standards as all other students,” he said.

Sometimes it takes the wisdom of youth to cut to the chase and keep adults on target for good assessment and accountability system policies and practices.

Doug McRae is a retired educational measurement specialist living in Monterey. In his 40 years in the K-12 testing business, he has served as an educational testing company executive in charge of design and development of K-12 tests widely used across the US, as well as an adviser on the initial design and development of California’s STAR assessment system. He has a Ph.D. in Quantitative Psychology from the University of North Carolina, Chapel Hill.

