Source: EdSource
Students at Redwood Heights Elementary School in Oakland taking computer-based tests.

With less than two months of instruction time left before summer vacation for most California schools, roughly half of the 3.2 million students expected to take the first online tests aligned with the Common Core State Standards have begun to do so, the California Department of Education reported Monday.

“From what we understand, things are going well,” said department spokeswoman Pam Slater. “We haven’t had a lot of reports of computer malfunctions and we’re happy with results so far.”

The computer-based tests, known as the California Assessment of Student Performance and Progress, are replacing the multiple-choice, pencil-and-paper California Standards Tests that students had been taking since 1998.

School districts began administering the tests on March 10, with testing windows varying widely among districts, depending on their instructional calendars. Most districts will complete testing by mid-June, although a small number of California schools on year-round calendars will be giving the tests up until Aug. 31.

The assessments, in math and English Language Arts, are being given to students in grades 3 through 8 and 11. The test developer, the Smarter Balanced Assessment Consortium, has estimated the tests will take students between seven and eight-and-a-half hours to complete, but it is up to the schools to decide precisely how to schedule the assessments. Slater has said that in most cases students are taking the tests over several days, in blocks of time as short as 30 minutes to an hour.

Of the more than 1.6 million students who have embarked on the tests to date, 573,299 have so far completed the tests in English language arts and literacy, and 366,794 have finished the tests in math.

“We think this is a really big week for the testing, and we’ll be keeping a close eye on the system,” said Cindy Kazanis, the state’s director for educational data management.

At the peak so far, 287,778 students were online at the same time, with no interruption in the state system monitoring results, Kazanis said. She expected that number to increase this week, but said she was confident that districts will avoid major problems.

“We’re not getting any panicked calls that this isn’t working,” she said.


Comments (12)

  1. Don 9 years ago

    In the Vergara trial, test data were used to quantify achievement expressed as increased or decreased instructional time, i.e., Johnny received the equivalent of two weeks more instructional time due to his having a higher-quality teacher/better test results. Despite the scientific shortcomings of parsing out the correlations, we can assume that a better teacher is likely, to some degree, to increase learning and, therefore, test results (also to some degree). So why do we deliver these tests over such a wide time period of at least three months (from the earliest starters to the latest), when late starters, whose students have more time to learn, are likely to do comparatively better?

    Replies

    • navigio 9 years ago

      Not all districts start the year at the same time. They also have differing holiday schedules. Districts know the tradeoffs involved, and I’m sure they schedule their tests as appropriate for their curriculum.
      Btw, interesting that you recognize the inherent flaws in the methods used to measure teachers, including this supposed ‘normal’ first few years of a new standard or a new test. I wonder whether those kinds of fluctuations were accounted for by those ‘scientists.’

    • Doug McRae 9 years ago

      Don — The 12-week window for CAASPP test administration for grades 3-8 is not good educational measurement policy. But when the regulations were written and approved by the State Board, the rationale for a 12-week window was that some districts had limited technology capability that required kids to share computers and/or spread out bandwidth use, and thus needed a wider window than good testing policy would dictate. In effect, the push to get to technology-based assessments trumped good testing practice. As you note, the wide testing window also potentially creates differential instruction time, thus interfering with the usual interpretation of results . . . .

    • Don 9 years ago

      Doug, thanks for the reply. It seems that an awful lot of decisions, including this one on the administration timeline, trumped good educational practice in an effort to push policy through quickly.

      Correction to my previous comment: Despite the scientific shortcomings of parsing out the correlations, we can assume that LONGER INSTRUCTIONAL DELIVERY TIME is likely to increase learning and, therefore, test results.

  2. Katherine Ellison 9 years ago

    I just triple-checked with the CDE and they stand by their numbers, i.e. 1.6 million students, not tests.

    Replies

    • Doug McRae 9 years ago

      Well, if CDE insists 1.6 million kids have started at least one test, then I’ve got no idea how they arrived at the number. Many, many kids have already started both the ELA and Math tests [many having finished one area before starting the other, some taking both tests at the same time via middle or high school scheduling systems (ELA during their English period, Math during their Math period)]. The 1.6 million student number does not comport at all with the number of tests completed, nor with the progression of status numbers over time included in CDE pressers over the month of April. Can you get the CDE media office to supply an explanation for how the numbers are derived, what constitutes “starting” a test and “completing” a test, and provide data on the total number of tests that have been or are now being taken? That is the only way to explain the “unduplicated student” presser numbers to anyone interested in following the action . . . .

  3. Doug McRae 9 years ago

    The lead sentence is misleading — the process data reported by the CDE are # tests administered, and since each student takes 2 tests (ELA & Math), the fact that 1.6 million tests have been started does not translate into roughly half of the 3.2 million students having started testing. Some unknown number of the 1.6 million tests started were by students starting their 2nd test; probably in the 400K-500K range based on the test completion data later in the post. An unduplicated student count, therefore, is likely in the 1.1 million to 1.2 million student range starting at least one test by the end of last week, or around 35% of the total number of students to be tested.

    Less misleading data are based on the # of tests completed — that 573K ELA tests have been completed for 3.2M students comes to 18 percent; that 367K Math tests have been completed for 3.2M students comes to 11.5 percent; that an aggregate of 940K tests have been completed of the 6.4M tests anticipated comes to 15 percent.
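    [Editor's note: the commenter's arithmetic can be reproduced with a short sketch. The 400K-500K second-test range is the commenter's own estimate, not an official CDE figure, so the unduplicated student count below is illustrative only.]

```python
# Figures reported in the article (students expected to test, tests
# started, and tests completed as of the CDE update).
TOTAL_STUDENTS = 3_200_000
TOTAL_TESTS = 2 * TOTAL_STUDENTS    # each student takes ELA + Math
tests_started = 1_600_000
ela_completed = 573_299
math_completed = 366_794

# Students who have started a 2nd test are double-counted in
# tests_started; subtracting the commenter's estimated 400K-500K
# range yields an unduplicated student count of 1.1M-1.2M.
low = tests_started - 500_000       # 1,100,000
high = tests_started - 400_000      # 1,200,000
print(f"unduplicated students: {low:,} to {high:,}")
print(f"share of students started: {low / TOTAL_STUDENTS:.1%} "
      f"to {high / TOTAL_STUDENTS:.1%}")                      # ~34-38%

# Completion rates against students (per subject) and against the
# total anticipated test count.
print(f"ELA completed:  {ela_completed / TOTAL_STUDENTS:.1%}")   # 17.9%
print(f"Math completed: {math_completed / TOTAL_STUDENTS:.1%}")  # 11.5%
total_done = ela_completed + math_completed
print(f"all tests:      {total_done / TOTAL_TESTS:.1%}")         # 14.7%
```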

    I believe CDE has # unduplicated students initiating test data from the Smarter Balanced real-time reporting system, but chooses to put easily misinterpreted data in their press releases. EdSource reporters and editors ought not disseminate these misleading data without fact-checking for more balanced reporting.

    Also, CDE and Smarter Balanced should now have actual testing time information from early testing schools and districts, and these data can replace the testing-time estimates made before the final tests were constructed. Do we have early information on actual testing times? These should include not only actual time logged on for test administration purposes for each student, but also the non-computer time required for the classroom exercise portion of performance tasks. Indications from the field tests last year were that the performance tasks portion of the tests took considerably longer than anticipated by Smarter Balanced estimates.

    Finally, the status data available right now are only process data, roughly akin to # pencils used and # sheets of paper consumed for prior paper/pencil statewide tests. The real reason for statewide tests is to obtain quality standardized achievement scores for students. Actual test scores for students completing Smarter Balanced tests in March (before spring break for most districts) should now be coming back to schools and districts, within 2 to 4 weeks of test completion per Smarter Balanced indications. These data are individual student data, which of course are held confidential, but districts can and do compute summary data as soon as they can and districts are free to release those summary data if they so choose. Are there any schools or districts out there willing to share this type of more meaningful information on the spring 2015 Smarter Balanced statewide assessments? At the end of the day, it will be this more meaningful information that will determine the worth of the achievement data generated by Smarter Balanced tests.

    Replies

    • Katherine Ellison 9 years ago

      In fact, the CDE reports that 1.6 million *students* have begun the tests, which, as we reported, is indeed roughly half of the 3.2 million students expected to take them. The number of tests completed is fewer than that, because, as explained in the story, tests stretch out over time and not all of the 1.6 million students have finished their assessments.

      • navigio 9 years ago

        My interpretation is that Doug meant that the state superintendent used incorrect wording in the press release when he said students and instead should have said tests. I guess someone should let him know.

        • Doug McRae 9 years ago

          Katherine — Navigio is correct. If student Albert has begun work on both ELA and Math, then student Albert is counted twice in the count of 1.6 million tests initiated. And there are 400-500,000 Alberts out there. Thus, to label the 1.6 million tests started as “students” is misleading. In fact, far fewer than 1.6 million students have begun testing as of the end of last week. The initial CDE press releases a year ago had the same error, and CDE changed their language mid-stream for the 2014 SB field test, but the language currently being used is misleading. Initiating 1.6 million “tests” is not the same as 1.6 million students initiating at least one test . . . . .

          • Molly Moloney 9 years ago

            How do the performance tasks come into those figures, Doug? Students have to take two separate ELA tests: the summative adaptive and the performance tasks; the same is then true for math. Are those being counted together or separately in these figures? (Just curious!)

            • Doug McRae 9 years ago

              Molly — At this point, I’ve got no idea how CDE derived the numbers in this post [particularly the 1.6 million students number] — see my reply to Katherine above. Last year, CDE counted the performance tasks as a separate test, since that was how the sampling design for the field test was set up. This year, my assumption is that kids are taking two tests: one ELA test that includes both the computer-adaptive machine-scored component and all other human-scored items (including performance tasks), and a second Math test that includes the same two components. But until CDE explains how they derived the numbers they are reporting, we all just have to guess how they were derived . . . .