William Slotnik

By now the consensus is clear: California needs a better, more systematic way of supporting and ensuring teacher effectiveness. Though the teacher evaluation bill, AB 5, collapsed again in August, there is wide agreement on the state’s responsibility to ensure that every student has an effective teacher. Moreover, good teachers welcome accountability, and they want and need support. As the 2010 Accomplished California Teachers report noted, every teacher wants to know “How am I doing?” and “How can I do better?”

Efforts here in California to structure a workable evaluation system have run into the same sticking points bedeviling states across the country. The design challenge is to ensure both accountability and support as anchors of high-quality evaluation. What are the right components? Which policy decisions belong at the state level and which should be determined locally? Should student growth be part of teacher evaluation and, if so, how should that growth be measured?

Joan McRobbie

With state policy still in limbo, many districts are taking up these questions on their own. Some 80 California districts and other entities filed an intent to apply for federal “District Race to the Top” funding. That application required spelling out plans for an educator evaluation system, one that indeed factors in student growth.

As districts grapple with system design, one big question cuts across all the others: How best to ensure teacher participation in shaping the system? This goes straight to the why of evaluation. Is this about rooting out bad teachers? Or is it about trying to help more teachers do a better job with more kids? That starting point will ultimately lift or doom the outcome. People enter the teaching profession from a sense of mission. Teachers want to succeed with kids. It’s hard to kill that intrinsic motivation, but evaluation approaches that stem from teacher bashing or assumptions of rampant incompetence can do it – with profound implications for students.

So what approaches are available that respect teachers’ professionalism and tap, rather than squelch, their motivation?

One is peer assistance and review (PAR), which has gained recent attention for this very reason. Less known in California, though complementary to PAR, is the use of Student Learning Objectives (SLOs).

SLOs emerged as an integral component of systemic reform in the Denver Public Schools more than a decade ago. In Denver’s high-stakes environment, research by our organization, the Community Training and Assistance Center, showed that elementary, middle, and high school students whose teachers had high-quality SLOs outperformed their peers on state and national standardized tests over a four-year period. Since then, both the Austin and Charlotte-Mecklenburg school districts have implemented and improved on SLOs. SLOs are now being implemented on a wide scale in numerous states and districts. They are a cornerstone of the new teacher and principal evaluation systems in New York State.

SLOs are exactly what they sound like – carefully planned learning goals that teachers set for each student for a given time period. They’re based on teachers’ analyses of past student performance data and evidence. SLOs involve front-end planning of the content and instructional strategies needed for students to reach learning goals as well as decisions about the measures to be used to gauge student progress.

Importantly, SLOs are collaborative: Whether developed by individuals or teams, they involve input and guidance from principals and colleagues. Each teacher has to be able to demonstrate knowledge of content and instruction as well as the ability to set learning targets, and does so with the principal’s input and guidance. At the end of the process, teachers provide the principal with evidence of students’ attainment of the targeted goals.

Imagine a math SLO for a 4th grade teacher, for example. The teacher first works with her grade-level team to review her class’s math achievement from last year as well as this year’s diagnostic data. Guided by the Common Core standards, their joint analysis initially determines that this class’s weakest areas are fractions and solving measurement-related problems. They note that within the Common Core, fractions are one of the key areas to master for future success, while measurement is less emphasized.

The teacher then does a more granular analysis of her students’ answers on specific assessment items. She finds several things: Students missed more items on fractions than on measurement; students understand the concepts of measurement, but usually miss the computational aspects; students usually miss all types of fraction items, especially those on equivalence and ordering.

Attention to data and results

Given those insights, the teacher decides that the standards related to fractions will become the learning content for this SLO. She makes this case to the principal, whose advice helps fine-tune her plan for each SLO element. To measure student progress, she will use findings from the diagnostic assessment as her baseline, and she chooses the post-assessment she deems most focused on fraction skills. She will then set specific learning targets aligned with that post-assessment and identify instructional strategies – using manipulatives and effective questioning, for example – that will help students learn the content and meet the targets.

At the end of the semester or year, assessment results demonstrate the degree to which the students are learning the standards.

When SLOs are part of an evaluation system, district-wide parameters are set. Districts need to make a number of decisions to inform teachers’ and principals’ efforts. For example, which assessments can be used for SLOs: State tests? District assessments? Portfolios? What are acceptable definitions of growth – for example, improvement of 20 percent from pre- to post-assessment? How will the outcomes of SLOs be used in teacher evaluation – as an additional component considered along with classroom observations? How much will SLOs count – 20 percent, 40 percent?

SLOs are gaining momentum in states and districts for several reasons. For starters, wherever test scores are included in evaluations as evidence of student growth, as is required in states with NCLB waivers, SLOs address the problem of non-tested subjects and grades, i.e., those not tested by the state. SLOs apply to all teachers, since they involve using valid assessments (including available state tests) along with selected other evidence.

But fundamentally, SLOs provide a methodology that links accountability and support. They work in tandem with classroom observations to yield a more accurate understanding of learning and of teachers’ contributions to that learning. They enable teachers, in collaboration with principals, to assess their strengths and weaknesses in areas such as analyzing data or linking instruction to standards. Those insights then allow for specifically tailored professional development to meet those needs, improve instruction, and, ultimately, bolster learning.

In short, SLOs tie teacher practice to student learning and do so in ways that honor and motivate teacher professionalism. They conceptualize evaluation as something done with teachers, not to teachers. They also force the entire school district to focus on teacher support and provide a precise road map for directing that support.

With broader use nationwide, the SLO process continues to be strengthened. For example, it’s clear that effective implementation includes the use of validated rubrics, both to ensure high, objective standards of rigor and to measure students’ attainment of learning goals objectively. Teacher leaders and principals participate in developing and approving the rubrics, which then must be clearly communicated to all.

California can benefit from and contribute to this growing body of knowledge. Districts striving to build new evaluation systems need to find ways to put priority on student learning while also acknowledging the complexity of teaching. Top-down approaches or simplistic reliance on standardized test scores won’t accomplish these goals. An SLO component offers both thoughtful measurement and an improvement strategy. That combination can break through tough standoffs.

William J. Slotnik is the founder and executive director of the Community Training and Assistance Center, a minority-controlled, not-for-profit organization that supports the systemic reform of urban school districts. Based in Boston, it has a field office in California. Joan McRobbie is senior associate, national school reform, at CTAC.



  1. Planning for long-term financial sustainability should begin at the start of the new teacher evaluation system initiative and continue throughout subsequent phases of implementation. It must anticipate three types of costs: costs of transitioning to the new evaluation system (e.g., building new data systems and developing new assessments), costs of maintaining the new evaluation system (e.g., providing induction for teachers), and costs of doing business differently (e.g., targeting professional development resources). Some of these are recurring costs; others are one-time or incremental costs. The first area of costs involves new funding; the second and third involve reallocating existing priorities and resources.

  2. Navigio says:

    And not to belabor the point, but many of these things are supposed to be happening already.

  3. Gary Ravani says:

    Being from out of state, the authors are likely unaware of the newly adopted “Greatness by Design” document, which has a roadmap to the development of a comprehensive, research-based teacher evaluation system.

    1. Joan McRobbie says:

      Hello Gary–Actually, I am a Californian and we’re well aware of the excellent “Greatness by Design” report. What we’re saying is that SLOs are one approach school districts can use to systematically incorporate the key features of high-quality evaluation that “Greatness by Design” so well delineates. In a comprehensive evaluation system, SLOs are generally used in tandem with classroom observations. They are also complementary to approaches such as PAR. –Joan

  4. john mockler says:

    Can anybody cost out these bold proposals to see how many dollars will be needed to make this happen?