Charlotte Danielson, the doyenne of teacher evaluations, says that when schools use her highly-regarded rubric to gauge teacher effectiveness, the label of Highly Effective is “a place you visit” while the label of Effective “is where most teachers live.”
Not in New Jersey. Here, one in three teachers (33.8%) resides in Highly Effective Land, at least according to the just-released Educator Evaluation Implementation Report, the second iteration since the passage of the state’s 2012 teacher tenure reform law. In fact, 98.6 percent of teachers received ratings of Effective or Highly Effective, a 1.6 percentage-point increase from last year.
That’s a feature, not a bug. Just as in New York City, where fewer than 1% of teachers earned ineffective ratings because evaluations are almost entirely subjective and student outcomes play a minimal role, and just as in Connecticut, where Superior Court Judge Thomas Moukawsher called the state’s current teacher evaluation system “little more than cotton candy in a rainstorm,” NJ’s much-vaunted teacher evaluation reform system, as currently implemented, is just so much fluff. The bipartisan legislation promised realistic differentiation of teacher quality, in contrast to the former practice under which seventeen teachers among a cadre of over 100,000 were fired for incompetence over the course of a decade. But it seems we’re right back where we started.
This may change. Right after Labor Day (terrible timing, to be sure), the New Jersey Department of Education released new regulations to districts that raise the weight of student academic growth data in teacher evaluations from ten percent back to thirty percent, which was the original plan. The announcement has aroused strident protests from school boards, superintendents, NJEA leaders, and other stakeholders.
The widespread resistance raises several questions.
- Were the vast efforts expended by school districts, the Legislature, and the Department of Education to implement the 2012 teacher tenure law a waste of time and money, given that there’s little difference between evaluative outcomes under the old system (completely subjective) and the new system (mostly subjective but salted with a meager dash of data)?
- Is the lack of differentiation among teachers in the most recent Implementation Report a result of the DOE’s concession — after pressure from NJEA and intervention by Legislative leaders — to lower the incorporation of student growth data measured by standardized tests from 30%, per initial regulations, to 10%? And did the DOE decide to jump back to 30% because the results are, at best, embarrassingly silly? (Find another profession where 98.3% of practitioners are uniformly good or great.)
- Given the backlash, will the DOE cave in again and leave Jersey mired in an evaluative system that lacks the professional accountability promised by the 2012 reform law?
- Would there be less pushback if the DOE had given school districts sufficient notice, i.e., not after Labor Day, when plans were already set for 10%? Or does the current educational zeitgeist render meaningful teacher evaluations a pipe dream, reducing a proud bipartisan consensus among educators, NJEA leaders, legislators, and school districts to a flash in the pan?