Rick Hess at Edweek considers the investment in the Gates Foundation’s MET analysis, particularly its finding that using student academic growth gauged by standardized tests allowed researchers to reliably predict teacher quality:
[T]he hundreds of millions spent on MET were funny in that, on the one hand, this was a giant exercise in trying to demonstrate that good teachers can be identified and that they make a difference for kids. I mean, I always kind of assumed this (otherwise, it’s not real clear why we’ve been worrying about “teacher quality.”) So it was moderately amusing to see the MET researchers make a big deal out of the fact that random classroom assignment finally allowed them to say that high value-added teachers “cause” student test scores to increase. I’m like, “Not so surprising.” At the same time, this is too glib. Documenting the obvious can have an enormous impact on public understanding and policy, especially when questions are fraught. Moreover, I’ve long wondered about the stability of teacher value-added results over time, so I was struck that the third-year randomization showed earlier value-added scores to be fairly more predictive than one might’ve thought.
Nothing funny here, Rick. These results will be used to cram down evaluation rubrics that will put people's livelihoods at stake.
Better take a close look at the methodology and conclusions.