Rick Hess at Edweek considers whether the investment in the Gates Foundation’s MET (Measures of Effective Teaching) study paid off, particularly its finding that student academic growth, as gauged by standardized tests, allowed researchers to reliably predict teacher quality:
[T]he hundreds of millions spent on MET were funny in that, on the one hand, this was a giant exercise in trying to demonstrate that good teachers can be identified and that they make a difference for kids. I mean, I always kind of assumed this (otherwise, it’s not real clear why we’ve been worrying about “teacher quality”). So it was moderately amusing to see the MET researchers make a big deal out of the fact that random classroom assignment finally allowed them to say that high value-added teachers “cause” student test scores to increase. I’m like, “Not so surprising.” At the same time, this is too glib. Documenting the obvious can have an enormous impact on public understanding and policy, especially when questions are fraught. Moreover, I’ve long wondered about the stability of teacher value-added results over time, so I was struck that the third-year randomization showed earlier value-added scores to be far more predictive than one might’ve thought.
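The stability question Hess raises is essentially statistical: a teacher’s measured value-added each year is a noisy estimate of some persistent underlying effect, and the year-to-year correlation of those estimates tells you how predictive an early score is. Below is a minimal simulation sketch of that idea, not the MET methodology itself; every parameter (number of teachers, class size, effect and noise spreads) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 200
students_per_class = 25

# Hypothetical setup: each teacher has a persistent "true" effect on
# student test-score growth (in test standard-deviation units).
true_effect = rng.normal(0.0, 0.15, n_teachers)

# Classroom-average noise shrinks with class size (central limit theorem).
noise_sd = 0.6 / np.sqrt(students_per_class)

# Estimated value-added in two different years: same underlying teacher
# effect, but a fresh draw of student-level noise each year.
va_year1 = true_effect + rng.normal(0.0, noise_sd, n_teachers)
va_year2 = true_effect + rng.normal(0.0, noise_sd, n_teachers)

# Year-to-year stability: correlation between the two noisy estimates.
stability = np.corrcoef(va_year1, va_year2)[0, 1]
print(f"Year-to-year correlation of value-added estimates: {stability:.2f}")
```

Under these assumed parameters the correlation comes out well above zero, which illustrates Hess’s point: even a noisy earlier score can be usefully predictive when the teacher effect it tracks is stable over time.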