Quote of the Day

Rick Hess at Edweek considers the investment in the Gates Foundation’s Measures of Effective Teaching (MET) study, particularly its finding that student academic growth, as gauged by standardized tests, allowed researchers to reliably predict teacher quality:

[T]he hundreds of millions spent on MET were funny in that, on the one hand, this was a giant exercise in trying to demonstrate that good teachers can be identified and that they make a difference for kids. I mean, I always kind of assumed this (otherwise, it’s not real clear why we’ve been worrying about “teacher quality.”) So it was moderately amusing to see the MET researchers make a big deal out of the fact that random classroom assignment finally allowed them to say that high value-added teachers “cause” student test scores to increase. I’m like, “Not so surprising.” At the same time, this is too glib. Documenting the obvious can have an enormous impact on public understanding and policy, especially when questions are fraught. Moreover, I’ve long wondered about the stability of teacher value-added results over time, so I was struck that the third-year randomization showed earlier value-added scores to be fairly more predictive than one might’ve thought.

1 Comment

  • kallikak, January 18, 2013 @ 8:44 pm

    Nothing funny here, Rick. These results will be used to cram down evaluation rubrics that will put people’s livelihoods at stake.

    Better take a close look at the methodology and conclusions.
