
Deja Vu for Michael Winerip

Michael Winerip, the New York Times education writer, strikes another blow for Luddites everywhere with today’s article on the use of “an automated reader developed by the Educational Testing Service, e-Rater, [which] can grade 16,000 essays in 20 seconds.” The “robo-reader” is hardly flawless, of course: as Winerip points out, it gives higher grades to longer essays on the SAT writing section, and it also awards points for longer words, longer sentences, and more complex sentence structure. Logic and facts go ungraded.

Writes Winerip: “The possibilities are limitless. If E-Rater edited newspapers, Roger Clemens could say, ‘Remember the Maine,’ Adele could say, ‘Give me liberty or give me death,’ Patrick Henry could sing ‘Someone Like You.’”

Message: computer applications like E-Rater render arbitrary and inaccurate judgments on students’ writing ability. We’ve left out the human factor! And, as Winerip notes, he’s got “a weakness for humans.”

He didn’t have such a weakness seven years ago, when he wrote an article, also in the New York Times, about the flawed way in which ETS was grading the (then) new writing section of the SAT. In that 2005 piece he profiled Dr. Les Perelman, a writing professor at MIT, who was critical of the way human graders rated student essays. Writes Winerip:

He [Dr. Perelman] was stunned by how complete the correlation was between length and score. “I have never found a quantifiable predictor in 25 years of grading that was anywhere near as strong as this one,” he said. “If you just graded them based on length without ever reading them, you’d be right over 90 percent of the time.” The shortest essays, typically 100 words, got the lowest grade of one. The longest, about 400 words, got the top grade of six. In between, there was virtually a direct match between length and grade.

Dr. Perelman also found that the teachers grading the essay test disregarded inaccurate facts and logic.

Sounds to me like it’s just plain hard to evaluate student writing fairly in an artificially short window of time, and that both computers and trained human graders show shortcomings. That’s a different conclusion from “technology is evil and we’re out to replace teachers with robots.” Winerip asks in today’s piece, “Is this the end? Are Robo-Readers destined to inherit the earth?” Dr. Perelman says no, but Winerip doesn’t sound as sure.

Laura Waters

Comments

  • OK, Winerip thinks robots AND humans do awful jobs of rating essays given the constraints of mass test grading. Is your point that Winerip is a Luddite, a jerk, or a hypocrite? Why do you write this stuff?

    Forget Winerip. What do YOU think this tells us about the subject at hand? What's important, Laura?

