
Subject: Writing Assessment.. Chron of Higher Educ
From: Norman Stahl <[log in to unmask]>
Reply-To: Open Forum for Learning Assistance Professionals <[log in to unmask]>
Date: Mon, 28 Apr 2014 10:48:47 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (65 lines)

Writing Instructor, Skeptical of Automated Grading, Pits Machine vs. Machine


M. Scott Brauer for The Chronicle

Les Perelman (left), with the help of students at MIT and Harvard, created the Babel Generator, 
a software program that generates meaningless essays to test the mettle of machine graders.

By Steve Kolowich
Cambridge, Mass.
Les Perelman, a former director of undergraduate writing at the Massachusetts Institute of Technology, sits in his wife’s office and reads aloud from his latest essay.
"Privateness has not been and undoubtedly never will be lauded, precarious, and decent," he reads. "Humankind will always subjugate privateness."
Not exactly E.B. White. Then again, Mr. Perelman wrote the essay in less than one second, using the Basic Automatic B.S. Essay Language Generator, or Babel, a new piece of weaponry in his continuing war on automated essay-grading software.
The Babel generator, which Mr. Perelman built with a team of students from MIT and Harvard University, can generate essays from scratch using as many as three keywords.
For this essay, Mr. Perelman has entered only one keyword: "privacy." With the click of a button, the program produced a string of bloated sentences that, though grammatically correct and structurally sound, have no coherent meaning. Not to humans, anyway. But Mr. Perelman is not trying to impress humans. He is trying to fool machines.
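For readers curious how a generator like this can produce fluent-sounding nonsense, here is a toy sketch: grammatical sentence templates stuffed with inflated vocabulary around a single keyword. This is a hypothetical illustration of the idea only, not Perelman's actual program; all the word lists and templates below are invented.

```python
import random

# Toy Babel-style generator: grammatical templates filled with inflated
# vocabulary yield sentences that parse cleanly but mean nothing.
ADJECTIVES = ["lauded", "precarious", "decent", "copious", "inauspicious"]
VERBS = ["subjugates", "epitomizes", "countenances", "promulgates"]
NOUNS = ["humankind", "civilization", "the neoteric establishment"]

TEMPLATES = [
    "{noun} will always {verb} {kw}.",
    "{kw} has not been and undoubtedly never will be {adj}.",
    "by its very nature, {kw} {verb} {noun}.",
]

def babel_sentence(keyword: str, rng: random.Random) -> str:
    # Pick a template and fill every slot with random "big" words.
    sentence = rng.choice(TEMPLATES).format(
        kw=keyword,
        adj=rng.choice(ADJECTIVES),
        verb=rng.choice(VERBS),
        noun=rng.choice(NOUNS),
    )
    return sentence[0].upper() + sentence[1:]

def babel_essay(keyword: str, sentences: int = 5, seed: int = 0) -> str:
    # A fixed seed makes the nonsense reproducible.
    rng = random.Random(seed)
    return " ".join(babel_sentence(keyword, rng) for _ in range(sentences))
```

Each sentence is structurally sound, which is exactly the property that surface-feature graders reward; coherence across sentences is never attempted.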
Software vs. Software
Critics of automated essay scoring are a small but lively band, and Mr. Perelman is perhaps the most theatrical. He has claimed to be able to guess, from across a room, the scores awarded to SAT essays, judging solely on the basis of length. (It’s a skill he happily demonstrated to a New York Times reporter in 2005.) In presentations, he likes to show how the Gettysburg Address would have scored poorly on the SAT writing test. (That test is graded by human readers, but Mr. Perelman says the rubric is so rigid, and time so short, that they may as well be robots.)
In 2012 he published an essay that employed an obscenity (used as a technical term) 46 times, including in the title.
Mr. Perelman’s fundamental problem with essay-grading automatons, he explains, is that they "are not measuring any of the real constructs that have to do with writing." They cannot read meaning, and they cannot check facts. More to the point, they cannot tell gibberish from lucid writing.
He has spent the past decade finding new ways to make that point, and the Babel Generator is arguably his cleverest stunt to date. Until now, his fight against essay-grading software has followed the classic man-versus-machine trope, with Mr. Perelman criticizing the automatons by appealing to his audience’s sense of irony.
By that measure, the Babel Generator is a triumph, turning the concept of automation into a farce: machines fooling machines for the amusement of human skeptics.
Now, here in the office, Mr. Perelman copies the nonsensical text of the "privateness" essay and opens MY Access!, an online writing-instruction product that uses the same essay-scoring technology that the Graduate Management Admission Test employs as a second reader. He pastes the nonsense essay into the answer field and clicks "submit."
Immediately the score appears on the screen: 5.4 points out of 6, with "advanced" ratings for "focus and meaning" and "language use and style."
Mr. Perelman sits back in his chair, victorious. "How can these people claim that they are grading human communication?"
Challenging Evidence
In person, Mr. Perelman, 66, does not look like a crusader. He wears glasses and hearing aids, and his mustache is graying. He speaks in a deliberate, husky baritone, almost devoid of inflection, which makes him sound perpetually weary.
But this effect belies his appetite for the fight. Although he retired from MIT in 2012, he persists as a thorn in the side of testing companies and their advocates.
In recent years, the target of his criticism has been Mark D. Shermis, a former dean of the College of Education at the University of Akron. In 2012, Mr. Shermis and his colleagues analyzed 22,000 essays from high-school and junior-high students that had been scored by both humans and software programs from nine major testing companies. They concluded that the robots awarded scores that were reliably similar to those given by humans on the same essays.
Mr. Perelman took Mr. Shermis and his fellow researchers to task in a blistering critique, accusing them of bad data analysis and suggesting a retraction.
Mr. Shermis, a psychology professor, says he has not read the critique. "I’m not going to read anything Les Perelman ever writes," he told The Chronicle.
The Akron professor says he has run additional tests on the data since the first study and found nothing to contradict the original findings. He published a follow-up paper this year. Mr. Perelman says he is drafting a rebuttal.
The Prof in the Machine
Some of the most interesting work in automated essay grading has been happening on the other side of the MIT campus. That’s where computer scientists at edX, the nonprofit online-course provider co-founded by the university, have been developing an automated essay-scoring system of their own. It’s called the Enhanced AI Scoring Engine, or EASE.
Essentially, the edX software tries to make its machine graders more human. Rather than simply scoring essays according to a standard rubric, the EASE software can mimic the grading styles of particular professors.
A professor scores a series of essays according to her own criteria. Then the software scans the marked-up essays for patterns and assimilates them. The idea is to create a tireless, automated version of the professor that can give feedback on "a much broader amount of work, dramatically improving the amount and speed of formative assessment," says Piotr Mitros, chief scientist at edX.
Some of edX’s university partners have used EASE, which is open source, in their massive open online courses. Because of larger-than-usual enrollments, MOOCs often rely on peer grading to provide feedback on writing assignments. The peer graders are humans, true. But because they are not professional readers (and, in some cases, are not native English speakers), their scores are not necessarily reliable.
All grading systems have weaknesses, says Mr. Mitros. "Machines cannot provide in-depth qualitative feedback," he says. At the same time, "students are not qualified to assess each other on some dimensions," and "instructors get tired and make mistakes when assessing large numbers of students."
Ideally a course would use a combination of methods, says Mr. Mitros, with each serving as a fail-safe check on the others. If the EASE system and the peer graders yielded markedly different scores, an instructor might be called in to offer an expert opinion.
Mr. Perelman says he has no strong objections to using machine scoring as a supplement to peer grading in MOOCs, which he believes are "doing the Lord’s work." But Mr. Mitros and his edX colleagues see applications for EASE in traditional classrooms too. Some professors are already using it.
Bots at Work
Daniel A. Bonevac, a philosophy professor at the University of Texas at Austin, is one of them. Last fall he taught "Ideas of the Twentieth Century" as both a MOOC and a traditional course at Austin. He assigned three essays.
He calibrated the edX software by scoring a random sample of 100 essays submitted by students in the MOOC version of the course—enough, in theory, to teach the machines to mimic his grading style.
The professor then unleashed the machines on the essays written by the students in the traditional section of the course. He also graded the same essays himself, and had his teaching assistants do the same. After the semester ended, he compared the scores for each essay.
The machines did pretty well. In general, the scores they gave lined up with those given by Mr. Bonevac’s human teaching assistants.
Sometimes the software was overly generous. "There were some C papers that it did give A’s to, and I’m not sure why," says Mr. Bonevac. In some cases, he says, the machines seemed to assume that an essay was better than it really was simply because the bulk of the essays written on the same topic had earned high scores.
In other cases, the machines seemed unreasonably stingy. The university's College of Pharmacy also tested the edX software in one of its courses, and in that trial the machines sometimes assigned scores that were significantly lower than those given by the instructor. Meanwhile, in the MOOC versions of both courses, the machines were much harsher than instructors or teaching assistants when grading essays by students who were not native English speakers.
For his part, Mr. Bonevac remains optimistic that machines could play some role in his teaching. "For a large on-campus course," he says, "I think this is not far away from being an applicable tool."
In Mr. Perelman’s view, just because something is applicable does not mean it should be applied. The Babel Generator has fooled the edX software too, he says, suggesting that even artificially intelligent machines are not necessarily intelligent enough to recognize gibberish.
‘Like We Are Doing Science’
At the same time, he has made an alliance with his former MIT colleagues at edX. He used to teach a course with Anant Agarwal, chief executive of edX, back in the 1990s, and says they have been talking about running some experiments to see if the Babel Generator can be used to inoculate EASE against some of the weaknesses the generator was designed to expose.
"I am not an absolutist, and I want to be clear about that," says Mr. Perelman. He maintains that his objections to Mr. Shermis and others are purely scientific. "With Anant and Piotr" at edX, he says, "it feels like we are doing science."
Mr. Shermis says that Mr. Perelman, for all his bluster, has contributed little to improving automated writing instruction. The technology is not perfect, he says, but it can be helpful.
Mr. Perelman, however, believes he is on the right side of justice: Employing machines to give feedback on writing to underprivileged students, he argues, enables the "further bifurcation of society" and can be especially damaging to English-language learners.
"I’m the kid saying, ‘The emperor has no clothes,’" says Mr. Perelman. "OK, maybe in 200 years the emperor will get clothes. When the emperor gets clothes, I’ll have closure. But right now, the emperor doesn’t."


Norman Stahl
[log in to unmask]


~~~~~~~~~~~~~~~
To access the LRNASST-L archives or User Guide, or to change your
subscription options (including subscribe/unsubscribe), point your web browser to
http://www.lists.ufl.edu/archives/lrnasst-l.html

To contact the LRNASST-L owner, email [log in to unmask]
