Is the NCLCA webinar address correct? When I click on it, "webpage cannot be found" comes up.



Priscilla DeDeyn
Tutor Coordinator
Office of Academic Support
Seton Hall, first floor
Niagara University
Niagara University, NY 14109
716-286-8063 (fax)
[log in to unmask]

-----Original Message-----
From: Open Forum for Learning Assistance Professionals [mailto:[log in to unmask]] On Behalf Of Holliday, Tacy
Sent: Tuesday, April 17, 2012 12:02 PM
To: [log in to unmask]
Subject: TUTOR EVALUATION & Grades

Dorothy Briggs posed an interesting question about tutor evaluation and grades. Evaluation is a topic close to my heart right now, so I thank her for bringing it up. She mentioned hearing a comment about wanting to look at the effectiveness of tutors based on the grades their students received.

I can relate to the desire to find a "magic bullet" that tells us exactly how tutoring affects grades, but I would hesitate to use grade data as a one-size-fits-all measure. One challenge in using test grades is that there is no way to know what grades students would have received without the tutoring. For example, at my school a biology faculty member ran a pilot study comparing students in introductory biology courses with students in an introductory biology section that was linked to a study skills program. The students in the linked section had a higher average exam score than their counterparts in the other sections. However, what we saw was that there were more students in the D, C, and B range in the linked section, and no A's. Our interpretation is that these students learned to study better and so earned higher scores than they would have otherwise, but they were weaker students to begin with and so may have moved from a low C to a middle C, or something like that.

There are also many variables that make it difficult to infer causation when we're dealing with grades. A number of years ago my Center did a grade analysis of students who used our tutoring services. The only area where we found a positive correlation was in the anatomy and physiology classes, because time in the Center was the only time students had access to the body models that would be on their exams. The other subjects showed either no correlation or a negative correlation between grades and time in the Center. We didn't take this to mean that students were doing worse because they spent more time in the Center, but that weaker students needed more help and so came in more often.
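To make the selection-effect point concrete, here is a minimal sketch of the kind of correlation a center might compute. The data are entirely made up (not my Center's actual numbers), and the Pearson coefficient is just one common choice; the point is that a negative r can coexist with tutoring that helps.

```python
# Illustrative sketch only: Pearson correlation between hours spent in a
# learning center and final course grade, using hypothetical data.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: weaker students seek more help, so hours and grades
# correlate negatively even if tutoring itself is beneficial.
hours_in_center = [0, 1, 2, 5, 8, 10]
final_grade     = [88, 85, 80, 72, 70, 65]

print(pearson_r(hours_in_center, final_grade))
```

A strongly negative coefficient here reflects who chooses to come in, not what tutoring does, which is exactly why causal claims from this kind of analysis are risky.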

I'm also concerned about setting up something that readily morphs into a "teaching to the test" mentality, as we see happening too often in K-12 education. On the other hand, being able to make some connection with grades would be very helpful. It just has to be approached cautiously and with a deep understanding of research methodologies.

One method I've developed and have been using is a modification of a pre-test/post-test design that allows tutors to determine whether learning occurred during the tutoring interaction. If learning took place, I can assume the tutoring was effective without making the tutor responsible for the grade. If anyone is interested in learning more, send me an e-mail: [log in to unmask]. More information should appear in an upcoming issue of The Learning Assistance Review.
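To be clear, the author doesn't describe her design here (she asks readers to e-mail her), so the following is not her method. It is only a generic sketch of one common way to summarize a pre-test/post-test pair: the normalized gain, which measures how much of the available room for improvement a student realized.

```python
# Generic pre/post summary, NOT the author's actual instrument.
# Normalized gain = (post - pre) / (max - pre): the fraction of possible
# improvement a student achieved between the two tests.
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Fraction of possible improvement realized between pre- and post-test."""
    if pre >= max_score:
        return 0.0  # no room left to improve; avoid division by zero
    return (post - pre) / (max_score - pre)

# Hypothetical session: a student moves from 40 to 70 out of 100,
# capturing half of the possible improvement.
print(normalized_gain(40, 70))
```

A measure like this keeps the focus on learning within the session rather than on the final course grade, which is the distinction the paragraph above is drawing.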

NCLCA is offering webinars this week and next that will cover how to assess tutoring programs and learning centers in a comprehensive way. I'd encourage anyone who is interested to attend. For more information:

To access the LRNASST-L archives or User Guide, or to change your subscription options (including subscribe/unsubscribe), point your web browser to

To contact the LRNASST-L owner, email [log in to unmask]
