We are in the middle of redesigning our evaluation process.

Direct observation has always been part of our process, but the approach was too broad; we had to prioritize which feedback to give in order to maximize effectiveness.

So we have developed a guided direct observation process. First, tutors take a self-assessment and develop an action plan, which has each tutor prioritize three skills they want to improve.

We are currently testing the self-assessment for reliability using content from our training. I think that having reliable instruments will be the key to accomplishing guided direct observation. I also think this may be the answer to the grades question, since, as others have said, there are far too many variables.

We are also conducting a literature review on how direct observation is used in medical schools.

If anyone is interested in the details, my contact information is below.

William G. Hardaway 
Academic Support Coordinator 
CSUF Learning Center 
[log in to unmask] 
(559) 278-3052 
----- Original Message -----
From: "Tacy Holliday" <[log in to unmask]> 
To: [log in to unmask] 
Sent: Tuesday, April 17, 2012 9:02:07 AM 
Subject: TUTOR EVALUATION & Grades 

Dorothy Briggs posed an interesting question about tutor evaluation and grades. Evaluation is a topic close to my heart right now, so I thank her for bringing it up. She mentioned hearing a comment about wanting to measure the effectiveness of tutors based on the grades their students received.

I can relate to the desire to find a "magic bullet" that lets us know exactly how tutoring affects grades, but I would hesitate to use grade data as a one-size-fits-all measure. One of the challenges in using grades is that there is no way to know what the students would have received without the tutoring. For example, at my school a biology faculty member ran a pilot study comparing students in intro biology courses with students in an intro biology section that was linked to a study skills program. The students in the linked section had a higher average exam score than their counterparts in the other sections. However, those sections had more students in the D, C, and B ranges and no A's. Our working explanation is that these students learned to study better and so earned higher scores than they would have otherwise, but they were weaker students to begin with and so may have moved from a low C to a middle C, or something like that.

There are also many variables that make it difficult to infer causation when we're dealing with grades. A number of years ago my Center did a grade analysis of students who used our tutoring services. The only area where we found a positive correlation was the anatomy and physiology classes, because time in the Center was the only time students had access to the body models that would be on their exams. The other subjects showed either no correlation or a negative correlation between grades and time in the Center. We didn't take this to mean that students were doing worse because they spent more time in the Center, but rather that weaker students needed more help and so came in more often.

I'm also concerned about setting something up that readily morphs into a "teaching to the test" mentality, as we see happening too often in K-12 education. On the other hand, being able to make some connection with grades would be very helpful. It just has to be approached cautiously and with a deep understanding of research methodology.

One of the methods I've developed and have been using is a modification of a pre-test/post-test design that allows tutors to determine whether learning occurred during the tutoring interaction. If learning took place, I can assume the tutoring was effective without making the tutor responsible for the student's grade. If anyone is interested in learning more, send me an e-mail: [log in to unmask]. More information should be out in an upcoming edition of The Learning Assistance Review.

NCLCA is offering a webinar this week and next that will cover how to assess tutoring programs and learning centers in a comprehensive way. I'd encourage anyone who is interested to attend.

To access the LRNASST-L archives or User Guide, or to change your 
subscription options (including subscribe/unsubscribe), point your web browser to 

To contact the LRNASST-L owner, email [log in to unmask] 