Kolene,

Like Jeff, I think you are stuck in a false dichotomy. My staff and I have been using the attached rubric (or comparable ones for various programs) for the past year.

While this particular instrument may not represent the specific practices you are trying to assess, it at least illustrates a “different path” from the two you describe.

Best of luck,

Jered Wasburn-Moses
Associate Director for Tutoring Programs
Learning PLUS
Northern Kentucky University

ATP Professional Development Chair

From: Open Forum for Learning Assistance Professionals [mailto:[log in to unmask]] On Behalf Of Kolene Mills
Sent: Wednesday, July 26, 2017 5:09 PM
To: [log in to unmask]
Subject: Tutor Observation/Evaluation Rating

We are in the process of revising our tutor evaluation process, and I've got two staff members engaged in a debate. I'm reaching out in hopes that someone will sway me one way or the other.

Here’s the issue:

We can continue our practice of basing the tutor observation form on a Likert scale, with vocabulary that aims to inspire our tutors to approach the position with a growth mindset. This allows us to rate the actions we see during a tutorial on a continuum, and, based on some preliminary research, most tutoring programs use a scale as part of their own observation process. A scale, however, leads to a more subjective range of evaluations, making the observation forms less "comparable" across programs and supervisors, which also means we'll have a more difficult time identifying improvement.

Or…

We can revise the tutor observation form to require a "yes" or "no" response; in other words, "Yes, I saw evidence of this behavior," or, "No, the tutor did not do this." Instead of rating how strongly a tutor performs (or how often a tutor behaves a certain way), observers will simply identify what did and didn't happen. There will still be space for comments and feedback (which can also encourage a growth mindset), and because this approach limits bias, it allows us to make comparisons across programs and supervisors and to measure improvement. But we didn't find evidence that anyone else is doing this, and new sometimes feels a little uncomfortable.

Any wisdom that you would be willing to share is welcome.

Kolene Mills

Director, Academic Tutoring

Utah Valley University

~~~~~~~~~~~~~~~ To access the LRNASST-L archives or User Guide, or to change your subscription options (including subscribe/unsubscribe), point your web browser to http://www.lists.ufl.edu/archives/lrnasst-l.html To contact the LRNASST-L owner, email [log in to unmask]
