Hi Kolene,

I’m curious what your program objectives and training learning outcomes are. Ideally, the assessment of your tutors and the observations tied to those assessments are aligned with those objectives and outcomes.

I’ve been collaborating with our institutional research staff to create rubrics that include observable behaviors. I’ve tied these rubrics to three outcomes that align with two program objectives, and I’m implementing them in AY 18. Previously, I was using observational rating scales that allowed for subjective variance and seemed less reliable.

I’m attaching a draft of this rubric. Please note that I’ve yet to thoroughly test it. Also, the tutor cycle that I refer to is a six-step cycle that we train tutors to use.

All the best,

Jeffrey

 

Jeffrey White, M.A., M.S.

Learning Commons Administrator, Shepard Academic Resource Center 

Instructor of German, International Languages and Cultures

Buckley Center 163, MSC 184

 

University of Portland

5000 N. Willamette Blvd.

Portland, Oregon 97203

 

T: 503.943.7141  E: [log in to unmask]

www.up.edu/learningcommons

 

Follow the Learning Commons on Facebook

 

 

 

From: Open Forum for Learning Assistance Professionals [mailto:[log in to unmask]] On Behalf Of Kolene Mills
Sent: Wednesday, July 26, 2017 2:09 PM
To: [log in to unmask]
Subject: Tutor Observation/Evaluation Rating

 

We are in the process of revising our tutor evaluation process and I’ve got two staff engaged in a debate. I’m reaching out in hopes that someone will sway me one way or another.

 

Here’s the issue:

 

We can continue our practice of basing our tutor observation form on a Likert scale, with vocabulary aimed at inspiring our tutors to approach the position with a growth mindset. This allows us to rate the actions we see during the tutorial on a continuum. And, after doing some preliminary research, most tutoring programs use a scale as part of their own tutor observation process. This, however, leads to a more subjective range of evaluations, making the observation forms less “comparable” across programs and supervisors, which also means we’ll have a more difficult time identifying improvement.

 

Or…

 

We can revise our tutor observation form to require a “yes” or “no” observation response; in other words, “Yes, I saw evidence of this behavior,” or, “No, the tutor did not do this.” Instead of rating how strongly a tutor performs (or how often a tutor behaves a certain way, etc.), observers would simply identify what did and didn’t happen. There would be a space for additional comments and feedback (which can also contribute to a growth mindset), but this approach allows us to do more comparison across programs and supervisors—not to mention measure improvement—as it limits bias. But we didn’t find evidence that anyone else is doing this, and sometimes new feels a little uncomfortable.

 

Any wisdom that you would be willing to share is welcome.

 

Kolene Mills

Director, Academic Tutoring

Utah Valley University

~~~~~~~~~~~~~~~ To access the LRNASST-L archives or User Guide, or to change your subscription options (including subscribe/unsubscribe), point your web browser to http://www.lists.ufl.edu/archives/lrnasst-l.html To contact the LRNASST-L owner, email [log in to unmask]
