First, kudos to Shevawn for her answer: it's worth far more than a mere two cents.

Educating supervisors and administrators about what's possible with evaluations is a challenge, and, regrettably, the education process can sound defensive and/or evasive.  Yet those of us 'in the biz' are well aware of the desperation visits in the final week and other realities that can make the data look less promising.  Responding to questions of effectiveness requires both data and time: if you lack one or both, be sure your supervisor provides what you need to answer his/her questions.  If you have the data and the time to explore, here are a few other approaches you might want to look at.

1.  Try looking at visits as well as hours.  It's possible that students who come only once or twice (or three times or ...) actually don't do as well as those who never use tutoring.  In fact, if you're looking only at visits, those two visits might be about 10 minutes each, and there aren't many of us who want to claim that we can make a noticeable grade difference with a mere twenty minutes of help.  It might also be worthwhile to note when students started using your services.  Those who start early in the semester and come in regularly may do better in the class than those who start tutoring very late in the semester: the latter group may wind up with more total hours of help, but their pattern of use is less effective, as measured by course grades (a sketch of this kind of breakdown appears after this list).

2.  Try graphing visits/hours and grades.  Imagine a baseline -- say, 2.000 -- for students who never come for tutoring; you can then picture points below that 2.000 for the students who barely use your services and points above it for those who use them regularly and/or wisely.  In a simple correlation, those opposing points may cancel each other out (not the technically accurate term, but perhaps an easier way to visualize it) and result in statistical insignificance.  But a graph may show you that there IS a positive, measurable impact once students reach a critical point of use (e.g., three visits, or 3.75 hours); see the plotting sketch after this list.

3.  Try partial correlation.  Imagine a set of clients with great ACT scores who don't use tutoring much and end up with 3.000 GPAs for the semester, and another set with dangerously low ACT scores who use your services far more but end up with 2.000 GPAs.  Is the interpretation that the more students use tutoring, the worse they do?  No -- you need to even out the baseline statistically by removing the expected connection between ACT scores and students' earned GPAs before you correlate hours with grades (a worked sketch follows this list).
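
Since point 1 is really a data question, here is a minimal Python/pandas sketch of that kind of breakdown.  Everything in it is a placeholder I invented for illustration -- the visits.csv file, its column names, and the cutoffs for 'regular' (three or more visits) and 'early' (first visit by week 4) -- so substitute your own records and definitions.

import pandas as pd

# Hypothetical file: one row per tutoring visit, with columns student_id,
# week_of_semester, minutes, and final_grade (0-4 scale, repeated per student).
visits = pd.read_csv("visits.csv")

# Collapse to one row per student: visit count, total hours, first week seen.
per_student = visits.groupby("student_id").agg(
    n_visits=("minutes", "size"),
    total_hours=("minutes", lambda m: m.sum() / 60),
    first_week=("week_of_semester", "min"),
    final_grade=("final_grade", "first"),
)

# Label usage patterns; the cutoffs (3+ visits, first visit by week 4) are
# placeholders, not recommendations.
per_student["pattern"] = "occasional (1-2 visits)"
per_student.loc[per_student["n_visits"] >= 3, "pattern"] = "regular, late start"
per_student.loc[
    (per_student["n_visits"] >= 3) & (per_student["first_week"] <= 4),
    "pattern",
] = "regular, early start"

print(per_student.groupby("pattern")["final_grade"].agg(["mean", "count"]))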
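
For point 2, a small sketch using matplotlib: it scatters individual students against the imagined 2.000 baseline and overlays the mean grade within each hours-of-help bin, which is one way a critical point of use can show up even when an overall correlation looks flat.  The ten-student table and the bin edges are invented for illustration.

import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical per-student table; in practice, build it from your own
# records (e.g., as in the first sketch).
per_student = pd.DataFrame({
    "total_hours": [0.3, 0.5, 1.5, 2.5, 3.8, 4.2, 5.0, 6.5, 8.0, 10.0],
    "final_grade": [1.8, 1.9, 1.7, 2.1, 2.4, 2.6, 2.5, 2.8, 2.7, 3.0],
})

# Mean grade within each hours-of-help bin (bin edges are arbitrary).
bins = pd.cut(per_student["total_hours"], bins=[0, 1, 2, 4, 8, 16])
bin_means = per_student.groupby(bins, observed=True)["final_grade"].mean()

plt.scatter(per_student["total_hours"], per_student["final_grade"],
            alpha=0.5, label="individual students")
plt.axhline(2.000, linestyle="--", label="no-tutoring baseline (2.000)")
plt.plot([iv.mid for iv in bin_means.index], bin_means.values, marker="o",
         label="mean grade per usage bin")
plt.xlabel("Hours of tutoring")
plt.ylabel("Final course grade (4.0 scale)")
plt.legend()
plt.show()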
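
And for point 3, a sketch of partial correlation by the residual method: regress both semester GPA and tutoring hours on ACT score, then correlate what is left over.  The ten invented students mimic the scenario above (strong students who barely visit, struggling students who visit a lot), so the raw correlation comes out negative; the partial correlation is what remains once the ACT baseline is removed.

import numpy as np
from scipy import stats

def residuals(y, x):
    # Residuals of a simple least-squares regression of y on x.
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (slope * x + intercept)

# Invented illustration: high-ACT students use little tutoring and earn
# high GPAs; low-ACT students use much more tutoring and earn lower GPAs.
act = np.array([30, 29, 28, 25, 22, 20, 19, 18, 17, 16], dtype=float)
hours = np.array([1, 2, 1, 4, 6, 8, 9, 12, 10, 14], dtype=float)
gpa = np.array([3.1, 3.0, 2.9, 2.7, 2.5, 2.4, 2.3, 2.1, 1.9, 2.2])

r_raw, _ = stats.pearsonr(hours, gpa)
r_partial, _ = stats.pearsonr(residuals(hours, act), residuals(gpa, act))
print(f"raw r = {r_raw:.2f}; partial r with ACT removed = {r_partial:.2f}")

Most statistics packages offer a ready-made partial-correlation routine that adds significance tests; the residual version above is just the idea laid bare.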

My dream for our profession is that we can eventually all keep the same kinds of records and process data with the same kinds of equations -- that's our collective best hope for positive, understandable, replicable evaluations (and thus a more unified voice for policies and funding).  Perhaps as more and more of us adopt programs to track usage, our data collection and coding will become increasingly comparable.

Jan Norton, Director
Center for Academic Resources
University of Wisconsin Oshkosh


----- Original Message -----
From: Annette Hawkins <[log in to unmask]>
Date: Monday, June 27, 2005 2:54 pm
Subject: Relationship between hours spent in academic support and grade

> My Academic Support Director wants to compare the number of hours a
> developmental math student uses the Academic Support Center for
> tutoring and final grade in developmental math.  Has anyone studied
> this before?
> Can a relationship be shown?    I tried searching the archives.
> Thanks.
