There is a growing body of literature on this subject. I would use Norton & Agee’s “Assessment of Learning Assistance Programs: Supporting Professionals in the Field” as a starting point.
There really is no easy answer to the assessment question. It's very difficult to prove a causal relationship between tutoring and retention or graduation, even though that relationship is quite intuitive. If a student utilizes tutoring and then passes the course for which they received it, can we say for sure that the tutoring is what caused the student to pass? A host of other factors make that claim difficult to support: the student could have attended office hours, changed their study habits, gotten assistance from a friend, or come from a privileged demographic group with economic advantages such as parents who attended college or more time to study because of not having to work.
Regarding quantitative data, many gifted statisticians in the learning center community have tried, or are trying, to solve this problem by devising elaborate methods to control for the kinds of external factors I mention above, but I don't think there is any one silver bullet.
I’m currently working with my university’s office of institutional research and assessment on a model in which we’ll compare outcomes of students who utilize tutoring vs. those who don’t by GPA cohort. For example, among students with GPAs between 3.0 and 3.5, how did tutored students do in a given class vs. untutored students? The idea is that this type of segmenting helps to minimize the effect of some of the external factors I mention above by comparing students with their GPA peers, but a host of problems still remain with this model. For example, how many times does a student need to be tutored in order to see a tutoring effect?
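For anyone curious what the cohort comparison might look like in practice, here is a minimal sketch in plain Python. The field names (gpa, tutored, passed), the GPA bands, and the toy records are all illustrative assumptions, not the actual institutional-research model or schema.

```python
from collections import defaultdict

# Toy records standing in for real student data (hypothetical)
students = [
    {"gpa": 3.1, "tutored": True,  "passed": True},
    {"gpa": 3.4, "tutored": False, "passed": True},
    {"gpa": 3.2, "tutored": True,  "passed": True},
    {"gpa": 3.3, "tutored": False, "passed": False},
    {"gpa": 2.1, "tutored": True,  "passed": True},
    {"gpa": 2.4, "tutored": False, "passed": False},
    {"gpa": 3.9, "tutored": True,  "passed": True},
    {"gpa": 3.8, "tutored": False, "passed": True},
]

def gpa_band(gpa):
    """Assign a student to a half-open GPA band, e.g. [3.0, 3.5)."""
    if gpa < 2.5:
        return "<2.5"
    if gpa < 3.0:
        return "2.5-3.0"
    if gpa < 3.5:
        return "3.0-3.5"
    return "3.5-4.0"

def pass_rates_by_band(records):
    """Within each GPA band, compute the pass rate separately for
    tutored and untutored students (the segmenting idea above)."""
    counts = defaultdict(lambda: {"passed": 0, "total": 0})
    for s in records:
        key = (gpa_band(s["gpa"]), s["tutored"])
        counts[key]["total"] += 1
        counts[key]["passed"] += s["passed"]
    return {key: c["passed"] / c["total"] for key, c in counts.items()}

rates = pass_rates_by_band(students)
# rates[("3.0-3.5", True)] is the pass rate of tutored students
# whose GPA falls in [3.0, 3.5); compare it with the False key
# for the untutored peers in the same band.
```

Even a simple tabulation like this surfaces the open question in the paragraph above: the tutored flag here is binary, so deciding how many sessions count as "tutored" would change the results.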
Qualitative data is much easier to come by (student surveys being the most obvious option), but this type of assessment is less empirical because it relies on students’ assessments of their own learning and is anecdotal at best.
Assessment is one of my biggest challenges as a learning center director. My background is as an English professor and writing center person, so my training is definitely not in statistics and educational assessment, but I am now a student of these subjects. I’m very interested to hear what others are doing and if you know of more helpful articles or resources.
Jamie P. Bondar
Director, Tutoring & Peer-to-Peer Success Services
Center for Learning & Academic Success
Senior Lecturer, English Department
I hope you all had a wonderful holiday weekend.
We are in the process of evaluating our Student Learning Outcomes and the data we collect to measure these outcomes. Would you be willing to share your Student Learning Outcomes and how you measure them? What data do you collect and analyze to show the impact your centers have on retention and graduation?
Thank you so much for your input on these very important areas.
Have a great day,
Ruth Fries, MAED
Director, Disability Services/Academic Achievement
Adjunct Professor, School of Education
Responsibility | Belief | Developer | Harmony | Connectedness
Equipping Christ-centered learners and leaders
to invest in others and impact the world.