As an AQIP Systems Portfolio Evaluator for the Higher Learning
Commission and an English professor and department chair at my "day job"
in a community college, I think I can offer a bit of insight on the SLO
issue.

1.  Data on retention, satisfaction, and course grades provide
information, but they are, at best, indirect measures of student
learning.  They are the easiest to quantify, however.

2.  Assessing learning can't really happen unless we know our goals,
or SLOs.  Then we need tools, like rubrics, that help us standardize
and replicate what we are trying to measure.  Finding the right tools
is the tough part.  Qualitative measures are legitimate, too.

3.  The HLC is trying to back off the word "assessment" a bit and focus
on the "student learning."  To some extent, we/they put the cart before
the horse in the early 1990s because SLOs should have been the starting
point instead of "assessment."  Processes need to start with goals, then
measures, then data, then review for effectiveness, which may lead us
back to redefining the goals, improving the measures or tools, and
continually comparing the data.  On a good day, this is continuous
quality improvement, and on a bad one it is the roller coaster ride that
never stops.

I think the good news is that we've been doing these things for years.
What we still need is to understand the system and to make our
processes as transparent and replicable as we can.

-----Original Message-----
From: Open Forum for Learning Assistance Professionals
[mailto:[log in to unmask]] On Behalf Of Jessica Nettles
Sent: Monday, February 12, 2007 1:22 PM
To: [log in to unmask]
Subject: Re: Student Learning Outcomes

I have a hard time understanding how we can measure learning outcomes.
I've been teaching for about five years now, and I still don't really
get the whole concept. If we use these measurements, and our students
don't produce the outcomes we're expecting them to, are we failing? I
think I'm asking the same questions as Shevawn. 

I sometimes wonder if we worry so much about learning outcome
measurements, retention, and other issues that seem so important to
many administrators and government agencies that we lose sight of our
actual goal--educating adults so they can succeed in the workplace.
The only outcome I should actually care about is this:

That the students who enter my classroom come out after 12 weeks with
a better grasp of the skills they need to succeed through the rest of
their educational and life journey.

Everything else is icing on the cake. 

Of course, I've been called an idealist too. 

-----Original Message-----
From: Open Forum for Learning Assistance Professionals
[mailto:[log in to unmask]] On Behalf Of Shevawn Eaton
Sent: Monday, February 12, 2007 1:29 PM
To: [log in to unmask]
Subject: Student Learning Outcomes

Hi all, 
I'm struggling right now with an issue, and I'm wondering if I am just
being stubborn.  I'm interested in hearing what the rest of you have to
say.

I am a trained evaluation/programmatic assessment professional, having
taken a number of courses in this area in my bachelor's, master's, and PhD
programs.  I have spent most of the past 20 years, in one way or
another, doing evaluative research, particularly in learning assistance,
developmental education and special admissions on my campus.  Over that
time, I have always felt fairly confident that I knew what I was doing. 
The evaluative work that I have done has resulted in significant
positive changes in the programs I work with.   I sit on the University
Assessment Panel on our campus.  I have written numerous assessments for
our division and have been an advisor to many others on campus.  

But now... "STUDENT LEARNING OUTCOMES" is doing me in.

I'm feeling like I no longer know what I'm doing. I don't know how to
resolve the conflict in my head between doing this SLO thing in my
areas of responsibility and doing what I feel is thorough and
effective assessment. And now, thanks to the Higher Learning
Commission, etc., which are pushing SLOs as the be-all and end-all of
assessment, I feel as useful as a button hook.  And as many of my
peers are convinced (required?) to switch to this model, I worry: are
we doing justice to the evaluation of our programs?  Are we learning
all we need to know from assessment to deliver the best services we
can?

I oversee tutoring and supplemental instruction and a number of other
programs.  I work with the DE courses on our campus to help them assess
the effectiveness of their curriculum with specially admitted students. 
I help assess the effectiveness of admissions standards for special
admissions, the effectiveness of placement testing in developmental
coursework, and on and on and on.  And I have found that SLOs alone do
not really cover everything that quality assessment should do.  In my
opinion, good, sound evaluation of the services and courses we provide
goes beyond this.  And in some cases, I find the SLOs (e.g., helping
tutored students become better critical thinkers) to be so abstruse
and difficult to measure, particularly using the sacred DIRECT
measures, that they don't seem meaningful in my pragmatic view of how
to determine whether a program is effective.

I come from the view that assessment in our field should be
retention-based.  Maybe I'm too pragmatic.  To me, retention and
graduation, NOT SLOs, are the bottom line for the institution.  They
are how my program is evaluated by upper-level administrators.  They
are the measures used to determine how limited resources are
prioritized and allocated across campus.  They are the ultimate
measure of whether or not an institution is doing all it can to
support its students.  And they seem a lot easier to understand than
something like a measure of how well our students think critically on
a Likert-scale survey.  I'm sorry, but I just don't see the President
making a decision based on a wobbly survey measure of critical
thinking (and I've looked at lots of the surveys) over solid
institutional research stats that show a 10% increase in retention.
Don't get me wrong: critical thinking is valuable, and measuring it is
important.  But the slant that SLOs create in assessment is moving us
toward behavioral measures of learning alone rather than all the
aspects of satisfaction, effectiveness, cause/effect, longitudinal
patterns of retention, etc.

So help me out.  Am I struggling with this for no reason?  Are others
struggling with the same thing?  Is this one of those things we do
until the next trend comes along?  And, despite all my training and
experience, am I past my prime as an assessment person?  Or am I just
being stubborn?  And lastly, do I need to become more vocal at my
institution about what is being omitted from assessment as we all jump
on the SLO bandwagon?

Thanks for your thoughts.  And for letting me whine.



Shevawn Eaton, Ph.D.
Director, ACCESS/ESP
Northern Illinois University
DeKalb, IL 60115
PH: (815) 753-0581
www.tutoring.niu.edu

FAX: (815) 753-4115

~~~~~~~~~~~~~~~
To access the LRNASST-L archives or User Guide, or to change your
subscription options (including subscribe/unsubscribe), point your web
browser to
http://www.lists.ufl.edu/archives/lrnasst-l.html

To contact the LRNASST-L owner, email [log in to unmask]
