Getting the most out of course evaluations

Student course evaluations—we all do them, but how do we use the data? What percent of students complete the surveys? What value do faculty and administrators derive from them? Are they inherently biased? How can we get more out of this often onerous process?
These are all vital questions, particularly if we give a lot of weight to these evaluations in our colleges and universities. There may not be easy answers, but let’s consider some possibilities.

Going mobile

When course evaluations are completed by only a small percentage of students, the breadth of perspective they provide suffers. This is particularly problematic in classes that already have small enrollments. This was a challenge faced by The College of Westchester, a small, private college in White Plains, NY.
Senior Programmer/Analyst John Jurgens explains, “One of the changes we’ve made recently to help to address this is to recode the evaluation’s user interface to be mobile friendly so that we could deliver them through the college’s smartphone app.”
It will take some time to assess the effectiveness of the change. Jurgens noted that when they first pushed out the new mobile functionality for a recently completed term, the evaluation completion rate roughly doubled in a few days.

Improving the usefulness of feedback

One idea that may be helpful is to require more qualitative feedback in course evaluations. Numerical summaries of quantitative results are certainly convenient, but they aren’t always terribly helpful (although that also depends on the specificity of the question itself). Consider cutting down on the number of questions and including more that require text-based feedback, such as, “What would you change about how this course was taught?”

Don’t wait until the end of the course to gather feedback

Learning at the end of a course that students were frustrated with some aspect of it may help you the next time you teach it, but it does nothing for the students in the section that just ended. We should consider gathering feedback earlier in the course. This can be done through dialogue or a quick one- or two-question survey. It could be as simple as: “What have you liked the most about the course so far, and what have you liked the least?” This also helps to set the expectation of providing feedback.

Putting the data we get to good use

Mayra Whitaker is a developmental educational psychologist, an author, and an associate professor of education at Colorado College. In her article, “How to Make the Best of Bad Course Evaluations” (UBmag.me/mwhitaker), she reminds us that while negative comments can be frustrating and off-putting, we should focus on those that offer useful insights and take them seriously.

Whitaker offers some useful suggestions for how to assess evaluation feedback and make meaningful changes. For example, when students write that you spent too much or too little time on a topic and you do not agree, it probably means they did not fully understand the purpose or goals of the course. You can help to prevent this by spending time at the start of the course talking about what the course is and what it is not. Make sure your syllabus is explicit. Offer students a rationale for why you will focus on some topics more than others.

Setting expectations up front about content, outcomes, grading criteria, and the like can also help to prevent complaints about grading or about the course being “too hard” or “too easy.”
Whitaker also observes that course evaluations are highly biased and are by no means a thorough measure of an instructor’s effectiveness in the classroom.

Inherent bias in course evaluation results

Numerous studies over the years have shown some inherent bias in course evaluations (a quick search for “bias in student evaluations” will yield a list of studies and articles). Factoring this into your assessments of evaluation results is important, especially for administrators who might use these results to evaluate faculty performance.
While some have called for discontinuing the use of evaluations because of this inherent bias, doing so would eliminate an important opportunity for students to express their opinions about a course and, more importantly, for faculty to continuously improve the quality and impact of their work.

Kelly Walsh is CIO of The College of Westchester in New York.
