My school has gradually moved from paper-based course evaluations to online evaluations over the last couple of years. I have now had three or four of my courses evaluated electronically. The first go-around, I had a rather small class but was pleased to see a good response rate, i.e., a higher percentage of students filling in the online evaluation than had filled in the paper evaluations, probably because the online evaluations are available outside of class time for a window of several weeks. I was also pleased, and surprised, to see no negative comments at all. I noticed the same pattern the following year: a high response rate and a dearth of negative comments.
I also noticed that there were fewer substantive/qualitative comments overall in the online evaluations than there had been in their paper-based counterparts. In the paper-based evaluations, most of my students had written something in the 'comments' sections, even if it was only a word or two. In the online evaluations, a much smaller number of students wrote anything at all under 'comments'.
This got me to wondering whether the move to online evaluations skews the results in unfortunate ways. For example, although I'd love to think I've become such a magnificent teacher that no one can think of anything negative to say about me, I suspect this is not the case. I wonder if the lack of negative evaluations might be explained by students' suspicions that anonymous online surveys are not really anonymous at all and can be traced to them in a way that paper-based evaluations cannot.
The lack of negative evaluations might also be a function of the smaller number of qualitative/substantive comments overall. However, I can't see why a move to an online format would skew evaluations in favor of positive feedback unless students were concerned that they would be tracked and somehow punished for negative evaluations.
It's also interesting that students have written fewer comments in the online format, when one would assume it's easier and quicker to type comments on a screen than to handwrite them on a paper form.
Has anyone else been through a switch from paper to digital evaluations and, if so, what have your experiences been? Clearly, online evaluations are convenient for students, as well as being easier to collate and access remotely. But are there downsides we might have been missing?
We recently switched at Melbourne, and the faculty isn't happy about it. Our response rate has dropped considerably and the scores have become significantly more polarized -- clearly, the fact that students now have to put effort into completing an evaluation (as opposed to simply being handed a form in class) means that, in general, only students who really liked or really disliked a course go to the trouble.
Posted by: Kevin Jon Heller | July 10, 2012 at 03:22 PM
Could timing provide an explanation (or part of an explanation) for the lower number of negative responses? Paper evaluations are typically filled out on the last day of class, with the exam looming, as I recall. That may be a time when students have more complaints about the professor than at other times during the semester. If students have a few weeks to fill out an electronic evaluation, that might provide enough time for them to cool down or feel better about the professor and/or exam. Students may also leave electronic evaluations until the last minute and then rush through them. That rush might not make the evaluations more positive, but it might explain why students provide fewer written comments.
Posted by: Emily Bremer | July 10, 2012 at 04:03 PM
Interesting that Melbourne had the opposite experience to ours - although I can only speak for my own classes. I haven't done a broader survey here.
Emily makes some good points. We tried to replicate the timing as closely as possible by having most students fill in the online evaluations in class on the day we would have used for the paper evaluations, but by the time I ask them to do it in class, a lot have already done it online or say they plan to do it later - so that may skew the results as you suggest.
Posted by: Jacqueline Lipton | July 10, 2012 at 04:07 PM
We (Appalachian) tried the switch and found that the response rate went down substantially and students were suspicious about the anonymity of the new process. As a result, we switched back to a paper process. I did not see a major change in my overall ratings, with most students liking me but one student still defiant enough to say that I sucked in the online version!
Posted by: Kendall Isaac | July 10, 2012 at 06:55 PM
We recently switched here at UF. The response rate is below 50% in some cases and, in virtually all instances, lower than it was. A student who goes to the trouble of signing on to evaluate seems more likely to comment, but if the response rate is low, that may mean the total number of comments is also lower. In terms of the evaluations themselves, I have not seen a bias.
Posted by: Jeffrey Harrison | July 10, 2012 at 09:55 PM
At Wyoming we went to online evaluations and then quickly reverted to the paper version as our default when the response rate dropped dramatically. I think we technically have some discretion to use either version but personally I saw which way the wind was blowing and returned to using paper exclusively.
Posted by: Michael Duff | July 10, 2012 at 11:52 PM
At GW we use online evaluations, and I don't recall seeing any difference in evaluation quality or overall ratings after making the switch. One thing I have noticed in my own classes is that the response rate varies dramatically based on how a professor times and presents the evaluations. If you present the evaluation as an afterthought, or just give students a few minutes of class time and hope they fill it out, a lot won't. But if you stress the importance of the evaluation, announce the date beforehand, and ask students to bring in their laptops on that day to fill it out, the response rate doesn't seem lower than it was during the paper era.
Posted by: Orin Kerr | July 11, 2012 at 01:59 AM
We switched from paper to online a couple of years ago. I have not noticed a drop in participation rate or in substantive comments (although my administration tells me that overall participation rates have dropped). Like Orin, I announce in advance when I intend to do the evaluations (almost everyone brings laptops to class every day anyway), and I emphasize multiple times how seriously I take them. I give a little five-minute lecture about cognitive biases before asking them to fill out the evaluations, on the theory that some of the cognitive bias literature suggests that people are better at resisting such biases when they are informed of their existence. I generally get at least 90% participation, and I get plenty of substantive comments; I tell the students that I value the comments much more than the numerical rating. This year I continued to remind students to fill out the evaluations after the day I set aside class time for them; they can fill them out any time up until the last day of class, and I sent reminders up until that point. In each of my classes, I think I had higher than 95% participation, which I was happy about.
Posted by: Alex Reinert | July 11, 2012 at 10:40 PM
Alex, what sort of cognitive biases do you discuss? I think that is a great idea and was just curious about the details.
Posted by: Newbie prof | July 12, 2012 at 07:50 AM
Newbie Prof: Deborah Merritt has a good article that discusses many of them. You can find it at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=963196. It summarizes a lot of the literature and proposes that evaluations be conducted in a different manner entirely. I have always wanted to try her proposal, but I have not yet made it happen.
Posted by: Alex Reinert | July 12, 2012 at 10:58 AM
Thank you very much!
Posted by: Newbie prof | July 12, 2012 at 11:30 AM