Thursday, April 10, 2014

Framing Effects and Student Evaluations

It's student evaluation season, which always puts me in a sunny mood. While there are some studies that apparently refute the hypothesis that leniency buys better evaluations, there are reasons to be suspicious of the validity of these OLS results. For example, in hard departments like science and engineering, someone is going to appear better than the others, and it is hard to control for individual student characteristics in anonymized surveys that can't be matched to the observable characteristics of those individuals. But I digress...
One interesting hypothesis to test is what would happen to teaching evaluations if the sequencing of the questions were changed. For example, in the forms I will administer next week (the "Student Instructional Report II," or SIR II), the very last questions before the overall evaluation ask students about their effort and the course's difficulty. If behavioral economics teaches us anything, it's that framing (or maybe this is anchoring?) matters in survey design. A neat workaround to test the "leniency hypothesis" would be to randomize the order of the questions, so that students are asked for their overall evaluation either before or after the difficulty and effort questions.
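The randomized-order design above could be sketched as a simple simulation. Everything here is hypothetical: the function names, the 1-5 rating scale, and especially the `framing_shift` parameter, which just stands in for whatever anchoring effect (if any) the difficulty/effort questions induce. It is a minimal sketch of the proposed experiment and its difference-in-means estimator, not a claim about what the data would show.

```python
import random
import statistics

def simulate_evaluations(n_students=500, framing_shift=-0.3, seed=42):
    """Simulate a randomized question-order experiment.

    Half of the forms ask the overall-evaluation question BEFORE the
    difficulty/effort items; the other half ask it AFTER. The hypothetical
    framing_shift is the assumed average change in the overall rating
    (on a 1-5 scale) when the difficulty/effort questions come first.
    """
    random.seed(seed)
    overall_first, overall_last = [], []
    for _ in range(n_students):
        base = random.gauss(3.8, 0.6)  # latent opinion of the course
        if random.random() < 0.5:
            # overall question asked first: no anchoring from effort items
            overall_first.append(min(5.0, max(1.0, base)))
        else:
            # overall question asked after the difficulty/effort items
            overall_last.append(min(5.0, max(1.0, base + framing_shift)))
    return overall_first, overall_last

def difference_in_means(overall_first, overall_last):
    """Difference-in-means estimate of the framing effect."""
    return statistics.mean(overall_last) - statistics.mean(overall_first)

first, last = simulate_evaluations()
effect = difference_in_means(first, last)
```

Because the question order is randomized across students, the simple difference in means is an unbiased estimate of the framing effect, sidestepping the selection problems that plague the OLS studies mentioned above.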