Thursday, 4 May 2017

Student Feedback dos and don'ts

Which of the following do you think has the biggest impact on 'student evaluation of teaching' (SET) feedback?
  1. How hard the course is 
  2. The grade the student gets 
  3. The teacher’s gender 
  4. The teacher’s personality 
  5. How ‘hot’ the teacher is 


Ready...SET...


There is a ton of research into SETs, starting over 80 years ago (Clayson 2009) and including, as of 1990, over 2,000 articles (Feldman in Felder 1992). The literature includes several meta-analyses and even one meta-analysis of meta-analyses (Wright & Jenkins-Guarnieri 2012). In short, it is a well-researched field. Since university professors' careers can depend on these evaluations, perhaps this isn't surprising. Despite the large body of research (or perhaps because of it?), the science is not settled (Spooren et al 2013). 

There are, however, some general observations which can be made, reasonably confidently, about the effect of certain variables on SETs. So, according to the literature*, which factors have the biggest effect on student feedback? What follows is my handy list of dos and don'ts to improve your student feedback.

Do be likeable!

One of the variables which correlates highly with positive student feedback is personality: there is a substantial relationship between a teacher’s personality and the feedback they will be given (Feldman 1986, Cardy and Dobbins 1986, Williams and Ceci 1997). Foote et al suggest that “[instructors] who score highly on evaluations may do so not because they teach well, but simply because they get along well with students” (2003:17). One researcher writes that personality is such a strong predictor of SET results that "the SET instrument could be replaced with a personality inventory with little loss of predictive validity” (Clayson online). 

There also seems to be something of a Halo effect at work with SETs. Basically, one positive attribute (good looks, say) may cause people to believe other positive things about a person (that they are trustworthy, for instance). This is the reason handsome criminals get shorter prison sentences than less attractive ones for the same crime. This means that student opinions of personality might colour other variables, and subsequently ‘likeable’ teachers may be judged positively in areas unrelated to ‘likeability’, such as teaching ability or professionalism. 


Does attraction affect scores?
This is problematic because it means the feedback you get will be tainted by the students' general opinion of you. The picture on the left shows some feedback I recently received. Clearly the student had a high opinion of my teaching. Ho-hum.

The last column asks how useful the virtual self-access centre (VSAC) was, and the student has written 'very useful'. Now, being the teacher of the course, I can say with some confidence that I said not a word about the VSAC, nor did any part of the course use the VSAC. Studies seem to corroborate this phenomenon, showing that students are more than happy to report false information to either reward or punish teachers (Clayson & Haley 2011). It should be noted that the Halo effect also works in reverse, so whatever happens, don't be disliked! 


Do be hot! 

Company promotes bribery
There is evidence that teachers who are perceived to be physically attractive tend to score more highly than their plainer colleagues. Riniolo et al (2006) found a 0.8-point advantage on a 5-point scale for ‘hot’ teachers. After analysing the ratemyprofessor.com website, where teachers can be given a ‘hot’ rating, Felton et al (2004) found that ‘sexy’ teachers generally rated more highly than ‘non-sexy’ teachers. The authors note:


If these findings reflect the thinking of American college students when they complete in-class student opinion surveys, then universities need to rethink the validity of student opinion surveys as a measure of teaching effectiveness (91).

Do be expressive!


Despite various methodological flaws, the landmark ‘Dr. Fox’ studies (Naftulin et al. 1973) created interest in the question of the validity of SETs and what exactly it is that students are assessing when they complete feedback. In this study (see the actual study in the video below), an actor lectured a group of medical students with a largely meaningless talk that he had learnt the previous day. The students were told that the speaker, Myron Fox, was an expert in 'game theory'. 





The actor’s expressiveness and charm were seemingly enough for him to receive positive feedback from three separate audiences. Later researchers showed that even the meaningless talk was unnecessary. Ambady & Rosenthal's (1993) “thin slice” study asked students to evaluate teachers based on a silent 15-second clip of them teaching. The authors found a remarkable similarity between the term-end evaluations and those made after watching the short clips. Fifteen silent seconds was enough time to give an 'accurate' evaluation of the teacher. 


Do be a man!



Russell is annoying, his class is boring 
Researchers tend to agree that gender plays a minor role in overall evaluation. That is, one gender is not consistently rated lower than the other. In fact, “when significant differences were found, they generally favoured the female teacher” (Feldman in Pounder 2007). So what does 'be a man' mean? Well, despite this seeming equality, different genders may be rated on the basis of stereotyped views of gender (Laube et al 2007). For example, the most highly scoring men were described as ‘funny’ whereas the lowest scoring men were ‘boring’; in contrast, the highest scoring women were ‘caring’ whereas the lowest scoring were either 'too smart' or 'not smart enough', or were simply a ‘bitch’ (Sprague & Massoni 2005).


There is also the question of whether a male teacher has to work as hard to get a top SET score as a female teacher. Women may suffer from the ‘Ginger Rogers effect’. That is "Ginger Rogers, one-half of the famous dance-team of 1930s movies, had to do everything Fred Astaire did, only she had to do it backwards and in high heels" (Sprague & Massoni 2005:791).  


Do grade generously!


There is a reasonably strong correlation between the grade, expected or real, and the type of feedback a teacher gets. This correlation can be summarised thus, “to put it succinctly, university teachers can buy ratings with grades” (Hocutt in Pounder 2007:185). 

The highest rated prof on RateMyProfessor.com
Clayson (online) notes that in his research 50% of students asked admitted purposefully lowering or inflating feedback grades as retribution or reward. He adds that whether or not grades actually affect scores is perhaps less important than whether faculty believe this to be the case, as the belief is potentially enough to alter the way grades are given. Pounder backs this up, noting that “many university teachers believe that lenient grading produces higher SET scores and they tend to act on this belief” (Pounder 2007:185). It should be noted, though, that this is something of a controversial area, with a large number of studies finding no relation between SET scores and grades (see Aleamoni 1999).


And if this isn't enough...

Here are a few more killer tips taken from the literature (Pounder 2007):

Do
  • bribe students with food 
  • let students leave early 
  • praise the class on its ability before doing SETs 
  • do the SETs when the weak students are absent 
  • do a ‘fun activity’ before the SETs 
  • stay in the room 
  • teach small classes 
Don't 


Not convinced yet? 


Here's a satisfied customer's testimony, from a remarkable paper published under the pen name "A Great Teacher". This teacher, faced with the prospect of losing his job over poor SETs, decided to throw out his morals and aim for good ratings. He stopped being such a 'tough' teacher and 'sucked up' to the students instead, making the course easy and trying to build rapport with his students:
What were the results of my experiment? The consequences for learning were not good. Students did less well than expected even on deliberately easy quizzes. Their final exam papers proved to be among the worst I had seen in years. Most students displayed only a superficial knowledge of the material. It was clear that some had concluded that with a kinder, gentler me, one didn’t need to work as hard. Although the pedagogical consequences were poor, the results for me were great! My [SET] scores went through the roof (2010:495-6)
And so, armed with this information, you too can become a well-loved teacher. Alternatively, you can treat student feedback with the caution it probably deserves.







* Seldin (2010) suggests “one can find empirical support for any common allegation pertaining to student ratings” (in Hughes and Pate 2013:50). It's also worth noting that all of this research was carried out (like much research) on American university students. There has been very little research carried out in this area on FL students.

2 comments:

  1. Thanks for this Russell. It's very timely, as at the university I teach at, students are currently being asked to complete these surveys as part of the end-of-semester evaluation. The results of these SETs can then be used to help determine whether (and how much) academic staff progress on the university pay scale. In such cases, I don't think the (often negative) effect that this can have on teaching should be underestimated.

    If not carefully designed, these kinds of surveys can give students far more credit than they deserve. For the most part, students are not pedagogical experts and may not even be in the best position to know what their goals are. This seems like another good reason why placing too much value on the results of these surveys is unwise.

    ReplyDelete
    Replies
    1. Thanks for the comment. It is interesting that students, who are not teachers, are asked to comment on teaching quality, yet I'm not asked to comment on, say, my manager's ability to manage.

      I think you're right that these instruments have to be designed with great care and the results looked at very carefully.

      Delete