
No Teaching Eval Correlations


Although I have shared prior research on Student Evaluations of Teaching (SETs) [Sep 2018, Will my student evaluations decrease; Dec 2018, Availability of cookies during an academic course session; Feb 2020, Even ‘valid’ SETs are unfair], I would like to share updated research from a forthcoming book by Kreitzer and Sweet-Cushman, "Evaluating Student Evaluations of Teaching: A Review of Measurement and Equity Bias." Flaherty reviews the book in this week's Inside Higher Ed article, "The Skinny on Teaching Evals and Bias." In this meta-study reviewing 100 articles, the authors confirmed that "SETs have low or no correlation with learning." Furthermore, they found that women faculty, faculty of color, and members of other marginalized groups are at a disadvantage in SET ratings.


The study found that "evaluations are impacted by characteristics unrelated to actual instructor quality. Classes with lighter workloads or higher grading distributions do have better scores from students. Students also rate nonelective and quantitative courses lower. Evaluations for upper-level, discussion-based classes are higher than those for larger introductory courses. Ratings vary across disciplines, with students rating natural sciences lowest and humanities highest."


"As for equity bias, the study finds that factors including an instructor’s gender, race, ethnicity, accent, sexual orientation or disability status affect impact student ratings. Compared to women, male instructors are perceived as more accurate in their teaching, more educated, less sexist, more enthusiastic, competent, organized, easier to understand, prompt in providing feedback, and they are less penalized for being tough graders."


The paper urges administrators to eliminate the use of write-in comments, which have the "strongest evidence of equity bias." The authors recommend that SETs be used to contextualize students’ experiences, not to evaluate teaching. Students arguably can’t evaluate teaching, but they can provide useful feedback on their perceptions.


Instead, Berk (2005) has suggested ways to measure effective teaching in the article Twelve Strategies to Measure Teaching Effectiveness. Strategies include self-evaluation (aligned with an analytical rubric), teaching videos, student interviews (with validated prompts), teaching scholarship, teaching awards (with criteria aligned with research on effective teaching practices), outcome measures, alumni and employer ratings, and teaching portfolios.


Resources

Tammelleo, S. (2017). Care of Self as Resistance to Normalizing Effects of SETs. Teaching Philosophy, 40(2), 255-273.

Spooren, P., & Mortelmans, D. (2006). Teacher Professionalism and SETs: Will Better Teachers Receive Higher Ratings and Will Better Students Give Higher Ratings? Educational Studies, 32(2), 201-214.

Bhattacharyya, N. (2004). SETs and Moral Hazard. Journal of Academic Ethics, 2(3), 263-271.

Verburgh, A., Elen, J., & Lindblom-Ylänne, S. (2007). Investigating the Myth of the Relationship Between Teaching and Research in Higher Education: A Review of Empirical Research. Studies in Philosophy and Education, 26(5), 449-465.

Hammonds, F., Mariano, G., Ammons, G., & Chambers, S. (2017). SETs: Improving Teaching Quality in Higher Education. Perspectives: Policy and Practice in Higher Education, 21(1), 26-33.

Fulda, J. (1997). SETs: Brought to You by Computer. ACM SIGCAS Computers and Society, 27(3), 42-43.

Weis, G. F. (1995). Grading. Teaching Philosophy, 18(1), 3-13.

Sautter, E., McQuitty, S., Hyman, M., & Pratt, E. (2020). Status Quo or Innovation? The Influence of Instructional Variability on SETs. Philosophical Explorations.

Sautter, E., McQuitty, S., & Hyman, M. (2004). The Influence of Perceived Instructional Variability on SETs. Academy of Educational Leadership Journal, 7(2), 67-74.
