Lenient, Unreliable Grades
Several faculty have asked about the topics of grading, grade inflation, authentic assessment, and reliable and valid ways to assess, measure, and evaluate student learning. I would like to share an article pertaining to these topics, “Lenient Grades, Unreliable Grades”.
A new paper documents a strong association between grading leniency and reduced grading reliability. It makes the case that easy grading is actually a symptom of poor assessment practices rather than a cause, and that, either way, reducing leniency in grading may lead to more accurate assessment. One possible explanation is that grading leniency is the result, rather than the cause, of low grading reliability. Consider faculty members who suspect that their assessment methods are unreliable: they may grade generously to give students the benefit of the doubt. This could occur in subjects in which assessing student performance requires subjective and complex judgment. Less flattering reasons for low grading reliability include badly designed or poorly executed assessments, the study continues.
The study is based on a data set of 53,460 courses taught at one North American university over several years. The primary questions were whether grades are reliable measures of student learning and whether they are lenient. The results suggest that grades are often unreliable and frequently lenient.
A leniency score was computed as the difference between the average grade a class earned and the average end-of-term GPA of the class’s students. The core idea is that high grading reliability within a department should result in course grades that correlate highly with each student’s GPA. Even after accounting for the effects of other variables, grading leniency still had a significant negative association with grading reliability, according to the study. This supports the view that grading leniency may be a symptom, rather than a cause, of low grading reliability. Perhaps we need a way to monitor reliability on our own campus by collecting data on the types and scope of assessments used in each course.
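To make the computation concrete, here is a minimal sketch in Python using pandas. The table layout, column names, and grade values are hypothetical, and the grade–GPA correlation is computed across all records purely for illustration, whereas the study aggregates reliability at the department level.

```python
import pandas as pd

# Hypothetical records: one row per student per course, with the course grade
# expressed in grade points and the student's end-of-term GPA (4.0 scale).
records = pd.DataFrame({
    "course":  ["BIO101", "BIO101", "BIO101", "CHM201", "CHM201", "CHM201"],
    "student": ["s1", "s2", "s3", "s1", "s4", "s5"],
    "grade":   [3.7, 3.3, 4.0, 2.7, 3.0, 3.3],
    "gpa":     [3.2, 3.0, 3.8, 3.2, 3.4, 2.9],
})

# Leniency score: the average grade awarded in a class minus the average GPA
# of the students enrolled in it. Positive values suggest easier-than-typical grading.
class_means = records.groupby("course")[["grade", "gpa"]].mean()
leniency = class_means["grade"] - class_means["gpa"]

# Reliability proxy: how strongly course grades track students' overall GPAs.
# Computed here over all records; the study works at the department level.
reliability = records["grade"].corr(records["gpa"])

print(leniency)
print(f"Grade-GPA correlation: {reliability:.2f}")
```

Under this sketch, a course with a large positive leniency score awards grades well above what its students typically earn elsewhere, and a low grade–GPA correlation signals that grades are not tracking student performance consistently.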