Grading on the curve


This week, the concept of "curving" grades came up. Curving is a form of norm-referenced grading, so I would like to share this SoTL article, "Norm-Referenced Grading in the Age of Carnegie: Why Criteria-Referenced Grading Is More Consistent with Current Trends." Although the paper focuses on legal education, much of it generalizes readily to other disciplines.

The author shares background on norm-referenced grading, which is "criticized because it is based on the assumption that teachers cannot improve student competence, and because it increases student stress, interferes with deep learning, and does not adequately inform students whether they have reached a level of competence."

In short, “norm-referenced assessments are based on how students perform in relation to other students in a course rather than how well they achieve the educational objectives of the course. The most classic form of norm-referenced grading is based on the distribution found in a ‘bell curve,’ in which most grades are at the top of the bell, which represents the middle range, with the highest and lowest grades at the extreme ends.”

Keep in mind that norm-referenced grading is “the measurement of a student’s performance in relationship to the performance of other students” and involves a ranking process based on some type of grading curve. “Usually, under this system of grading, students are ranked from best to worst, and then grades are awarded based on that ordering, using some set distribution of grades. This grading method does not require that students meet an objective standard of achievement.”

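To make the mechanics concrete, here is a minimal sketch in Python of strict curve grading, in which letter grades follow rank order rather than absolute achievement. The 10/25/30/25/10 grade distribution and the sample scores are illustrative placeholders, not figures from the article.

```python
# Illustrative sketch of norm-referenced ("curved") grading:
# students are ranked by raw score and letter grades are assigned
# from a fixed distribution, regardless of absolute achievement.
# The 10/25/30/25/10 split below is hypothetical, not from the article.

def curve_grades(scores, distribution=None):
    """Assign letter grades by rank using a preset distribution."""
    if distribution is None:
        distribution = [("A", 0.10), ("B", 0.25), ("C", 0.30),
                        ("D", 0.25), ("F", 0.10)]
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    grades, start = {}, 0
    for letter, share in distribution:
        count = round(share * n)
        for name, _ in ranked[start:start + count]:
            grades[name] = letter
        start += count
    # Any students left over because of rounding fall into the lowest category.
    for name, _ in ranked[start:]:
        grades[name] = distribution[-1][0]
    return grades

scores = {"Ana": 93, "Ben": 88, "Cal": 85, "Dee": 78, "Eli": 70,
          "Fay": 69, "Gus": 66, "Hal": 60, "Ivy": 55, "Joe": 41}
print(curve_grades(scores))
```

Notice that under this scheme a score earns whatever grade its rank dictates; the same score could receive a different letter in a stronger or weaker class.
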
In contrast, criteria-referenced grading measures “a student’s performance against an established standard. Under this approach, a professor can determine a student’s grade based on a numerical scale of quality.”

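A criterion-referenced scheme, by contrast, reduces to a lookup against fixed standards, as in the short sketch below; the 90/80/70/60 cutoffs are hypothetical, not drawn from the article.

```python
# Illustrative criterion-referenced grading: each student is measured
# against fixed cutoffs, so in principle every student can earn an A.
# The 90/80/70/60 thresholds are hypothetical placeholders.

CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]

def criterion_grade(score):
    """Return the letter grade for a score under fixed standards."""
    for cutoff, letter in CUTOFFS:
        if score >= cutoff:
            return letter
    return "F"

scores = {"Ana": 93, "Ben": 88, "Cal": 85, "Dee": 78, "Eli": 58}
print({name: criterion_grade(s) for name, s in scores.items()})
```
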
Supporters of criteria-referenced assessment argue that “grades should reflect students’ absolute level of accomplishment, not what other students have produced.” Critics of this system note that “establishing and defending criterion levels for each grade can be challenging and time-consuming for professors.”

To address this challenge, Centers for Teaching can assist instructors in preparing reliable assessments (tests, exams, projects, etc.). The key is to create an accurate tool that measures students' knowledge, skills, and dispositions. One method for doing this is to perform an item analysis, which calculates a discrimination index and a difficulty level for each item. Here is a brief summary of Interpreting Item Analysis data; please let us know if you would like to discuss it in more detail.

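As a rough illustration of what such an item analysis involves, the Python sketch below computes, for each exam item, a difficulty index (the proportion of students answering correctly) and a Kelley-style discrimination index (the proportion correct in the top 27% of total scorers minus the proportion correct in the bottom 27%). The 0/1 response matrix is invented sample data.

```python
# Illustrative item analysis for a scored multiple-choice exam.
# For each item: difficulty = proportion of students answering correctly;
# discrimination = proportion correct among the top 27% of total scorers
# minus the proportion correct among the bottom 27% (Kelley, 1939).
# The 0/1 response matrix below is invented sample data.

def item_analysis(responses, group_fraction=0.27):
    """responses: list of per-student lists of 0/1 item scores."""
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(student) for student in responses]
    order = sorted(range(n_students), key=lambda i: totals[i])
    k = max(1, round(group_fraction * n_students))
    lower, upper = order[:k], order[-k:]

    results = []
    for item in range(n_items):
        correct = sum(responses[s][item] for s in range(n_students))
        difficulty = correct / n_students
        p_upper = sum(responses[s][item] for s in upper) / k
        p_lower = sum(responses[s][item] for s in lower) / k
        results.append({"item": item + 1,
                        "difficulty": round(difficulty, 2),
                        "discrimination": round(p_upper - p_lower, 2)})
    return results

responses = [  # rows = students, columns = items (1 = correct)
    [1, 1, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0], [1, 1, 0, 0],
    [0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0],
]
for row in item_analysis(responses):
    print(row)
```

Items that nearly everyone answers correctly or incorrectly, or whose discrimination index is near zero or negative, are the usual candidates for review and revision.
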
Ken Bain, What the Best College Teachers Do 150 (Harv. U. Press 2004)

Roy Stuckey et al., Best Practices for Legal Education: A Vision and a Road Map (Clin. Leg. Educ. Assn. 2007)

Feinman, supra n. 4, at 648; see also Hammons & Barnsley, supra n. 4, at 54–55

Truman L. Kelley, The Selection of Upper and Lower Groups for the Validation of Test Items, 30 J. Educ. Psychol. 17 (1939)
