I've been asking a few students lately what they mean by a phrase most of us profs hear occasionally at this time of year when going over last semester's exam papers: "I didn't get the grade I was expecting."
And I don't mean this tongue-in-cheek. I'm genuinely interested, because I've never understood the concept of a student "expecting" a particular grade in a subject. The student has no idea how other students will perform relative to them, and regardless of whether a curve is applied in a given subject, there is always a comparison.
So what is the student's expectation based on? It could be past performance: a straight-A student may expect an A by default, based on performance in other subjects. It may relate to how much effort they put into the subject: "I put in X amount of effort, so I deserve Y grade." Or it may be something else entirely.
A couple of my students suggested to me that in some of their classes professors have told them that all the grades bunched together, making it very difficult for the professor to meet the curve. The notion here is that a certain arbitrariness then enters into grading decisions, which may well be true. But to me, this is potentially a question of poor exam design on the professor's part rather than a reflection of student effort.
In other words if students could assume an ideal world in which grades wouldn't bunch and wouldn't be given arbitrarily, what would the "expectation" of a particular grade be based on? Do grade "expectations" ever arise outside concerns about the curve? And if so, what are they based on?
Assuming consistency in grading across the curriculum, a student's grade should not vary much across the same types of courses (e.g., podium, clinical, skills, or research paper). The first year creates the expectation, as a student earns mainly one type of grade. When I hear that a student earned two A's, an A-, a B+, and one D in a semester filled with podium courses, I wonder what happened in the course with a D. While it is possible that a student "earned" a D or was ill on the day they took the final exam, I at least consider the possibility of inconsistent grading criteria. 70% of the points on my exams are based on the higher levels of Bloom's taxonomy (analysis and synthesis), while some rubrics I have seen primarily reward issue spotting or rote recitation of some Restatement.
A second possibility is that a particular group of students in a course does not represent the school average for a class of that size, thus skewing the results. For example, because of a scheduling issue a particular course is filled with law review students. By applying the school's grade distribution, those students will receive lower grades merely because they all took the same course at the same time. That group, divided among different courses, would have received higher grades. The reverse also happens, when a group of lower-ranked students all take the same course at the same time--some get much higher grades than they would normally expect. Ideally, a professor should be told when the law review students are concentrated in a course, and grades should be adjusted accordingly to deal with the skew problem.
Posted by: Beau Baez | January 24, 2013 at 05:49 PM
It could just be that the student thought they nailed the exam when they didn't. It happens (I can sadly attest from experience). I can easily see that happening if you don't know what you don't know.
Posted by: Michael Risch | January 24, 2013 at 05:56 PM
What Michael Risch said. When you take an exam, you have a vague sense of how well you're doing, which translates into a vague sense of your likely grade. That vague sense can be wrong, as the better you know the material, the more you see the difficulties and realize the limitations of your answer. But it's a natural reaction. Also, while professors intuitively think in terms of curved scores, my sense is that students don't -- which is also pretty natural, I think.
Posted by: A Prof | January 24, 2013 at 06:00 PM
I'm bothered by the idea of a curve. If the professor did a terrific job teaching the course, and many, most, or all students were diligent, excellent students who put in the work, understood the material, and all did well on the exam, it is deeply problematic to punish them because of some arbitrary notion of what the curve should be.
I understand the value of a curve, of course, and the problem with grade inflation and signaling in the absence of a curve, but sometimes it has deeply pernicious effects.
Posted by: Professor | January 24, 2013 at 08:25 PM
You could of course conclude that the "curve" as such has no purpose in a professional school. When I hire an associate, I want to know that the associate knows evidence, civil procedure, etc. For this reason a large number of doctrinal courses should be pass/fail -- or the A, B, C, etc. should be based on achieving a certain score on an exam, which the student can re-sit at any point, perhaps regardless of whether they have taken the course. Those doctrinal courses and the rigor of the exams should be an accreditation issue.
Indeed one could make the bar exams more effective by requiring the candidate to show they passed certain core subject exams - which is what they do in some other countries.
Posted by: MacK | January 25, 2013 at 06:53 AM
In addition to the criteria already discussed, I also think that in-class performance creates grade expectations. I often have a couple of students who perform well in class, but whose in-class performance does not translate into good exam performance. The opposite is also true: I often have a student or two who did much better on the exam than their in-class performance would suggest.
With that said, I am rarely surprised by the grades students earn. By and large, the highest grades are earned by students who come to class, come prepared, and participate. The lowest scores are earned by the students who miss a lot of class, come to class obviously unprepared, never come to office hours, and do not participate in class discussions. In the end, I believe students earn the grade they receive. So, when discussing grades, I always emphasize that grades are earned.
As for the "bunched" grades referenced earlier, I agree with the earlier comment that it is probably a result of poor exam design. As I tell my students, a hard exam is an opportunity. Thus, I have never needed to force a "curve" to meet any grade distribution requirements.
Posted by: Tough Test | January 25, 2013 at 02:52 PM
If the grades are bunched together, the exam was too easy, or the grading criteria weren't sufficiently rigorous.
That said, I'd like to suggest that there is ALWAYS an element of arbitrariness to grading. If you take an A, B, and C exam from one prof and hand them to another, the second prof would almost certainly agree on the difference, even if her criteria are somewhat different. But where is the line between, say, an A- and a B+? And why does one exam fall on one side of that line, and a different exam on the other? There's no way to answer that question that isn't grounded in personal whim and personal opinion. We can try to apply our criteria consistently and without bias, but we can't grade "objectively." There is no such thing. Even on a multiple choice exam, in which there is clearly a "right" answer to each question, there is still an element of arbitrariness. Why ask this question, and not that one? Why is each question worth what it is worth, and not more or less? Why is 89/100 a B+, while 90/100 is an A-?
Beyond that, another question still ... Why give letter grades at all? Feedback is important; but letter grades serve no real pedagogical purpose. We give them because our students and potential employers expect them, and for no other reason. But our students are learning to be professionals ... and when they leave us they will never receive another letter grade again.
We should drastically rethink how we grade, and why. And that requires asking the people who use the grades the most -- prospective employers -- what they REALLY need to know.
Posted by: Juris Prudence | January 26, 2013 at 10:08 AM
I, too, would reject the curve if I taught a terrific class, all my students were diligent, and they all did wonderfully on the exam.
Perhaps one day that will happen for me, but it hasn't yet. Even then, I think there are always harder, more nuanced issues that separate the students if you want to test them. The real question is what you want to call an A. Is it "knows the basic stuff" (like MacK's pass/fail suggestion), or is it someone who excels at the subject matter?
One thing I liked about Chicago's grading system was that there were no clear A/B lines. The best scores could go way high into the A's while leaving lower A's to very good answers, with a lot of gradation in between. Our school is the exact opposite because we have no A+. Thus, the student with the best exam in the class -- far and away better than the others -- gets the same grade as the really good but not great answers.
Posted by: Michael Risch | January 26, 2013 at 04:13 PM