


December 28, 2012




1. I may be biased, but well-written objective components are great, and I think they should be included whenever a class has a significant statutory component. As a former teacher, I understand these are quite difficult to write, and I imagine even more so year after year, but I hope professors know it is worth it.

2. This is another point on which I may be biased, but I would prefer professors impose broad curves, not thin ones. This issue comes up when a school has either no mandatory curve or, as at my law school, a mandatory average. As a professor you have two choices: give more high and low grades or give more middle grades. I find professors who are confident in their testing and grading abilities opt for the former, and professors who are less confident opt for the latter. As a student, I prefer professors who are known for high and low grades, because it tells me the professor is confident in their exam-writing abilities and that I will be less likely to be "screwed." I know giving a C is a tough thing to do, and professors who have had to explain in a post-exam conference why they tanked a person with an otherwise stellar GPA are wary of giving too many of them. I think in that situation the right answer is to get advice from other professors on how to improve the exams and grading, not to push grades toward a middle point.

3. Go for hard and fair. You might think most students like easy exams. We hate them almost universally, except for the people who know they are not going to put in the work. Easy exams produce what we refer to as "tight curves," which is code for "the grades seem to be randomly assigned." Whichever curve strategy you follow, the exam's degree of ease matters a great deal in how much noise your testing and grading will produce. For example, one student in my section got a straight A on a first-year constitutional law exam despite putting in no work, having little talent or interest, and probably never having gotten a grade above a B+ in any other class. That kind of outlying grade is the product of an easy exam, and I wish professors were as concerned about this result as about a law review student getting a C.

4. Don't misread poor exam performance by students as an indication of how hard your exam is. One of our most beloved professors at my school gives super easy exams (including the one that produced the outlying grade discussed in 3), yet believes he gives very hard exams and is always dumbing them down to make them easier. This professor tells stories of early exams on which students got nothing they should have, and of how that experience taught him over the years how easy he needed to make his exams. But here is the thing: the issue lies in the way the course is taught, not in the exam. By misreading the poor performance as an exam-difficulty issue, the professor makes exams easier and opts for more middle grades under our mandatory-average curve. That makes his course undesirable to take, which is unfortunate. In a teaching book I once read, I think by Ira Shore, a first-time teacher overheard two students in the hallway after class saying, "Wow, that teacher is so smart." The teacher didn't congratulate himself, as you might expect; instead he realized he had done a poor job teaching: he had made the material so difficult to understand that he sounded really smart. Let me put it like this: if no students get everything or nearly everything you put on the exam, not a single one, that is not an exam-difficulty issue; it is a teaching issue. You only have an exam-difficulty issue when five students do really well and everyone else does terribly, because that means you did provide the needed support but only the best could get there. If no one gets there, it is not because the exam was too hard.

5. Time is relative. I think most professors read timing issues into exam performance when they should not. In a typical situation, a professor sees that students missed a lot of things across the board and that they didn't write much. This could mean the time given was insufficient for the material presented, but it could also mean there were issues with the exam or the course. I have taken exams in which the room collectively freaked out as everyone finished an hour early, yet I am sure that in at least one instance the professor would not have thought we had enough time. Unless you talk to people, you don't know whether time pressure was the issue or something else. Time pressure is certainly something to be wary of, but giving students too much to do is not as bad as giving them too little, because too little creates the same curve-flattening and grade-randomizing issues I discussed above.

6. Weighting is hard and more nuanced than you think. Here is my advice: make a friend in your university's math department and run how you weight exams by them. You should do this because the mathematics of weighting is one of the greatest areas of innumeracy with day-to-day impacts on people. Say you have an exam that is part objective and part essay. You decide you want to give 75% of the credit for the multiple choice and 25% for the essay. You then assign 75 points to the objective component (say, 3 points for each of 25 questions) and 25 points to the essay, right? Wrong. This does not take variance into account and will not weight grades the way you decided to weight them, because the objective component and the essay are unlikely to have equivalent variances. Essay grade variance depends on the professor, but it is something less than an even spread from zero to the full potential credit you have assigned. Multiple choice scores will range from a guessing floor of 20% (five options) or 25% (four options) up to 100% (or less, depending on difficulty). In most instances the multiple choice variance will be higher than the essay's, and the result of combining them by assigning 75 and 25 points will be to overweight the multiple choice. The same applies to a test with three essay questions of varying difficulty but equal credit: the hardest question will usually have the greatest variance and end up carrying the most weight. As there are a lot of variables here that depend on the individual test, I can't give you a rule of thumb other than that the best thing to do is to run it by a math person to ensure you achieve something close to your intended weighting. The best practice is probably to combine rankings of the individual performance components instead of combining weighted raw scores.
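The variance point above can be illustrated with a short simulation. This is a minimal sketch with made-up scores (the eight students, the numbers, and the standardize-then-weight approach are illustrative assumptions, not from the comment): simply adding raw points lets the higher-variance multiple-choice component drive the ranking, while standardizing each component to z-scores before applying the intended 75/25 weights restores the intended balance.

```python
import statistics

# Hypothetical raw scores for 8 students.
mc = [45, 75, 30, 60, 72, 39, 66, 51]      # multiple choice, out of 75 (wide spread)
essay = [18, 20, 17, 19, 21, 18, 20, 19]   # essay, out of 25 (narrow spread)

def zscores(xs):
    """Standardize a list of scores to mean 0, standard deviation 1."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

# Naive combination: just add raw points. Because the multiple-choice spread
# dwarfs the essay spread, the multiple choice effectively decides the curve.
naive = [m + e for m, e in zip(mc, essay)]

# Variance-aware combination: standardize each component first, then apply
# the intended 75/25 weights to the standardized scores.
weighted = [0.75 * zm + 0.25 * ze
            for zm, ze in zip(zscores(mc), zscores(essay))]

def ranking(scores):
    """Student indices ordered from highest score to lowest."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

print("naive ranking:   ", ranking(naive))
print("weighted ranking:", ranking(weighted))

# How much of the naive total's spread comes from the multiple choice alone
# (ignoring any covariance between the two components)?
var_mc = statistics.pvariance(mc)
var_essay = statistics.pvariance(essay)
print("MC share of raw-score variance: %.0f%%" % (100 * var_mc / (var_mc + var_essay)))
```

With these numbers the multiple choice accounts for nearly all of the raw-score variance, so a strong essay barely moves a student under the naive scheme; after standardization, the essay carries its intended quarter of the weight and the top of the ranking changes.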


Thanks for the great post, Bobby. I hope many people read it.

Jacqueline Lipton

Thanks, Bobby. Some of what you say also feeds into a comment I made on another post about how it behooves us professors to teach students about exam technique and what we expect in exam performance, including time management. I know we profs will vary in our expectations from class to class, but that's all the more reason to be clear throughout the semester about our expectations in terms not only of subject-matter mastery but of exam technique.

Jeffrey Harrison

Bobby's comment is an indirect and useful reminder that doing well at a high-status law school does not qualify one to be a teacher, by which I mean someone well versed in evaluating student performance. One thing professors get confused about is the difference between validity and reliability. Validity asks whether you are testing what you taught. Reliability asks whether the instrument consistently produces the same measure of whatever it is you are measuring. I think it is important to debrief test-takers to assess whether they perceived the questions as the drafter intended. That goes to the validity question.


One of the worst exams I took was an issue-spotting bonanza with dozens of issues. The professor boasted that a top-20% exam would spot and analyze no more than 25% of the issues in the fact pattern, which made grading easier. Such an exam does not encourage in-depth discussion of the material or measured legal analysis. It encourages students to memorize prepared rule statements and vomit them onto the page along with cursory IRAC.

On the one hand, I was glad this professor was brutally honest: grading was a pain in the ass, he didn't like it, and he didn't want to spend a lot of time or effort doing it. His turnaround time was by far the fastest of any 1L prof. It was also a welcome admission that the professor didn't really buy into that "thinking like a lawyer" crap, but was going through the motions and giving us the grades we needed to go get jobs. Finally, I couldn't say the format wasn't disclosed: previous exams and model answers were available, and everyone knew going in that it was going to be a typing race.

But if you are going to teach a traditional doctrinal class, then the exam should really try to evaluate the skills gained. One way to do that is to have a word limit and provide more than enough time to finish the exam, so students can write an outline and then a final product. Another is to have students research and write a brief. The best answers will then be efficient, well-reasoned, and cogent, instead of simply hitting buzzwords.

Jeffrey Harrison

BoredJD only identifies some of the problems with testing of the sort he or she describes. A massive fact pattern means giving different exams to different students. It is the same as giving five questions and telling them to answer any three. When each student selects a different combination of issues to discuss, how do you compare them? Many students may have seen all the issues while some saw only the ones they chose to discuss, yet their grades may be the same. The mystery to me is how the professor could quickly grade, and then defend the grades on, a massive issue-spotting exam in which each student was responding to different questions.


