Daniel Schwarcz and Dion Farganis, both of the University of Minnesota Law School, have posted a new article to SSRN that will interest many Lounge readers: The Impact of Individualized Feedback on Law Student Performance.
Here’s the abstract:
For well over a century, first-year law students have typically not received any individualized feedback in their core "doctrinal" classes other than their final exam grades. Although this pedagogical model has long been assailed by critics, remarkably limited empirical evidence exists regarding the extent to which enhanced feedback improves law students' outcomes. This Article helps fill this gap by focusing on a natural experiment at the University of Minnesota Law School. The natural experiment arises from the random assignment of first-year law students to sections that take a common slate of classes, only some of which provide individualized feedback. Meanwhile, students in two different sections are occasionally grouped together into a "double section" first-year class. In these double section classes, students in sections that have previously or concurrently had a class providing individualized feedback consistently outperform students in sections that have not received any such feedback. The effect is both statistically significant and hardly trivial in magnitude, approaching about 1/3 of a grade increment even after controlling for students’ LSAT scores, undergraduate GPA, gender, race, and country of birth. The positive impact of feedback also appears to be stronger among lower-performing students. These findings substantially advance the literature on law school pedagogy, demonstrating that individualized feedback in a single class during the first year of law school can improve law students' performance in all of their other classes. Against the background of the broader literature on the importance of formative feedback in effective teaching, these findings also have a clear normative implication: law schools should systematically provide first-year law students with individualized feedback in at least one “core” doctrinal first-year class.
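For readers who think in code: the analysis the abstract describes amounts to regressing double-section grades on a feedback indicator plus student-level controls. A minimal sketch, assuming hypothetical column and file names (nothing below is from the authors' actual data or specification):

```python
# Minimal sketch of the kind of model the abstract describes (hypothetical
# column names and file; not the authors' actual data or specification).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("double_section_grades.csv")  # hypothetical dataset

model = smf.ols(
    "grade ~ had_feedback + lsat + ugpa + C(gender) + C(race) + C(foreign_born)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# The coefficient on had_feedback is the estimated effect; the paper reports
# roughly 1/3 of a grade increment.
print(model.summary())
```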
I’m traveling at the moment and so don’t have time to say more about this paper right now, but expect it to generate some discussion.
Update: Mike Simkovic has an analysis here.
Some potential caveats:
1. Given that most (all?) law schools grade on a mandatory "curve," the net effect of offering all students this kind of intervention will be, mathematically, zero. Because of the curve, law school grading is inherently relative: a 3.3 (B+) is meaningful by reference to the student who got a 2.7 (B-), and not very meaningful when viewed in isolation. (Indeed, this is the "screening" or "sorting" model of law school: we take a pool of relatively bright people and put them through tests designed to further sort them into finer categories of brightness.) A toy simulation after this list makes the arithmetic concrete.
2. Showing that one application of an intervention increases relative performance does *not*, without more, support any claim that law schools should therefore start mandating such an intervention (intensive pre-final-exam feedback) in all classes. To make _that_ argument we would need some evidence about (or at least a theory of) the dose-response curve. There may be some optimal level of feedback beyond which additional feedback has diminishing or even negative effects. There is, I would guess, such a thing as "too much advice".
3. Showing that feedback --> a one-third-of-a-grade-increment bump is *not* the same as showing that such feedback is "worth it," because we need to weigh any benefits of offering the feedback against the costs. Larry Solum, in his blog post re: this study, seems quite pleased with the study's apparent validation of the value of the 80 or so hours he spends giving feedback to his civ pro students. But 80 hours is a lot of time, and presumably his time is valuable and could be used for other purposes.
4. Moreover, I would quibble with the notion that a one-third-grade increase in performance is all that meaningful. For one, as I noted already, law school grading is more relative than objective. Nor am I sure we have any real reason to believe that a student who gets a B+ rather than a B in a first-year class will be any more likely than he otherwise would have been to become an objectively successful lawyer. This is especially so if the mode of evaluation is the classic first-year "massively complex fact pattern" question, which is a very poor model of what lawyers are actually called upon to do in practice. (But perhaps there is some study showing that a .333 increase in law school GPA does correlate with, say, higher bar passage rates; I don't know.)
5. The more direct (and in my view probably better) way to run an experiment on the question at hand would be to randomly assign students in a given class to receive feedback, and then to examine their grades *in that class*. In other words, half of the students in Civ Pro 1 would get feedback, half would not, and then the same prof would grade the same exam for all the students (blindly, of course); a minimal sketch of this design also appears after the list. I wonder if any dean would let their profs experiment on their students in this way.
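As promised in point 1, a toy simulation (all numbers invented, nothing from the study) of why a uniform improvement washes out under a forced curve: grades depend only on rank, and a boost everyone receives changes no one's rank.

```python
# Toy illustration of point 1 (invented numbers, not from the study):
# under a forced curve, grades depend only on rank, so a uniform
# improvement in raw performance changes no one's grade.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(70, 10, size=80)   # raw exam scores for one section
boosted = raw + 5                   # everyone improves by the same amount

def curve(scores, cutoffs=(0.25, 0.75)):
    """Assign B-/B/B+ by rank percentile under a forced curve."""
    pct = scores.argsort().argsort() / (len(scores) - 1)
    return np.select([pct < cutoffs[0], pct < cutoffs[1]], ["B-", "B"],
                     default="B+")

assert (curve(raw) == curve(boosted)).all()  # identical grade distribution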
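And a minimal sketch of the within-class design in point 5, again with made-up numbers and a pretend effect size, just to show the shape of the analysis:

```python
# Hypothetical sketch of the experiment in point 5: randomize who gets
# feedback within one class, grade blindly, compare exam scores.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100                                      # one Civ Pro section
gets_feedback = rng.permutation(n) < n // 2  # random half treated

scores = rng.normal(70, 10, size=n)          # placeholder blind grades
scores[gets_feedback] += 3                   # pretend treatment effect

t, p = stats.ttest_ind(scores[gets_feedback], scores[~gets_feedback],
                       equal_var=False)      # Welch's t-test
diff = scores[gets_feedback].mean() - scores[~gets_feedback].mean()
print(f"difference = {diff:.2f} points, p = {p:.3f}")
```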
Posted by: Jason Yackee | May 02, 2016 at 03:50 PM
Although the standards for conducting research in an educational setting are different, no professor would realistically be able to conduct a true experiment of the kind Jason suggests. The students who do not receive the feedback would feel they are "missing out" and complain, and if you rely on self-selection to sort (so that people who don't want feedback opt into the "no feedback" group), there is a selection-bias problem.
Posted by: Anon | May 02, 2016 at 04:22 PM
All this hoopla to prove that coaching works as well on first-year exams as it does on the LSAT, etc.
What's worse? The coach is the grader!
Geez ... common sense, people.
Anyone who has taught first-year students fresh from their undergraduate pampering knows that the first thing they try to figure out is "what the professor wants." Profs who play into this game do them a grave disservice.
I would recommend that all first year exams be graded by someone OTHER THAN THE PROF WHO TAUGHT THE COURSE.
And, instead of meaningless student evaluations (which this "study" finds meaningful), the professor should be evaluated by the performance of the students, however measured (e.g., ideally, graders should not know from whose class the exams originated).
Posted by: anon | May 02, 2016 at 05:00 PM
"Given that most (all?) law schools grade on a mandatory "curve", the net effect of offering all students this kind of intervention will be, mathematically, zero."
Zero in terms of relative ranking, but substantial in terms of students' abilities to write thoughtful essays about the law? If so, that's a big step forward for students, right?
Posted by: Anon3 | May 02, 2016 at 05:46 PM
Thanks for the thoughtful responses. I’d be interested to know whether the authors have responses to some of Jason’s caveats. If they do, I’ll try to get them to post them here.
Posted by: Kim Krawiec | May 03, 2016 at 03:23 PM
Jason (and others)
Thanks so much for these comments. I agree with some of them and disagree with others, and appreciate your willingness to engage with us on this.
As to your first point, I reject the sorting critique of law school grading that is embedded in it. I believe that most law school exams measure something relevant to the practice of law, even if imperfectly (no, the issue spotter is not perfect, but it's also not terrible at reflecting what many real lawyers do). Most law school exams -- particularly issue spotters -- generally measure (to varying degrees) how well students can, in writing, identify legal issues, articulate the relevant rules correctly, apply those rules to the relevant facts, make good and creative policy arguments and counter-arguments, etc. If feedback from professors helps students do this for other professors, then I think that I have done an important part of my job.
Second, although we are still investigating the issue and our results here are preliminary, it is possible that offering all students feedback disproportionately benefits the grades of lower-performing students (in terms of either grades in split-feedback double sections or LSAT scores relative to the Law School's median). If so, then our findings have normative implications even if you completely accept the sorting model of law school exams (which we don't).
Third, we only argue that “law schools should systematically provide first-year law students with individualized feedback in at least one ‘core’ doctrinal first-year class.” This is a very limited intervention relative to the one you describe (i.e., providing individualized feedback in EVERY class, which we don't suggest). For many law schools, this may actually be no more difficult than assigning professors more carefully (though we acknowledge that the dose point is relevant).
Fourth, I think it's likely -- but here I am ONLY speculating -- that providing students with feedback decreases first-year stress (I would be interested to hear from law students who actually had such feedback). Others have suggested maybe not, as it creates "winners and losers" early on. But I think this is worth investigating.
Fifth, I think that professors SHOULD spend an extra "80 hours of time a year" to give students helpful feedback that meaningfully improves their writing. (I probably spend an average of about 800-1000 hours a year teaching, and that's not including semesters off.) My own view is that, at least during the semester, I should spend at least 50% of my time on teaching. I teach 10 credits a year, and only one first-year class, which has 40-50 students. That still leaves lots of hours for scholarship, which is also hugely important (probably about 800-1000 hours as well). Teaching plus administrative work (which I also do, but would gladly give up) should, in my view, be about 50% of what I do.
Sixth, I don't think the intervention you describe would be ethical or would get past the IRB (forget the Dean). And the most ethical version of what you describe has been done by people like Andrea Curcio, who reached results consistent with those we found. Even more important, I think, is that there is a huge literature on education theory across all disciplines emphasizing the importance of formative assessment. So we should not proceed on the assumption that law is unique.
Posted by: Daniel Schwarcz | May 03, 2016 at 05:48 PM
I'll supplement Dan's responses to Jason's points with my own. (Numbers correspond to Jason's numbers.)
(1) While it may be true that the net effect from a grade perspective would be zero, our focus is not on grades per se but rather on student performance and preparation. As such, our view is that the advantage of feedback in 1L classes is not that it improves students’ grades — since, as the commenter points out, this is not possible for all students in a curved grading environment — but rather that it improves students’ abilities to handle the kinds of problems that they will encounter on exams and, later, in practice. The focus on differences in grades in our study was a function of the experiment itself, where one group of students received feedback and the other did not; grades were a proxy for improved performance. We would be delighted to learn that we could not replicate this study in the future because all students were now receiving feedback.
(2) It may be true that there is a diminishing-returns aspect to feedback, and we would encourage future efforts to investigate that. Our study does not speak to that issue because it focuses on sections where, in most cases, students received only one “dose” of feedback. The takeaway for us from the single-dose effects that we observed was that even a modest amount of feedback is beneficial.
(3) The extent to which feedback — or any other pedagogical technique — is “worth it” is, of course, highly subjective and condition-specific. From a student’s perspective, we suspect that most 1Ls would certainly hope that their instructors viewed the improvement of student learning outcomes as “worth it,” but we recognize that in a world of limited resources and time, choices will have to be made by instructors regarding which parts of their teaching receive more and less attention and energy. Our goal in this study is merely to provide instructors with evidence that feedback can improve student outcomes; the decision to deem that kind of feedback “worth it” is one that we leave to the instructors themselves.
(4) Here again, I would begin by pointing out that the grade increase itself is not our primary concern (although it was the dependent variable in our study). Rather, we are interested in improving student preparedness for later law school courses and for post-law school employment. We believe that the increase in grade average among the feedback students was indicative of a better level of preparedness, and we endorse increased feedback on those grounds. The question of whether law school exams themselves replicate and anticipate the kinds of work that lawyers will do in the real world is a separate inquiry, but I think Dan's comments on this point are excellent and right.
(5) As another commenter rightly noted, this would be infeasible — not to mention unfair — in a law school setting because of the intentional denial of a known (or likely) benefit to some students. We strongly doubt that any dean, or any institutional review board, would let such an experiment proceed. It is worth reiterating in this context that our experiment did not require any deception or manipulation of either faculty or students. All of our data were gathered after the classes were complete; we merely leveraged an existing quirk in the Law School’s scheduling and staffing of 1L classes to produce “natural” control and treatment groups.
Posted by: Dion Farganis | May 03, 2016 at 09:58 PM
It all comes down to objective proof of progress. Subjective grading, the curve, fixing the selection criteria, etc. render reliance on this sort of "study" bizarre.
In the first year, teach the same d... material across the board (after more than one hundred years, can a bunch of law profs agree on what a civ pro course entails?) Put an end to the idiosyncratic torts prof who wants to spend the entire semester on the aspects of corrective justice exhibited by the economy of Borneo.
Again, have professors grade a mix of ALL sections, including students from sections taught by different profs.
Evaluate profs' teaching ability on whether they can teach, as measured by student LEARNING and as judged by experienced professors, not on the bogus bases that students mainly use to evaluate profs on forms. (We all know, or should know, that student evals follow a general pattern: if the student doesn't care for the prof, the ratings suffer across the board, so singling out any one metric as relevant is particularly ridiculous.)
As for coaching students (aka "feedback"): of course, testing students' understanding in one form or another before the end of the semester is worthwhile. Tutoring for marginal students, which seems to be the point of this jargon-laden discourse, can be effective briefly, but won't work in the long run.
Posted by: anon | May 03, 2016 at 10:50 PM