As my friends and colleagues know, I'm not a huge fan of law reviews. In fact, as you'd know if you'd served on a faculty with me at any time over the past eighteen years, I'm quite critical of them. Some years ago I set out to write a short piece, "law review editorship as training for hierarchy," which took its inspiration from Duncan Kennedy's "legal education as training for hierarchy," obviously, and also from Fred Rodell. Yet in some ways of late I've been a defender of law reviews. I think they provide some training for the students: they give the 3Ls a chance to get some management and editing experience, and they provide an opportunity for both 2Ls and 3Ls to do some writing and research. I think law review can be a meaningful intellectual experience for people who take it seriously. There's even something anti-hierarchical about law reviews, in that students, rather than people more directly connected to the hierarchy, are picking the articles. Are there better systems that could be devised for these purposes? I'm sure there are; but I'm not sure law reviews are the complete and utter disaster that so many people make them out to be. I know that doesn't sound like a resounding endorsement of the educational and scholarly value of law reviews -- but in context around here, it is.
One of the questions I've had for a long time is how good students are at picking law review articles. I think it's pretty hard to evaluate an article's quality -- I struggle with this with some frequency when I do peer reviews for history journals, for instance, and I actually know a lot about the fields I'm reviewing in. The students seem to think they can do a pretty good job of this -- and maybe they're right. You may recall the words of one student editor at the University of Pennsylvania Law Review who wrote a response to Judge Posner's critique of law reviews a few years back:
The issue is not whether students are competent to select only the “best” articles, but whether student editors are able to determine whether a given article meets a basic threshold of validity, thereby creating a portfolio of valid articles for dissemination to the legal community. . . . [B]ecause the article selection process is complex, anyone young and inexperienced will have difficulty with it. The truth is, however, that article selection is not too difficult a task for law students. Deciding whether or not an article is desirable is not an elusive process requiring a refined professional judgment, honed through years of apprenticeship and experience. It is not even like wine tasting or art-gallery visiting, where a certain kind of “taste” or “eye” is needed.
How can we judge the quality of student decisions? Well, one way, I guess, is to ask the experts. Often when I read an article in a truly elite law journal that's in my area of expertise, I'm surprised that it was selected, because I can see and identify problems with it. Of course, I have no idea what the other articles under consideration were, so the fact that I think a particular piece isn't great (and that I'm familiar with better pieces published in "lesser" law reviews around the same time) isn't a great gauge.
Another way of looking at this issue is to use citation data. As I wrote a couple weeks back in the context of Theodore Eisenberg's and Martin Wells' latest work on citations, a study of citations also has a lot of problems. But if we suspend those objections for a moment, I want to talk about a simple study I published a few years back, which looked at citations to articles in thirteen leading law journals over a fifteen-year period. The study found that many articles in our nation's most elite journals did substantially less well than articles published in very good, even if not the most elite, journals. There's a lot to be said about this -- including that there's not a lot of space in those journals, and while some articles do great -- absolutely fantastic -- in citations, a lot don't. But I think it also suggests that many judgments, even by editors of the best journals, may not be the best decisions they could have made. This is hindsight -- and it poses all kinds of problems related to field bias in citations -- but it also reminds us that some articles in the best journals may not be as good as many other articles published in other journals. And that's an important caveat, especially this time of year, as hiring and promotion and tenure committees are gearing up.
The illustration is a graph from the article showing citations to each article the journal published in 1992, over the next fifteen years. I blogged some more about this a while back.
I did a judicial citation study confined to law and economics many years ago, and a follow-up that will be out soon. I think judicial citations are important in assessing whether we are writing for an audience other than ourselves. The problem is that courts generally cite articles that support where they are going anyway. Thus, perhaps the most important articles -- those suggesting a change, as opposed to those (like many in law and economics) adding new support for old ideas -- are not cited. I am not sure how this cuts in terms of article selection, but I'd say judicial citation is probably unrelated to quality, or at best only moderately related.
Posted by: Jeffrey Harrison | August 09, 2012 at 10:10 AM
I disagree with Jeffrey's comment. While courts will cite articles supporting the decision -- so what? It shows the article was relevant. Moreover, a dissenting opinion (if it's an appeals court) can cite a different article reaching a different conclusion. The bottom line of the post is spot on. Too often one writes a great article contributing to the legal profession, and it is not published in a top elite journal because the author is not associated with a "top school." Yet that article is cited by courts. I realize student editors have to manage their time and -- it's human nature -- will defer to the author from an elite institution and pick his or her article because it's a safer pick. Look, is the good-looking babe going to go out with the guy in the Prius or the guy with the Z4 convertible? The latter might be a jerk, but it's the curbside appeal that, at least initially, wins.
Posted by: Craig | August 09, 2012 at 10:31 AM
I don't want to be misunderstood. I value judicial citations more than law review citations or (god forbid) SSRN downloads, because it means someone out there in the world of actual law -- a clerk, a lawyer, a judge -- has seen the article and found it useful in some way. I am just not sure that means it is an article of superior quality or that law review editors can be complimented. This may be a distinction between relevance and quality.
Posted by: Jeffrey Harrison | August 09, 2012 at 10:49 AM
Thanks for this post, Al. I think it's an important issue. I commented on a post on this topic by Brando below, so some of this will be redundant, but I wanted to say it here as well.
I publish in international law, which is one of the fields in which there are a significant number of peer-reviewed journals to publish in. In my field, almost all of these are headquartered in the UK and Europe; I know in other fields there are more US-based ones. So I have experience publishing about 50% of my work in peer-reviewed journals and 50% in U.S. student-edited journals.
I think we in U.S. legal academia should not be blind to how unique we are among academic disciplines in having student-edited journals as even an option for publication, let alone the primary option. Another way to say this is that we should recognize how much of an outlier we are to mainstream academic practice in this area. No other discipline that I know of, in the U.S. or elsewhere, has this institution, and I think there are very good reasons why other disciplines have not embraced it. Reading the comment from the UPLR student editor that you quote above, I just couldn't disagree more with that student's analysis of the qualifications likely to produce a quality judgment about scholarship. And even more troublingly, the quote seems to me to be evidence that at least some student editors of law reviews aren't even aware of how unqualified they are to judge the scholarly quality of law review articles.
I think that we should as a discipline make the fundamental decision to essentially do away with student-edited law reviews, and move to a system of primarily or exclusively peer-reviewed journals, like the systems all other academic disciplines maintain.
Now, before everyone jumps on this idea, I know the standard objections. Peer review doesn't necessarily equate to quality review. I get this -- believe me. I've had some ridiculous peer reviews of my work.
Another standard objection is that peer-reviewed journals require too much time and effort from us faculty, and that a switch to peer review would inevitably mean fewer journals overall. I do understand these concerns, and they do raise some real problems.
So, like all other important choices in life, this is not a clear and easy one. But would the benefits of a switch to peer review outweigh the costs? I personally think it probably would. It would put us on par with all other academic disciplines, in which this calculus has indeed resulted in primarily or exclusively peer-reviewed journals.
Posted by: Dan Joyner | August 09, 2012 at 04:12 PM
Dan, your main point is that the U.S.-style journal is unique compared to the European peer-reviewed system. Let's not follow Europe into the peer review system: it means less innovative thinking and self-interested reviewers putting the screws to academic "enemies." I like the U.S. style -- new ideas are explored, and there is great diversity among editors. Besides, most U.S. journals consult with faculty members. I believe the U.S. journals have served the legal community well. Not everything European is "superior" to our system. Just take a look at their economy -- is a stakeholder system of governance really "superior"? Let's stick with our U.S. law journal process.
Posted by: Dave | August 10, 2012 at 02:43 AM
I'm curious about the citations: how many were in federal district court opinions versus the Supreme Court? I've found very few law review articles of professional relevance in litigation practice. At best, some were topically relevant, but they didn't offer any insight into practical application, or were disguised case summaries. But, then again, it's law school, so perhaps yet another law review article on statutory interpretation or the First Amendment is encouraged over one of more practical use.
Posted by: lawgrrl | August 10, 2012 at 02:10 PM
Lawgrrl,
The article I linked to above dealt separately with citations by journals and citations by courts. I've forgotten the number that were US Supreme Court as opposed to federal courts of appeals -- I think the preponderance was the latter.
Posted by: Alfred Brophy | August 10, 2012 at 03:30 PM
Dave, that actually was not my main point. My main point is that the U.S. student law review institution is different from the publishing institutions of every other academic discipline, in the U.S. as well as elsewhere in the world. So no, the provincialism that seeps through your comments is not an effective rebuttal to my argument.
Posted by: Dan Joyner | August 10, 2012 at 04:02 PM
It looks like this discussion is over, but one more thought. The current system is flawed, but the politics of a peer review system for those writing in law also seems hazardous. Many if not most law review articles are briefs for one position or another, as opposed to research designed to find an answer. Evaluation by objective peers seems unlikely. Student review would not be as much of a problem if we stopped putting so much stock in where a piece is published. It seems like many say they hate the system, but then they worship it.
Posted by: Jeffrey Harrison | August 10, 2012 at 08:13 PM