As my friends and colleagues know, I'm not a huge fan of law reviews. In fact, as you'd know if you'd been subjected to serving on a faculty with me at any time over the past eighteen years, I'm quite critical of them. Some years ago I set out to write a short piece, "law review editorship as training for hierarchy" -- which took its inspiration from Duncan Kennedy's "legal education as training for hierarchy," obviously, and also from Fred Rodell. Yet in some ways of late I've been a defender of law reviews. I think they provide some training for the students -- they give the 3Ls a chance to get some management and editing experience, and they provide an opportunity for both 2Ls and 3Ls to do some writing and research. I think law review can be a meaningful intellectual experience for people who take it seriously. There's also something anti-hierarchical about law reviews in that students, rather than people more directly connected to the hierarchy, are picking the articles. Are there better systems that could be devised for these purposes? I'm sure there are; but I'm not sure law reviews are the complete and utter disaster that so many people make them out to be. I know that doesn't sound like a resounding endorsement of the educational and scholarly value of law reviews -- but in context around here, it is.
One of the questions I've had for a long time is how good students are at picking law review articles. I think it's pretty hard to evaluate an article's quality -- I struggle with this with some frequency when I do peer reviews for history journals, for instance, and I actually know a lot about the fields I'm reviewing in. The students seem to think they can do a pretty good job of this -- and maybe they're right. You may recall the words of one student editor at the University of Pennsylvania Law Review who wrote a response to Judge Posner's critique of law reviews a few years back:
The issue is not whether students are competent to select only the “best” articles, but whether student editors are able to determine whether a given article meets a basic threshold of validity, thereby creating a portfolio of valid articles for dissemination to the legal community. . . . [B]ecause the article selection process is complex, anyone young and inexperienced will have difficulty with it. The truth is, however, that article selection is not too difficult a task for law students. Deciding whether or not an article is desirable is not an elusive process requiring a refined professional judgment, honed through years of apprenticeship and experience. It is not even like wine tasting or art-gallery visiting, where a certain kind of “taste” or “eye” is needed.
How can we judge the quality of student decisions? Well, one way, I guess, is to ask the experts. Often when I read an article in a truly elite law journal that's in my area of expertise, I'm surprised that it was selected, because I can see and identify problems with it. Of course, I have no idea what the other articles under consideration were, so the fact that I think a particular piece isn't great (and that I'm familiar with better pieces published in "lesser" law reviews around the same time) isn't a great measure.
Another way of looking at this issue is to use citation data. As I wrote a couple of weeks back in the context of Theodore Eisenberg and Martin Wells's latest on citations, a study of citations also has a lot of problems. But if we can suspend those objections for a moment, I want to talk about a simple study I published a few years back, which looked at citations to articles in thirteen leading law journals over a fifteen-year period. The study found that many articles in our nation's most elite journals did substantially less well than articles published in very good, even if not the most elite, journals. There are a lot of things to be said about this -- including that, wow, there's not a lot of space in those journals, and some articles do great -- absolutely fantastic in citations -- but a lot don't. But I think it also suggests that a lot of judgments, even by editors of the best journals, may not be the best they could have made. This is hindsight -- and it poses all kinds of problems related to field bias in citations -- but it also reminds us that some articles in the best journals may not be as good as many other articles published in other journals. And I think that's an important caveat, especially this time of year as hiring and promotion and tenure committees are gearing up.