This was an excellent read. I myself submitted a paper to a UK peer-reviewed law journal two or three years ago – I do not even recall the name. Within an hour or two I received a rejection. I am fairly confident that the editor saw an affiliation on my CV that he was not fond of. I have a track record of over a dozen really high-end US journals; I am not a junior scholar with one or two publications. At a minimum, the editor should have sent the paper out. In any event, I sent the paper to US journals, received multiple offers, and placed it with a great US journal. The article, while only out a year or a year and a half on Westlaw, has already been cited a few times, so I know it was good. I would not have a problem if the editor had forwarded the paper and the reviewers had rejected it, but he did not even send it out. He made a unilateral decision, and I highly suspect prejudice was at work. My impression of the peer review process is not high. The US system is superior.
Interesting, because the argument always made by defenders of marginal law schools is that a "peer reviewed" labor economics paper found law school benefited students. Of course that paper was published in a legal journal, not an economics journal. Perhaps the editors of legal journals are more ethical and rigorous in reviewing economic analyses than those of economics journals.
Thanks for this, Steven. I had heard nothing of this. I find this statement important — conspiratorial is too strong a term — "The editor, by selectively picking which referees will review the paper, has a lot of influence over how the 'peer review process' turns out." I have heard a similar critique of peer review with some frequency.
While I don't want to sound like a company man — or a representative of "the man" or of "the establishment" — I think that comment mischaracterizes how editors usually behave. I also think it is inaccurate about the ability of editors to shape a peer review even if they wanted to. We are looking for people who know the area of the paper under review and will turn in a thoughtful review on time. While it might be *possible* to pick referees who are "soft" or (I don't think anyone's doing this deliberately) "tough," I doubt that editors do this. Moreover, even if editors wanted to do this, it would be hard to engineer. I don't know that there are a lot of people who will be consistently "easy" on a paper.
My experience on the editor side of peer review is less than a year, so my views may change as I get more experience, but as the recipient of lots and lots of peer reviews over what is, I guess, decades now, I don't think I've ever had the sense that an editor was setting me up for a soft review (I especially haven't had that feeling of late), or a particularly hard/unfair one. And while it may be that *in this case* there was a set-up, I really wonder about editors' abilities to do this with any frequency.
Having said that, I'd love to hear a lot more discussion about the peer review process.
Alfred, a very serious problem is that reviewers often make criticisms based upon seeking to have the author cite their prior work. Example: a reviewer previously authored a book or article on the subject, and in the review claims the paper needs work on a certain area which happens to be a topic near and dear to the reviewer. The author incorporates the point but doesn't cite the reviewer's work, as there are tons of on-point articles. In the next review, the reviewer again points to this area, with cites including what I imagine is the reviewer's own work. Message received: to get approved, cite these specific articles, because one of them is the reviewer's. You do, and lo and behold… the reviewer approves. It happened to me, so I know it happens.
I'm speaking from the outside – pretty well everything I have published has been commissioned by the press in question (Thomson-Sweet&Maxwell, OUP, UNCTAD) or published in a subscription legal journal that is pretty expensive – but I have friends and family who are academics, mostly in STEM, and go through the peer review process.
I think the idea that peer review alone will solve the problems with US law reviews and journals misstates the nature of the problem. US law reviews and journals largely exist to print an article – to publish, but not to be read. Being read is to some degree irrelevant to their mission, which is to allow US legal academics to list the publications that their law schools crave as indicia of faculty quality and "eliteness." US law reviews and journals are not supported by subscription; rather, they are published by law schools because, so to speak, a serious law school must have a law review and journal, even if no one reads it.
That is not to say that, in crude terms, this isn't what non-Law faculties want too, but for them the h-index and citation counts, as well as the journal's impact factor, are of major importance; in short, their measures address the question of whether anyone in fact reads the article or the journal. In STEM publications there are other issues. One is the very obvious tendency towards circular citation – it is widely asserted that certain authors tend to cite friends and colleagues who reciprocate; another is that an author with an already high h-index will get more favourable consideration. In other words, since citations are what non-law departments seem to look for, they often get gamed too.
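For readers less familiar with the h-index mentioned above: it is the largest number h such that the author has h papers with at least h citations each. A minimal sketch of the calculation (illustrative numbers only):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: the big outlier paper barely moves the index
```

Note how a single heavily cited paper barely moves the index, which is exactly why gaming it tends to require sustained reciprocal citation rather than one blockbuster.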
In Europe there is another factor – European legal journals are commercial propositions – they need subscriptions to keep publishing. In that respect they are much closer to, say, BNA (now part of Bloomberg). The subscription cost of many of these journals is very high – typically north of $400-600 p.a. for maybe 12 numbers and a special or two. For this reason the journals are very focused on articles that their readers want to see, on issues their readers are immediately concerned about and want information to address. In that respect it is also worth remembering that the European case reporting system is – well – for the most part cr@p. The UK and Ireland have the best availability (and the EU, though it is slow to publish certain things in a case, such as pleadings), but even then it can be tricky to find a case, and the writing style is frequently opaque (especially in the English Court of Appeal and the Supreme Court (formerly House of Lords)). Thus there is a market for interpretive articles about recent judgments in the UK and Ireland – while in the rest of Europe there is a market for telling you there even was a case, let alone a judgment.
This is of course a factor when UK or European journals reject an article – don't even send it out for peer review. Editors make commercial choices about submitted articles – is it something the readers will want to read? – and remember, the readers of these legal journals are not academics, but rather practicing lawyers. The peer reviewers, too, are often different from what a US academic might expect, in that many will be either practitioners or judges, and again their viewpoint will be driven by what is of interest to them and the group they represent. Finally, a lot of European legal academics also practice to a greater or lesser degree and have a relationship with a law firm or barristers' chambers.
Mack, you are completely wrong regarding US law reviews when you claim they are not meant to be read. US law journal articles are routinely cited in major litigation, and in fact there are rankings specifically as to court cites.
US law journals do not need fixing – they are an excellent venue, and theirs is obviously a more ethical process than the political and biased "peer-review" nonsense that at least "some" UK peer-reviewed law journal editors clearly engage in.
The event detailed in Prof. Borjas' posting involved a serious ethical lapse by the editor but it was a lapse that was specifically precluded by the governing standards of the Journal so it is hard to fault the peer review process for her error. Peer review is not without its problems but it remains far superior to student editors, who are often prone to select articles based on letterhead, resumes (some of which include pictures!) or the political inclinations of the student. These proxies are often used because the students lack any substantial knowledge on either the substance of the article or the existing literature. One way we can surmise the superiority of peer review is that no other discipline has opted for the lunacy of having 2nd and 3rd year students select articles.
For no other reason than to snark – has anyone asked whether http://themodernembalmer.com is peer reviewed? I have to say, the articles have some snappy titles. Which politician do people think was the subject of:
CALL OF THE WILD: TAXIDERMY TANNIC ACID/TANNINS IN EMBALMING.
A DEAD-END ROAD TO FORMALDEHYDE-FREE CHEMICALS.
@6Train: it's called a desk rejection and it's not that uncommon in peer review. In one sense, the editor may have done you a great favor. If your paper was not a good fit for the journal, or if you were submitting a policy piece to an empirical journal, etc., then why waste everyone's time–including yours–by going through the formality of sending it out for review?
Peer review and the law school approach each have their pros and cons, but your assessment falls short. What percentage of all law review articles have ever been cited in a court opinion? I'll guess 1% or less and will be happy to be proved wrong. Also, are you suggesting that a metric of a "good" paper is that it has been cited a few times in 1.5 years?
Can someone answer Cynthia? I too would like to know:
1. What percent of articles published in the reviews/journals of ABA approved law schools each year are cited by courts within five years of publication?
2. What percent of articles published in the reviews/journals of ABA approved law schools each year are cited in the reviews/journals of ABA approved law schools within five years of publication? Of these, in what percentile does three citations fall?
Patrick
Thanks, but, as you say, these stats don't really capture what Cynthia and I were looking for.
Do your stats compare the "citations" to ALL law review articles – for the past 100 years or so? – to the number of new law review articles in one year?
I'm not sure what that would tell us, but it definitely doesn't tell us whether, for example, 3 cites in 2 years is a metric of a "good" article, and doesn't tell us how many new articles are cited by courts within five years.
Moreover, we have the "Sunstein/Posner" problem. I'll leave it to others to characterize the consistency of the quality of their work. In this context, however, we know that these authors garner huge numbers of citations. Thus, it is important to know what the distribution looks like within the cohort of law review articles cited. Averages don't really tell us much of anything, as we know when we ask the average net worth of the patrons of a sleazy bar when Warren Buffett walks in.
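The Warren Buffett point can be made concrete with a toy calculation (the numbers are purely illustrative):

```python
from statistics import mean, median

# Hypothetical citation counts for ten articles in a cohort:
# most get a handful of cites, one outlier piece dominates.
cites = [0, 0, 1, 1, 2, 2, 3, 3, 4, 250]

print(mean(cites))    # 26.6 – dragged up by the single outlier
print(median(cites))  # 2.0  – closer to the typical article
```

The mean suggests a well-cited cohort; the median shows the typical article scraping two cites. That is why the distribution, not the average, is the informative statistic here.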
Just giving what I have. There are no studies I'm aware of that break the numbers out the way you want. I'm not sure what a 100 year survey would even tell us, though, given the dramatic increase in the number of schools and journals over that period.
FWIW, Jim Chen's paper on citation distribution does tend to confirm your intuition that citations are heavily clumped among a small percentage of articles published at the top ranked journals.
Also FWIW, I would personally consider two or three judicial citations over five years a strong indication that an article is "good". Two or three citations in other journals would be a much weaker signal to me, without knowing more about who the citation was from and what the citation is for. On that issue, you might also want to look at the Harrison and Mashburn article.
I wasn't clear in this sentence: "Do your stats compare the "citations" to ALL law review articles – for the past 100 years or so? – to the number of new law review articles in one year?"
I was asking whether that was the basis for your statements.
We weren't asking about comparing citations to the universe of existing articles to the number of new articles each year, so, I just wanted to make sure that I understood your data. As stated, I don't know what that data tells us. Are you saying that 1/8 of new law review articles are cited by a court (over what period?), or that every year, there are about 1/8 as many court citations to law review articles as there are new articles?
The articles you mention don't really answer the questions, but I appreciate the "citations." Thank you.
I'm saying the latter, i.e. my rough math concluded there are 1/8 as many court citations to articles as new articles. It is not the case that one in eight articles is cited by a court. I'm writing from a phone and don't have the details of how I got there in front of me, but it's explained in the paper if you are that interested.
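The distinction being drawn here – a citations-to-new-articles ratio versus the fraction of articles ever cited – can be sketched with made-up numbers (all figures hypothetical, chosen only to show why the two statistics diverge when citations clump):

```python
# Hypothetical year: 800 new articles appear, and courts issue
# 100 citations to law review articles, but those citations are
# concentrated on a small number of heavily cited pieces.
new_articles = 800
court_citations = 100   # total citations, not distinct articles
distinct_cited = 15     # assumed: citations clump on a few articles

ratio = court_citations / new_articles          # 0.125, i.e. "1/8"
fraction_cited = distinct_cited / new_articles  # under 2% of articles

print(ratio, fraction_cited)
```

The same "1/8" ratio is compatible with almost any fraction of articles being cited, which is the point being made against reading it as "one in eight articles gets cited."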
A few years back I had a short piece that looked at citations to a single year of articles in a dozen or so leading law reviews. I looked at this over a 15-year period. While this isn't quite what anon is asking for, it gives a picture of how often articles in some leading law reviews are cited. Table 1 on page 239 has summary data, including the median number of citations in each journal over those years.
I haven't communicated this clearly enough, it appears. So, forgive me for being blunt. I never asked about the stat you are repeating, and I don't understand how the stat to which you refer is relevant to anything, whether I asked or not!
An article that asserts that there are 1/8 as many court citations to all law review articles each year as new articles, for reasons that I think are obvious and stated above, is not relevant to my inquiry and, IMHO, of questionable utility and problematic in any event.
Citations have always struck me as a good metric for the value of articles, though not necessarily for their public impact. I looked into this a few years back for a paper I was planning but never executed, and it was clear that: (1) a large number of articles, even in top journals, are never cited by anyone anywhere; (2) the absence of citations is even greater for student notes, which often occupy more pages in a journal than articles; (3) court citations are relatively rare, not because articles are of little use but because courts simply don't look to articles for authority – it might be nice if articles were cited more frequently, but the absence of such cites does not seem to me to reflect a lack of quality or utility; most articles are written for other academics, which is true of all disciplines and is the nature of academic work (some of that will translate into classroom material via casebooks, lectures, etc., and may then have an indirect impact); and (4) two areas that citation counts don't capture are classroom use and policy purposes – many articles, while never garnering citations, might be used in class or by policymakers, though it is also quite possible that the most cited works are also the most likely to be used in classrooms or in policy circles.
The continuing misguided and ill-informed effort to undermine the research of legal scholar Michael Simkovic and finance scholar Frank McIntyre appears here in its latest guise: the notion that there was something flawed in the peer review process because their research demonstrating the positive NPV of a JD appeared in the Journal of Legal Studies. While it is true that JLS is edited by three law professors, all three are also economists with duly granted Ph.D.s in economics. There are very few (if any) economists in economics departments who would not be happy to be published in JLS if their work were within the subject matter of that journal. Any academic with even the slightest awareness of the field of law and economics would understand and agree with this, which confirms in my mind that "Matt" – who submitted the comment above – is not an academic and should peddle his nonsense somewhere other than a "faculty lounge" (even one in cyberspace).
Cynthia wrote: "it's called a desk rejection and it's not that uncommon in peer review. In one sense, the editor may have done you a great favor. If your paper was not a good fit for the journal…"
Response: It was a perfect fit. The journal concentrates on finance and international law and the paper was precisely on the topic. A better fit could not be found.
Cynthia: "your assessment falls short. What percentage of all law review articles have ever been cited in a court opinion? I'll guess 1% or less and will be happy to be proved wrong. Also, are you suggesting that a metric of a 'good' paper is that it has been cited a few times in 1.5 years?"
Response: I do not have stats handy, but if you read US court opinions you will generally find law journal citations in the "heavy" cases, such as securities, constitutional, international law, etc. Again, in routine garden-variety litigation, no; but when grey areas arise or a novel claim/defense is interposed, yes, you will often find law journal references.
As to the "good" metric… yes, if an article has been "out there" on Westlaw for only a year or a year and a half, even one cite is excellent, because of the lag time while other scholars research, write, and submit the paper to courts and/or cite it in their own articles. In other words, other cites are likely in the pipeline. Let me add that many, many fine or even brilliant articles are not cited right away; sometimes the author writes on a topic that becomes relevant to a court or to another author 5, 10 or 20 years later.
I found from my (admittedly limited) experience with the UK peer review process – namely, the immediate rejection by the editor – an indication that prejudice and bias are very real. Maybe it was only this journal, or maybe this editor did not like an affiliation listed on my CV. But I believe the bias was real. Again, a rejection by peer reviewers is perfectly legit. But an immediate rejection of this paper based solely on the editor's decision raises the probability of prejudice, particularly in the context of its subsequent receipt of offers from prestigious US law school journals and its citations. Sure, I am subjective – it was my paper – but what I am saying makes sense objectively.
Did your study distinguish between "quality" and "junk" citations?
I remember there was a study a couple years ago on how many citations in law review articles were there to provide real support to an argument. IIRC, the number was around 2%. Most citations seem to be saying little more than "This guy also wrote on this topic" or "I'm pretty sure I'm supposed to average 6 citations per page."
You seem to be under the misapprehension that I do not read US case law – I do; it's my job to be familiar with it, and I work on briefs and submissions too. Your suggestion that "US law journal articles are routinely cited in major litigation and in fact there are rankings specifically as to court cites" is, to put it mildly, exaggerated. The vast majority of US law review articles are never cited by any court – and a small number of legal writers dominate the citations – topped by Posner, père et fils, Mark Lemley and a few others. This is largely because their writing is in fact useful to the courts. I cannot speak to your personal experience with the UK journal, since I do not know which one it was and I have not seen your article – but I can say that even a journal devoted to economics and international finance published in the UK asks the question: is this useful to our readers? Your positive view of the US journals that have published your work can also perhaps be responded to with a misquotation (of Mandy Rice-Davies): "well, he would say that, wouldn't he?"
As for Steve Diamond's little venture (again) into the wilds of conspiracy theories – why would anyone bother to conspire against someone who so effectively conspires against himself?
I would agree that citations have their uses as a measure of value for articles (in law and elsewhere), but as a measure they do have drawbacks – circular citation, for example, and heavy hints during informal review have also been reported to me (as in "B suggested that I really should cite A's article; if I don't, it will get back to A"). Similarly, peer review has its problems, as the linked article suggests and, to be fair, as 6Train suggests – when the peers consist of the "great and the good" of the profession, there is a certain tendency to reject articles that criticise the establishment.
Mack, W&L specifically ranks law journals by citations. This is an excellent indication of a journal's quality (although, as I mentioned, many great articles are not cited for years because they simply were not relevant until later). The W&L rankings reflect a variety of factors: case citations, journal citations, impact factor, etc. As I mentioned, but you gloss over, I specifically stated journals are not often cited in routine cases, but they ARE cited with reasonable frequency in complex litigation, particularly in the context of novel claims/defenses. So articles on obscure land use or estate issues are less likely to be cited by courts than articles on constitutional law, international law, securities litigation, etc., and in those cases that involve novel claims and defenses that a journal article has previously discussed.
It's a bit like having professors vote on which articles they like, and then giving every professor an unlimited number of votes. And there's no way to distinguish between what you liked a little, what you liked a lot, and what you found absolutely essential to furthering the discourse on your topic. And you also give the exact same vote to articles you thought were absolutely wrong.
An article with a lot of citations is probably better than an article with few or no citations, but that's about all we can say.
"And you also give the exact same vote to articles you thought were absolutely wrong"
"Numerous commentators have speculated on Prof. Smith's work, and the possible bases of his bizarre legal theories. Whether they arise from gross stupidity, a profound lack of the rudiments of legal education, or a mental disorder caused by one of the many venereal diseases his degraded personal life has afflicted him and those degenerates he comes in contact with, is a matter of endless debate in the literature."
Alternately, for a real-world example, the famous "A Mathematical Model for the Determination of Total Area Under Glucose Tolerance and Other Metabolic Curves" article has hundreds of citations, but I am guessing many of them cite the article in a way the author does not appreciate…
I don't think the linked piece is enough to demonstrate that peer review is inherently bad and the US law review process inherently good. There are numerous limitations to the law review process that have been raised elsewhere, including (inter alia) reliance on the author's existing CV as a filtering tool, reliance on acceptance by lower ranked journals as a filtering tool, reviews being undertaken by students who can favour novelty over rigour, and the sheer volume of law review scholarship. US law reviews may serve valuable pedagogical goals, save US professors the time cost of reviewing articles and print some brilliant and original works, but a single ethical issue in peer review barely proves it is a flawed system, or one that law reviews should ignore.
I fling out, without naming names, a situation I'm familiar with of a senior STEM professor with a vast number of citations. Most of those citations were for articles written with graduate students and post-docs, based on those grad students' and post-docs' research. However, relatively little involved significant technical breakthroughs – rather, they were more accurate measurements of values (characteristics) which were useful for other research. The paper would then be repeatedly cross-referenced in articles detailing the other research, either as confirmatory of the measurement ("Z was measured and found to be Y (consistent with ….)" in preliminary prep work) or as the source of a value used ("for Z we used the value Y (source A, see also …)").
It's important to realise that these were useful articles – but there is a side issue – what does a citation mean as compared to a more novel piece of work? Of course science is not a chain of eureka moments – it is mostly hard graft, and that too should be rewarded. Do grant givers (and hence the departments who crave them) care? There is a lot of reason to think that to them a citation is a citation.
A truss will fix it.
I'm speaking from the outside – pretty well everything I have published has been commissioned by the press in question (Thomson-Sweet&Maxwell, OUP, UNCTAD) or published in a subscription legal journal that is pretty expensive – but I have friends and family who are academic, mostly STEM and go through the peer review process.
I think the idea that peer review alone will solve the problems with US Law Reviews and Journals misstates the nature of the problem. US law reviews and journals largely exist to print an article, to publish but not to be read. Being read is to some degree irrelevant to their mission, which is to allow US legal academics to list the publications that their law school crave as indicia of faculty quality and "eliteness." US law reviews and journals are not supported by subscription, rather they are published by law schools because, so to speak, a serious law school must have a law review and journal, even if no one reads it.
That is not to say that in crude terms, this is what non-Law faculties want too, but for them the h-index and citation counts, as well as the impact factor for the journal is of major importance; in short their measures address the question of whether anyone in fact reads the article or the journal. In STEM publications there are other issues. One is the very obvious tendency towards circular citation, that it is widely asserted that certain authors tend to cite friends and colleagues who reciprocate, another is that an author with an already high h-index will get more favourable consideration. In other words, since citations are what non-law departments seem to look for, they often get gamed too.
In Europe there is another factor – European legal journals are commercial propositions – they need subscriptions to keep publishing. In that respect they are much closer to say BNA (now part of Bloomberg.) The subscription cost to many of these journals is very high – typically north of $400-600 p.a., for maybe 12 numbers and a special or two. For this reason the journals are very focused on articles that their readers want to see, on issues their readers are immediately concerned about and want information to address. In that respect it is also worth remembering that the European case reporting system is – well – for the most part cr@p. The UK and Ireland have the best availability (and the EU (but it is slow to publish certain things in a case such as pleadings)) but even then, it can be tricky to find a case, and then the writing style is frequently opaque (especially in the English courts of Appeal and the Supreme Court (formerly House of Lords.) Thus there is a market for interpretive articles about recent judgments in the UK and Ireland – while in the rest of Europe there is a market for telling you there even was a case, let alone a judgment.
This is of course a factor when UK or European journals reject an article – don't even send it out for peer review. Editors make commercial choices about submitted articles – is it something the readers will want to read – and remember these are not in the legal journals academics, but rather practicing lawyers. The peer reviewers too are often different from what a US academic might expect, in that many will be either practitioners or judges, and again their viewpoint will be driven by what is of interest to them and the group they represent. Finally, a lot of European legal academics also practice to a lesser or greater degree and have a relationship with a law firm or barristers chambers.
mack you are completely wrong regarding US law reviews when you claim they are not meant to be read. US law journal articles are routinely cited in major litigation and in fact there are rankings specifically as to court cites.
US law journals do not need fixing – they are an excellent venue, and theirs is obviously a more ethical process than the political and biased "peer review" nonsense that at least some UK peer-reviewed law journal editors clearly engage in.
The event detailed in Prof. Borjas' posting involved a serious ethical lapse by the editor, but it was a lapse specifically precluded by the governing standards of the journal, so it is hard to fault the peer review process for her error. Peer review is not without its problems, but it remains far superior to student editors, who are often prone to select articles based on letterhead, resumes (some of which include pictures!), or the political inclinations of the student. These proxies are often used because the students lack any substantial knowledge of either the substance of the article or the existing literature. One way we can infer the superiority of peer review is that no other discipline has opted for the lunacy of having 2nd- and 3rd-year students select articles.
For no reason other than to snark – has anyone asked whether http://themodernembalmer.com is peer reviewed? I have to say, the articles have some snappy titles. Which politician do people think was the subject of:
CALL OF THE WILD: TAXIDERMY TANNIC ACID/TANNINS IN EMBALMING.
A DEAD-END ROAD TO FORMALDEHYDE-FREE CHEMICALS.
I'll admit it looks like a captive journal of a vendor of – well, embalming fluid, but hey, I bet morticians read it….
@6Train: it's called a desk rejection, and it's not that uncommon in peer review. In one sense, the editor may have done you a great favor. If your paper was not a good fit for the journal, or if you were submitting a policy piece to an empirical journal, etc., then why waste everyone's time – including yours – by going through the formality of sending it out for review?
Peer review and the law school approach each have their pros and cons, but your assessment falls short. What percentage of all law review articles have ever been cited in a court opinion? I'll guess 1% or less and will be happy to be proved wrong. Also, are you suggesting that a metric of a "good" paper is that it has been cited a few times in 1.5 years?
Can someone answer Cynthia? I too would like to know:
1. What percent of articles published in the reviews/journals of ABA approved law schools each year are cited by courts within five years of publication?
2. What percent of articles published in the reviews/journals of ABA approved law schools each year are cited in the reviews/journals of ABA approved law schools within five years of publication? Of these, in what percentile does three citations fall?
anon:
This isn't exactly the data you asked for, but based on some rough math I did for a paper last year:
(1) there is about one new court citation to a law review article generated for every eight new law review articles published in a given year.
(2) there are about five new citations from other journal articles for each law review article published in a given year.
I'm not going to link the paper because I don't want to end up in the spam filter, but it's on SSRN and if you Google me it should pop up easily enough.
Patrick
Thanks, but, as you say, these stats don't really capture what Cynthia and I were looking for.
Do your stats compare the "citations" to ALL law review articles – for the past 100 years or so? – to the number of new law review articles in one year?
I'm not sure what that would tell us, but it definitely doesn't tell us whether, for example, 3 cites in 2 years is a metric of a "good" article, and doesn't tell us how many new articles are cited by courts within five years.
Moreover, we have the "Sunstein/Posner" problem. I'll leave it to others to characterize the consistency of the quality of their work. In this context, however, we know that these authors garner huge numbers of citations. Thus, it is important to know what the distribution looks like within the cohort of law review articles cited. Averages don't really tell us much of anything, as we know when we ask the average net worth of the patrons of a sleazy bar when Warren Buffett walks in.
Anon:
Just giving what I have. There are no studies I'm aware of that break the numbers out the way you want. I'm not sure what a 100 year survey would even tell us, though, given the dramatic increase in the number of schools and journals over that period.
FWIW, Jim Chen's paper on citation distribution does tend to confirm your intuition that citations are heavily clumped among a small percentage of articles published at the top ranked journals.
Also FWIW, I would personally consider two or three judicial citations over five years a strong indication that an article is "good". Two or three citations in other journals would be a much weaker signal to me, without knowing more about who the citation was from and what the citation is for. On that issue, you might also want to look at the Harrison and Mashburn article.
Patrick
I wasn't clear in this sentence: "Do your stats compare the "citations" to ALL law review articles – for the past 100 years or so? – to the number of new law review articles in one year?"
I was asking whether that was the basis for your statements.
We weren't asking about comparing citations to the universe of existing articles against the number of new articles each year, so I just wanted to make sure I understood your data. As stated, I don't know what that data tells us. Are you saying that 1/8 of new law review articles are cited by a court (over what period?), or that every year there are about 1/8 as many court citations to law review articles as there are new articles?
The articles you mention don't really answer the questions, but I appreciate the "citations." Thank you.
Anon:
I'm saying the latter. I.e., my rough math concluded there are 1/8 as many court citations to articles each year as there are new articles. It is not the case that one in eight articles is cited by a court. I'm writing from a phone and don't have the details of how I got there in front of me, but it's explained in the paper if you are that interested.
A few years back I had a short piece that looked at citations to a single year of articles in a dozen or so leading law reviews. I looked at this over a 15-year period. While this isn't quite what anon is asking for, it gives a picture of how often articles in some leading law reviews are cited. Table 1 on page 239 has summary data, including the median number of citations in each journal over those years.
http://law-wss-01.law.fsu.edu/journals/lawreview/downloads/362/brophy.pdf
Patrick
I haven't communicated this clearly enough, it appears. So forgive me for being blunt: I never asked about the stat you are repeating, and I don't understand how the stat to which you refer is relevant to anything, whether I asked for it or not!
An article asserting that each year there are 1/8 as many court citations to all law review articles as there are new articles is, for reasons I think are obvious and stated above, not relevant to my inquiry and, IMHO, of questionable utility and problematic in any event.
Citations have always struck me as a good metric for the value of articles, though not necessarily for their public impact. I looked into this a few years back for a paper I was planning but never executed, and it was clear that (1) a large number of articles, even in top journals, are never cited by anyone anywhere; (2) the absence of citations is even greater for student notes, which often occupy more pages in a journal than articles do; (3) court citations are relatively rare, not because articles are of little use but because courts simply don't look to articles for authority. It might be nice if articles were cited more frequently, but the absence of such cites does not seem to me to reflect a lack of quality or utility; most articles are written for other academics, and that is true of all disciplines and is the nature of academic work (some of it will translate into classroom material via casebooks, lectures, etc., and may then have an indirect impact). (4) Two areas that citation counts don't capture are classroom use and policy purposes; many articles, while never garnering citations, might be used in class or by policymakers, though it is also quite possible that the most cited works are also the most likely to be used in classrooms or policy circles.
The continuing misguided and ill-informed effort to undermine the research of legal scholar Michael Simkovic and finance scholar Frank McIntyre appears here in its latest guise: the notion that there was something flawed in the peer review process because their research demonstrating the positive NPV of a JD appeared in the Journal of Legal Studies. While it is true that JLS is edited by three law professors, all three are also economists with duly granted Ph.D.s in economics. There are very few (if any) economists in economics departments who would not be happy to be published in JLS if their work were within the subject matter of that journal. Any academic with even the slightest awareness of the field of law and economics would understand and agree with this, which confirms in my mind that "Matt" – who submitted the comment above – is not an academic and should peddle his nonsense somewhere other than a "faculty lounge" (even one in cyberspace).
Cynthia wrote: "it's called a desk rejection and it's not that uncommon in peer review. In one sense, the editor may have done you a great favor. If your paper was not a good fit for the journal…"
Response: It was a perfect fit. The journal concentrates on finance and international law and the paper was precisely on the topic. A better fit could not be found.
Cynthia: "your assessment falls short. What percentage of all law review articles have ever been cited in a court opinion? I'll guess 1% or less and will be happy to be proved wrong. Also, are you suggesting that a metric of a "good" paper is that it has been cited a few times in 1.5 years?"
Response: I do not have stats handy, but if you read US court opinions you will generally find law journal citations in the "heavy" cases, such as securities, constitutional, and international law. Again, in routine, garden-variety litigation, no; but when grey areas arise or a novel claim/defense is interposed, yes, you will often find law journal references.
As to the "good" metric… yes, if an article has been "out there" on Westlaw for only 1 or 1.5 years, even one cite is excellent, because of the lag time while other scholars research, write, and submit papers to courts and/or complete their own articles. In other words, other cites are likely in the pipeline. Let me add that many, many fine or even brilliant articles are not cited "right away"; sometimes the author writes on a topic that becomes relevant to a court or to another author 5, 10, or 20 years later.
From my (admittedly limited) experience with the UK peer review process – namely, the immediate rejection from the editor – I found an indication that prejudice and bias are very real. Maybe it was only this journal, or maybe this editor did not like an affiliation listed on my CV, but I believe the bias was real. Again, a rejection by peer reviewers is perfectly legit. But an immediate rejection of this paper based solely on the editor's decision raises the probability of prejudice, particularly in the context of its subsequent receipt of offers from prestigious US law school journals and its citations. Sure, I am subjective – it was my paper – but what I am saying makes sense objectively.
CORRECTION: the UK peer-reviewed journal concentrates on economics and international finance, not international law.
Al,
Did your study distinguish between "quality" and "junk" citations?
I remember there was a study a couple of years ago on how many citations in law review articles were there to provide real support for an argument. IIRC, the number was around 2%. Most citations seem to be saying little more than "This guy also wrote on this topic" or "I'm pretty sure I'm supposed to average 6 citations per page."
6train
You seem to be under the misapprehension that I do not read US case law – I do; it's my job to be familiar with it, and I work on briefs and submissions too. Your suggestion that "US law journal articles are routinely cited in major litigation and in fact there are rankings specifically as to court cites" is, to put it mildly, exaggerated. The vast majority of US law review articles are never cited by any court, and a small number of legal writers dominate the citations – topped by Posner, père et fils, Mark Lemley, and a few others. This is largely because their writing is in fact useful to the courts. I cannot speak to your personal experience with the UK journal, since I do not know which one it was and I have not seen your article – but I can say that even a journal devoted to economics and international finance published in the UK asks the question: is this useful to our readers? Your positive view of the US journals that have published your work can also perhaps be responded to with a misquotation (or Mandy Rice-Davies): "well he would say that, wouldn't he?"
As for Steve Diamond's little venture (again) into the wilds of conspiracy theories – why would anyone bother to conspire against someone who so effectively conspires against himself?
MLS,
I would agree that citations have their uses as a measure of the value of articles (in law and elsewhere), but as a measure they do have drawbacks – circular citation, for example, and heavy hints during informal review have also been reported to me (as in "B suggested that I really should cite A's article; if I don't, it will get back to A"). Similarly, peer review has its problems, as the linked article suggests and, to be fair, as 6Train suggests – when the peers consist of the "great and the good" of the profession, there is a certain tendency to reject articles that criticise the establishment.
Mack, W&L specifically ranks law journals by citations. This is an excellent indication of a journal's quality (although, as I mentioned, many great articles are not cited for years because they simply were not relevant until later). The W&L rankings incorporate a variety of factors: case citations, journal citations, impact factor, etc. As I mentioned, but you gloss over, I specifically stated that journals are not often cited in routine cases, but they ARE cited with reasonable frequency in complex litigation, particularly in the context of novel claims/defenses. So articles on obscure land use or estate issues are less likely to be cited by courts than articles on constitutional law, international law, securities litigation, etc., especially in cases involving novel claims and defenses that a journal article has previously discussed.
Mack,
It's a bit like having professors vote on which articles they like, and then giving every professor an unlimited number of votes. And there's no way to distinguish between what you liked a little, what you liked a lot, and what you found absolutely essential to furthering the discourse on your topic. And you also give the exact same vote to articles you thought were absolutely wrong.
An article with a lot of citations is probably better than an article with few or no citations, but that's about all we can say.
"And you also give the exact same vote to articles you thought were absolutely wrong"
"Numerous commentators have speculated on Prof. Smith's work, and the possible bases of his bizarre legal theories. Whether they arise from gross stupidity, a profound lack of the rudiments of legal education, or a mental disorder caused by one of the many venereal diseases his degraded personal life has afflicted him and those degenerates he comes in contact with, is a matter of endless debate in the literature."
Alternately, for a real-world example, the famous "A Mathematical Model for the Determination of Total Area Under Glucose Tolerance and Other Metabolic Curves" article has hundreds of citations, but I am guessing many of them cite the article in a way the author does not appreciate…
I don't think the linked piece is enough to demonstrate that peer review is inherently bad and the US law review process inherently good. There are numerous limitations to the law review process that have been raised elsewhere, including (inter alia) reliance on the author's existing CV as a filtering tool, reliance on acceptance by lower ranked journals as a filtering tool, reviews being undertaken by students who can favour novelty over rigour, and the sheer volume of law review scholarship. US law reviews may serve valuable pedagogical goals, save US professors the time cost of reviewing articles and print some brilliant and original works, but a single ethical issue in peer review barely proves it is a flawed system, or one that law reviews should ignore.
I fling out, without naming names, a situation I'm familiar with of a senior STEM professor with a vast number of citations. Most of those citations were for articles written with graduate students and post-docs, based on those grad students' and post-docs' research. However, relatively little of it involved significant technical breakthroughs – rather, the articles offered more accurate measurements of values (characteristics) that were useful for other research. Such a paper would then be repeatedly cross-referenced in articles detailing that other research, either as confirming a measurement (in preliminary prep work: "Z was measured and found to be Y (consistent with ….)") or as the source of a value ("for Z we used the value Y (source A, see also …)").
It's important to realise that these were useful articles – but there is a side issue: what does such a citation mean as compared to a citation of a more novel piece of work? Of course science is not a chain of eureka moments – it is mostly hard graft, and that too should be rewarded. Do grant givers (and hence the departments who crave them) care? There is a lot of reason to think that to them a citation is a citation.