Theodore Eisenberg and Martin T. Wells have a new paper up on SSRN, "Ranking Law Journals and the Limits of Journal Citation Reports." Their abstract is as follows:
Rankings of schools, scholars, and journals emphasize ordinal rank. Journal rankings published by Journal Citation Reports (JCR) are widely used to assess research quality, which influences important decisions by academic departments, universities, and countries. We study refereed law journal rankings by JCR, Washington and Lee Law Library (W&L), and the Australian Research Council (ARC). Both JCR’s and W&L’s multiple measures of journals can be represented by a single latent factor. Yet JCR’s rankings are uncorrelated with W&L’s. The differences appear to be attributable to underrepresentation of law journals in JCR’s database. We illustrate the effects of database bias on rankings through case studies of three elite journals, the Journal of Law & Economics, Supreme Court Review, and the American Law & Economics Review. Cluster analysis is a supplement to ordinal ranking and we report the results of a cluster analysis of law journals. The ARC does organize journals into four large groups and provides generally reasonable rankings of journals. But anomalies exist that could be avoided by checking the ARC groups against citation-based measures. Entities that rank should use their data to provide meaningful clusters rather than providing only ordinal ranks.
They have done a lot of very serious work examining how to rank journals. They use citation data from Thomson Reuters' Journal Citation Reports (JCR), from John Doyle of Washington and Lee, who draws on Westlaw's law journals database, and from the Australian Research Council. There's a lot to talk about here and I hope to return to this important article again soon, but right now I want to focus on this paragraph:
Do the different systems for ranking journals based on impact provide consistent results? One expects to observe consistency, but a major difference between W&L and JCR is the groups of journals they count in computing impact measures. W&L specializes in law journals; JCR’s journal pool spans many fields. Bao et al. (2010; p.352) provide evidence that combining articles in all research fields to generate rankings can introduce bias into rankings. They construct a new journal ranking using econometrics articles as a group of specialty articles. They find that the intellectual influence of an article as measured by citations to it using the new ranking is much higher than if it were published in higher-ranked general interest economics journals such as American Economic Review. “[U]sing the existing economics journal rankings to evaluate econometricians’ research productivity is an error-ridden system because it imposes a substantial downward bias against them.” They observe that the prevailing practice by academic institutions of judging article quality by where articles are published, in contrast to their impact as measured by citations is problematic.
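The consistency question in that paragraph is easy to make concrete. Here is a minimal sketch (in Python, with invented rank positions purely for illustration; neither the positions nor the resulting correlation reflect the actual JCR or W&L data) of how one might check whether two ranking systems agree, using a Spearman rank correlation:

```python
# Hypothetical example: compare two journal rankings with a rank correlation.
# The journal names are real, but the rank positions below are invented
# solely to illustrate the computation -- they are NOT the JCR or W&L ranks.
from scipy.stats import spearmanr

journals = [
    "Journal of Law & Economics",
    "Supreme Court Review",
    "American Law & Economics Review",
    "Harvard Law Review",
    "Yale Law Journal",
]

# Invented rank positions under two hypothetical ranking systems (1 = best).
ranking_a = [3, 5, 4, 1, 2]   # e.g., a multi-field, JCR-style database
ranking_b = [1, 2, 5, 3, 4]   # e.g., a law-only, W&L-style database

rho, p_value = spearmanr(ranking_a, ranking_b)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")
# A rho near 1 would mean the two systems order journals similarly;
# a rho near 0 would mean they are essentially uncorrelated, which is
# what Eisenberg and Wells report for JCR versus W&L.
```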
In a much more modest paper than Eisenberg's and Wells', I looked at citations to articles published in thirteen leading law journals over a fifteen year period. That paper found that citations to articles even in our most elite law journals varied widely -- and that there were a lot of articles in less prestigious journals that received substantially more citations than many articles in the most prestigious journals. (Of course, if I'd looked beyond the Westlaw database for citations there might have been a different picture -- see Eisenberg and Wells' warning that I discuss in the next paragraph.) I agree completely with the idea that we should evaluate articles -- perhaps even read them and look at citations to them -- not just look at the place they appeared.
Another of the many important points that Eisenberg and Wells make is that, to the extent that we are looking at citations, it matters greatly which journals we search for those citations. Looking in the wrong -- or an incomplete list of -- places can yield misleading results. (The article by Bao et al. referred to in the paragraph above is: Yong Bao, Melody Lo, and Franklin G. Mixon, Jr., "General-Interest Versus Specialty Journals: Using Intellectual Influence of Econometrics Research to Rank Economics Journals and Articles," 25 Journal of Applied Econometrics 345-353 (2010).)
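To see why the search pool matters, here is a toy sketch (the citing journals and counts are entirely made up, not drawn from any real database) of how the same article's citation count, and therefore its apparent standing, can change when the pool of citing journals is incomplete:

```python
# Hypothetical illustration of database bias: the counts below are invented;
# the point is only that an incomplete citing pool can reorder articles.
citations = {
    "Article X": {"Harvard Law Review": 12, "Journal of Law & Economics": 30,
                  "American Law & Economics Review": 18},
    "Article Y": {"Harvard Law Review": 25, "Yale Law Journal": 20},
}

# A "complete" pool versus a pool that omits the law-and-economics journals.
full_pool = {"Harvard Law Review", "Yale Law Journal",
             "Journal of Law & Economics", "American Law & Economics Review"}
partial_pool = {"Harvard Law Review", "Yale Law Journal"}

def count_in_pool(cites_by_journal, pool):
    """Total citations counted, restricted to citing journals in `pool`."""
    return sum(n for journal, n in cites_by_journal.items() if journal in pool)

for pool_name, pool in [("full pool", full_pool), ("partial pool", partial_pool)]:
    totals = {art: count_in_pool(c, pool) for art, c in citations.items()}
    ranked = sorted(totals, key=totals.get, reverse=True)
    print(pool_name, totals, "ranking:", ranked)
# With the full pool, Article X leads (60 vs. 45); drop the specialty journals
# from the database and Article Y appears to dominate (25 vs. 12) -- the
# ordering flips even though nothing about the articles changed.
```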
Counting citations is fine, but it misses the mark significantly in measuring an article's impact. Three examples:
1. For the piece on which I am currently working, I will cite to a small handful of articles, but I have read many additional articles that will not be cited. Law articles don't provide a bibliography of sources consulted (and maybe they should); instead, we cite to authorities that support or contradict our statements, particularly if they are written by our friends or by scholars we respect. For my article, I read many articles on patent law's inequitable conduct doctrine because it parallels the ethical issues I'm discussing, but I will not cite them because they are not what my paper addresses. Counting citations will understate the importance of these background articles.
2. Counting citations, unless done with great care, also doesn’t capture the importance of the citation itself. If an article is cited with a “cf.” or “see also,” its importance is being minimized. If, on the other hand, the author of the second piece quotes extensively from the first and spends time in the article itself discussing the article, its importance is higher.
3. Counting citations in other journal articles also misses the impact an article might have outside of the next scholar's article. Sometimes, our articles are influential in the broader world of thought and may be cited in books and casebooks, the general media, or various blogs including this one. Additionally, counting citations does not measure the increasing importance of SSRN as a mechanism of distribution. All of these also demonstrate the importance of the article.
Posted by: Ralph D. Clifford | July 20, 2012 at 09:49 AM
I tend to agree with Ralph's second observation above, namely that the nature of the citation matters perhaps more than simply its existence in the paper's notes. (I am reminded of a similar phenomenon in case law, where certain cases are repeatedly cited for a clear framing of a black-letter principle rather than the facts and specific analysis of the case itself - raising their profiles among those who look at Lexis/Westlaw citation records but missing the real impact the case ruling had on subsequent courts.) It would be a huge (perhaps even prohibitive) undertaking to read through the hundreds of articles each publishing cycle and manage the cross-references to the original work. As a gut feeling, it seems like the "fair" option - though of course fair doesn't always count for much when it comes to the practicalities.
I'd be interested to hear how others feel about handling specialty law review journals. While perhaps not as prominent in the general scholarly community, articles in these publications may have tremendous influence in their area even without a high number of citations as compared to a piece published in a general journal. Should these journals be somehow separately ranked? Likely there are often not enough journals in each specialty to make ranking as much of an issue, but the question still remains as an author: how to handle submission offers? If one is fortunate enough to get an offer of publication from a specialty journal as well as one from a mid-ranked general journal, what would be the strategic calculus? Citations alone might be hard to use as a benchmark of prestige for the journal - or of the author's influence. Thoughts?
Finally, might there also be a danger of a feedback loop when it comes to ranking journals? Elite journals receive and publish work from the most regarded scholars in the field, in turn raising their citation statistics and reinforcing their elite status, which brings back those same scholars to start the loop again. If we rely on cite counts, unless a journal is already in the Top 10 or 20 (say) it seems to face huge hurdles to break through its current tier by getting the necessary citations.
Posted by: Rita_Trivedi | July 20, 2012 at 12:06 PM
http://classbias.blogspot.com/2012/07/10000-reward-for-anyone-who-built-it.html
Posted by: Jeffrey Harrison | July 20, 2012 at 03:52 PM