Theodore Eisenberg and Martin T. Wells have a new paper up on SSRN, "Ranking Law Journals and the Limits of Journal Citation Reports." Their abstract is as follows:
Rankings of schools, scholars, and journals emphasize ordinal rank. Journal rankings published by Journal Citation Reports (JCR) are widely used to assess research quality, which influences important decisions by academic departments, universities, and countries. We study refereed law journal rankings by JCR, Washington and Lee Law Library (W&L), and the Australian Research Council (ARC). Both JCR’s and W&L’s multiple measures of journals can be represented by a single latent factor. Yet JCR’s rankings are uncorrelated with W&L’s. The differences appear to be attributable to underrepresentation of law journals in JCR’s database. We illustrate the effects of database bias on rankings through case studies of three elite journals, the Journal of Law & Economics, Supreme Court Review, and the American Law & Economics Review. Cluster analysis is a supplement to ordinal ranking and we report the results of a cluster analysis of law journals. The ARC does organize journals into four large groups and provides generally reasonable rankings of journals. But anomalies exist that could be avoided by checking the ARC groups against citation-based measures. Entities that rank should use their data to provide meaningful clusters rather than providing only ordinal ranks.
They have done a lot of very serious work on how to rank journals. They use citation data from Thomson Reuters' Journal Citation Reports (JCR), from John Doyle of Washington and Lee, who draws on Westlaw's law journals database, and from the Australian Research Council. There's a lot to talk about here, and I hope to return to this important article again soon, but right now I want to focus on this paragraph:
Do the different systems for ranking journals based on impact provide consistent results? One expects to observe consistency, but a major difference between W&L and JCR is the groups of journals they count in computing impact measures. W&L specializes in law journals; JCR’s journal pool spans many fields. Bao et al. (2010; p.352) provide evidence that combining articles in all research fields to generate rankings can introduce bias into rankings. They construct a new journal ranking using econometrics articles as a group of specialty articles. They find that the intellectual influence of an article as measured by citations to it using the new ranking is much higher than if it were published in higher-ranked general interest economics journals such as American Economic Review. “[U]sing the existing economics journal rankings to evaluate econometricians’ research productivity is an error-ridden system because it imposes a substantial downward bias against them.” They observe that the prevailing practice by academic institutions of judging article quality by where articles are published, in contrast to their impact as measured by citations is problematic.
In a much more modest paper than Eisenberg and Wells's, I looked at citations to articles published in thirteen leading law journals over a fifteen-year period. That paper found that citations to articles even in our most elite law journals varied widely -- and that many articles in less prestigious journals received substantially more citations than many articles in the most prestigious ones. (Of course, if I'd looked beyond the Westlaw database for citations, the picture might have been different -- see Eisenberg and Wells's warning that I discuss in the next paragraph.) I agree completely with the idea that we should evaluate articles themselves -- perhaps even read them and look at citations to them -- rather than just looking at where they appeared.
Another of the many important points that Eisenberg and Wells make is that, to the extent that we rely on citations, it matters greatly which journals we search for them. Searching the wrong places -- or an incomplete list of places -- can yield misleading results. (The article by Bao et al. referred to in the paragraph above is: Yong Bao, Melody Lo, and Franklin G. Mixon, Jr., "General-Interest Versus Specialty Journals: Using Intellectual Influence of Econometrics Research to Rank Economics Journals and Articles," 25 Journal of Applied Econometrics 345-353 (2010).)