
October 02, 2021

Comments


Jason Yackee

I dunno. The HeinOnline measure was certainly a decent measure of how much authors who publish in law reviews get cited in law reviews. Isn't that a potentially valid criterion for people applying to law school to care about? Why should a prospective law student care if a law faculty member is publishing in, say, a peer-reviewed history journal? Why should a law school want to encourage its faculty to publish in other fields? Or in books? Or, god forbid, book chapters, the universally recognized black hole of law faculty scholarship? If I were a prospective law student I'd want law faculty who publish about law in law-focused journals, and who get cited by other law professors in law-focused journals.

I'm not convinced that adding a Hein-based citation score would have made USNWR *more* "biased" than it already must be (though we can't say it's biased without knowing what the true rankings would be under an unbiased measure). Indeed, isn't it fair to assume that "peer reputation" is "biased" in some sense? Adding citations to the formula, if they are less "biased" than other data, might actually improve the validity of the overall rankings, depending on how the directions and magnitudes of bias in the various inputs shake out.


AnonProf

“Law-focused journals” edited by law students?

So this begins to sound circular: prospective law students will be interested in law schools where students just two years older than they are sit in judgment of tenure-stream faculty, many with not only JDs but PhDs and other advanced degrees.

Should we get rid of student-edited journals? Maybe not entirely, but attempting to wall off law schools from the rest of the academy is, in part, why law schools often still have trouble being taken seriously on many campuses. Many of my non-law-school colleagues still express surprise and confusion when they hear that many law faculty don’t publish much at all and almost never do so in peer-reviewed journals.

Kudos to the LSA for pushing back and prevailing against the USNWR.

A non

"If I were a prospective law student I'd want law faculty who publish about law in law-focused journals, and who get cited by other law professors in law-focused journals."

If I were a prospective law student, looking to get the best law job that I could, why would I care about any of this? How are my professors' generally non-peer-reviewed articles, published in journals run by students, of any concern to me? I might care about how their publications affect my law school's ranking, which may in turn affect my marketability. But if the ranking's criteria were altered to include only peer-reviewed publications, wouldn't I then be very concerned that my law school's profs were publishing in student-run law reviews? Why would I care about my profs' citing each other in student-run journals if those citations no longer counted?

What if certain students want something more: a clerkship, a coveted NGO slot, a prestigious government placement? Will their profs' having published in student-run journals help with this, or will their connections do the trick? And if the connections are themselves a product of pedigree (rather than intellectual merit, God forbid), then all we care about are the criteria that generate and buttress that pedigree, regardless of what they happen to be.

On the other hand, maybe your school has a vested interest in preserving the existing criteria BECAUSE they preserve the pedigree, and fuck knows what will happen to the school's standing if new metrics are employed. So maybe you care about law reviews for that reason, and maybe students would care about law reviews, and the preservation of the status quo, to that extent?

anon3

I publish in both student-edited law reviews and peer-reviewed journals. Both have serious problems and limitations. I think things generally tend to average out in each -- i.e., I wouldn't put too much stock in one particularly great or one particularly bad publication, but a collection of three or five or so does give you a rough sense of the quality of someone's scholarship. There are obviously some misses (we all know scholars whose work is better than their placements suggest), but especially if we're averaging things out over a faculty, I think such a metric has real meaning.

If you're a student who's looking to practice as a lawyer, and that describes the overwhelming majority of law students at most schools, except maybe Yale, then you should probably care most about whether your professors are publishing in law journals. I think my interdisciplinary research is really wonderful and important, don't get me wrong, but it has far less of an impact than my legal research does on the materials I teach, the materials that directly or indirectly prepare my students for practice. (And the work I do is very law-adjacent for interdisciplinary work.)

Put simply, given that these rankings are primarily used by students to select between schools, and given that most students care about professor research quality (to the extent they do) primarily because of its impact on their future lawyering abilities, I don't think it's remotely unreasonable to rely on Hein metrics as one factor in ranking these schools. And I'm someone who will be relatively disadvantaged by such reliance.

a non a non

"If I were a prospective law student I'd want law faculty who publish about law in law-focused journals, and who get cited by other law professors in law-focused journals."

what "A non" said.

A non

A non of Oct 3 @ 2:25am back again.

anon3's post invites some empirical queries. For one thing, does Daniel Subotnik's scholarship about which profs place in top student-run journals hold up to scrutiny? E.g., his claim that top-ranked schools' profs don't just disproportionately place in the top journals, but do so overwhelmingly. If so, then anon3's claims about the metric's merits are actually immeritorious, for then it's a thoroughly rigged system. And even if Subotnik was right at the time, are there more recent data on the matter?

One would also like further information about the extent to which students actually give a damn about professorial research quality. Can pre-matriculants, most harboring zero legal training (by definition), adjudge their relative prospects of securing better legal training ("future lawyering abilities") based upon what profs at a given school publish, and in which journals? And do they in fact? How many pre-1Ls are competent, capable, and willing to scrutinize a given faculty's scholarly placements, let alone assess their content for quality, and how many actually do? Furthermore, to what extent could they then infer (let alone ascertain ex ante, pre-matriculation) that a given school's profs' publications INFORM their teaching of, and the content of, specific courses?

Last empirical question: what percentage of practicing lawyers and judges in the United States look at a student-run law review even once a year? If it turns out that the supermajority don't give a damn about those journals at all, then why should most candidate law students?


anon3

A non (@9:08pm) -

Would it matter for these purposes if there were an unjustified causal link between USNWR rankings and journal placement/citation? I guess it would be a factor resistant to rankings changes (i.e., it would make it more difficult for Stanford to eclipse Yale). But if there are meaningful differences in the publication metrics for similarly USNWR-ranked schools -- and I think there are -- USNWR-ranking favoritism wouldn't explain them. The question really is: should Brooklyn Law School (ranked 81 in the current USNWR rankings) get a boost over Kentucky Law School (also currently ranked 81) if its faculty are cited relatively more in law reviews? It's worth noting that most of us here, I think, would agree that there are certain schools -- like Brooklyn -- with significantly stronger faculties than their USNWR rankings imply.

Further, your point about the inability of pre-1Ls to independently evaluate professorial research quality seems, to me, to cut in favor of including a publication metric in the USNWR ranking: pre-1Ls aren't otherwise going to be able to meaningfully account for this factor in deciding between schools, meaning that if professorial research quality should be relevant to law students, it's going to be regularly left out of their decision-making. FWIW, I do think that my teaching is significantly better and richer because of my research -- but maybe that's not true for everyone.

Finally, I'm not sure what your point about practicing lawyers and judges looking at student-run law reviews is an argument against. I'm assuming that we agree that faculty scholarship is an important part of what we do (and that the primary disagreement here is how or whether to account for this in USNWR rankings). And are you suggesting that practicing lawyers and judges *and policymakers* look at interdisciplinary journals, book chapters, books, etc. more? I seriously doubt that's true.

anon3

It's worth noting that the alternative to including these metrics and their biases in the USNWR rankings isn't a bias-free ranking. It's a ranking influenced relatively more heavily by *different* biases. Including law review citations in USNWR is far from perfect, but I guess I don't see how it wouldn't improve the overall quality of USNWR rankings.

anon

"And are you suggesting that practicing lawyers and judges *and policymakers* look at interdisciplinary journals, book chapters, books, etc. more? I seriously doubt that's true."

Of course it isn't true. The VAST majority of "legal scholarship" is held to be basically worthless by the legal system and the rest of academia.

For the VAST majority of law review articles, the only audience, if there is ANY audience, is a tiny sliver of society.

The "scholarly impact" measure, if adopted, would only confirm that the work of the tiny number of law professors who produce scholarship that is read and cited by anyone (almost always other law professors) is cited by a tiny group of other law professors, mostly at the already more highly rated law schools.

Harvard, Stanford, and Yale need to know, every year, that they retain their ratings. So pitiful, isn't it? Still like kids, chasing prestige.

Want to measure "scholarly impact" on the LEGAL PROFESSION? That would be useful. Start with citations in practice books and judicial decisions.

a non a non

According to Liptak at the NYT, "About 43 percent of law review articles have never been cited in another article or in a judicial decision."

A non

Hi again anon3,

No. Instead, like "anon," I'm suggesting that far and away the preponderance of scholarship produced in student-run journals is perceived to be of zero utility by practitioners. To be sure, that could be just a function of their hubris or prejudice. That's actually doubtful, though; and, more importantly, it's immaterial: since they're the ones doing the hiring, practitioners either already don't, or shouldn't, give a damn about journal metrics (as part of the criteria for assessing a law school's quality or relative merits). This is in turn part of the reason why the notion of an "information-saving" benefit of any such metric (for pre-matriculants, to assess which school to attend) would also be BS.

Faculty legal scholarship is indeed an integral part of what we do. Almost all of it may nevertheless fail to inform legal practice, let alone help to shape the well-being, functionality, or even reform of the law, to say the least.

Were I less cynical, moreover, I would believe that friends, colleagues, ideological allies, and others don't SYSTEMATICALLY and REGULARLY cite each other for strategic reasons, knowing full well that citation counts matter for such metrics. (The flip side: do others consciously omit certain citations for strategic and political reasons, too?) Were I less cynical, I would therefore think that citation counts accurately and credibly signal the quality, merit, and importance of the cited scholarship in law reviews. But my head isn't completely up my own ass, so I don't.

All the best,
A non

anon3

@A non (@ 01:02 AM) -

So you're arguing that we shouldn't attempt in any way to include faculty legal scholarship -- full stop -- in our assessment of a school's quality because (a) it "is perceived to be of zero utility by [most] practitioners," who are the ones who hire our students, and because (b) it's potentially manipulable by "friends, colleagues, ideological allies" citing "each other for strategic reasons"?

I don't buy it. (a) is a straw man; nobody is arguing that faculty legal scholarship is a relevant metric because of its direct impact on our students' employment. Instead, I understand the primary reason to consider a metric like this is because it's one of the only measures of faculty quality we can get -- and faculty quality should be considered in ranking schools.

(b) groups "friends" and "colleagues," whose citations may mean little, with "ideological allies," whose citations seem like a solid measure of article quality. If people in my field cite my work, it's a sign that it might be valuable, whether they're my "ideological all[ies]" (whatever that means) or not. Regardless, though, I just don't buy that such practices move the needle much. My latest publication had over 200 footnotes, many with multiple citations; in total, I probably cited over 300, maybe 400, sources. If I were to work in a couple of citations to my colleagues or friends -- and I never have -- it would be a small drop in the overall bucket. And I remember once, on law review, deciding that it would be fun to work in some citations to my soon-to-be judge, who had published widely in that area. I did not succeed: it's not that easy to manufacture citations to inapposite authority. I don't doubt that this nevertheless happens sometimes, but I just can't believe that this sort of manipulation can have a significant impact, especially if we're talking about something like 10 of 400 citations in a given article. (At the very least, this strikes me as less manipulable than many of the USNWR metrics.)

A non

anon3,

I'll let my claims stand as I actually made them, NOT as you try to re-characterize them and their aims. The readers of this blog can, of course, ascertain such matters for themselves. Still, it ought to be said that your effort in this regard constitutes bad form.

