
June 19, 2014

Comments


JLK

You make the same mistake US News does here in counting school-funded jobs in your employment numbers. As a result, you severely overrank schools such as Emory, William and Mary, George Washington, and UVa, which have hired many of their own graduates -- over 20% in some cases. If you look at the LST numbers and click on the asterisk, you can see how many school-funded jobs are included in those totals. Or you could go by this list: http://lawschooltuitionbubble.wordpress.com/2014/04/09/class-of-2013-employment-report/ In W&M's case, for example, you have their employment rank at #20, but they would fall to #102 if you took out the school-funded jobs; Emory and GWU would both be in the 60s. You claim the school is underranked by US News, but if you took out the school-funded jobs you would see it is actually overranked by US News. (Your own school would move up a couple of slots if you did this.)

Alfred L. Brophy

JLK,

I spent a lot of time thinking about what to do with school-funded positions. I agree that they're not the jobs students go to school hoping or expecting to get, but I do think that they can be helpful for graduates struggling to find permanent work as a lawyer. So part of me thinks they should be included in some way. If I do any more on this, I'm going to remove both school-funded and solos from the employment variable. Will be interesting to see how much that changes the new ranks.
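The arithmetic of the adjustment itself is simple; here's a minimal sketch in Python, with invented per-school counts (the school names and numbers below are made up purely for illustration):

    # Recompute an employment rate with school-funded and solo positions
    # taken out of the numerator. All figures are invented examples.
    schools = {
        # name: (graduates, employed, school-funded, solo practitioners)
        "School A": (250, 210, 30, 10),
        "School B": (180, 150, 5, 20),
    }

    for name, (grads, employed, funded, solo) in schools.items():
        raw_rate = employed / grads
        adjusted_rate = (employed - funded - solo) / grads
        print(f"{name}: raw {raw_rate:.1%}, adjusted {adjusted_rate:.1%}")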

Barry

"If I do any more on this, I'm going to remove both school-funded and solos from the employment variable. Will be interesting to see how much that changes the new ranks."

From everything I've heard, that is the way things should go -- schools are hiring their own graduates to boost their stats, and a brand-new JD going solo is just a fancy way of saying 'unemployed'.

test

Yes. Those school-funded positions should be removed. Schools should be ashamed for including them in their employment statistics.

anon

I doubt the "number of citations to each school’s main law review" falls within the category of "what prospective students care about (or should care about)."

Remove that, take out the school-funded positions, and things get more interesting.

And once people figure out a way to include data that address the cost side of the equation (average debt borrowed per student, etc.), you start to approach the real calculus reflecting how students have begun to compare law schools in the new era. LST is a good start on this.

PreLaw Advisor

In 10 years as a pre-law advisor, I have never had a single prospective law student ask about the prestige -- however measured -- of the school's law review(s). To the extent that faculty scholarship matters at all to the handful of academia-oriented prospective students, it's never in the aggregate, but rather focused more squarely on the research and engagement of specific professors whose interests match the particular student's. For example, "How respected is this prof's work in the field of Chinese law and policy?" This makes sense: prospective law students aspiring to academic careers should, just like their grad student counterparts, try to find a good match between their own research interests and the program(s) to which they are applying. Which articles the school's law journals publish is largely irrelevant to that question.

Alfred L. Brophy

Thanks to everyone for these comments -- they're useful, and I hope to set aside some time to rework some of the variables (particularly employment). I wonder whether cost should be included in this ranking; given the variable pricing that schools seem to be using, I think it may be best to get a rank of schools independent of cost -- which then allows individuals to compare schools in an ideal world. When students have their financial aid packages, they can then factor cost of attendance into their individual decision.

I want to respond in particular to Pre-Law Advisor's comments on citations to law reviews. I'm guessing something your students ask a great deal about is the prestige of law schools. The law review citations are -- as I pointed out in my paper -- designed as a substitute for the notoriously static U.S. News peer assessment scores. That is, citations offer a measure of the intellectual culture of schools. Citations, thus, are a proxy for something I think your students care about. Citations, moreover, are highly correlated with the U.S. News peer assessment scores (and, to a lesser extent, the lawyer/judge scores). I wrote about this a few years back in a paper called "The Emerging Importance of Law Review Rankings for Law School Rankings, 2003-2007," which is available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=896313

At some point this summer I hope to revisit that paper, looking at more recent data in some depth.

Anon

I'm still not sold on law review citations as a relevant proxy, for a number of reasons.

I often see comments here and elsewhere that favor deleting solo practitioners and employment at a student's law school. The premise is that including them skews the employment figures. I'm not disputing that in certain instances it may do so, but discounting them entirely would skew the figures as well. Some people do come to law school with the goal of becoming a solo practitioner (and I've met a number of them) or decide during law school that they would like to set up their own shop. This is more likely for students who plan to return to their home communities after law school and establish a small practice. (And I imagine it's also more likely at regional schools.)

As far as school-funded positions go, I can see how they can be used for nefarious purposes, but some of the positions that open up for recent graduates really do align with students' long-term goals (particularly public interest work). At the institution where I work, these positions open up when there is a need at the school, and our students, along with recent graduates from any institution, can apply. Sometimes they get the positions; often they don't. But discounting the ones that do would skew the data as well.

Alfred L. Brophy

Anon, those are good points on the employment data and they reflect my initial thinking. I do want to take out the solos and school-funded positions and see how much of a difference that makes.

Anona_LR_Member

While I'm ambivalent as to whether one includes a citation metric in a ranking like this, if you do, you should probably use a more nuanced metric than a raw citation count. At the least, I think mean cites per article published would be more appropriate. If you want to capture more information, though, you might consider using the mean eigenvector centrality or PageRank of each journal's articles.
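To make those two alternatives concrete, here's a rough sketch, assuming you already have per-journal article counts and a journal-to-journal citation table (the journal names and numbers are invented, the centrality is computed at the journal level rather than per article to keep it short, and the PageRank comes from the networkx library):

    import networkx as nx

    # Invented example data: total cites received, articles published, and
    # journal-to-journal citation counts (citing journal, cited journal, count).
    total_cites = {"Journal A": 1200, "Journal B": 400}
    articles = {"Journal A": 60, "Journal B": 15}
    cross_cites = [("Journal A", "Journal B", 40), ("Journal B", "Journal A", 90)]

    # Mean cites per article published, rather than a raw total. Here the
    # smaller journal actually comes out ahead despite its lower raw count.
    per_article = {j: total_cites[j] / articles[j] for j in total_cites}

    # PageRank over the journal citation graph, with edges weighted by how
    # often one journal's articles cite the other's.
    g = nx.DiGraph()
    g.add_weighted_edges_from(cross_cites)
    pagerank = nx.pagerank(g, weight="weight")

    print(per_article)
    print(pagerank)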

Anon234

A really interesting project.

If I understand the methodology correctly, my strongest concern with using total law review cites as a proxy for prestige is that smaller schools whose law reviews take fewer articles per year are disadvantaged. I agree with the commenter above that cites/article makes more sense, if you are going to include this as a ranking input at all.

Also: since authors use U.S. News rankings to make article placement decisions, I am not at all surprised that the citation ranking tracks the U.S. News rankings, since articles more likely to be cited will tend to place in higher ranked journals, as measured by U.S. News. Authors usually aren't super concerned with analyzing the prestige of the articles they cite; however, they care much more about prestige, usually as reflected by U.S. News rankings, when they make publication decisions for their own articles. So I would expect these two sets of rankings to be very similar, but with a significant delay in the cite-count ranking catching up to any major gains or losses in U.S. News ranking.

Which is a long-winded way of suggesting that perhaps the cite-count method of incorporating prestige into your rankings internalizes historical U.S. News rankings quite a lot, and that it is a valid indication of prestige only to the extent that the U.S. News rankings at the time those cited articles were placed (which may have been decades ago) are a valid indicator of current prestige.

Finally: national prestige is really less important to students who are attending regional schools and plan to practice in that region. Employment figures might reflect the relevant, regional reputation of these schools more accurately than attempts to ordinally rank the national reputation of schools that don't really *have* a national reputation.

Alfred L. Brophy

Thanks for joining the conversation, Anon LR Member and Anon234. Very helpful comments.

I thought about using Doyle's impact factor rather than the rank for total citations for 2006-13, and I may yet use that in a future iteration. I think overall size (in addition to how well cited the articles in smaller journals are) tells us something about the energy and culture at a law review. (Though I noticed that some terrific journals, like the University of Chicago Law Review, are disadvantaged by my metric.)

Anon234 -- you're certainly correct that there's a feedback loop in operation here, which suggests that the "better"/more prestigious journals have their pick of articles. They still have to pick good articles -- and they're sometimes good at this, while at other times the articles they pick don't do all that well on citations. I had a short piece a while back that looked at citations to the most elite law reviews and some other very good ones and made the point that the most-cited articles in the very good journals frequently were cited more than many of the articles in the most elite journals. FWIW, here's a link to that short piece: http://diginole.lib.fsu.edu/cgi/viewcontent.cgi?article=1125&context=fsulr

Moreover, I agree that there is a self-fulfilling-prophecy aspect to citations -- but in some ways that's helpful in showing which reviews scholars who have choices prefer. That is, the revealed-preferences part of this tells us something about law reviews and the schools that publish them.

Anona_LR_Member

I also think you shouldn't be using ranks as scores. Your outcome would be more reliable if you used percentiles instead. The advantage is that the percentile retains differences in magnitude, whereas rank ignores that valuable information. It's fine to rank things at the final stage, but you're using ranks in various metrics as an input into your final ranking. This makes it unnecessarily coarse.

All that said, the Internet is full of critics and I'm not trying to denigrate your project.

Alfred L. Brophy

Anona LR Member,

Thanks for this. I thought a lot about whether to use z-scores of the three inputs or ranks. You're of course correct that the ranks throw out some data, but the problem with the z-scores is that (particularly for the law review citations, where a few journals were way ahead of most others) some of the outliers would distort the rankings. That would have given a disproportionate weight to the law review scores. Given the coarseness/unreliability of the data, the reversion to ranks seemed appropriate (or at least less bad than the alternatives I tried). I thought that treatment by ranks was all that the rough data deserved.
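To illustrate the outlier problem with invented numbers: when one journal sits far out on the right tail, its z-score dwarfs the differences among everything else, while a rank treats that same gap as a single step.

    import statistics

    # Invented citation totals, with one journal far ahead of the rest.
    cites = {"J1": 9000, "J2": 1200, "J3": 1100, "J4": 1000, "J5": 900}

    vals = list(cites.values())
    mean, sd = statistics.mean(vals), statistics.pstdev(vals)

    z_scores = {j: round((v - mean) / sd, 2) for j, v in cites.items()}
    ranks = {j: r for r, (j, _) in
             enumerate(sorted(cites.items(), key=lambda kv: -kv[1]), start=1)}

    print(z_scores)  # J1 is roughly 2 sd above the mean; J2-J5 cluster near -0.5
    print(ranks)     # by rank, the J1-J2 gap is the same size as the J2-J3 gap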

