Given all the talk about the U.S. News law school rankings, I've been thinking for a while about what a ranking of law schools would look like if it were based on a few of the factors that prospective students might care about (and that I certainly care about). I have put together a quick look at schools based on an average of three ranks: the rank of the median LSAT score of students entering in fall 2013, the rank of the percentage of 2013 graduates who had full-time, permanent, JD-required jobs, and the rank of each school's main law review by the number of citations it received from 2006 to 2012. There's a lot one could argue about with regard to the selection (and weighting) of those three variables -- and if I do anything else with this paper I may very well refine the weight assigned to each variable (or even change the variables). But I do think this is a nice start in the direction of ranking schools based on factors that bear on the quality of an institution.
One of the things I found of particular interest is how highly correlated many of the variables are. The correlations between each of the three variables and the overall U.S. News rank are all high, though the U.S. News rank and the LSAT median rank are the most highly correlated of the three (.93). The correlation between the U.S. News rank and the full-time, permanent, JD-required jobs rank is .76, and the correlation between the U.S. News rank and the law review citations rank is .87.
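For readers who want to see the mechanics, here is a minimal sketch of the average-of-ranks construction and the correlations. The file name and column names are hypothetical; this is an illustration of the approach, not the actual dataset or code behind the paper.

```python
# A rough sketch, assuming a hypothetical file "law_schools.csv" with columns:
# school, lsat_median, pct_jd_jobs, lr_citations, usnews_rank
import pandas as pd

df = pd.read_csv("law_schools.csv")

# Rank each variable so that 1 = best (higher LSAT, employment, and citations are better).
df["lsat_rank"] = df["lsat_median"].rank(ascending=False, method="min")
df["jobs_rank"] = df["pct_jd_jobs"].rank(ascending=False, method="min")
df["cites_rank"] = df["lr_citations"].rank(ascending=False, method="min")

# New ranking: average the three ranks, then rank the averages.
df["avg_rank"] = df[["lsat_rank", "jobs_rank", "cites_rank"]].mean(axis=1)
df["new_rank"] = df["avg_rank"].rank(method="min")

# Correlations of each component rank (and the new rank) with the U.S. News rank.
print(df[["usnews_rank", "lsat_rank", "jobs_rank", "cites_rank", "new_rank"]].corr())
```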
Cribbing now from my abstract on SSRN:
This paper returns to the perennial favorite topic of ranking law schools. Where U.S. News & World Report includes a wide variety of factors – some of which are criticized as irrelevant to what prospective students care about (or should care about) – this paper looks to three variables. They are the median LSAT score of entering students, which seeks to capture the quality of the student body; the percentage of graduates employed nine months after graduation in full-time, permanent, JD-required jobs; and the number of citations to each school’s main law review. This paper rank-orders each of those variables, averages those ranks to obtain a new ranking, and then compares the new rankings to the U.S. News & World Report rankings.
If you're interested in this, you can find the full paper here, along with tables that list the schools that are the biggest winners in this new ranking and those that decline the most. Down the road I'd like to extend this ranking to all ABA-accredited law schools. The image is a series of scatter plots of ranks on U.S. News, LSAT medians for students entering in fall 2013, the percentage of the class employed in full-time, long-term, JD-required jobs, and citations to law reviews, 2006-13.
Update: Over at TaxProf Blog, Paul Caron has some discussion of this paper, and there are a lot of comments on what should be counted in a ranking system. One of the things I hoped this initial post would do is help me refine the factors to use, and I appreciate the comments there and here -- I'm particularly interested in the range of responses to the use of citations to schools' law reviews.
Michael Smith is also talking about the new ranking method at his blog -- especially the virtue of avoiding the peer assessment scores.
You make the same mistake US News does here in counting school-funded jobs in your employment numbers. As such, you severely overrank schools such as Emory, William and Mary, George Washington, and UVa, which have hired many of their own graduates -- over 20% in some cases. If you look at the LST numbers and click on the asterisk, you can see how many school-funded jobs are included in those totals. Or you could go by this list: http://lawschooltuitionbubble.wordpress.com/2014/04/09/class-of-2013-employment-report/ In W&M's case, for example, you have their employment rank at #20 in your rankings, when they would fall to #102 if you take out the school-funded jobs; Emory and GWU would both be in the 60s. As such, you claim the school is underranked by US News, but if you took out the school-funded jobs you would see they are overranked by US News. (Your own school would move up a couple of slots if you did this.)
Posted by: JLK | June 19, 2014 at 10:24 AM
JLK,
I spent a lot of time thinking about what to do with school-funded positions. I agree that they're not the jobs students go to school hoping or expecting to get, but I do think that they can be helpful for graduates struggling to find permanent work as a lawyer. So part of me thinks this should be included in some way. If I do any more on this, I'm going to remove both school-funded and solos from the employment variable. Will be interesting to see how much that changes the new ranks.
Posted by: Alfred L. Brophy | June 19, 2014 at 11:37 AM
"If I do any more on this, I'm going to remove both school-funded and solos from the employment variable. Will be interesting to see how much that changes the new ranks."
From everything I've heard, that is the way that things should go -- schools are hiring students to boost their stats, and a brand-new JD going solo is just a fancy way of saying 'unemployed'.
Posted by: Barry | June 19, 2014 at 10:20 PM
Yes. Those school-funded positions should be removed. Schools should be ashamed for including them in their employment statistics.
Posted by: test | June 20, 2014 at 01:51 AM
I doubt the "number of citations to each school’s main law review" falls within the category of factors "prospective students care about (or should care about)."
Remove that, take out the school-funded positions, and things get more interesting.
And once people can figure out a way to include data that address the cost side of the equation (average debt borrowed per student, etc.), you start to approach the real calculus reflecting how students have begun to compare law schools in the new era. LST is a good start on this.
Posted by: anon | June 20, 2014 at 02:06 AM
In 10 years as a pre-law advisor, I have never had a single prospective law student ask about the prestige -- however measured -- of the school's law review(s). To the extent that faculty scholarship matters at all to the handful of academia-oriented prospective students, it's never in the aggregate, but rather focused more squarely on the research and engagement of specific professors whose interests match the particular student's. For example, "How respected is this prof's work in the field of Chinese law and policy?" This makes sense: prospective law students aspiring to academic careers should, just like their grad student counterparts, try to find a good match between their own research interests and the program(s) to which they are applying. Which articles the school's law journals publish is largely irrelevant to that question.
Posted by: PreLaw Advisor | June 20, 2014 at 06:50 AM
Thanks to everyone for these comments -- they're useful, and I hope to set aside some time to rework some of the variables (particularly employment). I wonder whether cost should be included in this ranking; given the variable pricing that schools seem to be using, I think it may be best to get a rank of schools independent of cost -- which then allows individuals to compare schools in an ideal world. When students have their financial aid packages, they can then factor the cost of attendance into their individual decisions.
I want to respond in particular to Pre-Law Advisor's comments on citations to law reviews. I'm guessing something that your students ask a great deal about is the prestige of law schools. The law review citations are -- as I pointed out in my paper -- designed as a substitute for the notoriously static U.S. News peer assessment scores. That is, citations offer a measure of the intellectual culture of schools. Citations, thus, are a proxy for something I think your students care about. Citations, moreover, are highly correlated with the U.S. News peer assessment scores (and, to a lesser extent, the lawyer/judge scores). I wrote about this a few years back in a paper called "The Emerging Importance of Law Review Rankings for Law School Rankings, 2003-2007," which is available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=896313
At some point this summer I hope to revisit that paper, looking at more recent data in some depth.
Posted by: Alfred L. Brophy | June 20, 2014 at 08:32 AM
I'm still not sold on law school citations as a relevant proxy for a number of reasons.
I often see comments here and in other places that favor deleting solo practitioners and employment at a student's law school. The premise is that including them skews the employment figures. I'm not disputing that in certain instances it may do so, but discounting them entirely would skew the figures as well. Some people do come to law school with the goal of becoming a solo practitioner (and I've met a number of them) or decide during law school that they would like to set up their own shop. This is more likely for students who plan to return to their home communities after law school and establish a small practice. (And I imagine it's also more likely at regional schools.)
As far as school-funded positions go, I can see how they can be used for nefarious purposes, but for some students, some of the positions that open up for recent graduates really do align with their long-term goals (particularly public interest work). At the institution where I work, these positions open up when there is a need at the school, and our students, along with recent graduates from any institution, can apply. Sometimes they get the positions, often they don't. But discounting the ones that do would skew the data as well.
Posted by: Anon | June 20, 2014 at 10:17 AM
Anon, those are good points on the employment data and they reflect my initial thinking. I do want to take out the solos and school-funded positions and see how much of a difference that makes.
Posted by: Alfred L. Brophy | June 20, 2014 at 10:49 AM
While I'm ambivalent as to whether one includes a citation metric in a ranking like this, if you do, you should probably use a more nuanced metric than a raw citation count. At the least, I think mean cites per article published would be more appropriate. If you want to capture more information, though, you may want to consider using the mean eigenvector centrality or PageRank for each journal's articles.
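To make that concrete, here is a toy sketch of both alternatives -- citations per article and PageRank over a journal-to-journal citation graph. The journal names and numbers are made up, and networkx is just one convenient way to run the PageRank step:

```python
# A toy illustration with hypothetical journals and counts, not real data.
import networkx as nx

# Mean citations per article: (total citations, articles published) per journal.
totals = {"Journal A": (1200, 60), "Journal B": (900, 20)}
cites_per_article = {j: c / n for j, (c, n) in totals.items()}
print(cites_per_article)  # Journal B looks stronger per article despite fewer total cites

# PageRank over a directed citation graph: an edge J1 -> J2 weighted by how
# often articles in J1 cite articles in J2.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Journal A", "Journal B", 30),
    ("Journal B", "Journal A", 80),
    ("Journal C", "Journal A", 50),
])
print(nx.pagerank(G, weight="weight"))
```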
Posted by: Anona_LR_Member | June 20, 2014 at 02:53 PM
A really interesting project.
If I understand the methodology correctly, my strongest concern with using total law review cites as a proxy for prestige is that smaller schools whose law reviews take fewer articles per year are disadvantaged. I agree with the commenter above that cites/article makes more sense, if you are going to include this as a ranking input at all.
Also: since authors use U.S. News rankings to make article placement decisions, I am not at all surprised that the citation ranking tracks the U.S. News rankings -- articles more likely to be cited will tend to place in higher-ranked journals, as measured by U.S. News. Authors usually aren't super concerned with analyzing the prestige of the articles they cite; however, they care much more about prestige, usually as reflected by U.S. News rankings, when they make publication decisions for their own articles. So I would expect these two sets of rankings to be very similar, but with a significant delay before the cite-count ranking catches up to any major gains or losses in U.S. News ranking.
Which is a long-winded way of suggesting that perhaps the cite-count method of incorporating prestige into your rankings internalizes historical U.S. News rankings quite a lot, and is a valid indication of prestige only to the extent that the U.S. News rankings at the time those cited articles were placed (which may have been decades ago) are a valid indicator of current prestige.
Finally: national prestige is really less important to students who are attending regional schools and plan to practice in that region. Employment figures might reflect the relevant, regional reputation of these schools more accurately than attempts to ordinally rank the national reputation of schools that don't really *have* a national reputation.
Posted by: Anon234 | June 20, 2014 at 04:14 PM
Thanks for joining the conversation, Anon LR Member and Anon234. Very helpful comments.
I thought about using Doyle's impact factor rather than the rank by total citations for 2006-13, and may yet use that in a future iteration. I think overall size (in addition to how well cited the articles in smaller journals are) tells us something about the energy and culture at a law review. (Though I noticed that some terrific journals, like the University of Chicago Law Review, are disadvantaged by my metric.)
Anon234 -- you're certainly correct that there's a feedback loop in operation here, which suggests that the "better"/more prestigious journals have a greater pick of articles. They still have to pick good articles -- and this is something they're sometimes good at; at other times, the articles they pick don't do all that well on citations. I had a short piece a while back that looked at citations to the most elite law reviews and some other very good ones, and made the point that the most-cited articles in the very good journals frequently were cited more than many of the articles in the most elite journals. FWIW, here's a link to that short piece: http://diginole.lib.fsu.edu/cgi/viewcontent.cgi?article=1125&context=fsulr
Moreover, I agree that there is a self-fulfilling prophecy aspect to citations -- but in some ways that's helpful in showing which reviews the scholars who have choices prefer. That is, the revealed-preferences part of this tells us something about law reviews and the schools that publish them.
Posted by: Alfred L. Brophy | June 20, 2014 at 04:38 PM
I also think you shouldn't be using ranks as scores. Your outcome would be more reliable if you used percentiles instead. The advantage is that the percentile retains differences in magnitude, whereas rank ignores that valuable information. It's fine to rank things at the final stage, but you're using ranks on the various metrics as inputs into your final ranking. This makes it unnecessarily coarse.
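One way to read that suggestion, as a minimal sketch reusing the hypothetical file and column names from the post above (whether percentile scores or some other normalization best captures the magnitudes is left open):

```python
# A sketch of scoring by percentiles rather than ordinal ranks (hypothetical columns).
import pandas as pd

df = pd.read_csv("law_schools.csv")  # same hypothetical file as in the post above

for col in ["lsat_median", "pct_jd_jobs", "lr_citations"]:
    # Percentile score in (0, 1]; higher is better for all three variables.
    # Note: pandas' pct=True gives a percentile rank (a rescaled rank); a z-score,
    # as discussed in the reply below, would preserve raw magnitudes more directly.
    df[col + "_pct"] = df[col].rank(pct=True)

# Average the percentile scores and only rank at the final stage.
pct_cols = [c for c in df.columns if c.endswith("_pct")]
df["avg_pct"] = df[pct_cols].mean(axis=1)
df["new_rank"] = df["avg_pct"].rank(ascending=False, method="min")
```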
All that said, the Internet is full of critics and I'm not trying to denigrate your project.
Posted by: Anona_LR_Member | June 20, 2014 at 05:18 PM
Anona LR Member,
Thanks for this. I thought a lot about whether to use z-scores of the three inputs or ranks. You're of course correct that the ranks throw out some data, but the problem with the z-scores is that (particularly for the law review citations, where a few journals were way ahead of most others) some of the outliers would distort the rankings. That would have given a disproportionate weight to the law review scores. Given the coarseness/unreliability of the data, the reversion to ranks seemed appropriate (or at least less bad than the alternatives I tried). I thought that treatment by ranks was all that the rough data deserved.
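To illustrate the outlier concern with a toy example (made-up citation totals, not the actual journals):

```python
# Made-up citation totals: one journal far ahead of a tightly bunched pack.
import numpy as np

cites = np.array([9000, 2100, 2000, 1900, 1800])

# z-scores: the leader lands roughly two standard deviations above the mean,
# while the other four sit just below it -- in an averaged composite, that one
# variable would swamp the others.
z = (cites - cites.mean()) / cites.std()
print(z.round(2))  # approx [ 2.0, -0.45, -0.48, -0.52, -0.55]

# Ranks cap the leader's advantage at "first place," which throws away the size
# of the gap but keeps any one variable from dominating.
ranks = (-cites).argsort().argsort() + 1
print(ranks)  # [1 2 3 4 5]
```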
Posted by: Alfred L. Brophy | June 20, 2014 at 07:34 PM