I've been looking some more at Gregory Sisk et al.'s "Scholarly Impact of Law School Faculties: Extending the Leiter Rankings to the Top 70." One of the things that interests me is the methodology--taken from Brian Leiter's previous study of the top 25 faculties--which creates a weighted score by multiplying the mean citations to the tenured faculty over the past five years by 2 and then adding the median. Sisk's table 2 lists the mean and median for each of the 70 schools, as well as the weighted score (which Sisk et al. adjusted downward slightly to make it comparable to Leiter's earlier study, because they ran their citation counts a few months after Leiter ran his) and each school's rank on that score.
I've re-ranked the schools based just on the mean and just on the median (pdf of the ranking available here). My initial thought was that the median ranking might be the best snapshot of a faculty overall, because, unlike the mean, it isn't skewed by one or a few highly cited faculty members. Brian Leiter made this point about Erwin Chemerinsky at UC-Irvine here (fourth paragraph down).
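To make the arithmetic concrete, here's a minimal sketch in Python of the weighting formula and the mean-only and median-only re-rankings. The school names and citation figures are invented for illustration; the real numbers are in Sisk's table 2, and this is not the authors' code.

```python
# A minimal sketch (not the authors' code) of the Leiter/Sisk weighting and
# the mean-only / median-only re-rankings. The citation figures below are
# invented for illustration; the real numbers are in table 2 of the study.

# Hypothetical (school, mean citations, median citations) over five years.
schools = [
    ("School A", 310, 250),
    ("School B", 280, 265),
    ("School C", 295, 190),
]

def weighted_score(mean, median):
    """Leiter's formula: twice the mean citation count plus the median."""
    return 2 * mean + median

def rank_by(value):
    """Map each school to its rank (1 = highest value); ties not handled."""
    ordered = sorted(schools, key=value, reverse=True)
    return {name: i + 1 for i, (name, _, _) in enumerate(ordered)}

weighted_rank = rank_by(lambda s: weighted_score(s[1], s[2]))
mean_rank = rank_by(lambda s: s[1])    # rank on mean alone
median_rank = rank_by(lambda s: s[2])  # rank on median alone

for name, mean, median in schools:
    print(f"{name}: weighted {weighted_score(mean, median)}, "
          f"ranks (weighted/mean/median) "
          f"{weighted_rank[name]}/{mean_rank[name]}/{median_rank[name]}")
```

Note how even in this toy data the orderings diverge: a school with a middling mean can top the median ranking if its citations are spread evenly across the faculty.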
What's interesting about this is that some of the schools already performing above their US News peer assessment rank do even better on the median ranking than on Sisk's overall weighted ranking -- take Florida State, which ranks 19 on median citations to tenured faculty over the past five years. The University of Missouri, for instance, also improves, from 55 overall to 43 on the median ranking.
Previous coverage of the Sisk study at the Faculty Lounge has focused on the correlations between Sisk's weighted scores and other attributes of the schools -- like student credentials and bar passage rate -- and on the relationship between Sisk's weighted scores and the citations to schools' main law journals. Since Brian Leiter posted on the study on Monday, it has sparked a lot of commentary, including at the ELS Blog and Bainbridge.
Update: In response to Brian's comment about the problems with using the median, here's the list of the schools ranked by weighted score, mean, and median, with a final column that lists the difference between each school's mean rank and median rank. Schools with a positive value had a higher (that is, worse) median rank than mean rank. While many schools had similar ranks on mean and median, a few performed much better on median (Washington University, Indiana Bloomington, Missouri, and Boston College, for example) than on mean, and some others performed much better on mean (Hawaii, Illinois, San Diego) than on median -- which I take to suggest that they have a few particularly highly cited faculty members.
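For what it's worth, here's how that final column can be computed, under the sign convention described above; it reuses the hypothetical rank dictionaries from the earlier sketch, not Sisk's actual data.

```python
# The difference column from the updated pdf, under the sign convention
# described above: positive means the school's median rank is numerically
# higher (i.e., worse) than its mean rank. Reuses the hypothetical
# mean_rank / median_rank dicts from the sketch earlier in the post.
for name, _, _ in schools:
    diff = median_rank[name] - mean_rank[name]
    print(f"{name}: mean rank {mean_rank[name]}, "
          f"median rank {median_rank[name]}, difference {diff:+d}")
```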
The problem with median is that it's not really a snapshot of "the school," it's just one person! And while, in most cases, the one who turns out to be the median isn't anomalous, in some cases they are, and may obscure the presence of a significant number of high 'impact' scholars on the same faculty. (I haven't looked, by the way, at your re-ranking, so I'm simply reporting what I recall from reviewing the data for the initial study.)
Posted by: Brian | September 17, 2010 at 06:51 PM
Brian--thanks for your comment and for everything you're doing to bring rationality to rankings.
A combination is likely helpful, of course. Why throw away data? A weighted score that combines the mean and median makes a ton of sense. But the median (the middle person) is determined by the balance of the school, and so that person can give us a sense of the entire school. What impressed me about the re-ranking is that some schools' median ranks are rather different from their weighted ranks. I've just put up another pdf, which has a final column listing the difference between the mean and median ranks for each school, so it's easier to pick out the schools with large gaps between the two.
Posted by: Alfred Brophy | September 17, 2010 at 07:18 PM
One note about your list -- if you go to the Sisk et al. paper, it turns out that a fairly serious error was made in calculating Notre Dame's score. The error has been corrected, and Notre Dame's rank moves up about 18 spots. The mean and median have changed, too. You might want to update your own list accordingly, FWIW.
Posted by: Rick Garnett | September 19, 2010 at 04:03 PM
Thanks for this, Rick. I now see there's a new version of the paper up on SSRN. I'll get to work preparing revised tables.
Posted by: Alfred Brophy | September 19, 2010 at 04:49 PM