
March 07, 2015

Comments


Barry

I like your analysis also, Derek, and I'm a statistician.

rose

Please someone address the many things law schools do to game the rankings, in addition to employment.

These include: emailing students who have no chance of admission, with or without fee waivers, to increase the number of applicants. This improves the school's applicant-to-admit ratio.

Giving out gift cards just for applying, for the same reason as above. Alabama has offered iTunes gift cards to applicants. I don't know if they did that this year, but they have in the past.

Yield-protecting students with better numbers whom they think won't attend. I feel this is ranking-driven.

Claiming that the admission process is holistic, when the numbers alone are key. (Thanks to the rankings.)

Claiming they look only at the first LSAT score, when they actually use the highest score. This discourages people who don't know better from retaking.

John C. Kunich

The much-anticipated/dreaded annual ranking of law schools has been unleashed by U.S. News to the usual howls of outrage mingled with squeals of glee. There are few, if any, other lists that are simultaneously so heavily used and so widely criticized. The disproportionately immense influence these rankings exert on a law school’s volume of applications, applicants’ credentials, job placement results, and student retention must be weighed against the many assumptions, biases, and inaccuracies that taint the numbers.

Any metric that purports to assess the relative merit of institutions of higher education should be held to the highest possible standards of objectivity, transparency, proper focus, and freedom from manipulation. It is a poorly kept secret among law professors and deans that the U.S. News rankings are deeply flawed on every one of these criteria. This allows some law schools to game the system and exploit their inflated rank to their own advantage, while many others are thrown into a desperate whirlpool of negative publicity and unfair preconceptions.

This matters in several ways, but most notably in distorting the analysis of potential law students contemplating the law schools to which they may apply, and of rising 2Ls considering whether to transfer to a “better” school for the remainder of their law school careers. Each year, thousands of such decisions are made. To the extent law school rankings are an important factor in shaping these decisions, it is imperative that the rankings be reformed to correct the multiple profound deficiencies that render their results both misleading and dangerous.

John C. Kunich

To build on my previous comments, I think the “methodology” employed to concoct the annual U.S. News law school rankings is as flawed as that of any widely used assessment in any field. The well-known innumeracy of many people, including bright, well-educated individuals, lends a large and unwarranted aura of reliability to a system like the law school rankings, which purports to digest multifarious complex factors into a single number. The fact that U.S. News uses a weighted mix of “12 measures of quality” to determine each school’s numerical score is all that many observers will care to know. Like Colonel Sanders’ famous “blend of 11 herbs and spices,” this concatenation of specially chosen factors is assumed to be a guarantor of excellence.

The fact that the “assessment scores” obtained by surveying some unnamed number of legal academics and lawyers/judges account for 40% of a school’s overall score should be cause for concern to anyone who considers the underlying validity of the rankings. Without precautions against small samples, ballot-box stuffing, collusion, insider bias, and subjective guessing, these “assessment scores” are as enormously imprecise, manipulable, and vulnerable to prejudice as any popularity contest. With a reported response rate of only 58% among the legal academics surveyed, one must wonder how many people were invited to submit their opinions, how those invitees were distributed among law schools, whether the respondents were representative of the whole, and what motivated that 58% to turn in their views.

An entire book could be written about the defects in the ranking methodology, but suffice it to say that the factors chosen, and the relative weight assigned to each factor, are extremely arbitrary and prone to game-playing. For example, why is “selectivity” given a weight of 25% of the total, while bar passage rate receives a weight of 2% and the faculty-student ratio a weight of 3%? How was this comparative importance determined? Who decided, and on what basis, that a law school’s selectivity is more than 12 times as significant as its bar pass rate? And why should the possibly uninformed and unsubstantiated opinion of the 58% of legal academics who were somehow invited to participate in the survey count for more than 8 times as much as a school’s actual results on the bar exam?

By what scientific or pseudo-scientific means was it determined that the median LSAT score of entering 1Ls is worth 15% of a school’s total value, the median UGPA of entering 1Ls counts for 10%, and the LSAT/UGPA numbers of 2Ls, 3Ls, and actual graduates do not matter at all? What rigorous methodology produced the decision to weight library resources at a minuscule ¾ of 1%, while diversity of students, diversity of faculty, experiential learning, academic support, community service, and innovative pedagogy do not even merit a fraction of 1%? The guesses, unsubstantiated opinions, rumor-fueled prejudices, and mutual insider back-scratching of the fraction of invitees who submit opinions for the two “assessment scores” crush the scales at a combined 40% of the overall value, while these meaningful and impactful measures of a school’s merit are entirely ignored. Is it any wonder that the rankings largely replicate a certain form of hierarchy and privilege, year after predictable year?
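To make the arithmetic concrete, here is a minimal Python sketch of the kind of weighted composite being described. The weights are the figures cited in these comments; the metric names, the placeholder “other” share that fills out the remaining measures, and the assumption that every input is pre-normalized to a 0-to-1 scale are illustrative inventions, not U.S. News’s actual formula.

    # Illustrative weighted composite; weights taken from the figures cited above.
    WEIGHTS = {
        "assessment": 0.40,       # peer + lawyer/judge surveys, combined
        "selectivity": 0.25,
        "median_lsat": 0.15,
        "median_ugpa": 0.10,
        "faculty_student": 0.03,
        "bar_passage": 0.02,
        "library": 0.0075,
        "other": 0.0425,          # hypothetical placeholder for the unlisted measures
    }

    def composite_score(metrics):
        """Weighted sum of metrics, each assumed pre-normalized to [0, 1]."""
        return sum(WEIGHTS[k] * metrics.get(k, 0.0) for k in WEIGHTS)

    # A school with a perfect bar-pass rate but a middling survey reputation
    # gains almost nothing: bar passage can move the total by at most 0.02,
    # while the surveys can move it by 0.40.
    school = {"assessment": 0.50, "selectivity": 0.70, "median_lsat": 0.80,
              "median_ugpa": 0.80, "faculty_student": 0.60, "bar_passage": 1.00,
              "library": 0.50, "other": 0.50}
    print(round(composite_score(school), 3))  # -> 0.638

As the example shows, raising the bar-pass input from mediocre to perfect moves the final score far less than even a modest shift in survey reputation would; that asymmetry is baked into the weights themselves.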

