Accreditation standards in the U.S. seem to be moving in the direction of worrying more (only?) about the results of an educational program rather than about how the education itself is delivered. At the elementary and high school level, this is leading to systems that seem to care more about a student’s score on the mandated standardized test (MCAS in Massachusetts) than about anything else. The attitude seems to be, “if our students prove they are competent in English language skills by doing well on a multiple choice test, why should we care if they cannot write a paragraph?”
The same emphasis is also arriving at the college level, though not quite in the “one size fits all” mode that has overtaken the earlier educational system. Accreditation standards now require institutions to “assess” student achievements at the end of the program (not just each class) to ensure that the students have met the institution’s expectations. See Standard 4.48.
It is also part of the law school accreditation process. The most direct way this is done is by requiring law schools to report their students’ success on the bar examination. See Interpretation 301-6. An insufficient passage rate is equated with failure and can lead to a law school losing its ABA accreditation.
Overall, the idea of measuring outcomes makes sense. If a law school is training “good” lawyers, do we really care how it is doing so? Of course, if this were the case, the ABA evaluation of the quality of an incoming class of law students would cease, see Standards 501–03, as would U.S. News’s reliance on the quality of the incoming class.
The ultimate question remains, “How do you know that a graduate of a particular law school is going to be a good (or bad) lawyer?” An easy prediction is based on the bar exam. Someone who fails it multiple times has issues with their knowledge of the law or, as likely, with perseverance in preparing for or undertaking stressful events. Any of these would seem to call their future as a competent attorney into question. But passing the bar, in itself, tests little of the new attorney’s abilities and allows people to enter the practice of law who will prove incompetent.
Unfortunately, although relatively easy to calculate, post-law school placement rates are not a good measurement. It seems as if every legal employer assumes that a graduate of a law school highly ranked by U.S. News will be a good lawyer. See Henderson & Zahorsky, The Pedigree Problem. Unlike an open marketplace, where a better product can overtake a larger but inferior one, the law school ranking system places so much reliance on self-reinforcing statistics that better education fails to be measured.
How, then, to measure? In some ways, legal education has an advantage over many other fields in that we have a built-in, although admittedly seriously defective, measurement of our academic success: the bar exam. Unfortunately, it is locked in time and differs little from the bar exams I took in the early 1980s. The subjects tested are the same, as are the procedures used to do the testing (multistate, essay, and MPRE), at least in most states. We have learned more about testing techniques over the last thirty years and certainly have a broader model of legal education now. Rather than rehashing the law school accreditation rules, therefore, wouldn’t the better route be to seriously re-evaluate the bar examination itself and try to make it a better evaluation mechanism?