A few days ago, I argued that, even
assuming that the costs of the tenure system outweighed any benefits, law
schools would derive few benefits from abolishing tenure. Most schools show
little willingness to implement less extreme measures designed to shame,
intimidate, or otherwise induce faculty to perform consistently with valued
institutional goals, so why should we expect them to take the even more drastic
measure of firing someone they’ve worked with for many years, even when legally
entitled to do so? Isn’t it more
realistic, I asked, for law schools to take the intermediate steps of (1)
defining coherent institutional goals, and (2) coming to some consensus on when
faculty are meeting those goals? I
believe that the first of these tasks is much harder than the second, because
it requires a disparate (and, sometimes, disagreeable) group to come to some concurrence
about who we are and what we value as a unit.
This task may be even more
difficult at law schools than within other academic departments for a variety of reasons (although maybe not: if you want to feel good about the functionality of
your own department, read
this hilarious thread at Historiann on nightmare interviews; the comments
are priceless). But the
traditional lack of peer-reviewed publishing by law faculty (though that is increasingly
changing), the high rate of tenure at law schools, the fact that we are a professional
school, and a variety of other factors may mean that law schools face special
difficulty in defining and meeting institutional goals. Brian
Leiter best summarizes some of these challenges in this now-dated but, I think,
largely still-accurate description.
Let me briefly run through the
typical objections to the suggestion that law schools should do more to tailor
incentives to institutional goals and illustrate why they are off-base.
(1) “But
the quality of what we do is subjective.”
Yes. But it’s not supposed
to be standardless. Every law
school I know endeavors at both the hiring and tenure stage to make some
assessment of a candidate’s scholarship and teaching quality. The rigor and methods by which this is
done vary significantly across institutions (estimating the quality of even a seasoned scholar and teacher solely on the basis of a one-hour job talk
about a paper no one has read sits at one end of that spectrum), but I
haven’t seen any schools openly abandon that mission because it’s too hard or
subjective. It makes no sense to
raise this objection only with respect to our existing ranks.
(2) “You’ll
just wind up counting [insert: articles, downloads, student numbers, here].” Sometimes this is not such a bad
idea. Most of us have intuitions
or biases about how hard or how well others are working, based on
pretty sketchy information and often formed from initial impressions, that turn
out to be off-base over the long term.
Updating them with new information is not an entirely bad idea. Having said that, there is always a
danger of “managing the numbers,” especially if we convince ourselves that
quality is not discernible, or if we’re simply too lazy to make more detailed and
time-consuming assessments. But I
think that at a well-run school, there will be sufficient pushback (especially
from those with more time-intensive fields or projects, e.g., a big book
project, archival legal history, experimental work, or the compilation of
original data sets that may take years to complete) to prevent the simple
counting tendency from completely taking hold.
(3) “But
we don’t have good means of assessing these qualities.” No, we don’t. And we need to work on that. As noted, the traditional lack of peer-reviewed publishing (not that
I mean to romanticize peer review either; it has its own problems) means that law
schools may need to work harder to come up with other mechanisms of scholarly
quality assessment. And as to
teaching, I suspect that many of our B-school colleagues would gladly rant at
length about the consumer-driven culture there that has turned teaching
assessment into a popularity contest based on teaching evaluations that often
reflect likability, rather than teaching skill. (More on this in my next, and last, post in this
series). But the right answer
can’t be to simply give up.
(4) “Such
assessments would disadvantage women and minorities.” In the wrong hands, yes
they could. Much has been made in
the blogosphere lately of the underrepresentation of women in various measures
of impact, including downloads
and citations. And hats off to Ann Bartow, Bridget
Crawford, and the other Feminist Law Professors for their ongoing “Where Are The Women?”
series. Moreover, it has
been argued that women and minorities may bear a disproportionate service
burden, which is unlikely
to be rewarded to the same degree as the other parts of the academic “trilogy”
(i.e. scholarship and teaching). Again, however, I don’t think that the
answer is simply to throw up our hands in defeat, but to work at creating
better mechanisms for evaluating each other. Indeed, I’m probably biased on this point, but I think
that some systematic evaluation may reveal women and minorities to be more, rather than less, productive than our institutions assume, if only because so
many of us may be working in fields that are relatively unfamiliar to relevant
decision-makers at the school.
In the last post in this
series, “We All Contribute In Our
Own Ways” Is Not A Valid Institutional Goal, I’ll add some final
ruminations on law schools as institutions with cultures, incentive structures,
and habits that are amenable to manipulation in furtherance of a coherent
academic agenda.
Related Posts:
I. My Tenure’s For Sale. How About Yours?
II. Incentives And Institutions: Why Stop With The Banks?
IV. “We All Contribute In Our Own Ways” Is Not A Valid Institutional Goal