Brian Leiter’s recent thoughts about Empirical Legal Studies are worth reading. I can’t offer much perspective on ELS in the law schools, but as someone who came to graduate work in political science out of legal practice, I was utterly blindsided by its dominance in law-related scholarship in that discipline.
There is, of course, good work being done by good scholars in this area in and out of political science departments. But Leiter rightly raises the flag of “a self-reinforcing mutual-admiration society” in which method itself validates the quality and usefulness of scholarship.
I see three problems with the flavor of ELS that originates in political science. First, as I noted in an earlier post about political theory and law, the divide between normative and empirical scholarship has splintered the discipline of political science in unhelpful ways, and nowhere is this more apparent than in law-oriented political science scholarship. Second, I worry that some empirical political scientists have forgotten (or never learned) a caution voiced by Catharine MacKinnon (among many others): our method “organizes the apprehension of truth; it determines what counts as evidence and defines what is taken as verification.” Finally, I wonder whether the extensive and rigorous methodological training required by many political science graduate programs comes at the cost of even a rudimentary understanding of legal reasoning and legal institutions.
Because this last charge may be the one most likely to raise hackles, let me offer a few examples from published political science scholarship: (1) a study of federal death penalty appeals that lumps together direct appeals and collateral habeas appeals without any understanding of their differences; (2) a study of state supreme courts that includes the D.C. Circuit in its data set on the assumption that the District of Columbia in many ways “functions as a state”; (3) multiple studies of the workloads of Supreme Court justices that fail to account for the role of their clerks.
These shortcomings are relatively straightforward and would be fairly easy to remedy. A more difficult issue is the extent to which some political scientists fail to appreciate legal reasoning. In a recent essay in the Duke Law Journal titled “Are Empiricists Asking the Right Questions About Judicial Decisionmaking?,” Jack Knight observes that “in the last few years the overwhelming majority of empiricists have not incorporated other elements of judicial reasoning and substantive argumentation into their analyses in a systematic way.” Engaging seriously with these elements of the law will require far greater investment than learning factual and procedural aspects of legal institutions.
To reiterate, these critiques are not directed at all empirical political scientists. But they are worth noting, particularly given the current push toward ELS in law schools. They caution against a certain dogmatism that has captured segments of political science scholarship. That kind of dogmatism might facilitate tidy research questions, but those questions may be the wrong ones to be asking. As Jeff Powell notes in a comment on Knight’s essay: “the empiricists frequently appear to be battling a straw man who believes that law can be done by following rules that do not allow for discretion in their interpretation or application. I do not know anyone who thinks that.”
I think many of the problems also arise from the uncritical adoption of the methods of contemporary economics by Law & Economics devotees, a problem not wholly transcended by the recent turn to behavioral economics among some in the profession. People writing in this genre often seem wholly unaware of critiques of those methods by S.M. Amadae, Deirdre McCloskey, Philip Mirowski, Amartya Sen, Daniel Hausman, Michael S. McPherson, Ian Shapiro (in political science and political philosophy), and Elizabeth Anderson, among others.
And there's not enough appreciation of the role of the philosophy of science (of both the natural and the social sciences), and hence little appreciation of the meanings of induction, the use of models, the role of analogical and metaphorical reasoning, hermeneutics, debates surrounding methodological individualism, and so on.
Relatedly, there's often an uncritical and faddish adoption of the latest fashion in the sciences, be it neuroscience, cognitive science, evolutionary psychology, or what have you. The nascent character of such sciences should give one pause, but it rarely seems to.
As I've said elsewhere, it perhaps goes without saying that one of the more recalcitrant issues here revolves around the belief that the natural sciences are the repository for the kinds of models and standards, the analytical "robustness" and "rigor," that we should imitate in the social sciences. Now we need not draw hard and fast boundaries between these two basic kinds of science (after all, we have sufficient reason to label them both 'science'), but I think there are a host of reasons that we should take care not to elide the very real distinctions here between natural and social science.
For example, when folks hear the word "empirical" in this context they often call to mind "quantitative social science," of which, after Elster, there are three principal varieties: measurement, data analysis (i.e., statistical analysis), and modeling. Such social science is often oversold if only because it trades too heavily on the mantle and mitre of "hard" science (i.e., the epistemic authority of the natural sciences). Elster himself discusses many of the neglected problems of such science in his book, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences (2007). Elster avers, "An interesting question in the psychology and sociology of science is how many *secret practitioners* there are of economic science fiction--hiding either from themselves or from others the fact that this is indeed what they are practicing." Here, what counts as epistemic rigor or robustness has to do with "numbers" or mathematics, specifically, "ingenious mathematical models" that have little or no anchor in everyday "reality" and thus are utterly irrelevant with respect to social policy (cf. several books by McCloskey, critiques by Nicholas Rescher in his works on epistemology and objectivity, as well as Theodore M. Porter's Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, 1995).
Posted by: Patrick S. O'Donnell | July 07, 2010 at 10:17 AM
As to "self-reinforcing mutual admiration society," to give it a charitable interpretation, we can parse it by phrase. First, "self-reinforcing." Whether or not you are skeptical (as a Kuhnian) of the current paradigm's claim to truth, it's still the case that all academic disciplines are self-reinforcing. That's the basis of peer review. The discipline establishes standards and then enforces and reinforces them. (To adopt my friend Patrick's M.O.: see Louis Menand's recent little book on this, The Marketplace of Ideas: Reform and Resistance in the American University, as well as Michèle Lamont's empirical (!) study "How Professors Think: Inside the Curious World of Academic Judgment," and Thomas Haskell's historical work on the rise of professional and academic disciplines, "The Emergence of Professional Social Science: The American Social Science Association and the Nineteenth-Century Crisis of Authority.")
Second, "society." Is a community of scholars like ELS a "society" or a "discipline"? The difference is one of degree, not kind. We can ask whether lots of developing but not yet entrenched disciplines are worth the candle (neuro-economics? evolutionary psychology?), but whether they are self-reinforcing or self-regulating is beside the point, because to be disciplines they have to be. And if you trace them back, all of the present disciplines were once fledgling - for example, 150 years ago there was no "sociology."
Third, "mutual admiration." That's really the issue, isn't it? It could be a suggestion that the standards of competence developed within the discipline are insufficiently rigorous. To use a different example, I might think that analytic philosophy is ultimately pointless, and that its practitioners are misguided in congregating with each other as though there really were a point to it, but I'd reserve the flourish "mutual admiration" for a field in which nobody ever thought somebody else did sub-standard work. I think you can make the argument that ELS may be flawed or pointless or overstated, but I doubt that it is without professional standards within the discipline.
Or it could be a suggestion that the community, society, or discipline resists "outsiders" questioning whether the discipline itself asks questions worth asking. (And there, those who live in glass houses...) Personally, I welcome the fact that people like Bill Henderson have made themselves experts on statistical and empirical techniques; like all interdisciplinary inquiry, however, the trick is to know enough about it to ask the right questions, but remain distant enough to supply competing concepts or questions. As in all disciplines, that distance may be tough once you've co-opted yourself into the discipline.
I do think that complex empirical modeling can mask a lot of questionable analysis. I posted a blog note a few years back after reading the fine print in Bebchuk's "Lucky Directors" and "Lucky CEOs," which purported to show that they had to be manipulating or backdating option issuance. I thought that the assumptions the authors made about what constituted normal or recurring option grants were simply inaccurate. But one had to dig extensively to get there.
I've also encountered the empirical equivalent of the joke about somebody searching under a streetlight for a lost piece of jewelry. An observer asks, "Why are you looking there? You lost it across the street." To which the searcher replies, "Yes, but there's no light over there." From time to time you hear the same response to the complaint that the data is not very robust: "Yes, but it's all we have."
Posted by: Jeff Lipshaw | July 07, 2010 at 12:32 PM
“a self-reinforcing mutual-admiration society”
That pretty much sums up the legal academy as a whole in a sentence. At least, that's how you are viewed by many of the real scholars with PhDs in rigorous disciplines on your campuses. And the fact that law professors are paid so much more is particularly infuriating.
Posted by: mms | July 07, 2010 at 09:24 PM
On the notion that it is a problem if "method itself validates the quality and usefulness of scholarship":
(1) How is that different from what happens in science generally?
(2) Is it better to *not* allow method to validate the quality and usefulness of scholarship? What social-science discipline apart from (non-empirical) law currently does a *worse* job of assessing the quality and usefulness of scholarship, or has *more* opaque standards for doing so?
Reasonable minds obviously differ, particularly when the prestige of their own scholarship is at stake, but personally, I will choose the devil that exposes itself for everyone to see, over the devil that hides from view, thank you very much.
Posted by: D | August 10, 2010 at 12:07 AM