


February 27, 2014




Academic "research" in the law school community has strayed too far, too often, from its core mission, and has thus cut off the principal source of "evaluators": the legal community writ large. (Listed above, one supposes, as "other evaluators.") The currency among legal scholars of late is, in the main (though not entirely, to be sure), scholarship of little or no value to the legal community. Writing on subjects of interest and benefit to the legal community, which experiences the law as it is practiced in the US, is not the predominant content of current legal scholarship.
Thus, the constituency that should be the main consumers of legal scholarship is largely oblivious to the work of much of the law academy. This makes it much harder to gain traction.
SSRN downloads notwithstanding, of course. Those are really important.


I'm baffled. Salganik's study purports to show that whether songs became popular in what might be described as experimental cultural environments was due to "chance," not the intrinsic aesthetic merits of the songs, suggesting that such essential values don't exist. The comment about the failure of supposed experts to predict the success of the Harry Potter novels and the Beatles is presumably meant to make the same point: they didn't succeed because they were inherently "good"; they succeeded by "chance" and were then defined as "good" because they succeeded. That makes the last paragraph a non sequitur. If the only way academic research can be deemed "successful" is by chance, how can law review editors or anyone else use selection criteria that counterbalance rather than reinforce chance? The notion seems to be that pieces of academic research actually have "true" values that could be discerned by sufficiently sophisticated readers. Isn't that supposition exactly what the rest of the post refutes?

David Orentlicher

My concern is that editors and others assume they are operating on the basis of quality rather than chance, so they make distinctions that are not warranted. By doing so, they unduly narrow their pool of candidates and disproportionately reward those who remain in the pool. By recognizing the role of chance, they would more fairly distribute their benefits.

Orin Kerr

I think most people understand that success in contests with very low odds of success requires a lot of luck. The world is far too random, and there is too much disagreement about what is good and bad, for the good (however defined) to magically prevail in low-probability situations. But I'm curious about David's last comment. What are the "distinctions that are not warranted"? What is the fair distribution? I gather you have some scenarios in mind, but I'm not sure what they are.


One can't presume to answer for David.
David may be asking whether development of "brand names" may incorporate an element of chance or "luck."
Perhaps there is a paradox here.
David states: "Movie producers, television executives, and book publishers are notoriously ineffective at sorting the likely to succeed from the unlikely to succeed."
But, it is the sorting that often leads to success. In other words, if we suppose the existence of many works with the "essential value" to garner "benefits," then the luck of "a big break" that brings such work to the fore can be a determinative factor of future success. Some works may garner more success than other, undiscovered or unrecognized work simply as a result of this sorting. When we speak of movies, for example, are we not speaking of a huge number of really awful movies with a few exceptions, rather than the other way around?
Are we asking whether this is so, and whether the sorting is at times fortuitous, or at times based on factors other than quality? Or, do we believe that one makes one's own "luck" and that the cream will always rise to the top? (Luck being a version of chance, as used here.)
In legal publishing, we know about letterhead bias. And name recognition. Have we not all read some real stinkers in major law journals that quite obviously were accepted because of the "brand" on the article? Are not many law review articles, even in major journals, like so many movies (i.e., unreadable and not worth reading)?
Is this David's point? Perhaps not. He states: "My concern is that editors and others assume they are operating on the basis of quality rather than chance."

John Nicholas

Anon is missing the point. There is a difference between success in academic research and success in the marketplace. What counts as good academic research should be based on well-recognized criteria; what succeeds in the market can be based on "taste," for which there might be no recognized criteria, or for which the criteria change depending on the "market." I admit, however, that even in academia there is a snowball effect: once someone gains name recognition, it opens many doors, even if the quality of the research (judged by well-recognized criteria) is not so good (Orin Kerr's comment). And the "well-recognized" criteria in academia can still allow a lot of wiggle room, depending on the discipline: the soft sciences have more wiggle than the hard, and the arts and humanities still more. Isn't that what we're talking about, the Beatles and the Mona Lisa? Nonetheless, whatever the criteria, David's findings are fascinating.


