
February 24, 2017

Comments


Enrique Guerra Pujol

"... at least [law reviews] exercise some reasonable control over authorship." Not always! Did President Obama really write this 56-page law review article, including all 300-plus footnotes, all by himself?
https://priorprobability.com/2017/01/15/professor-obama/

James Grimmelmann

You're running into the difference between experimental science and other disciplines. In law, the great bulk of the work is typically writing the paper; the system of attribution is designed to signal who did that work, because those are the people who should be recognized and promoted based on it. In biology or physics, the work also includes conducting the bench research itself, and it's important to the workings of the profession that the people who do that be recognized and promoted based on their contributions to it. So the purpose of byline credit is not only to signal who wrote the paper, but also who did the underlying research. Those numbers can be much larger. There is, as you note, a danger of handing out authorship credits like candy, but simply limiting the number of authors -- without some other system for signalling in a well-standardized way who did the work -- would undermine important professional institutions.

[M][@][c][K]

The basic difference is that law review articles are rarely read, often speculative, and usually the work of the authors alone. In science the article is simply a way of broadcasting the results of the research it describes; it is therefore considered a serious scientific sin to fail to credit collaborators in a science journal, because the article is not the point - the research was.

Writers of a scientific article who leave off the names of researchers are regularly criticised and even censured by scientific publications and the societies behind them. There is the famous Rosalind Frank omission in the DNA work, which probably cost her the Nobel Prize. In 1953, James Watson and Francis Crick published their landmark discovery of the structure of DNA, for which they and Maurice Wilkins later received the Nobel Prize. However, save an acknowledgement at the end of the paper, Rosalind Franklin was not recognised. Watson and Crick had, without her permission, gained access to Franklin's high-quality X-ray crystallographic photographs of DNA, which allowed them to correct their model and deduce the true structure of DNA. This has come to be regarded as a scandal. Robert Gallo's failure to credit Luc Montagnier and Françoise Barré-Sinoussi was a similar controversy, though they did justly receive the Nobel.


PaulB

Mac, your story as to why Frank didn't receive the Nobel Prize is wrong. She most certainly would have shared the award when Wilkins did, but she had died, and no posthumous awards are made.

PaulB

Correction-it's Rosalind Franklin

Steve L.

James and Mack: Yes, of course I understand the difference, but let me suggest that some form of recognition -- say, "contributor" or "collaborator" -- would be more accurate than "author," at least once we get beyond the first few dozen.

The disciplines themselves seem to recognize that they have a problem, otherwise there would have been no proposal to fix it.

justme

"It seems, however, that coauthors in science journals routinely run into the hundreds and even the thousands."


Routinely? No, papers with hundreds of authors are extremely rare, and thousands are almost unheard of.

[M][a][c][K]

I'd forgotten that Franklin died remarkably young (at 37), but it has nonetheless been considered a scandal, and her work did not get the recognition it deserved until much later, in the 1970s.

The principle remains: leaving a collaborator off is considered scandalous - it's also dangerous legally (the unnamed-inventor problem has surfaced a few times in major patent cases). A lot of scientific research involves large teams, and if a researcher gets a reputation for such activity, it can hurt their ability to secure collaborators. Still, you hear stories of junior researchers being left off articles describing their work, or alternatively of professors forcing their names onto articles for work they had nearly nothing to do with.

[M][a][c][K]

I'd put the number of authors on most scientific papers at between 3 and 8. More is rare, and usually occurs in fields like high-energy physics, where teams can be very large.

The comments to this entry are closed.
