Inside Higher Ed has this story about a proposal in the sciences to limit “credit” for articles with too many co-authors:
Under the proposal, those who contribute to co-authored papers would receive only a fraction of a research credit for their contribution, instead of the full authorship credit that they now enjoy -- when universities or government agencies use counts of papers as part of a review process.
This will seem a little strange to law professors for a couple of reasons. First, most U.S. law schools do not have strict numerical measures for tenure or promotion review. There is certainly an expectation of scholarly productivity, but it is usually determined more holistically than by just adding up the number of publications.
Even more surprising, however, is the sheer number of coauthors who are evidently listed on some scientific papers. In law journals, it is not unusual to see two coauthors, and occasionally three or four. It seems, however, that coauthors in science journals routinely run into the hundreds and even the thousands. According to the Inside Higher Ed article, “the current record stands at 5,154 for a 2015 paper published by the team at the Large Hadron Collider at the European Organization for Nuclear Research (CERN).”
In that light, it seems pretty reasonable to limit any single author’s “credit” once the number of coauthors goes beyond a handful. In fact, the proposal is pretty generous, as the minimum credit would never be less than one-third.
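To see how such a rule would play out in practice, here is a minimal sketch (assuming, purely for illustration, that each of n coauthors receives max(1/3, 1/n) of a paper credit; the article reports the one-third floor but does not spell out the exact formula):

    # Hypothetical illustration only: the article mentions a one-third floor,
    # but not the precise allocation rule, so max(1/3, 1/n) is assumed here.
    def fractional_credit(n_authors: int) -> float:
        """Per-author credit for a paper with n_authors coauthors (assumed rule)."""
        if n_authors < 1:
            raise ValueError("a paper needs at least one author")
        return max(1 / 3, 1 / n_authors)

    for n in (1, 2, 3, 10, 500, 5154):
        print(f"{n:>5} coauthors -> {fractional_credit(n):.3f} credit each")

Under that reading, even the 5,154-author CERN paper would give each listed name a third of a paper credit, which is why the proposal looks generous rather than punitive.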
To me, the more interesting question is how peer-reviewed journals have gotten to the point of allowing such “gift authorship” or “byline banditry.” (The quoted terms are used in the Inside Higher Ed article.) Perhaps 5000 scientists contributed significantly to the CERN research that led to the paper, but that is why we have star footnotes, in which all contributors are thanked. It is inconceivable that 5000 individuals (or even 500) could play any meaningful role in actually writing an article.
Complain as we do about law reviews, at least they exercise some reasonable control over authorship.
"... at least [law reviews] exercise some reasonable control over authorship." Not always! Did President Obama really write this 56 page law review article, including all 300 plus footnotes, all by himself:
https://priorprobability.com/2017/01/15/professor-obama/ ?
Posted by: Enrique Guerra Pujol | February 24, 2017 at 11:57 AM
You're running into the difference between experimental science and other disciplines. In law, the great bulk of the work is typically writing the paper; the system of attribution is designed to signal who did that work, because those are the people who should be recognized and promoted based on it. In biology or physics, the work also includes conducting the bench research itself, and it's important to the workings of the profession that the people who do that be recognized and promoted based on their contributions to it. So the purpose of byline credit is not just to signal who wrote the paper, but also who did the underlying research. Those numbers can be much larger. There is, as you note, a danger of handing out authorship credits like candy, but simply limiting the number of authors -- without some other system for signaling in a well-standardized way who did the work -- would undermine important professional institutions.
Posted by: James Grimmelmann | February 24, 2017 at 12:33 PM
The basic difference is that law review articles are rarely read, often speculative, and usually the work of the authors alone. In science the article is simply a way of broadcasting the results of the research it describes; it is therefore considered a serious scientific sin to fail to credit collaborators in a science journal, because the article is not the point - the research is.
Writers of a scientific article who leave off the names of researchers are regularly criticised and even censured by scientific publications and the societies behind them. There is the famous Rosalind Franklin omission in the DNA work, which probably cost her the Nobel Prize. In 1953, James Watson and Francis Crick published their landmark discovery of the structure of DNA, for which they and Maurice Wilkins later received the Nobel Prize. However, save for an acknowledgement at the end of the paper, Rosalind Franklin was not recognised. Watson and Crick had, without her permission, gained access to Franklin's high-quality X-ray crystallographic photographs of DNA, which allowed them to correct their model and deduce the true structure of DNA. This has come to be regarded as a scandal. Robert Gallo's failure to credit Luc Montagnier and Françoise Barré-Sinoussi was a similar controversy, though they justly did receive the Nobel.
Posted by: [M][@][c][K] | February 24, 2017 at 03:53 PM
Mac, your story as to why Frank didn't receive the Nobel Prize is wrong. She most certainly would have won the award when Wilkins did, but she had already died. No posthumous awards are made.
Posted by: PaulB | February 24, 2017 at 04:38 PM
Correction-it's Rosalind Franklin
Posted by: PaulB | February 24, 2017 at 05:11 PM
James and Mack: Yes, of course I understand the difference, but let me suggest that some form of recognition -- say, "contributor" or "collaborator" -- would be more accurate than "author," at least once we get beyond the first few dozen.
The disciplines themselves seem to recognize that they have a problem, otherwise there would have been no proposal to fix it.
Posted by: Steve L. | February 24, 2017 at 06:47 PM
"It seems, however, that coauthors in science journals routinely run into the hundreds and even the thousands."
Routinely? No, papers with hundreds of authors are extremely rare. Thousands almost unheard of.
Posted by: justme | February 25, 2017 at 07:44 PM
I'd forgotten that Franklin died remarkably young (37), but it has nonetheless been considered a scandal, and her work did not get the recognition it deserved until much later, in the 1970s.
The principle remains: leaving a collaborator off is considered scandalous - it's also dangerous legally (the unnamed-inventor problem has surfaced a few times in major patent cases). A lot of scientific research involves large teams. If a researcher gets a reputation for leaving collaborators off, it can hurt their ability to secure collaborators. Still, you hear stories of junior researchers being left off articles describing their work, or alternatively of professors forcing their names into articles for work they had nearly nothing to do with.
Posted by: [M][a][c][K] | February 26, 2017 at 08:18 AM
I'd put the number of authors on most scientific papers between 3 and 8. More than that is rare, and usually in fields like high-energy physics where teams can be very large.
Posted by: [M][a][c][K] | February 26, 2017 at 08:20 AM