Legal academics who work across disciplines sometimes find themselves in the uncomfortable position of explaining to their stunned colleagues the process by which second- and third-year law students, armed with author c.v.s, decide what gets published and where.
Well, get ready to get your schadenfreude on. For the past 10 months, John Bohannon, a contributing correspondent for Science magazine, has been conducting a sting of (other) science journals and their peer review processes. In an exercise reminiscent of the famed Sokal hoax, Science submitted to 304 journals a bogus paper written by a fictitious researcher from a nonexistent institution. The paper described "the anticancer properties of a chemical that [the fictitious researcher] had extracted from a lichen," and according to Bohannon, "[a]ny reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately" and rejected it promptly. And yet, more than half of the journals accepted the paper. Recall that the bogus paper purports to report the discovery of anticancer properties in a chemical extracted from a lichen. Let the prospect of bogus cancer research published in peer-reviewed medical journals sink in.
Finally, given my IRB obsession, I can't help but comment on that angle. Science, like most journals, requires that any study involving human subjects have received their informed consent as well as IRB approval. The editors of the targeted journals are pretty clearly unwitting "human subjects," as federal regulations define that term. An IRB would have had to waive the usual requirement of informed consent, and to sign off on the privacy, psychological, and financial risks to those editors who agreed to publish the bogus paper. The Science article, after all, names names. Science even published supplementary data containing email correspondence between Bohannon and various editors (only bank account numbers are redacted). And Bohannon reports that at least one publisher has vowed to shutter its offending journal's doors by the end of the year as a result of the sting. Trust me when I tell you that many IRBs would absolutely worry about these kinds of risks to subjects.
Now, I don't know for a fact that Bohannon failed to obtain IRB approval. But I've seen no mention of any such approval. My guess is that both he and Science regard this as investigative journalism rather than as human subjects research, perhaps because, although Bohannon is a Ph.D.-trained scientist and published the results of his study in one of the premier science journals in the world, he makes his living as a science writer. It's one of the oddities of our IRB system that what some can do, if at all, only after often-protracted prospective third-party review, others can do at will. Academics are usually based in institutions that have contracted with federal regulators to subject to IRB review all human subjects research conducted under the institution's auspices, while journalists and other writers, for instance, are typically based in no such institutions.
It's another oddity of the IRB system that all human subjects are created equal: when determining how much risk investigators ethically may ask subjects to face, the federal regulations do not allow IRBs to discriminate between, on one hand, patient-subjects in a cancer trial and, on the other hand, incompetent editors reviewing a bogus paper about lichen's anticancer properties for a medical journal—or terrorists, or ICU staff who fail to practice good hygiene in caring for their patients.
For all its methodological flaws, the Science sting at a minimum provided more details of what we already knew to be the seedy underbelly of some "peer review" journals. Exposing this underbelly is important. I'm not saying it shouldn't have been done. To the contrary, my point is that it's worth pausing to reflect on the fact that similarly critical work often cannot be done by those best trained to do it.
Incidentally, Science's peer review sting is part of a special issue, Communication in Science: Pressures and Predators. The whole thing is worth a read.
[Cross-posted at Bill of Health]
Thanks for this, Michelle - it's very interesting. I pretty regularly get emails from highly dubious "peer review" "open access" journals in both philosophy and law. These charge high fees to publish, and they pretty clearly survive on those fees. The offers are often written in pretty bad English. It's very hard for me to believe that publishing in them would be worth anything at any school that had quality control. There are some very good open-access journals, and some of the best, like Philosophers' Imprint, do not charge author fees, though some reputable ones do. It's clear, though, that there is a market for publishing crap, and people willing to pay to have their bad work published.
Posted by: Matt | October 04, 2013 at 09:13 AM
Great post, Michelle. NPR actually reported this morning that he is a "molecular biologist and visiting researcher at Harvard University," so I wonder whether this shouldn't have been subject to IRB approval by Harvard.
Posted by: Ethan White | October 04, 2013 at 11:14 AM
Forgot to post the link:
http://www.npr.org/2013/10/04/229103215/open-access-journals-hit-by-journalists-sting
His affiliation is reported starting at around 5:13.
Posted by: Ethan White | October 04, 2013 at 11:15 AM
I'm not sure what you mean by saying that IRBs treat all subjects as equal. Yes, all subjects must be treated with the usual standards of informed consent, privacy, and so on. However, subjects who are considered particularly vulnerable for one reason or another are routinely given MORE protection, and there are often procedural differences in the way such studies are considered. For example, studies on minors, on prisoners, on crime victims, on people with limited mental status, etc. will generally receive much more scrutiny and are not eligible for expedited reviews. It would not be surprising for an IRB to approve one study with college-student adults as subjects but to deny an equivalent study with elderly hospital patients, for example--or to demand additional safeguards for the second.
However, I think you are right that IRBs don't have the ability to say something like, "These subjects don't deserve the protection of the usual rules," even for subjects identified as criminals or evil people. But I suspect that this is something that in the long run we wouldn't want IRBs deciding.
Posted by: Gregory | October 04, 2013 at 04:36 PM
I'm a little confused about why IRB review is required. If the research was not federally funded, there's no need for IRB review unless an institution requires it. If the institution, or in this case the journal, doesn't receive any HHS or FDA funding, they're not even required to _have_ an IRB. (Maybe it's the use of federal grant funds to pay publication fees that kicks in the "you receive federal funds" rule, in which case my argument falls apart.)
If, however, Bohannon was doing the research with his Harvard hat on, he'd need IRB review and approval as long as Harvard had not "un-checked the box" on its OHRP assurance or had any other more restrictive policy.
Posted by: janel kay | October 04, 2013 at 07:32 PM
Matt and Ethan: Thanks.
Gregory: Yes, IRBs absolutely treat some groups of subjects as more in need of protection than others, variously according to federal regulations, agency guidance, or the inclinations of individual IRBs. I meant, as you guessed, that IRBs are not free to "define protection down" from the baseline for groups of subjects that they deem evil, etc. And yes, I agree that that's a good thing; IRBs already make too many subjective decisions and other decisions they lack the expertise to make.
Janel: Sorry to have been unclear. I'm not necessarily arguing that IRB review "was required" in this case. When and why is IRB review required for human subjects research (HSR)? When that HSR: (1) is conducted by a Common Rule agency; (2) is funded by a Common Rule agency; (3) involves an institution "engaged in" HSR that has checked the box; (4) is conducted at an institution with its own internal policy requiring IRB review (often governed by faculty handbooks incorporated by reference into employment contracts); or (5) is published in a journal that requires IRB review of any HSR. 1 and 2, as you note, don't apply. 3 and 4 might, depending on the nature of Bohannon's affiliation with Harvard.
Mostly, though, I was referring to #5. Science's publication policy provides: "Informed consent must be obtained for studies on humans after the nature and possible consequences of the studies were explained. All research on humans must have IRB approval." I don't know how Science defines "research on humans," but if they're borrowing the widely used Common Rule definitions (which would make sense), the sting was clearly "research" (systematic, designed to produce generalizable knowledge) and it clearly involved "human subjects" (interaction, intervention).
Now, as I said, Science's/Bohannon's argument (if they even thought about the IRB issue at all, which I highly doubt--and that itself is illuminating) is likely that this wasn't HSR, it was investigative journalism. If you asked them why it ought to count as one rather than the other, I suspect they'd point to Bohannon's status as a journalist (let's bracket his apparent Harvard connection). Although it's true that the Common Rule scheme ends up regulating HSR--or not--largely based on the status of the researcher (e.g., academic or not), that's an artifact of the limits of federal jurisdiction, not a normative principle (to the contrary, many, occasionally including the federal government itself, have argued that all HSR should be subject to IRB review). Surely, the overwhelming majority of manuscripts reporting the results of HSR that Science publishes (or reviews) describe research conducted at academic institutions where IRB review is probably already required. But what if an "independent scholar" with no institutional affiliation, and thus not subject to the Common Rule, were to submit such a manuscript after having tested human subjects? Would you expect Science to waive its IRB requirement on grounds of the researcher's status? I wouldn't. It's an ethical principle that journals have voluntarily adopted, not a federal regulation whose reach is necessarily limited. And if that analysis is correct, then it's at least a little weird that Science published the results of HSR without either IRB review or informed consent, presumably simply because of the arbitrary status of the investigator as a "writer" rather than an "academic."
Posted by: Michelle Meyer | October 04, 2013 at 09:16 PM
WHERE THE FAULT LIES
To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for age and impact factor) to see what happens.
But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based "Gold OA") is premature, as are plans by universities and research funders to pay its costs:
Funds are short and 80% of journals (including virtually all the top, "must-have" journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very, very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).
What is needed now is for universities and funders to mandate OA self-archiving (of authors' final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online ("Green OA").
That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.
The natural way to charge for the service of peer review then will be on a "no-fault basis," with the author's institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.
That post-Green, no-fault Gold will be Fair Gold. Today's pre-Green (fee-based) Gold is Fool's Gold.
None of this applies to no-fee Gold.
Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume -- without testing it -- that non-OA journals would have come out unscathed, if they had been included in the sting.
But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:
Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record -- and without paying an extra penny.
But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA -- which, with Green OA, are identical to those of non-OA.
For some peer-review stings of non-OA journals, see below:
Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.
Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.
Harnad, S. (1998/2000/2004). The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, B. (Ed.) (2004), Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.
Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).
Posted by: Stevan Harnad | October 05, 2013 at 08:06 AM
"Legal academics who work across disciplines sometimes find themselves in the uncomfortable position of explaining to their stunned colleagues the process by which second- and third-year law students, armed with author c.v.s, decide what gets published and where."
There isn't really much of an explanation. Frankly, most law professors (at least the ones who have only a JD and limited practice experience) wouldn't be particularly competent at evaluating articles themselves. Such evaluation is well beyond law students' competence. I'll be honest, I never realized how academically substandard law schools were until I returned to school after practicing law for a few years to get my PhD.
Posted by: Zorba | October 05, 2013 at 01:55 PM