
October 04, 2013




Thanks for this, Michelle; it's very interesting. I pretty regularly get emails from highly dubious "peer review" "open access" journals in both philosophy and law. These charge high fees to publish, and they pretty clearly survive on those fees. The offers are often in pretty bad English. It's very hard for me to believe that publishing in them would be worth anything at any school that had quality control. There are some very good open-access journals, and some of the best, like Philosophers' Imprint, do not charge author fees, though some reputable ones do. It's clear, though, that there is a market for publishing crap, and people willing to pay to have their bad work published.

Ethan White

Great post, Michelle. NPR actually reported this morning that he is a "molecular biologist and visiting researcher at Harvard University," so I wonder whether this should have been subject to IRB approval by Harvard.

Ethan White

Forgot to post the link:

His affiliation is reported starting at around 5:13.


I'm not sure what you mean by saying that IRBs treat all subjects as equal. Yes, all subjects must be treated with the usual standards of informed consent, privacy, and so on. However, subjects who are considered particularly vulnerable for one reason or another are routinely given MORE protection, and there are often procedural differences in the way such studies are considered. For example, studies on minors, on prisoners, on crime victims, on people with limited mental status, etc. will generally receive much more scrutiny and are not eligible for expedited reviews. It would not be surprising for an IRB to approve one study with college-student adults as subjects but to deny an equivalent study with elderly hospital patients, for example--or to demand additional safeguards for the second.

However, I think you are right that IRBs don't have the ability to say something like, "These subjects don't deserve the protection of the usual rules," even for subjects identified as criminals or evil people. But I suspect that this is something that in the long run we wouldn't want IRBs deciding.

janel kay

I'm a little confused about why IRB review is required. If the research was not federally funded, there's no need for IRB review unless an institution requires it. If the institution, or in this case the journal, doesn't receive any HHS or FDA funding, they're not even required to _have_ an IRB. (Maybe it's the use of federal grant funds to pay publication fees that kicks in the "you receive federal funds" rule, in which case my argument falls apart.)

If, however, Bohannon was doing the research with his Harvard hat on, he'd need IRB review and approval as long as Harvard had not "un-checked the box" on its OHRP assurance or had any other more restrictive policy.

Michelle Meyer

Matt and Ethan: Thanks.

Gregory: Yes, IRBs absolutely treat some groups of subjects as more in need of protection than others, variously according to federal regulations, agency guidance, or the inclinations of individual IRBs. I meant, as you guessed, that IRBs are not free to "define protection down" from the baseline for groups of subjects they deem evil, etc. And yes, I agree that that's a good thing; IRBs already make too many subjective decisions, and others they lack the expertise to make.

Janel: Sorry to have been unclear. I'm not necessarily arguing that IRB "was required" in this case. When and why is IRB review required for human subjects research (HSR)? When that HSR: (1) is conducted by a Common Rule agency; (2) is funded by a Common Rule agency; (3) involves an institution "engaged in" HSR that has checked the box; (4) is conducted at an institution with its own internal policy requiring IRB review (often governed by faculty handbooks incorporated by reference into employment contracts); or (5) is published in a journal that requires IRB review of any HSR. 1 and 2, as you note, don't apply. 3 and 4 might, depending on the nature of Bohannon's affiliation with Harvard.

Mostly, though, I was referring to #5. Science's publication policy provides: "Informed consent must be obtained for studies on humans after the nature and possible consequences of the studies were explained. All research on humans must have IRB approval." I don't know how Science defines "research on humans," but if they're borrowing the widely used Common Rule definitions (which would make sense), the sting was clearly "research" (systematic, designed to produce generalizable knowledge) and it clearly involved "human subjects" (interaction, intervention).

Now, as I said, Science's/Bohannon's argument (if they even thought about the IRB issue at all, which I highly doubt--and that itself is illuminating) is likely that this wasn't HSR, it was investigative journalism. If you asked them why it ought to count as one rather than the other, I suspect they would point to Bohannon's status as a journalist (let's bracket his apparent Harvard connection). Although it's true that the Common Rule scheme ends up regulating HSR--or not--largely based on the status of the researcher (e.g., academic or not), that's an artifact of the limits of federal jurisdiction, not a normative principle (to the contrary, many, occasionally including the federal government itself, have argued that all HSR should be subject to IRB review). Surely, the overwhelming majority of manuscripts reporting the results of HSR that Science publishes (or reviews) were conducted at academic institutions where IRB review is probably already required. But what if an "independent scholar" with no institutional affiliation, and thus not subject to the Common Rule, were to submit such a manuscript after having tested human subjects? Would you expect Science to waive its IRB requirement on grounds of the researcher's status? I wouldn't. It's an ethical principle that journals have voluntarily adopted, not a federal regulation whose reach is necessarily limited. And if that analysis is correct, then it's at least a little weird that Science published the results of HSR without either IRB review or informed consent, presumably simply because of the arbitrary status of the investigator as a "writer" rather than an "academic."

Stevan Harnad


To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require also submitting to subscription journals (perhaps matched for age and impact factor) to see what happens.

But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based "Gold OA") is premature, as are plans by universities and research funders to pay its costs:

Funds are short and 80% of journals (including virtually all the top, "must-have" journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very, very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).

What is needed now is for universities and funders to mandate OA self-archiving (of authors' final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online ("Green OA").

That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.

The natural way to charge for the service of peer review then will be on a "no-fault basis," with the author's institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

That post-Green, no-fault Gold will be Fair Gold. Today's pre-Green (fee-based) Gold is Fool's Gold.

None of this applies to no-fee Gold.

Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume -- without testing it -- that non-OA journals would have come out unscathed, if they had been included in the sting.

But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:

Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record -- and without paying an extra penny.

But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA -- which, with Green OA, are identical to those of non-OA.

For some peer-review stings of non-OA journals, see below:

Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.

Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press.

Harnad, S. (1998/2000/2004). The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, D. (Ed.) (2004), Peer Review: A Critical Inquiry. Rowman & Littlefield, pp. 235-242.

Harnad, S. (2010). No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine, 16(7/8).


"Legal academics who work across disciplines sometimes find themselves in the uncomfortable position of explaining to their stunned colleagues the process by which second- and third-year law students, armed with author c.v.s, decide what gets published and where."

There isn't really much of an explanation. Frankly, most law professors (at least those who have only a JD and limited practice experience) wouldn't be particularly competent at evaluating articles themselves; the task is well beyond law students' competence. I'll be honest: I never realized how academically substandard law schools were until I returned to school to get my PhD after practicing law for a few years.


