
June 23, 2015



Enrique Guerra-Pujol

I'm actually quite interested, fascinated even, by this topic, and I think the point about "moral illusions" is a very important one, but it's going to take me some time to read through this, digest it, and offer some specific feedback ...

Enrique Guerra-Pujol

I think the problem with A/B testing (versus just "imposing" a single option, as in the 401k example) is that one still has a choice when there is only a single option--e.g., I don't have to sign up for Facebook if I don't want to--whereas I have no choice if I am being tested without my consent, as in the Facebook mood contagion experiment. We can spend thousands of words parsing nuances upon nuances, but ultimately the issue comes down to one's view of ethics: if you are a "consequentialist," you are more likely to approve of nonconsensual A/B testing; if, however, you take a "duty-" or "principles-based" view of ethics, you are less likely to approve. In any case, to the extent this is an ethical issue, don't we have to concede up front that there are no "right answers," instead of pretending that there are?

Michelle Meyer

Hi Enrique. Thanks for the comments.

Let me take your latter comments first. I think that there is a more or less correct interpretation of the Belmont Report and of the Common Rule, and of how both apply to the various fact patterns I've analyzed, and you won't be surprised to hear that I believe my interpretations are sound. For instance, it is simply a legal and historical fact, not my mere preference, that these documents permit nonconsensual human subjects research under certain circumstances and that IRBs have, since these documents have been in place, blessed such studies accordingly.

Are these documents--which, I again hasten to emphasize, do not mandate consequentialism (or else the balancing test would hardly be that consent is the default rule and can be waived only when the research can't feasibly be carried out any other way, involves only minimal risk, and does not violate any other rights of subjects) but instead lay out *prima facie* principles to be balanced--themselves morally "right"? No more and no less than a zillion other policy decisions, some of which are codified, such as mandatory vaccination, seatbelt, and helmet laws, which also infringe liberty in order to protect welfare (sometimes that of the person whose liberty is being infringed, sometimes that of others).

I don't think it's reasonable to insist on an absolute rule of consent in as broad a category of activity as "research," as the fed regs define it (must Google obtain the informed consent of every visitor to its website before it A/B tests several shades of blue?). It's not a reasonable way to balance liberty and welfare (in the case of QI/QA A/B testing) or corporate autonomy (in the case of A/B testing website design), and it fails to cohere with the many other ways we allow people to do things to other people without their permission. I don't think I "pretend" to be doing anything more than making that argument. I am certainly not offering a mathematical proof of the ethics of quality improvement A/B testing, because I don't think ethics, policy, or law work that way.

On the other hand, let's not be too nihilistic. The fact that ethics, policy, and a whole lot of law do not lend themselves to proof of singularly "right" answers doesn't mean that there aren't better and worse answers and better and worse arguments in support of various answers.

As for your first comment about having a choice about whether to be subject to untested innovations and existing practices versus having no choice to be subject to A/B testing, I think there's certainly something to what you're saying here, but I have a couple of push backs:

(1) I'm more skeptical than you are of the claim that we always have meaningful choices about whether to be subject to untested innovations and existing practices. Sometimes this will be true, but other times, not so much. Untested laws are an obvious example: we generally don't get to opt out of those. Re: workplace policies like 401(k) matching letters, an employee may or may not have a meaningful choice about whether to hold a job (probably not). If she does need to work, she will necessarily be subject to countless decisions made by management, and if she's lucky, receiving some sort of 401(k) matching letter will be among those decisions. As for Facebook, some of the fiercest critics of its experiment believe that Facebook should be regulated like a public utility precisely because, for so many people, there is effectively no choice about whether to have an account.

(2) Even where there is some element of choice over whether to subject oneself to an untested innovation or existing practice (say, OkCupid or Facebook--if you reject the public utility argument), this choice is, by definition, not an informed one. We don't know the effects of untested As without rigorously comparing them to Bs. Would you want to say that, in 2006 when News Feed was rolled out, Facebook users assumed its psychological risks because they could have chosen to quit Facebook at that time (or not to sign up thereafter)?

Ideally, when someone rolls out an untested innovation (law, policy, product, service, other practice), those affected would be told--or simply realize--that there are risks and uncertainties involved. When an innovator adopts a culture of continual testing, as I think they generally should, they would tell those affected of this culture up front, even if, in order to maintain the validity of the results or due to the sheer volume, they cannot fully disclose the details of each A/B test in advance. There would be some sort of specific statement about the substantive parameters of A/B testing (only QI/QA-type research, or also to test orthogonal questions?) and the process behind it (advance corporate IRB review? commitment to publishing the results? to acting on them in certain circumstances?).
