Here are a few clarifications and further thoughts arising out of my co-written Sunday NYT op-ed based on my forthcoming law review article (here, I speak only for myself, and not for my op-ed co-author).
First, as is usually the case with op-eds, we didn’t write, or even have any advance notice of, the headline. It’s easy to see why the Times chose it, as it’s clickbaity, but “Corporations, Please, Experiment On Us. You don’t even need our consent” is not very reflective of the nuances of our position. Now on to some of those nuances.
A Clarification About the Op-Ed’s Position on IRB Review and Debriefing
On Twitter, James Grimmelmann worries that New York Times readers will interpret one part of the op-ed, as he did, to be making a problematic claim. In the piece, we ask readers to imagine a hypothetical CEO who worries that some of her employees aren't taking optimal advantage of the company's generous 401(k) matching program. She has a hunch that adding to the usual matching letter information telling each employee what his or her similarly-aged peers are saving would nudge the low-savers to save more. We explain that the CEO could implement this peer information letter policy in the next enrollment cycle and observe the result, but that, regardless of whether savings went up or down, she'd never really know what effect, if any, the policy had without running a randomized, controlled experiment in which half the employees receive letters with no peer information and half receive letters with peer information. We also explain that telling employees in advance that the company would be sending out different letters, and why, would badly bias the results by altering employees' behavior. We then explain that our hypothetical CEO isn't so hypothetical:
Economists do help corporations run such experiments, but many managers chafe at debriefing their employees afterward, fearing that they will be outraged that they were experimented on without their consent. A company’s unwillingness to debrief, in turn, can be a deal-breaker for the ethics boards that authorize research. So those C.E.O.s do what powerful people usually do: Pick the policy that their intuition tells them will work best, and apply it to everyone.
It is this paragraph that has James worried:
Your argument appears to run: (1) people will be outraged about being experimented on if they're debriefed; (2) managers "chafe" at de-briefing because of the outrage, (3) IRBs would require debriefing; so (4) it’s reasonable for companies not to go through IRBs at all [and to conduct the nonconsensual A/B test of 401(k) letters with no debriefing afterward].
(The above is three tweets strung together but otherwise verbatim; the bits in brackets are my words, but I think they fairly reflect James’s other tweets and the logic of his argument. I trust he’ll tell me if I’m wrong.)
To be clear: No, this is not the position we intended to take in the op-ed. I thank James for bringing this possible misreading to my attention so that I can correct it here, at least. More after the jump.
Premises 1 through 3 roughly track what the op-ed says: managers do chafe at debriefing, and an unwillingness to debrief can be a deal-breaker for the ethics boards that authorize research. But none of this logically entails conclusion 4: that the solution to this lamentable situation is for academic economists either to lie to their IRBs about whether debriefing occurred or not to submit their protocols to IRBs in the first place (in most cases, a violation of university policy and faculty contracts, if not federal regulations). It doesn't even entail the conclusion that researchers not subject to IRB review by law or institutional policy should forgo either IRB review (or some sort of corporate analog) or debriefing.
In fact, at one point a few hours before the piece went live online, we were still fiddling with the wording for purposes of space and the copy desk’s style requirements, and an editor changed some wording in one of the sentences at issue so that it read: “Unfortunately, a company’s unwillingness to debrief can be a deal-breaker for the ethics boards that authorize research.” I specifically said that I did not want to imply that it was “unfortunate” that IRBs sometimes require debriefing, and reworded the sentence to read as it now does.
No, the solution we advocate—and the point of our writing for a public audience—is to try to explain what the A/B illusion is and why we should all try to overcome it, much as others have advocated that people learn to recognize and try to stem the ill effects of hyperbolic discounting and confirmation bias. If no one were afflicted with the A/B illusion, then perfectly reasonable debriefing requirements would stand as no obstacle to the outcome we want: evidence-based employee retirement savings policy. And so, in our concluding paragraph, we call for all of us (because, like these other cognitive biases and heuristics, we think that the A/B illusion afflicts all of us from time to time) to “cast off the A/B illusion and applaud” “low-risk efforts…to rigorously determine the effects of” “unilateral choices that affect us” made by “people in power.”
On my reading of our op-ed, we're saying that low-risk research for the purpose of determining the effects of innovations on those affected by them (not of learning more about some random scientific question that is orthogonal to the company-user/employee relationship) shouldn't be viewed as per se more morally suspicious than simply and unilaterally imposing one untested option, with unknown effects, on everyone, even if such an experiment must, to ensure the validity of the data (not just because getting informed consent is inconvenient), be nonconsensual (after all, we necessarily cannot give informed consent to the unknown effects of unilaterally imposed A-only practices). To the contrary, a better rule of thumb is that, given a choice between these two options, an A/B test will generally be preferable to an untested A-only practice. (There is, however, a third option; see below.)
Perhaps the best interpretation of the piece, then, is that it takes no position on debriefing or IRB review, one way or the other. That's because the emphasis in the op-ed is on the A/B illusion, not on laying out the process by which such low-risk, necessarily nonconsensual, quality improvement experiments are conducted. But I would sooner characterize the op-ed as implicitly endorsing those things than I would interpret it as explicitly or even implicitly rejecting them, much less urging people to flout them in violation of law or policy.
Ryan Calo raises some similar concerns. He tweets that our op-ed “has [an] obvious flaw: Just because some A/B testing is ok doesn’t mean all experiments are. So, how do we know which is ok and which [are] not w/out notice? Because, believe this: sometimes firms are experimenting to extract rent. I’ll bet you $9.99.” In a follow-up tweet, he endorses the view, which he has previously written about, that corporations should use consumer subject review boards.
For reasons I explain above, I disagree that this is a flaw of the op-ed. Again, the op-ed is simply silent about the role of IRBs or some sort of alternative. It does not state or assume that nonconsensual low-risk research can occur with only the researcher deciding what constitutes low risk. Making the case that nonconsensual “human experimentation” can ever be ethically acceptable—much less laudable or sometimes ethically obligatory—seemed like enough to tackle in 800 words.
Conversely, the op-ed is not silent about other (substantive, as opposed to process-oriented) criteria by which we might begin to distinguish acceptable from unacceptable nonconsensual A/B tests. To repeat: we say that low-risk research to determine the effects of innovations on those affected by them (not to learn more about some random scientific question that is orthogonal to the company-user/employee relationship, or to extract rent from users), even if it must, for the validity of the data (not just because getting informed consent is inconvenient), be nonconsensual, shouldn't be viewed as per se more morally suspicious than simply and unilaterally imposing one untested option, with unknown effects, on everyone. Each of those caveats is important and limits the scope of our argument, and although we can only briefly assert them in an op-ed, my article discusses each at length. My article also explicitly acknowledges that some corporate research, like most academic research, does not take the form of quality assurance/quality improvement activities in which subjects exposed to (minimal) risks in the study are randomly drawn from the same population that is expected to benefit from the knowledge produced by the study.
Beyond the Op-ed: My Views on IRB Review and Debriefing
In fact, I have endorsed, again and again, both debriefing and prospective group review (by an IRB or some similar body), especially of nonconsensual research, regardless of whether these are required by law. The op-ed itself links to my underlying law review article, which contains an entire section on the importance of debriefing and the normative ends that debriefing does and does not serve. In a footnote in the article, I bracket questions about the appropriate ex ante review process, including IRB review, in order to focus on the appropriate substance of experiments, e.g., what constitutes informed consent in the human subjects research context and when and why it is and isn't ethically required. Nevertheless, in the Nature piece I co-authored last year on the Facebook experiment, we say that the Facebook and Cornell researchers should have voluntarily sought IRB review and should have debriefed subjects. And just the other day, I had occasion to explain why prospective group review of nonconsensual research is ethically important:
As I’ve said before in the context of the Facebook mood contagion experiment and in other contexts involving risky behavior by journalists, if you’re not going to ask prospective subjects to weigh the risks and potential benefits of participating (say, because that would badly bias the results or because your subjects are not competent adults) and are instead going to deceptively conscript them into service as subjects, then it is right and good to ask people who are less personally and professionally invested in the study than you to put eyes on your plans to ensure that they really don’t pose any more than minimal risk to subjects and that whatever risk[s] they do impose are justified by the expected benefits. This holds true whether you’re an academic, a journalist, a corporate data scientist, or a citizen scientist.
I’ll be turning my attention to corporate research review boards in the coming months, so I expect I’ll have more to say about them in due course. For now, just a few points. First, prospective group review isn't ethics magic. IRBs don't always prevent unethical studies from being conducted, both because the review system isn't perfect and because IRBs exercise essentially zero oversight of the actual consent and study processes to ensure that they adhere to the IRB-approved protocol (they technically have the power to do so but virtually never exercise it, in part because they don't remotely have the resources). If there's a truly bad actor out there, you can staff your IRB with a bus full of nuns, and they aren't going to stop him or her. IRBs can also delay, alter, or block valuable and ethical research in ways that can be harmful to people's welfare. In short, IRBs make Type I and Type II errors, and both are worth watching out for and trying to mitigate.
Second, and relatedly, I can’t say that we should simply transplant Common Rule IRBs to companies (or other institutions not covered by the Common Rule), because Common Rule IRBs have too many well-known problems that scholars have been documenting, occasionally even empirically, for decades (in the 1970s, the National Commission reviewed some of this empirical work highlighting problems with IRBs that existed at the time). Even federal regulators have acknowledged a variety of problems and are seeking to redress them. And that’s without even considering the possible relevance of various differences between universities and their subjects, on one hand, and corporations and their users, on the other.
Third—and on the other hand—one strength of Common Rule IRBs is that they must include at least one member who is a non-scientist—someone who isn't steeped in the perspective of scientific research and may not see proposed studies the same way—and at least one member who is a non-affiliate of the institution—someone without any inherent financial or other professional investment in the institution being permitted to conduct the research. The whole point of IRBs (or at least one really big point) is that we don't trust researchers to be sufficiently disinterested in their proposed research to fairly assess and present the risks and benefits to prospective subjects—not because they're bad people but because they're people.
In academia, other affiliates of the same institution and other members of the same or related disciplines will tend to be less interested than the PI in a particular project going forward or being conducted in a particular way, but they are likely not as disinterested as, and, just as important, likely do not have quite the same perspective as, others, including members of the subject population (patients, patient advocates, families, product users, employees, citizens—whatever the subject population is). In practice, alas, a single member of the IRB often fills both the non-scientist and the non-affiliate role, and some empirical research suggests that those "non-affiliate" members are uncertain of their role on the IRB and/or may have less of an impact on deliberations than one might hope. Most other countries with IRB-like bodies require far more than one non-affiliate, perhaps for that reason. So, again, there is room for improvement in the way U.S. IRBs are structured. But they seem to provide a significant advantage over any corporate IRB composed solely of company insiders steeped in a particular corporate culture. Yes, corporate IRBs would often be reviewing research involving proprietary information, but there are ways of structuring these kinds of ethics boards that protect such information while still serving the intended purpose.
This is actually what makes the prospect of corporate ethics review boards so interesting for those of us who study research ethics and regulation: whereas the ANPRM has been stuck in bureaucratic limbo for almost four years now in a quest to bring more than a dozen agencies on board, here we're presented with a blank canvas. Of course, by now, readers can probably guess one approach I'd take if I were czar of the universe: evidence-based research ethics. Come up with a few different promising ways of structuring corporate IRBs (as I'll call them for simplicity's sake) and a few different promising sets of substantive rules, principles, or guidelines for them to apply to proposed research projects; test all of them against each other and against the status quo in most companies of having no such bodies at all; measure the effects; and, using the results, develop best practices that companies are encouraged to adopt. That won't happen for a variety of reasons, which is too bad.
Power Asymmetries
Mary L. Gray tweets our op-ed and comments, “if you’ve ever been on the crap end of scientific experiments=not so awesome.” I am not unaware that nonconsensual research has a dark history in which risks (often more than minimal) were imposed on subjects who were deliberately drawn from already vulnerable populations where neither the subjects themselves nor even the larger population from which they were drawn were likely ever to benefit from whatever knowledge was expected to result. See here for a recent NPR report on one such WWII-era experiment. Beecher’s famous NEJM article provides several other examples. And then, of course, there was Tuskegee. And Guatemala. All of these subjects were used as mere means to science's ends, to use Kant’s locution.
That's why our op-ed discusses three examples of one category of experiments that are very different in ethically relevant ways: low-risk quality improvement/quality assurance RCTs where the randomly selected subjects exposed to (minimal) risks are also those who stand to benefit from the results, because the RCTs aim to determine the effects of company practices on ongoing users and employees, not transient passers-by with no ongoing relationship to the company. My article makes explicit that this kind of experiment is ethically different from experiments like those described above, and explains why. At the heart of the A/B illusion is the idea that corporations (and, as we say, other "powerful people" like lawmakers and educators) are already unilaterally affecting your welfare in unknown and, therefore, necessarily unconsented-to ways. So if the group of employees, users, citizens, or whomever being "experimented on" is vulnerable, then we should be every bit as worried about the effects of imposing untested A's on them as Mary is about subjecting them to A/B testing. In these cases, it's a mistake, we think, to view attempts to determine the effects of their power over us as more morally suspect than their practice of simply imposing untested innovations on us. In fact, I think there's a prima facie case that it would be a shirking of corporate responsibility and an exploitation of users for a company like Facebook, faced with probative but inconclusive (and indeed contradictory) evidence that its untested product or service is harming users, to ignore that evidence and not seek to determine the truth about the effects it has on them.
There has always been a tension between the idea that subjects need to be protected from research and the idea that people, especially the most vulnerable people, need access to the considerable benefits of research. Both women and AIDS patients have demanded to be included in biomedical research, for instance. It would be a tragic mistake to conclude from the sorry history of Tuskegee and other exploitative research that low-risk RCTs that are conducted without fully informed consent when they cannot be conducted otherwise have only the potential to harm vulnerable populations and not the potential to help them. Here, for instance, is a great story about Riot Games’ use of nonconsensual social psychology RCTs to find the most effective ways of combatting a range of troublesome behaviors in the company’s popular online game, League of Legends, from incivility to harassment. As the article discusses, Riot Games engages in a level of transparency about its A/B testing that other companies would do well to emulate. But, as far as I know, there is nothing remotely approaching “informed consent” for each of their user experiments, as that term is understood in the context of human subject research, nor could there be for the vast majority of those experiments.
Speaking of Beecher, here is as good a place as any to urge readers to take 15 minutes to watch this speech by Georgetown bioethicist Tom Beauchamp who, as staff philosopher on the National Commission, drafted the famous Belmont Report. Tom gave this speech as he accepted the prestigious Hastings Center’s Beecher Award in 2010, and in it, he argues that the distinction between research and ordinary practice that is written into the Belmont Report and the Common Rule has caused us to overprotect subjects and underprotect patients (and, I’d add, clients, users, and others who are subject to unilaterally imposed untested practices). I also discuss this problematic distinction in my article.
Now, of course, for these experimenters to truly meet the noble purpose I've described for them, they have to do more than simply run a low-risk experiment with the potential to improve their practice, to the benefit of those who are subject to the effects of those practices. As I say in the conclusion to my article:
[R]esponsible innovation only begins—it does not end—with a culture of continual testing. The responsible innovator will respond appropriately to the results of A/B testing. This may not require, in every case, that a practice be changed, implemented, or abandoned accordingly. But if the practitioner opts not to change her practice, then sometimes—especially if A/B testing was conducted without subjects’ consent, imposed any incremental risk, and yielded important results—ethics will require that she at least transparently communicate the results so that users or consumers can make a more informed decision about whether and how to avail themselves of that practice.
To return to the Riot Games example, yes, we all wish harassers and those who are uncivil online would just stop or go away. But absent that fantasy, do you really want gaming companies guessing about how to address the problem, or would you like them to quickly gather rigorous evidence of what works through low-risk, nonconsensual RCTs? It's certainly possible to prefer the former choice to the latter, but I think that doing so requires acknowledging the benefits that are likely forgone and owning that outcome.
A Challenge for Critics: Compared to What?
Which leads me to my final series of comments. For those who reject nonconsensual low-risk RCTs to determine the effects on the subject population (users, customers, or employees) of a company's product, service, or policy, which alternative do you prefer (either as a matter of law or as a matter of policy)? Please stipulate with me that sometimes, nothing but a nonconsensual experiment (but with all of the debriefing afterward and IRB review beforehand that you want) will definitively determine the effects of an innovation or an existing practice. In the Facebook case, for example, as I explain in my article, academic observational studies over several years had reached opposite conclusions about the psychological risks, if any, of News Feed. Some said that friends' artificially positive posts risk making users feel worse about their own lives (a social comparison effect) while others said that it is friends' negative posts that risk depressing users (through a mood contagion effect). Others said that both of these hypotheses hold some truth, and that the effect of News Feed on users depends on an individual user's characteristics (like personality). Still others thought that News Feed has no significant psychological effect on users and that studies finding otherwise were mere noise, given the often small sample sizes, self-reported outcomes, and other methodological problems.
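One reason a string of modestly sized observational studies can point in opposite directions is that detecting small effects requires surprisingly large samples. Below is a rough, illustrative power calculation for a two-arm randomized comparison; it is only a sketch, and the effect sizes, significance level, and power target are conventional placeholder values rather than figures drawn from the News Feed literature or from my article.

```python
# Rough, illustrative power calculation: how many subjects per arm does a
# two-arm randomized comparison need in order to reliably detect an effect of
# a given size? All numbers here (effect sizes, alpha, power) are conventional
# placeholder values, not figures from any study discussed in this post.

from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size for a two-sample comparison of means,
    where effect_size is the standardized (Cohen's d) difference."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    return round(2 * ((z_alpha + z_beta) / effect_size) ** 2)

for d in (0.5, 0.2, 0.1, 0.05):
    print(f"d = {d:<4}: ~{n_per_arm(d):,} subjects per arm")
```

On these assumptions, a small effect (d = 0.1) takes roughly 1,570 subjects per arm to detect, which is one reason a handful of small studies relying on self-reports can disagree indefinitely while a single large randomized experiment can settle the question.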
The way I see it, there are two broad alternatives to mid-stream A/B field experiments of the kind Facebook and OkCupid did. The first is conducting A/B testing from the beginning, with small groups of people, before rolling out an innovation to everyone. This is what we do with new drugs and medical devices. The second alternative, of course, is never to conduct A/B testing.
Let’s consider the second possibility first. A company (or other innovator) could go ahead and launch an untested innovation with unknown effects on its customers, users, or employees (or citizens, students, patients, clients—whomever). Maybe no one ever even tries to observe its effects. Or maybe someone inside or outside the company observes correlations between the innovation and bad outcomes. Maybe several such people do, but their observations contradict each other. Some say using your innovation one way harms users while others say, no, using the innovation the opposite way is what harms users. You and only you could run a sufficiently powered RCT that could resolve this dispute and determine the actual effects of your innovation on your users. But telling users about the RCT would bias the results and make them useless. Maybe you think that not telling people about an A/B test, no matter how low the risk compared to the status quo and no matter how much they themselves stand to benefit from the results, is absolutely unethical. Or maybe you know that some other people believe this and you don’t want the bad publicity. Or maybe you just can’t be bothered; after all, you’re still making money and your user base continues to grow. For whatever reason, you don’t run an A/B test. You simply ignore the academic fight about whose hypothesis is right, assume they’ll cancel each other out in the media, and move on.
If either of these approaches, midstream A/B testing or ignoring the conflicting evidence, is disrespectful of users, an abuse of power, a way of depriving users of important information, or a way of treating users as mere means to corporate ends, my money's on the latter, not on low-risk, nonconsensual A/B testing. Yet ignoring the evidence seems to be what a lot of people would have had Facebook do in the wake of probative, but conflicting and indeterminate, evidence that either positive posts or negative posts harm users.
But these two approaches—mid-stream A/B testing or making no rigorous attempt to ever determine the effects of your innovations or existing practices—do not exhaust the universe of possibilities. This is implicit, albeit subtly, in the op-ed: The A/B illusion refers only to the mistake of viewing low-risk attempts to determine the effects of practices on us as more morally suspect than simply allowing untested innovations to be imposed on us without ever testing their effects. As we say, “as long as we permit those in power to make unilateral choices that affect us, we shouldn’t thwart low-risk efforts . . . to rigorously determine the effects of those choices” (emphasis added).
But sometimes, of course, we shouldn't permit those in power to make unilateral, untested choices that affect us. A third option, then, is to prohibit altogether, reduce the frequency of, or otherwise “regulate” (by law or ethics) untested A-only practices. There are at least two reasons why we might want to adopt this third approach of jettisoning an A-only practice rather than either imposing A forever untested or rolling it out untested and then A/B testing it mid-stream (perhaps only if credible evidence of harm or inefficacy arises).
First, there may be innovations or existing practices that we think are simply off-limits, not because we aren't sure of their safety or efficacy but because they are inherently problematic. I flagged this possibility in a section of my article called “The Problem of Unethical Underlying Practices.” What if, for instance, Facebook decided to configure its News Feed algorithm in an attempt to skew an election in a particular direction? Sure, it might be nice to know how effective its attempts at mucking around in an election are, and if somehow the only options were figuring out how successfully Facebook can skew an election outcome and just guessing about how effective it is, perhaps A/B testing would be preferable to nothing. But these aren't, in any obvious sense, the only options. Facebook should not be mucking around in an election in the first place, and the right thing to do is to seek to prohibit that underlying practice, not to test its effects or to ignore the untested practice (or, in my view, to attack the testing in a backdoor effort to get rid of the underlying practice, for that is a gamble whose cost is borne by all of us when the concept of research is tarnished and depicted as inherently problematic). That is, the right thing to do is neither to conduct an A/B test nor to allow A to proceed unstudied, but to stop A.
Second, we might subject innovations to A/B testing from the outset (either as a matter of law or policy) because, as with new drugs and medical devices, the prior probability of an innovation causing significant harm (or forgoing significant benefit) is high enough that something like premarket approval based on phased safety and efficacy trials is warranted. A/B testing in small populations before rolling out an innovation to everyone can prevent harmful effects from occurring on a catastrophic scale (this was the lesson of thalidomide).
I think this is probably a good idea for some categories of innovations not currently governed this way. Note, however, several reasons why this alternative of premarket A/B testing will not always relieve the innovator of the later dilemma of deciding whether to run a field experiment or ignore hints of problems with her product, service, policy, law, or practice.
First, Phase I, II, and III trials obviously do not avoid the problem of “experimenting on human beings.” Someone has to go first. Such trials do have advantages, however. They are usually highly controlled, both in terms of the inclusion and exclusion criteria for subjects and in terms of adherence to a rigid protocol, and well-monitored. All of this control and monitoring requires subjects to know that they're in a trial, so there is almost always informed consent. On the other hand, what pristine lab RCTs have in internal validity they tend to lack in external validity: the results of trials based on cherry-picked subjects in highly controlled—read: contrived—settings do not always generalize well outside the lab, where people have co-morbidities, are not treatment-naïve, imperfectly comply with regimens, and so on. I'm referring to well-known problems with the external validity of biomedical trials, but there are analogous problems with, say, psychology lab studies. This is why, at some point, field experiments involving members of the actual population of interest in the actual setting of interest are so critical.
Similarly, some adverse effects are so subtle or take so long to emerge that these relatively small, relatively brief trials may not reveal all serious problems with an innovation.
Finally, it can't possibly be cost-benefit justified to conduct an RCT of literally every change someone in power proposes to make. RCTs usually take time, money, personnel, and other resources. The world would come to a standstill if we had to await the results of well-designed RCTs before anyone makes a move. I will note, however, that one distinctive characteristic of digital products and services is that A/B testing them tends to be quick and cheap, making it more incumbent on these innovators, all else equal, to determine the effects of their innovations in advance.
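For readers curious what "quick and cheap" looks like once a digital A/B test has actually run, here is a minimal sketch of the kind of analysis involved, framed around the op-ed's 401(k) letter hypothetical. The arm sizes, response counts, and outcome measure are invented for illustration; they come from neither the op-ed nor my article.

```python
# Minimal sketch of analyzing a simple two-arm A/B test once the data are in.
# The arm sizes, counts, and outcome ("raised contribution rate") are invented
# purely for illustration; nothing here comes from the op-ed or the article.

from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical version of the 401(k) letter experiment: arm A gets the usual
# letter, arm B gets the peer-information letter; the outcome is whether the
# employee raised his or her contribution rate.
p_a, p_b, z, p = two_proportion_z_test(success_a=312, n_a=2500, success_b=371, n_b=2500)
print(f"A: {p_a:.1%}   B: {p_b:.1%}   z = {z:.2f}   p = {p:.3f}")
```

A real analysis would involve more than this, of course, but the basic comparison is neither time-consuming nor expensive, which is the point about digital innovations made above.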
I'm actually quite interested, fascinated even, by this topic, and I think the point about "moral illusions" is a very important one, but it's going to take me some time to read through this, digest it, and offer some specific feedback ...
Posted by: Enrique Guerra-Pujol | June 23, 2015 at 08:28 PM
I think the problem with A/B testing (versus just "imposing" a single option, as in the 401k example) is that one still has a choice when there is only a single option--e.g. I don't have to sign up for Facebook if I don't want to--whereas I have no choice if I am being tested without my consent, as in the Facebook mood contagion experiment. We can spend thousands of words parsing nuances upon nuances, but ultimately, the issue comes down to one's view of ethics: if you are "consequentialist" you are more likely to approve of nonconsensual A/B testing; if, however, you take a "duty-" or "principles-based" view of ethics, you are less likely to approve. In any case, to the extent this is an ethical issue, then don't we have to concede there are no "right answers" up front, instead of pretending that there is?
Posted by: Enrique Guerra-Pujol | June 24, 2015 at 07:45 AM
Hi Enrique. Thanks for the comments.
Let me take your latter comments first. I think that there is a more or less correct interpretation of the Belmont Report and of the Common Rule, and a more or less correct application of both to the various fact patterns I've analyzed, and you won't be surprised to hear that I believe that my interpretations are sound. For instance, it is simply a legal and historical fact, not my mere preference, that these documents permit nonconsensual human subjects research under certain circumstances and that IRBs have, ever since these documents were put in place, blessed such studies accordingly.
Are these documents--which, I again hasten to emphasize, do not mandate consequentialism (or else the balancing test would hardly be that consent is the default rule and can be waived only when the research can't feasibly be carried out any other way, involves only minimal risk, and does not violate any other rights of subjects) but instead lay out *prima facie* principles to be balanced--themselves morally "right"? No more and no less than a zillion other policy decisions, some of which are codified, such as mandatory vaccination, seatbelt, and helmet laws, which also infringe liberty in order to protect welfare (sometimes the welfare of the person whose liberty is being infringed; sometimes others'). I don't think it's reasonable to insist on an absolute rule of consent in as broad a category of activity as "research," as the fed regs define it (must Google obtain the informed consent of every visitor to its website before it A/B tests several shades of blue?). It's not a reasonable way to balance liberty against welfare (in the case of QI/QA A/B testing) or against corporate autonomy (in the case of A/B testing website design), and it fails to cohere with the many other ways we allow people to do things to other people without their permission. I don't think I "pretend" to be doing anything more than making that argument. I am certainly not offering a mathematical proof of the ethics of quality improvement A/B testing, because I don't think ethics, policy, or law work that way.
On the other hand, let's not be too nihilistic. The fact that ethics, policy, and a whole lot of law do not lend themselves to proof of singularly "right" answers doesn't mean that there aren't better and worse answers and better and worse arguments in support of various answers.
As for your first comment, about having a choice whether to be subject to untested innovations and existing practices versus having no choice about being subject to A/B testing, I think there's certainly something to what you're saying, but I have a couple of pushbacks:
(1) I'm more skeptical than you are of the claim that we always have meaningful choices about whether to be subject to untested innovations and existing practices. Sometimes this will be true, but other times, not so much. Untested laws are an obvious example: we generally don't get to opt out of those. Re: workplace policies like 401(k) matching letters, an employee may or may not have a meaningful choice about whether to hold a job (probably not). If she does need to work, she will necessarily be subject to countless decisions made by management, and if she's lucky, receiving some sort of 401(k) matching letter will be among them. As for Facebook, some of the fiercest critics of its experiment believe that Facebook should be regulated like a public utility precisely because, for so many people, there is effectively no choice about whether to have an account or not.
(2) Even where there is some element of choice over whether to subject oneself to an untested innovation or existing practice (say, OkCupid or Facebook--if you reject the public utility argument), this choice is, by definition, not an informed one. We don't know the effects of untested As without rigorously comparing them to Bs. Would you want to say that, in 2006 when News Feed was rolled out, Facebook users assumed its psychological risks because they could have chosen to quit Facebook at that time (or not sign up thereafter)?
Ideally, when someone rolls out an untested innovation (law, policy, product, service, other practice), those affected would be told--or simply realize--that there are risks and uncertainties involved. When an innovator adopts a culture of continual testing, as I think innovators generally should, it would tell those affected about this culture up front, even if, in order to maintain the validity of the results or due to the sheer volume, it cannot fully disclose the details of each A/B test in advance. There would be some sort of specific statement about the substantive parameters of A/B testing (only QI/QA-type research, or also tests of orthogonal questions?) and the process behind it (advance corporate IRB review? commitment to publishing the results? to acting on them in certain circumstances?).
Posted by: Michelle Meyer | June 26, 2015 at 08:06 PM