I have a new law review article out, Two Cheers for Corporate Experimentation: The A/B Illusion and the Virtues of Data-Driven Innovation, arising out of last year's terrific Silicon Flatirons annual tech/privacy conference at Colorado Law, the theme of which was "When Companies Study Their Customers."
This article builds on, but goes well beyond, my prior work on the Facebook experiment in Wired (mostly a wonky regulatory explainer of the Common Rule and OHRP engagement guidance as applied to the Facebook-Cornell experiment, albeit with hints of things to come in later work) and Nature (a brief mostly-defense of the ethics of the experiment co-authored with 5 ethicists and signed by an additional 28, which was necessarily limited in breadth and depth by both space constraints and the need to achieve overlapping consensus).
Although I once again turn to the Facebook experiment as a case study (and also to new discussions of the OkCupid matching algorithm experiment and of 401(k) experiments), the new article aims at answering a much broader question than whether any particular experiment was legal or ethical. Here is how the abstract begins:
“Practitioners”—whether business managers, lawmakers, clinicians, or other actors—are constantly innovating, in the broad sense of introducing new products, services, policies, or practices. In some cases (e.g., new drugs and medical devices), we’ve decided that the risks of such innovations require that they be carefully introduced into small populations, and their safety and efficacy measured, before they’re introduced into the general population. But for the vast majority of innovations, ex ante regulation requiring evidence of safety and efficacy neither does—nor feasibly could—exist. In these cases, how should practitioners responsibly innovate?
My short answer to this question is that responsible innovators should inculcate a culture of continuous testing of their products, services, policies, and practices, and that the rest of us commit a kind of moral-cognitive mistake (which I dub the "A/B illusion") when we respond to these laudable (and sometimes morally obligatory) experimental efforts by treating it as more morally suspicious for innovators to randomize users to one of two (or more) conditions than simply to roll out one of those conditions, untested, for everybody. The long answer, of course, is in the article. (The full abstract, incidentally, explains the relevance of the image that accompanies this post.)
Thanks to Paul Ohm and conference co-sponsor Ryan Calo for inviting me to participate, to the editors of the Colorado Technology Law Journal, and to James Grimmelmann for being a worthy interlocutor over the past almost-year and for generously and unfailingly tweeting my work on Facebook despite our sometimes divergent perspectives. James's contribution to the symposium issue is here; I don't know how many other conference participants chose to write, but issue 13.2 will appear fully online here at some point.
If you would rather hear than read me drone on about the Facebook and OkCupid experiments (and some other recent digital research, including Apple's ResearchKit and the University of Michigan's Facebook app-based GWAS, "Genes for Good," as well as learning healthcare systems and the future of human subjects research), you may do so by listening to episode 9 of Nic Terry and Frank Pasquale's terrific new weekly podcast, This Week in Health Law.
The "abstract" on SSRN is way too long, and the ethics of the Facebook experiment is not clear at all, especially if one takes a consent-view based of ethics as one's starting point instead of a consequentialist view
Posted by: Enrique Guerra Pujol | May 18, 2015 at 01:41 AM
Thanks for taking the time to share your concern about the length of my abstract, Enrique; I'm sorry you felt it wasn't worth your time.
Moving on to the merits, as it were, the actual article doesn't, in fact, suggest that the ethics of the Facebook experiment are "clear" (nor do the Wired or Nature articles). Nor does the article take consequentialism as a "starting point." To the contrary, as the abstract says: "the Belmont Report . . . codified in the federal Common Rule, appropriately permits prima facie duties to obtain subjects’ informed consent to be overridden when obtaining consent would be infeasible and risks to subjects are no more than minimal." Recognizing informed consent as a prima facie duty makes consent the starting point, albeit not necessarily the ending point.
As I explain in Part II.B, the Belmont Report (and the Common Rule that codifies it) balances welfare and autonomy (aka the principles of beneficence and respect for persons' autonomy) without fetishizing either at the expense of the other. If you think this approach is wrong, and that an absolute rule of obtaining participants' fully informed consent in every instance of systematic learning is ethically (or legally?) required, I would be interested in whether you think that the studies discussed at pp. 295-98 were unethical and should not have been conducted.
Posted by: Michelle Meyer | May 18, 2015 at 10:46 AM