By now, most of you have probably heard—perhaps via your Facebook feed itself—that for one week in January of 2012, Facebook altered the algorithms it uses to determine which status updates appeared in the News Feed of 689,003 randomly-selected users (about 1 of every 2500 Facebook users). The results of this study—conducted by Adam Kramer of Facebook, Jamie Guillory of the University of California, San Francisco, and Jeffrey Hancock of Cornell—were just published in the Proceedings of the National Academy of Sciences (PNAS).
Although some have defended the study, most have criticized it as unethical, primarily because the closest that these 689,003 users came to giving voluntary, informed consent to participate was when they—and the rest of us—created a Facebook account and thereby agreed to Facebook’s Data Use Policy, which in its current iteration warns users that Facebook “may use the information we receive about you . . . for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”
Some of the discussion has reflected quite a bit of misunderstanding about the applicability of federal research regulations and IRB review to various kinds of actors, about when informed consent is and isn’t required under those regulations, and about what the study itself entailed. In this post, after going over the details of the study, I explain (more or less in order):
- How the federal regulations define “human subjects research” (HSR)
- Why HSR conducted and funded solely by an entity like Facebook is not subject to the federal regulations
- Why HSR conducted by academics at some institutions (like Cornell and UCSF) may be subject to IRB review, even when that research is not federally funded
- Why involvement in the Facebook study by two academics nevertheless probably did not trigger Cornell’s and UCSF’s requirements of IRB review
- Why an IRB—had one reviewed the study—might plausibly have approved the study with reduced (though not waived) informed consent requirements
- And why we should think twice before holding academics to a higher standard than corporations
As the authors explain, “[b]ecause people’s friends frequently produce much more content than one person can view,” Facebook ordinarily filters News Feed content “via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging.” In this study, the algorithm additionally filtered posts based on their emotional content. A post was identified as “positive” or “negative” if it used at least one word identified as positive or negative by software (run automatically, without researchers accessing users’ text).
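To make that classification rule concrete, here is a minimal sketch of word-list classification. The word lists and function name below are invented placeholders, not the actual lexicon or code the study used; the point is only that a post counts as “positive” or “negative” if it contains at least one word from the corresponding list.

```python
# Illustrative sketch only: label a post "positive" and/or "negative" if it
# contains at least one word from a (hypothetical) sentiment word list.
# The study's actual word lists came from established word-count software,
# run automatically without researchers reading users' text.

POSITIVE_WORDS = {"happy", "great", "love", "lucky"}   # placeholder list
NEGATIVE_WORDS = {"sad", "angry", "awful", "hate"}     # placeholder list

def classify_post(text: str) -> set[str]:
    """Return the emotional labels ('positive', 'negative') that apply to a post."""
    words = {w.strip(".,!?<>3").lower() for w in text.split()}
    labels = set()
    if words & POSITIVE_WORDS:
        labels.add("positive")
    if words & NEGATIVE_WORDS:
        labels.add("negative")
    return labels

print(classify_post("I feel so lucky to know you all!"))  # {'positive'}
```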
Some critics of the experiment have characterized it as one in which the researchers intentionally tried “to make users sad.” With the benefit of hindsight, they claim that the study merely tested the perfectly obvious proposition that reducing the amount of positive content in a user’s News Feed would cause that user to use more negative words and fewer positive words themselves and/or to become less happy (more on the gap between these effects in a minute). But that’s not what some prior studies would have predicted. Previous studies both in the U.S. and in Germany had found that the largely positive, often self-promotional content that Facebook tends to feature has made users feel bitter and resentful—a phenomenon the German researchers memorably call “the self-promotion-envy spiral.” Those studies would have predicted that reducing the positive content in a user’s feed might actually make users less sad. And it makes sense that Facebook would want to determine what will make users spend more time on its site rather than close that tab in disgust or despair. [Update as I’m wrapping up this post: the first author, Adam Kramer of Facebook, confirms—on Facebook, of course—that they did indeed want to investigate the theory that seeing friends’ positive content makes users sad.]
In any case, the researchers conducted two experiments, with a total of four groups of users (about 155,000 each). In the first experiment, Facebook reduced the positive content of News Feeds. Each positive post “had between a 10% and 90% chance (based on their User ID) of being omitted from their News Feed for that specific viewing.” In the second experiment, Facebook reduced the negative content of News Feeds in the same manner. In both experiments, these treatment conditions were compared to control conditions in which a similar portion of posts were randomly filtered out (i.e., without regard to emotional content). Note that whatever negativity users were exposed to came from their own friends, not, somehow, from Facebook engineers. In the first, presumably most objectionable, experiment, the researchers merely chose to filter out varying amounts (10% to 90%) of friends’ positive content, thereby leaving a News Feed more concentrated with posts in which a user’s friend had written at least one negative word.
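As a rough illustration of that design (not Facebook’s actual implementation; the hashing scheme and function names below are assumptions), a per-user omission probability in the 10%–90% range could be derived deterministically from the user ID and then applied to each qualifying post at viewing time:

```python
import hashlib
import random

# Illustrative sketch only. Each user gets a fixed omission probability between
# 10% and 90%, derived from their user ID as the paper describes; each positive
# post is then dropped from that user's News Feed with that probability for a
# given viewing. "labels" is assumed to be a set of emotional labels per post,
# e.g. as produced by a classifier like the sketch above.

def omission_probability(user_id: int) -> float:
    """Map a user ID to a stable probability in [0.10, 0.90]."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF   # value in [0, 1]
    return 0.10 + 0.80 * fraction

def filter_feed(user_id: int, posts: list[dict]) -> list[dict]:
    """Drop positive posts with the user's omission probability; keep the rest."""
    p = omission_probability(user_id)
    return [post for post in posts
            if "positive" not in post["labels"] or random.random() >= p]
```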
The results? Whether they hypothesized these results in advance or not, the investigators found that
[f]or people who had positive content reduced in their News Feed, a larger percentage of words in people’s status updates were negative and a smaller percentage were positive. When negativity was reduced, the opposite pattern occurred. These results suggest that the emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks.
Note two things. First, while statistically significant, these effect sizes are, as the authors acknowledge, quite small. The largest effect size was a mere two hundredths of a standard deviation (d = .02). The smallest was one thousandth of a standard deviation (d = .001). The authors suggest that their findings are primarily significant for public health purposes because, when aggregated, even small individual effects can have large social consequences.
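For readers unfamiliar with the metric, these d values are standardized effect sizes (Cohen’s d): the difference between two group means expressed in units of their pooled standard deviation. In the simplest two-group form:

```latex
d = \frac{\bar{x}_{\mathrm{treatment}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

So d = .02 means the treatment and control groups’ average percentages of emotional words differed by only two hundredths of a pooled standard deviation.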
Second, although the researchers conclude that their experiments constitute evidence of “emotional contagion” — that is, that “emotional states can be transferred to others” — this overstates what they could possibly know from this study. The fact that someone exposed to a relatively more positive News Feed very slightly increased the number of positive words that she then used in her own Facebook posts does not necessarily mean that this change in her News Feed content caused any change in her mood. The very slight increase in the use of positive words could simply be a matter of keeping up (or down, in the case of the reduced positivity experiment) with the Joneses. It seems highly likely that Facebook users experience (varying degrees of) pressure to conform to social norms about acceptable levels of snark and kvetching—and of bragging and pollyannaisms. Someone who is already internally grumbling about how United Statesians are, like, such total posers during the World Cup may feel more free to voice that complaint on Facebook than when his feed was more densely concentrated with posts of the “human beings are so great and I feel so lucky to know you all—group hug! <3 <3 <3” variety.
It is not at all clear, then, that the experiment caused the 155,000 or so users who were in the reduced positivity arm to feel any worse than they otherwise would have, much less that any increase in negative affect that did occur rose to the level of a mental health crisis, as some have suggested.
Was the Facebook Study "Human Subjects Research"?
One threshold question in determining whether this study required IRB approval is whether it constituted “human subjects research.” An activity is “research” under the federal regulations if it is “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.” The study was plenty systematic, and it was designed to investigate the “self-promotion-envy spiral” theory of social networks. Check.
As defined by the regulations, a “human subject” is “a living individual about whom an investigator . . . obtains,” inter alia, “data through intervention.” Intervention, in turn, includes “manipulations of the subject or the subject’s environment that are performed for research purposes.” According to guidance issued by the Office for Human Research Protections (OHRP), the federal agency tasked with overseeing application of the regulations to HHS-conducted and -funded human subjects research, “orchestrating environmental events or social interactions” constitutes manipulation.
I suppose one could argue—in the tradition of choice architecture—that to say that Facebook manipulated its users’ environment is a near tautology. Unless it indiscriminately dumps every friend’s status update into a user’s feed in chronological order, there is no way for Facebook not to manipulate—via filter—users’ feeds. Of course, Facebook could do just that, but in fact it does not; it employs algorithms to filter News Feeds, which it apparently regularly tweaks in an effort to maximize user satisfaction, ideal ad placement, and so on. Given this baseline of constant manipulation, you could say that this study did not involve any incremental additional manipulation. No manipulation, no intervention. No intervention, no human subjects. No human subjects, no federal regulations requiring IRB approval.
It may be that Facebook regularly changes the algorithms that determine how a user experiences her News Feed. But that doesn’t mean that moving from one algorithm to the next doesn’t constitute a manipulation of the user’s environment. In any case, I’ll assume that this study meets the federal definition of “human subjects research” (HSR).
Was the Facebook Study Subject to Federal Research Regulations?
Contrary to the apparent beliefs of some commentators, not all HSR is subject to the federal regulations, including IRB review. By the terms of the regulations themselves, HSR is subject to IRB review only when it is conducted or funded by any of several federal departments and agencies (so-called Common Rule agencies), or when it will form the basis of an FDA marketing application. HSR conducted and funded solely by entities like Facebook is not subject to federal research regulations.
But this study was not conducted by Facebook alone; the second and third authors on the paper have appointments at the University of California, San Francisco, and Cornell, respectively. Although some commentators assume that university research is only subject to the federal regulations when that research is funded by the government, this, too, is incorrect. Any college or university that accepts any research funds from any Common Rule agency must sign a Federalwide Assurance (FWA), a boilerplate contract between the institution and OHRP in which the institution identifies the duly-formed and registered IRB that will review the funded research. The FWA invites institutions to voluntarily commit to extend the requirement of IRB review from funded projects to all HSR in which the institution is engaged, regardless of the source of funding, if any (see item 4b)—and historically, the vast majority of colleges and universities have agreed to “check the box,” as it’s called. If you are a student or a faculty member at an institution that has checked the box, then any HSR you conduct must be approved by an IRB.
As I recently had occasion to discover, Cornell has indeed checked the box (see #5 here). UCSF appears to have done so, as well, although it’s possible that it simply requires IRB review of all HSR by institutional policy, rather than FWA contract.
But these FWAs only require IRB review if the two authors’ participation in the Facebook study meant that Cornell and UCSF were “engaged” in research. When an institution is “engaged in research” turns out to be an important legal question in much collaborative research, and one the Common Rule itself doesn’t address. OHRP, however, has issued (non-binding, of course) guidance on the matter. The general rule is that an institution is engaged in research when its employee or agent obtains data about subjects through intervention or interaction, identifiable private information about subjects, or subjects’ informed consent.
According to the author contributions section of the PNAS paper, the Facebook-affiliated author “performed [the] research” and “analyzed [the] data.” The two academic authors merely helped him design the research and write the paper. They would not seem to have been involved, then, in obtaining either data or informed consent. (And even if the academic authors had gotten their hands on individualized data, so long as that data remained coded by Facebook user ID numbers that did not allow them to readily ascertain subjects’ identities, OHRP would not consider them to have been engaged in research.)
It would seem, then, that neither UCSF nor Cornell was "engaged in research" and, since Facebook was engaged in HSR but is not subject to the federal regulations, that IRB approval was not required. Whether that’s a good or a bad thing is a separate question, of course. (A previous report that the Cornell researcher had received funding from the Army Research Office has been retracted; because that office is part of the Department of Defense, a Common Rule agency, such funding would have triggered IRB review.)
Did an IRB Review the Facebook Experiment, and If So, How Could It Have Approved It?
Princeton psychologist Susan Fiske, who edited the PNAS article, initially told a Los Angeles Times reporter that the study had been approved by the researchers’ local IRB.
Then Forbes reported that Fiske “misunderstood the nature of the approval. A source familiar with the matter says the study was approved only through an internal review process at Facebook, not through a university Institutional Review Board.”
Most recently, Fiske told the Atlantic that Cornell's IRB did indeed review the study, and approved it as having involved a "pre-existing dataset." Given that, according to the PNAS paper, the two academic researchers collaborated with the Facebook researcher in designing the research, it strikes me as disingenuous to claim that the dataset preexisted the academic researchers' involvement. As I suggested above, however, it does strike me as correct to conclude that, given the academic researchers' particular contributions to the study, neither UCSF nor Cornell was engaged in research, and hence that IRB review was not required at all.
But if an IRB had reviewed it, could it have approved it, consistent with a plausible interpretation of the Common Rule? The answer, I think, is Yes, although under the federal regulations, the study ought to have required a bit more informed consent than was present here (about which more below).
Many have expressed outrage that any IRB could approve this study, and there has been speculation about the possible grounds the IRB might have given. The Atlantic suggests that the “experiment is almost certainly legal. In the company’s current terms of service, Facebook users relinquish the use of their data for ‘data analysis, testing, [and] research.’” But once a study is under an IRB’s jurisdiction, the IRB is obligated to apply the standards of informed consent set out in the federal regulations, which go well, well beyond a one-time click-through consent to unspecified “research.” Facebook’s own terms of service are simply not relevant. Not directly, anyway.
According to Prof. Fiske’s now-uncertain report of her conversation with the authors, by contrast, the local IRB approved the study “on the grounds that Facebook apparently manipulates people’s News Feeds all the time.” This fact actually is relevant to a proper application of the Common Rule to the study.
Here’s how. Section 46.116(d) of the regulations provides:
An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:
- The research involves no more than minimal risk to the subjects;
- The waiver or alteration will not adversely affect the rights and welfare of the subjects;
- The research could not practicably be carried out without the waiver or alteration; and
- Whenever appropriate, the subjects will be provided with additional pertinent information after participation.
The Common Rule defines “minimal risk” to mean “that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life . . . .” The IRB might plausibly have decided that since the subjects’ environments, like those of all Facebook users, are constantly being manipulated by Facebook, the study’s risks were no greater than what the subjects experience in daily life as regular Facebook users, and so the study posed no more than “minimal risk” to them.
That strikes me as a winning argument, unless there’s something about this manipulation of users’ News Feeds that was significantly riskier than other Facebook manipulations. It’s hard to say, since we don’t know all the ways the company adjusts its algorithms—or the effects of most of these unpublicized manipulations. We know that one News Feed tweak “directly influenced political self-expression, information seeking and real-world voting behaviour of millions of people” during the 2010 congressional elections. That tweak may have been designed to contribute to generalizable knowledge, so perhaps it shouldn’t count in the risks “ordinarily encountered in daily life” analysis. But another tweak to the Facebook interface designed to affect not only users’ word choice or even mood but their behavior—Mark Zuckerberg’s decision to give users a formal way of telling their friends that they had registered as an organ donor—was motivated by altruism after conversations with liver transplant recipient Steve Jobs, although the dramatic effects of that policy change have been studied by academics.
Even if you don’t buy that Facebook regularly manipulates users’ emotions (and recall, again, that it’s not clear that the experiment in fact did alter users’ emotions), other actors intentionally manipulate our emotions every day. Consider “fear appeals”—ads and other messages intended to shape the recipient’s behavior by making her feel a negative emotion (usually fear, but also sadness or distress). Examples include “scared straight” programs for youth warning of the dangers of alcohol, smoking, and drugs, and singer-songwriter Sarah McLachlan’s ASPCA animal cruelty donation appeal (which I cannot watch without becoming upset—YMMV—and there’s no way on earth I’m being dragged to the “emotional manipulation” that is, according to one critic, The Fault in Our Stars).
Continuing with the rest of the § 46.116(d) criteria, the IRB might also plausibly have found that participating in the study without Common Rule-type informed consent would not “adversely affect the rights and welfare of the subjects,” since Facebook has limited users’ rights by requiring them to agree that their information may be used “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”
Finally, the study couldn’t feasibly have been conducted with full Common Rule-style informed consent—which requires a statement of the purpose of the research and the specific risks that are foreseen—without biasing the entire study. Of course, surely the IRB, without biasing the study, could have required researchers to provide subjects with some information about this specific study beyond the single word “research” that appears in the general Data Use Policy, as well as the opportunity to decline to participate in this particular study, and these things should have been required on a plain reading of § 46.116(d). In other words, the study was probably eligible for “alteration” in some of the elements of informed consent otherwise required by the regulations, but not for a blanket waiver.
Moreover, subjects should have been debriefed by Facebook and the other researchers, not left to read media accounts of the study and wonder whether they were among the randomly-selected subjects studied.
Still, the bottom line is that—assuming the experiment required IRB approval at all—it was probably approvable in some form that involved much less than 100% disclosure about exactly what Facebook planned to do and why.
Two Ways of Viewing “Minimal Risk” Bootstrapping
There are (at least) two ways of thinking about this feedback loop between the risks we encounter in daily life and what counts as “minimal risk” research for purposes of the federal regulations.
One view is that this is a lamentable bootstrapping of increasingly common business practices of data mining and behavioral manipulation by marketers and others into federal risk-based regulation of human subjects research, where risk is accordingly defined down. Human beings attempt to manipulate one another all the time, of course, and have done so since time immemorial. But once upon a time, the primary sources of emotional manipulation in a person’s life were called “toxic people,” and once you figured out who those people were, you would avoid them as much as possible. Now, everyone’s trying to nudge, data mine, or manipulate you into doing or feeling (or not doing or not feeling) something, and they have access to you 24/7 through targeted ads, sophisticated algorithms, and so on, and that very ubiquity is—on this view—being turned against us to water down human subjects research protections.
There’s something to that lament.
But there’s something to the other view, as well. This other view is that this bootstrapping is entirely appropriate. If Facebook had acted on its own, it could have tweaked its algorithms to cause more or fewer positive posts in users’ News Feeds even without obtaining users’ click-through consent (it’s not as if Facebook promises its users that it will feed them their friends’ status updates in any particular way), and certainly without going through the IRB approval process. It’s only once someone tries to learn something about the effects of that activity and share that knowledge with the world that we throw up obstacles.
Academic researchers’ status as academics already makes it more burdensome for them to engage in exactly the same kinds of studies that corporations like Facebook can engage in at will. If, on top of that, IRBs didn’t recognize our society’s shifting expectations of privacy (and manipulation) and incorporate those evolving expectations into their minimal risk analysis, that would make academic research still harder, and would only serve to help ensure that those who are most likely to study the effects of a manipulative practice and share those results with the rest of us have reduced incentives to do so. Would we have ever known the extent to which Facebook manipulates its News Feed algorithms had Facebook not collaborated with academics incentivized to publish their findings?
We can certainly have a conversation about the appropriateness of Facebook-like manipulations, data mining, and other 21st-century practices. But so long as we allow private entities freely to engage in these practices, we ought not unduly restrain academics trying to determine their effects. Recall those fear appeals I mentioned above. As one social psychology doctoral candidate noted on Twitter, IRBs make it impossible to study the effects of appeals that carry the same intensity of fear as real-world appeals to which people are exposed routinely, and on a mass scale, with unknown consequences. That doesn’t make a lot of sense. What corporations can do at will to serve their bottom line, and non-profits can do to serve their cause, we shouldn’t make (even) harder—or impossible—for those seeking to produce generalizable knowledge to do.
[Cross-posted at Bill of Health and The Bioethics Program Blog.]
I don't think anyone is arguing that what facebook did is illegal.
I have never worked with an IRB that would have considered intentional manipulation of mood to be such a minor event that it would not require full informed consent. Studies that manipulate emotion or mood are NOT exempted anywhere I have worked.
The suggestion that Facebook's own frequent manipulations define the standard of "minimal risk" is absurd. So, if I regularly expose my students to aerosol chemicals, then I can continue to do it because I've been doing it all along? What?
Other actors do not regularly manipulate our emotions for the purpose of systematic research. You seem to have already forgotten that we are talking about systematic research, not the effects of advertisements or art on our moods.
Finally, the take home message--that we should loosen requirements for consent because everyone's doing it--is woeful. If anything, private agencies like Facebook should, perhaps, be made to treat their subjects in the same way that scientists must. This is a basic issue of human rights.
Posted by: Jeff Sherman | June 30, 2014 at 12:42 AM
I've wondered how Facebook handled the data set of research subjects. Was any consideration given the likelihood that the randomly selected subjects included minors, or any action taken to exclude them from the subject pool? I saw no mention of this in the study.
Does this affect the results, given that Facebook's user base is both above and below the age of majority? Does it trigger a stricter reading of informed consent guidelines when the research subject is a minor?
Furthermore, you write:
"Unless it indiscriminately dumps every friend’s status update into a user’s feed in chronological order, there is no way for Facebook not to manipulate—via filter—users’ feeds. Of course, Facebook could do just that, but in fact it does not; it employs algorithms to filter News Feeds, which it apparently regularly tweaks in an effort to maximize user satisfaction, ideal ad placement, and so on."
This seems to be a funny definition of "manipulation." Sure, no one is arguing that Facebook shouldn't filter News Feeds in various ways for various reasons. But there is a difference between "maximizing user satisfaction", ensuring "ideal ad placement, and so on" and conducting research on emotional contagion using one's user base as research subjects. One is part of the core moneymaking mission of the company as an advertising platform and social network, and the other -- while serving Facebook's financial ends, no doubt -- is being done for research purposes. And, I'd wager, is far outside what the average user agrees to when consenting to their data being used for "research"; I assumed that it was inwardly-focused research aimed at making the service better, more targeted, and more responsive.
You go on to write:
"Given this baseline of constant manipulation, you could say that this study did not involve any incremental additional manipulation. No manipulation, no intervention. No intervention, no human subjects. No human subjects, no federal regulations requiring IRB approval."
However, this is a big "given" to accept: because Facebook tweaks their algorithms and changes the way the site works all the time, any other manipulation is just noise? So as long as there's a "baseline of constant manipulation", is there *any* other activity that would qualify in your estimation as "additional manipulation"? (If no significant variables are being changed, does research actually take place?) The study itself admits that there is, as you characterize it, "incremental additional manipulation": "The experiment manipulated the extent. . .to which people were exposed to emotional expressions in their News Feed", and goes on to state that exposure to friends' positive and negative emotional content was reduced, depending on random groupings. I don't see how you can argue that this doesn't constitute manipulation of, or intervention in, the experience presented to human subjects who did not know they were being used as research subjects and had no opportunity to withdraw consent.
Posted by: Sam Meyer | June 30, 2014 at 04:05 AM
What is your stance considering that many of the participants were likely minors between the ages of 13 and 17 (who are allowed by the terms of service to have FB accounts, but who have increased human subjects protections)?
Posted by: Gwynne Ash | June 30, 2014 at 02:52 PM
Thanks for this thoughtful post, Michelle, but I am afraid I have to agree with some of the others above that the baseline amount of manipulation seems like a troubling basis for concluding this is no big deal...although as a practical matter it might not be. What this seems to highlight is the troubling amount of manipulation, period. I don't think advertising should get a pass, since its effects are likely not trivial: it is directed at children from before they can talk, for products like fast food (think Ronald McDonald) that are far from benign. But perhaps we need to look at the fairly shocking amount of experimentation going on, to which we seem to have been relatively indifferent for some time.
Posted by: Tamara Piety | June 30, 2014 at 05:45 PM
Study was partly funded by Dept of the Army. OHRP applies. Besides, the Nuremberg report lays out ethical principles for ALL research, regardless of OHRP's reach. It is just that, prior to Facebook, no one had a captive audience of this size.
Posted by: Larry DeLuca | June 30, 2014 at 08:23 PM
One line of argument--that the academics didn't actually conduct research because they didn't personally collect the data and run the stats but, instead, merely designed the study, interpreted the results, and published the findings--is based on a misunderstanding of what research is and is simply wrong. Authorship on a scientific paper signifies a nontrivial contribution to the research the paper describes. And the academics were authors. In fact, data collection and (less frequently) data analysis are sometimes treated as warranting a lesser degree of research recognition than design and interpretation (acknowledgement, say, in a footnote).
Posted by: Bob S | June 30, 2014 at 11:35 PM
Thanks for writing this post. I think you've done an excellent job of describing the issues.
I, for one, fully support your conclusion. If it's ethical for a business to conduct experiments to maximize profit--a selfish motive--it should be MORE ethical, not less, for a business to conduct experiments that give back to science and society.
Almost all modern web companies run experiments on their users that manipulate their behavior (and often their psychological state). These are structured experiments aimed at learning what drives users to take certain actions, like clicking and posting.
Criticizing Facebook for not holding to an arbitrary "higher" standard throws up walls that will discourage other companies from sharing what they learn, to the detriment of society.
What makes manipulating someone's mood "unethical", but getting them to buy a product "ethical"? Why is doing something for profit pure, while doing something for science dirty?
Facebook caused a slight temporary change to some people's moods, no more than might happen with any change to their algorithm. More concerning would be a change that affected users significantly or permanently. I'd argue that getting someone to part with $100 during a test is a far greater "ethical" boundary than making someone feel slightly more sad or happy.
Posted by: Trevor Lohrbeer | June 30, 2014 at 11:41 PM
News reports indicate that (a) Facebook's terms of service at the time of the study did not include permission for this kind of activity; they were amended recently to make it seem legal; and (b) the study did receive federal funding.
If those reports are correct, how was this experiment ethical, or even legal?
Posted by: Chris | July 01, 2014 at 03:50 PM
"It would seem, then, that neither UCSF nor Cornell was 'engaged in research' and, since Facebook was engaged in HSR but is not subject to the federal regulations, that IRB approval was not required."
-----------------------------------
The following is straight form the FAQ of the IRB at Cornell:
"How do I know if I am conducting research with human participants?
According to Cornell University Policy, research is defined as '…a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.'"
This indicates quite clearly that Cornell's IRB is supposed to view research design as being part of any research that is conducted, as they should. The idea that it might not be is ridiculous, and would certainly seem to imply that it's okay for researchers to intentionally design grossly unethical data gathering processes and analyze the resulting data as long as some other entity actually carries out the data collection. Most IRB members are not idiots, and wouldn't use a definition with such dangerous implications to make decisions. Of course Cornell's PR department is a different story. They are just trying to cover their butts at this point, and are doing a really poor job of it, such that my respect for Cornell has already been knocked down a notch or two.
As for "The IRB might plausibly have decided that since the subjects’ environments, like those of all Facebook users, are constantly being manipulated by Facebook, the study’s risks were no greater than what the subjects experience in daily life as regular Facebook users, and so the study posed no more than 'minimal risk' to them," you are completely failing to account for the "probability" part of the previous sentence and focusing on the magnitude. Even if a convincing argument could be made regarding the magnitude of the harm -- something that I am not convinced of in the first place -- receiving the treatment certainly increased the probability of harm.
Moreover, I really don't understand why you so easily accept as settled fact the claim that "the study couldn’t feasibly have been conducted with full Common Rule-style informed consent—which requires a statement of the purpose of the research and the specific risks that are foreseen—without biasing the entire study." If that is true, which it may or may not be, then it would also be true for the huge majority of research projects in which informed consent IS obtained as a simple matter of course. What makes this project different in that regard? At least you come to a reasonable conclusion here: that something at least closer to what is typically considered to be informed consent should have been obtained.
Lastly, regarding the qualifications for a waiver of the informed consent requirement, two rights that are typically spelled out in informed consent documents are the right not to participate in the research and the right to cease participating at any time. It's hard to exercise these rights if you don't know you're even involved in an experiment. Granted, this calls into question the ethicality of certain field experiments that have been done, etc., but we certainly shouldn't pretend that these rights are not typically spelled out to human subjects: they are. In this case, though, they were not.
Posted by: Matt M. | July 02, 2014 at 11:11 PM