
June 29, 2014

Comments


Jeff Sherman

I don't think anyone is arguing that what Facebook did is illegal.

I have never worked with an IRB that would have considered intentional manipulation of mood to be such a minor event that it would not require full informed consent. Studies that manipulate emotion or mood are NOT exempted anywhere I have worked.

The suggestion that Facebook's own frequent manipulations define the standard of "minimal risk" is absurd. So, if I regularly expose my students to aerosol chemicals, then I can continue to do it because I've been doing it all along? What?

Other actors do not regularly manipulate our emotions for the purpose of systematic research. You seem to have already forgotten that we are talking about systematic research, not the effects of advertisements or art on our moods.

Finally, the take-home message--that we should loosen requirements for consent because everyone's doing it--is woeful. If anything, private agencies like Facebook should, perhaps, be made to treat their subjects in the same way that scientists must. This is a basic issue of human rights.

Sam Meyer

I've wondered how Facebook handled the data set of research subjects. Was any consideration given to the likelihood that the randomly selected subjects included minors, or any action taken to exclude them from the subject pool? I saw no mention of this in the study.

Does this affect the results, given that Facebook's user base is both above and below the age of majority? Does it trigger a stricter reading of informed consent guidelines when the research subject is a minor?

Furthermore, you write:
"Unless it indiscriminately dumps every friend’s status update into a user’s feed in chronological order, there is no way for Facebook not to manipulate—via filter—users’ feeds. Of course, Facebook could do just that, but in fact it does not; it employs algorithms to filter News Feeds, which it apparently regularly tweaks in an effort to maximize user satisfaction, ideal ad placement, and so on."

This seems to be a funny definition of "manipulation." Sure, no one is arguing that Facebook shouldn't filter News Feeds in various ways for various reasons. But there is a difference between "maximizing user satisfaction" and ensuring "ideal ad placement, and so on," on the one hand, and conducting research on emotional contagion using one's user base as research subjects, on the other. One is part of the core moneymaking mission of the company as an advertising platform and social network; the other--while serving Facebook's financial ends, no doubt--is being done for research purposes. And, I'd wager, it falls far outside what the average user agrees to when consenting to their data being used for "research"; I assumed that meant inwardly focused research aimed at making the service better, more targeted, and more responsive.

You go on to write:
"Given this baseline of constant manipulation, you could say that this study did not involve any incremental additional manipulation. No manipulation, no intervention. No intervention, no human subjects. No human subjects, no federal regulations requiring IRB approval."

However, this is a big "given" to accept: because Facebook tweaks its algorithms and changes the way the site works all the time, any other manipulation is just noise? So long as there is a "baseline of constant manipulation," is there *any* other activity that would qualify in your estimation as "additional manipulation"? (If no significant variables are being changed, does research actually take place?) The study itself admits that there is, as you characterize it, "incremental additional manipulation": "The experiment manipulated the extent. . .to which people were exposed to emotional expressions in their News Feed," and it goes on to state that exposure to friends' positive and negative emotional content was reduced, depending on random groupings. I don't see how you can argue that this doesn't constitute manipulation of, or intervention in, the experience presented to human subjects who neither knew they were being used as research subjects nor had the opportunity to withdraw consent.

Gwynne Ash

What is your stance considering that many of the participants were likely minors between the ages of 13 and 17 (who are allowed by the terms of service to have FB accounts, but who have increased human subjects protections)?

Tamara Piety

Thanks for this thoughtful post, Michelle, but I'm afraid I have to agree with some of the others above that the baseline amount of manipulation seems like a troubling basis for concluding this is no big deal--although as a practical matter it might not be. What this seems to highlight is the troubling amount of manipulation, period. I don't think advertising should get a pass, since its effects are likely not trivial: it is directed at children from before they can talk, for products like fast food (think Ronald McDonald) that are far from benign. But perhaps we need to look at the fairly shocking amount of experimentation going on, to which we seem to have been relatively indifferent for some time.

Larry DeLuca

The study was partly funded by the Dept. of the Army, so OHRP applies. Besides, the Nuremberg Code lays out ethical principles for ALL research, regardless of OHRP's reach. It is just that, prior to Facebook, no one had a captive audience of this size.

Bob S

One line of argument--that the academics didn't actually conduct research because they didn't personally collect the data and run the stats but, instead, merely designed the study, interpreted the results, and published the findings--is based on a misunderstanding of what research is and is simply wrong. Authorship on a scientific paper signifies a nontrivial contribution to the research the paper describes, and the academics were authors. In fact, data collection and (less frequently) data analysis are sometimes treated as warranting a lesser degree of research recognition than design and interpretation (an acknowledgement, say, in a footnote).

Trevor Lohrbeer

Thanks for writing this post. I think you've done an excellent job of describing the issues.

I, for one, fully support your conclusion. If it's ethical for a business to conduct experiments to maximize profit--a selfish motive--it should be MORE ethical, not less, for a business to conduct experiments that give back to science and society.

Almost all modern web companies run experiments on their users that manipulate their behavior (and often their psychological state). These are structured experiments aimed at learning what drives users to take certain actions, like clicking and posting.
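
As a rough illustration of what such an experiment looks like in practice, here is a minimal Python sketch of deterministic bucketing and click-rate comparison; the names (assign_variant, the experiment label, the simulated rates) are hypothetical, not any real company's system.

    # Minimal A/B test sketch: hash-based bucketing plus outcome comparison.
    import hashlib
    import random
    from collections import defaultdict

    def assign_variant(user_id, experiment, variants=("control", "treatment")):
        """Deterministically bucket a user into a variant via hashing,
        so the same user always sees the same condition."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # Simulate logging exposures and clicks for users in each bucket.
    exposures = defaultdict(int)
    clicks = defaultdict(int)
    random.seed(0)
    for uid in (f"user{i}" for i in range(10_000)):
        variant = assign_variant(uid, "ranked_feed_v2")
        exposures[variant] += 1
        rate = 0.10 if variant == "control" else 0.12  # made-up effect size
        clicks[variant] += random.random() < rate

    for v in ("control", "treatment"):
        print(v, clicks[v] / exposures[v])  # compare observed click rates

The point of the hashing step is that assignment is stable and requires no stored consent or enrollment record: every user who loads the page is silently in one condition or the other, which is exactly why these experiments scale so easily.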

Criticizing Facebook for not holding to an arbitrary "higher" standard throws up walls that will discourage other companies from sharing what they learn, to the detriment of society.

What makes manipulating someone's mood "unethical", but getting them to buy a product "ethical"? Why is doing something for profit pure, while doing something for science dirty?

Facebook caused a slight temporary change to some people's moods, no more than might happen with any change to their algorithm. More concerning would be a change that affected users significantly or permanently. I'd argue that getting someone to part with $100 during a test is a far greater "ethical" boundary than making someone feel slightly more sad or happy.

Chris

News reports indicate that (a) Facebook's terms of service at the time of the study did not include permission for this kind of activity--they were amended recently to make it seem legal--and (b) the study did receive federal funding.

If those reports are correct, how was this experiment ethical, or even legal?

Matt M.

"It would seem, then, that neither UCSF nor Cornell was 'engaged in research' and, since Facebook was engaged in HSR but is not subject to the federal regulations, that IRB approval was not required."
-----------------------------------

The following is straight from the FAQ of the IRB at Cornell:

"How do I know if I am conducting research with human participants?

According to Cornell University Policy, research is defined as '…a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.'"

This indicates quite clearly that Cornell's IRB is supposed to view research design as being part of any research that is conducted, as they should. The idea that it might not be is ridiculous, and would certainly seem to imply that it's okay for researchers to intentionally design grossly unethical data gathering processes and analyze the resulting data as long as some other entity actually carries out the data collection. Most IRB members are not idiots, and wouldn't use a definition with such dangerous implications to make decisions. Of course Cornell's PR department is a different story. They are just trying to cover their butts at this point, and are doing a really poor job of it, such that my respect for Cornell has already been knocked down a notch or two.

As for "The IRB might plausibly have decided that since the subjects’ environments, like those of all Facebook users, are constantly being manipulated by Facebook, the study’s risks were no greater than what the subjects experience in daily life as regular Facebook users, and so the study posed no more than 'minimal risk' to them," you are completely failing to account for the "probability" part of the previous sentence and focusing on the magnitude. Even if a convincing argument could be made regarding the magnitude of the harm -- something that I am not convinced of in the first place -- receiving the treatment certainly increased the probability of harm.

Moreover, I really don't understand why you so easily accept as settled fact the claim that "the study couldn’t feasibly have been conducted with full Common Rule-style informed consent—which requires a statement of the purpose of the research and the specific risks that are foreseen—without biasing the entire study." If that is true, which it may or may not be, then it would also be true for the huge majority of research projects in which informed consent IS obtained as a simple matter of course. What makes this project different in that regard? At least you come to a reasonable conclusion here: that something at least closer to what is typically considered informed consent should have been obtained.

Lastly, regarding the qualifications for a waiver of the informed consent requirement, two rights that are typically spelled out in informed consent documents are the right not to participate in the research and the right to cease participating at any time. It's hard to exercise these rights if you don't know you're even involved in an experiment. Granted, this calls into question the ethicality of certain field experiments that have been done, etc., but we certainly shouldn't pretend that these rights are not typically spelled out to human subjects: they are. In this case, though, they were not.

