UPDATES (11/9/14): NPR reports that Hickox's boyfriend has withdrawn from nursing school and that the two will move out of the state after Nov. 10. Alas, in so reporting, NPR claims that Maine had "sought a court order to require [Hickox] to stay indoors." Apparently NPR doesn't read TFL (or court petitions). Meanwhile, I found this recent JAMA news report about what we do and don't know about Ebola enlightening. Both links via Ross Silverman on Twitter (@phlu). Finally, thanks to Christian Turner for plugging the post on the latest episode of the always interesting Oral Argument podcast.
The case I mentioned in my last post, Maine Department of Health and Human Services v. Kaci Hickox, is no more. Hickox and public health officials agreed to stipulate to a final court order imposing on Hickox the terms that the court had imposed on her in an earlier, temporary order. Until Nov. 10, when the 21-day incubation period for Ebola ends, Hickox will submit to "direct active monitoring" and coordinate her travel with Maine public health authorities to ensure that such monitoring occurs uninterrupted. She has since said that she will not venture into town or other public places, although she is free to do so.
Below is a detailed account of the case, which suggests the following lessons:
Gavin Macgregor-Skinner, an epidemiologist and Global Projects Manager for the Elizabeth R. Griffin Foundation who has led teams of doctors treating Ebola in West Africa, said on Monday's "CNN Newsroom" that he "can't tell them [his doctors] to tell the truth [to U.S. officials]."
“At the moment these people are so valuable . . . I have to ensure they come back here, they get the rest needed. I can't tell them to tell the truth at the moment because we're seeing so much irrational behavior,” he stated. “I've come back numerous times between the U.S. and West Africa. If I come back now and say ‘I've been in contact with Ebola patients,’ I'm going to be locked in my house for 21 days,” Macgregor-Skinner said, explaining why he is not truthful with officials. He added, “When I'm back here in the US, I am visiting US hospitals every day helping them get prepared for Ebola. You take me out for three weeks, who’s going to replace me and help now US hospitals get ready? Those gaps can't be filled.”
He argued that teams of doctors and nurses could be trusted with the responsibility of monitoring themselves, stating, “When I bring my team back we are talking each day on video conferencing, FaceTime, Skype, text messaging, supporting each other. As soon as I feel sick I’m going to stay at home and call for help, but I’m not going to go to a Redskins game here in Washington D.C. That's irresponsible, but I need to get back to these hospitals and help them be prepared.”
UPDATE: Here is the CNN video of his remarks.
The city’s first Ebola patient initially lied to authorities about his travels around the city following his return from treating disease victims in Africa, law-enforcement sources said. Dr. Craig Spencer at first told officials that he isolated himself in his Harlem apartment — and didn’t admit he rode the subways, dined out and went bowling until cops looked at his MetroCard, the sources said. “He told the authorities that he self-quarantined. Detectives then reviewed his credit-card statement and MetroCard and found that he went over here, over there, up and down and all around,” a source said. Spencer finally ’fessed up when a cop “got on the phone and had to relay questions to him through the Health Department,” a source said. Officials then retraced Spencer’s steps, which included dining at The Meatball Shop in Greenwich Village and bowling at The Gutter in Brooklyn.

UPDATE 11PM, 10/30: A spokesperson for the NYC health department has now disputed the above story, which cites anonymous police officer sources, in a statement provided to CNBC. The spokesperson said: "Dr. Spencer cooperated fully with the Health Department to establish a timeline of his movements in the days following his return to New York from Guinea, providing his MetroCard, credit cards and cellphone." . . . When CNBC asked again if Spencer had at first lied to authorities or otherwise misled them about his movements in the city, Lewin replied: "Please refer to the statement I just sent. As this states, Dr. Spencer cooperated fully with the Health Department."
Kaci Hickox, the Ebola nurse who was forcibly held in an isolation tent in New Jersey for three days, says she will not obey instructions to remain at home in Maine for 21 days. "I don't plan on sticking to the guidelines," Hickox tells TODAY's Matt Lauer. "I am not going to sit around and be bullied by politicians and forced to stay in my home when I am not a risk to the American public."
Maine health officials have said they expect her to agree to be quarantined at her home for a 21-day period, the Bangor Daily News reports. But Hickox, who agreed to stay home for two days, tells TODAY she will pursue legal action if Maine forces her into continued isolation. "If the restrictions placed on me by the state of Maine are not lifted by Thursday morning, I will go to court to fight for my freedom," she says.
I know, I know. You're tired of hearing me drone on about Facebook. I'm tired of hearing me talk about Facebook, too. But this is actually a different Facebook controversy.
Reuters broke the story on Friday, citing anonymous sources:
The company is exploring creating online "support communities" that would connect Facebook users suffering from various ailments. . . . Recently, Facebook executives have come to realize that healthcare might work as a tool to increase engagement with the site. One catalyst: the unexpected success of Facebook's "organ-donor status initiative," introduced in 2012. The day that Facebook altered profile pages to allow members to specify their organ-donor status, 13,054 people registered to be organ donors online in the United States, a 21-fold increase over the daily average of 616 registrations . . . . Separately, Facebook product teams noticed that people with chronic ailments such as diabetes would search the social networking site for advice, said one former Facebook insider. In addition, the proliferation of patient networks such as PatientsLikeMe demonstrates that people are increasingly comfortable sharing symptoms and treatment experiences online. . . . Facebook may already have a few ideas to alleviate privacy concerns around its health initiatives. The company is considering rolling out its first health application quietly and under a different name, a source said.
I'm quoted in this International Business Times article about Facebook's rumored plans. What follows is the full statement I provided to the reporter (links added). (I love it when a journalist will let me email my thoughts to him or her; I can better control what I say and after they use one or two sentences, the rest becomes an instant blog post.)
It's hard to comment too much, since the details are at this point so vague. But here are some thoughts. There's nothing inherently wrong with creating free online fora centered around particular medical conditions, including very serious ones, inviting patients suffering from those conditions to share their experiences with each other, and conducting research on and even selling that data to third parties. That's exactly the model that PatientsLikeMe uses. That patient network was created by the brothers and a friend of a man with ALS, and it has proved critical for many patients suffering from some 1,500 diseases and critical for science. The Personal Genome Project, begun by Harvard geneticist George Church and now with additional university sites in Canada and the UK, is similar. [Disclosure: I'm a PGP research subject through which I have published my whole genome sequence and sensitive medical data online. I have not identified my PGP profile by name, although genomic data is inherently re-identifiable and PGP subjects were the target of a pair of re-identification attacks by privacy scholars last year. I'm also a member of the board of directors of PersonalGenomes.org, the nonprofit created to support the PGP and similar initiatives.]
Openly sharing such sensitive information online is the last thing many people would want to do, and it's not for everybody. But there are lots of reasons why many benefit from such fora. Connecting with others who share and can understand your experience can be psychologically critical, and for some ill people, such as those with limited mobility or rare conditions, an online forum may be the only feasible way of achieving this benefit. Such online fora needn't be open to researchers and others, but again, many patients prefer to openly share their experiences. Privacy can be important, but secrecy can also take a toll, especially for those with stigmatized conditions. Sharing patient data as widely as possible is the best way to accelerate research into those conditions, and many patients feel that playing even a modest role in accelerating research is empowering.
If true, it's troubling that Facebook is considering initially launching its health application under a different name, presumably in order to conceal its connection to Facebook and what many view as its checkered history of privacy practices. Individuals have the right to decide how open they want to be with their personal information, and they also have the right to decide for themselves whether Facebook can be trusted to stick to whatever terms it offers the users of its health application.
A WSJ reporter just tipped me off to this news release by Facebook regarding the changes it has made in its research practices in response to public outrage about its emotional contagion experiment, published in PNAS. I had a brief window of time in which to respond with my comments, so these are rushed and a first reaction, but for what they're worth, here's what I told her (plus links and less a couple of typos):
There’s a lot to like in this announcement. I’m delighted that, despite the backlash it received, Facebook will continue to publish at least some of their research in peer-reviewed journals and to post reprints of that research on their website, where everyone can benefit from it. It’s also encouraging that the company acknowledges the importance of user trust and that it has expressed a commitment to better communicate its research goals and results.
As for Facebook’s promise to subject future research to more extensive review by a wider and more senior group of people within the company, with an enhanced review process for research that concerns, say, minors or sensitive topics, it’s impossible to assess whether this is ethically good or bad without knowing a lot more about both the people who comprise the panel and their review process (including but not limited to Facebook's policy on when, if ever, the default requirements of informed consent may be modified or waived). It’s tempting to conclude that more review is always better. But research ethics committees (IRBs) can and do make mistakes in both directions – by approving research that should not have gone forward and by unreasonably thwarting important research. Do Facebook’s law, privacy, and policy people have any training in research ethics? Is there any sort of appeal process for Facebook’s data scientists if the panel arbitrarily rejects their proposal? These questions are just the tip of the iceberg of challenges that academic IRBs continue to face, and I fear that we are unthinkingly exporting an unhealthy system into the corporate world. Discussion is just beginning among academic scientists, corporate data scientists, and ethicists about the ethics of mass-scale digital experimentation (see, ahem, here and here). It’s theoretically possible, but unlikely, that in its new, but unclear, guidelines and review process Facebook has struck the optimal balance among the competing values and interests that this work involves.
Most alarming is Facebook’s suggestion that it retreat from experimental methods in favor of what are often second-best methods resorted to only when randomized, controlled studies are impossible. Academics, including those Facebook’s statement references in its announcement, often have to resort to non-experimental methods in studying social media because they lack access to corporate data and algorithms. “Manipulation” has a negative connotation outside of science but it is the heart of the scientific method and the best way of inferring causation. Studies have found that people perceive research to be more risky when it is described by words like “experiment” or “manipulation” rather than “study,” but it’s not always the case that randomized, controlled studies pose more risk than do observational studies. The incremental risk that a study — of whatever type — imposes on users is clearly ethically relevant, and that's what we should focus on, not this crude proxy for risk. I would rather see Facebook and other companies engage in ethical experiments than retreat from the scientific method.
It’s also unclear to me why guidelines require more extensive review if the work involves a collaboration with someone in the academic community.
Another stop on my fall Facebook/OKCupid tour: on October 10, I'll be participating on a panel (previewed in the NYT here) on "Experimentation and Ethical Practice," along with Harvard Law's Jonathan Zittrain, Google chief economist Hal Varian, my fellow PersonalGenomes.org board member and start-up investor Esther Dyson, and my friend and Maryland Law prof Leslie Meltzer Henry.
The panel will be moderated by Sinan Aral of the MIT Sloan School of Management, who is also one of the organizers of a two-day Conference on Digital Experimentation of which the panel is a part. The conference, which brings together academic researchers and data scientists from Google, Microsoft, and, yes, Facebook, may be of interest to some of our social scientist readers. (I'm told registration space is very limited, so "act soon," as they say.) From the conference website:
The ability to rapidly deploy micro-level randomized experiments at population scale is, in our view, one of the most significant innovations in modern social science. As more and more social interactions, behaviors, decisions, opinions and transactions are digitized and mediated by online platforms, we can quickly answer nuanced causal questions about the role of social behavior in population-level outcomes such as health, voting, political mobilization, consumer demand, information sharing, product rating and opinion aggregation. When appropriately theorized and rigorously applied, randomized experiments are the gold standard of causal inference and a cornerstone of effective policy. But the scale and complexity of these experiments also create scientific and statistical challenges for design and inference. The purpose of the Conference on Digital Experimentation at MIT (CODE) is to bring together leading researchers conducting and analyzing large scale randomized experiments in digitally mediated social and economic environments, in various scientific disciplines including economics, computer science and sociology, in order to lay the foundation for ongoing relationships and to build a lasting multidisciplinary research community.
I’m participating in several public events this fall pertaining to research ethics and regulation, most of them arising out of my recent work (in Wired and in Nature and elsewhere) on how to think about corporations conducting behavioral testing (in collaboration with academic researchers or not) on users and their online environments (think the recent Facebook and OKCupid experiments). These issues raise legal and ethical questions at the intersection of research, business, informational privacy, and innovation policy, and the mix of speakers in most of these events reflect that.
The first, and most accessible, is a “tweet chat” today, from 1-2 p.m. EST, on “Research Ethics in a Modern World.” It’s being sponsored by the Milken Institute’s FasterCures, “an action tank driven by a singular goal — to save lives by speeding up and improving the medical research system.” Other participants are Susannah Fox (RWJF's first Entrepreneur in Residence), John Wilbanks (a very interesting guy whose work, largely on empowering patients and research participants to share their data in ways that are consistent with their values, and bio defy concise description), and Margaret Anderson (FasterCures’s Executive Director).
Likely discussion topics include the impact of social media on research and research ethics; the ethics of A/B testing and similar behavioral testing of user environments (i.e., websites) by corporations, and who should be empowered to conduct research risk-benefit analysis (patients/participants? scientists? legal systems?). You don’t need to have a Twitter account to read the conversation, either as it unfolds in real time or at any time thereafter. Just point your web browser to this URL. But to ask a question or otherwise participate in the discussion (which is encouraged!), you do need a Twitter account.
On October 22, I’ll be at Harvard Law participating in a panel discussion of the Petrie-Flom Center’s latest anthology, “Human Subjects Research Regulation: Perspectives on the Future,” with Glenn Cohen, Holly Fernandez Lynch, and Barbara Bierer.
On December 4, I’ll be at Colorado Law participating in a day-long conference, “When Companies Study Their Customers: The Changing Face of Science, Research, and Ethics,” sponsored by the Silicon Flatirons Center for Law, Technology, and Entrepreneurship and the Tech Policy Lab at the University of Washington. Other participants include law profs Paul Ohm, Ryan Calo, and James Grimmelman, Princeton Center for Internet Technology Policy Director Edward Felten, FTC Commissioner Julie Brill, and UT-Austin psychologist Tal Yarkoni.
On December 5, I’ll be in Baltimore participating in a Meet the Authors lunch event at the annual PRIM&R conference with the editors and several other contributors to the aforementioned book.
And on December 6, I’ll be participating in a PRIM&R session on “Manipulating Emotions on the Internet: The Cases of Facebook, OkCupid,” with OHRP Director Jerry Menikoff, privacy and Internet research ethics scholar Michael Zimmer, and Berkeley Director of Research Subject Protection Rebecca Armstrong.
On April 13, I’ll be giving a seminar at Wharton in the Legal Studies and Business Ethics Speaker Series, in which I may present something related to these themes.
Many thanks to the organizers for including me in these very interesting events (interesting least of all due to my inclusion, needless to say), and I hope to see some readers at some of them.
I have a long article in Slate (with Chris Chabris) on the importance of replicating science. We use a recent (and especially bitter) dispute over the failure to replicate a social psychology experiment as an occasion for discussing several things of much broader import, including:
By now, most of you have probably heard—perhaps via your Facebook feed itself—that for one week in January of 2012, Facebook altered the algorithms it uses to determine which status updates appeared in the News Feed of 689,003 randomly selected users (about 1 of every 2,500 Facebook users). The results of this study—conducted by Adam Kramer of Facebook, Jamie Guillory of the University of California, San Francisco, and Jeffrey Hancock of Cornell—were just published in the Proceedings of the National Academy of Sciences (PNAS).
Although some have defended the study, most have criticized it as unethical, primarily because the closest that these 689,003 users came to giving voluntary, informed consent to participate was when they—and the rest of us—created a Facebook account and thereby agreed to Facebook’s Data Use Policy, which in its current iteration warns users that Facebook “may use the information we receive about you . . . for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”
Some of the discussion has reflected quite a bit of misunderstanding about the applicability of federal research regulations and IRB review to various kinds of actors, about when informed consent is and isn’t required under those regulations, and about what the study itself entailed. In this post, after going over the details of the study, I explain (more or less in order):
Much of my scholarship addresses cyberlaw. In order to keep up in this field, I have to follow the developments in two fields: law (obviously) and computer science. For this second area, it helps that I am trained in computer science and spent several decades programming for a variety of businesses. As part of the process of keeping up with computer technology, I maintain my membership with the Association for Computing Machinery, the primary academic society for computer science.
In this month’s Communications of the ACM, there was an interesting commentary entitled, “Technology Confounds the Courts.” It was written by Keith Kirkpatrick, apparently a non-lawyer. Comm. of the ACM, May, 2014, at 27. In the article, Mr. Kirkpatrick attempts to identify the reasons why our courts often do a poor job of understanding the computer technology that is involved in many modern cases. I found this commentary interesting as it examined a commonly identified problem within cyberlaw from the perspective of a technologist. Two of his points — the age of judges and the narrowness of decision-making — miss the mark. His underlying point — that judges need to understand technology — is sound although achieving the goal may be more difficult than he realizes.
Several times the author raises the average age of judges as a cause of their technological ignorance. See id. at 27 & 28. He also raises a somewhat related issue: the fact that our federal judges are appointed for life. See id. at 29. On the claim that these factors are a significant source of technological ignorance in the judiciary, I assert that Mr. Kirkpatrick is just plain wrong. A lot of us grey hairs have comprehensive knowledge of current technology and how it is used. I could choose many examples, but I will highlight one of the professors I have always held in high regard, Frederick Brooks of the UNC Computer Science Department. According to his biography, Dr. Brooks is 83 years old. He has been a very important leader in the computer science field since the 1950s, when he helped develop the most famous line of mainframe computers ever, the IBM 360 series. He founded the computer science department at UNC, where he is actively involved in researching virtual reality, hardly a backwater area of computer science. Similarly, most judges of my experience are not monk-like. They read the paper (probably online) and even Reddit. They surf the Web. Their exposure to technology is similar to that of others with busy lives. Mr. Kirkpatrick: age is irrelevant.
The author also complains about the narrowness of many technology-related decisions. See id. at 28–29. Here, he needs to better understand our legal system. Because we live in a common law country, a fundamental aspect of the system is that the courts render as narrow a decision as the facts allow. Rather than being a fault, seeking the narrowest grounds for decision has kept the common law functioning for centuries.
The Hit (Well, Mostly)
Judicial ignorance of technology may not be complete, but its existence is impossible to deny. See, e.g., St. Clair v. Johnny's Oyster & Shrimp, Inc., 76 F. Supp. 2d 773 (S.D. Tex. 1999). The harder question is deciding how this can be changed. Mr. Kirkpatrick advocates for a more specialized court using both the EU’s Court of Justice and the Japanese Intellectual Property High Court as examples. Kirkpatrick, Technology Confounds at 29. For the EU court, “the more complex selection and appointment process involving a disparate group of EU members ... are more likely [to result in judges who are] current on a greater variety of technologies.” Id. For the Japanese court, he suggests that its use of full-time technical advisors will result in more competence. See id. Of course, he did not need to go overseas for examples as the U.S. Federal Circuit with its patent expertise would also seem to qualify (although far from all of the judges appointed to the Circuit have technological backgrounds).
Personally, I’m not sure how well a specialized, technology court would work. In the U.S., at least, we would still run into the “one supreme Court” language in Article III of the Constitution. Even if a technology court decided a case, it would be subject to appeal to the Supreme Court; indeed, this is the pattern with the Federal Circuit and the Supreme Court. In my area of cyberlaw, for example, the Federal Circuit often has a better understanding of the technology even though the Circuit might forget the broader purposes of the patent act. Ultimately, though, it is the Supreme Court’s often mistaken understanding of the technology that rules the day.
More importantly, it is not clear that a technology court is practical. To start with, which technology? As Mr. Kirkpatrick’s article correctly points out, our court system has done a horrible job articulating a functional system of laws for computer software. Part of the reason for this is that even our more techno-centric court lacks any members with the relevant computer science training. The Federal Circuit has numerous judges trained in chemistry and other traditional scientific areas — as well as some trained in history and other liberal arts — but it does not have computer scientists. This is problematic as it has become impractical to be the technological Jack-of-all-trades that it was possible to be through the late 1800s or early 1900s.
What that leaves is a suggestion in the article that our judges no longer pretend that they can understand all forms of technology without assistance. So, if a court recognizes that it needs help, where does it turn? One source that Mr. Kirkpatrick does not discuss is the lawyers who are representing the parties. There are numerous examples of cases in which the parties prepared a joint technology statement to help the court understand the issues. Of course, this only works where the parties agree about the technology — something that is less true in intellectual property litigation, where defining the technology “your way” is often equivalent to winning. Further, it assumes that the legal team is sophisticated enough about the technology to be able to competently articulate it.
The other possibility, of course, as Mr. Kirkpatrick suggests, is to encourage the judges to recognize their technological shortcomings and to appoint masters under Rule 53 to help the court determine the technologically based facts. Unfortunately, the limited nature of the rule and the requirements of the Constitution may interfere. Rule 53 only allows masters if both parties consent, Fed. R. Civ. P. 53(a)(1)(A), or if the case is a non-jury case and is “exceptional,” id. 53(a)(1)(B). The Constitution imposes two limitations on the use of masters: each party's right to have the case ultimately decided by a judge appointed under Article III, and the right in many cases to demand a jury trial.
Even with the constitutional limitations, however, it would seem to be time to revisit the use of masters in technology cases within the federal system. For non-jury cases, the rule could easily be amended to make clear that complicated technology underlying a case satisfies the “exceptional condition” that the rule requires. Id. Obviously, the rule has to recognize the jury trial right provided by the Seventh Amendment. But it would seem that a master’s report on the technology could be submitted to the jury to assist it in its decision-making in the same way that the report would be submitted to the judge in a non-jury matter. In both cases, the Article III or Seventh Amendment decision-maker would be preserved while being provided with technological expertise from a neutral source.
You might think that the answer to this question is obvious. Obviously, it's your business, and yours alone, right? I mean, sure, maybe it would be considerate to discuss the potential ramifications of this activity with your partner. And you might want to consider the welfare of the bee. But other than that, whose business could it possibly be?
Well, as academic empiricists know, what others can do freely, they often require permission to do. Journalists, for instance, can ask children potentially traumatizing questions without having to ask whether the risk to these children of interviewing them is justified by the expected knowledge to be gained; academics, by contrast, have to get permission from their institution's IRB first (and often that permission never comes).
So, too, with potentially traumatizing yourself — at least if you're an academic who’s trying to induce a bee to sting your penis in order to produce generalizable knowledge, rather than for some, um, other purpose.
Earlier today, science writer Ed Yong reported a fascinating self-experiment conducted by Michael Smith, a Cornell graduate student in the Department of Neurobiology and Behavior who studies the behavior and evolution of honeybees. As Ed explains, when, while doing his other research, a honeybee flew up Smith's shorts and stung his testicles, Smith was surprised to find that it didn't hurt as much as he expected. He began to wonder which body parts would really smart if they were stung by a bee and was again surprised to learn that this was a gap in the literature. So he decided to conduct an experiment on himself. (In addition to writing about the science of bee stings to the human penis, Ed is also your go-to guy for bat fellatio and cunnilingus, the spiky penises of beetles and spiders, and coral orgies.)
As Ed notes, Smith explains in his recently published paper reporting the results of his experiment, Honey bee sting pain index by body location, that
Cornell University’s Human Research Protection Program does not have a policy regarding researcher self-experimentation, so this research was not subject to review from their offices. The methods do not conflict with the Helsinki Declaration of 1975, revised in 1983. The author was the only person stung, was aware of all associated risks therein, gave his consent, and is aware that these results will be made public.
As Ed says, Smith's paper is "deadpan gold." But on this point, it's also wrong.
Last week, Kim Krawiec organized "Taxing Eggs," a mini on-line symposium here at the Lounge on the tax consequences of the compensated transfer of human eggs. Lisa Milot (Georgia), Larry Zelenak (Duke), Paul Stephan (Virginia) and I weighed in with different perspectives on the tax consequences of the transfers at issue in Perez v. Commissioner.
In that context, my attention was drawn to this story out of Sweden about four women who have received uterine transplants...and who are attempting to carry pregnancies to term:
A Swedish doctor says four women who received transplanted wombs have had embryos transferred into them in an attempt to get pregnant.
He would not say on Monday whether any of the women had succeeded. In all, nine women in Sweden have received new wombs since 2012, but two had to have them removed because of complications.
The women received wombs donated by their mothers or other close relatives in an experimental procedure designed to test whether it's possible to transfer a uterus so a woman can give birth to her own biological child. The women had in vitro fertilization before the transplants, using their own eggs to make embryos.
Read the full story here.
Just in time for the "holiday," Twitter brings us #AcademicValentines. Not surprisingly, many of these tidings of love and joy center on tenure:
But for you academic commitophobes out there, you can always hedge your bets:
Ouch. Of course, many entries address scholarship strategy...
...and methodology (again, offered in the more and less committed varieties)...
...and the beloved peer review process:
Sure, but is she committed enough to convert to Bluebook? That's dedication. And speaking of Bluebook, and the legal academy's penchant for insisting that every factual claim, no matter how obviously true or otherwise universally accepted, be cited:
I'll end with another one of my favorites:
UPDATE: Al is on the case:
This post is part of The Bioethics Program’s ongoing Online Symposium on the Munoz and McMath cases, which I've organized, and is cross-posted from the symposium. To see all symposium contributions, in reverse chronological order, click here.
Had the hospital not relented and removed the ventilator from Marlise Munoz's body, could the Munoz fetus have been brought to term, or at least to viability? And if so, would the resulting child have experienced any temporary or permanent adverse health outcomes? Despite some overly confident commentary on both "sides" of this case suggesting a clear answer one way or the other—i.e., that there was no point in retaining the ventilator because the fetus could never be viable or was doomed to be born with catastrophic abnormalities; or, on the other hand, that but for the removal of the ventilator, the "unborn baby" was clearly on track to being born healthy—the truth is that we simply don't know.
Before getting into the limited available data about fetal outcomes in these relatively rare cases, a bit of brush clearing. The New York Times juxtaposed reports about possible abnormalities in the Munoz fetus with the hospital's stipulation about the fetus's non-viability in ways that are likely to confuse, rather than clarify:
Lawyers for Ms. Muñoz’s husband, Erick Muñoz, said they were provided with medical records that showed the fetus was “distinctly abnormal” and suffered from hydrocephalus — an accumulation of fluid in the cavities of the brain — as well as a possible heart problem.
The hospital acknowledged in court documents that the fetus was not viable.
Whether intentionally or not, the nation's newspaper of record implies — wrongly, I think — that the hospital conceded that the fetus would never be viable because of these reported abnormalities. In court, the hospital and Erick Munoz stipulated to a series of facts, including that Marlise was then 22 weeks pregnant and that "[a]t the time of this hearing, the fetus gestating inside Mrs. Munoz is not viable" (emphasis added). The hospital conceded nothing at all about any fetal abnormalities. In short, the Times, and many other commentators, have conflated "non-viability" as a function of gestational age with "non-viability" as a way of characterizing disabilities that are incompatible with life. As I read this stipulation, the hospital was not at all conceding that the fetus would never have been viable, had the ventilator remained in place. Rather, given the constitutional relevance of fetal viability, the hospital was merely conceding the banal scientific fact that the Munoz fetus was, at 22 weeks, not currently viable. There is nothing surprising in the least about the hospital's "concession" about "viability" in the first sense, above: 22-week fetuses are generally not considered viable.
From the truth is (a lot) stranger than fiction files comes this disturbing story, which interweaves—in ways that would be deemed implausible, if they appeared in a fiction manuscript—several of the topics I've written about here before: legal academia, human subjects research (sort of), reproductive technologies, direct-to-consumer (DTC) genetic testing, and preference heterogeneity.
Recently, a family—wife, husband, 21-year-old daughter—with an interest in genetic genealogy decided to avail themselves of 23andMe's DTC services. They received the results and were surprised to learn that the daughter is the biological child of the wife, but not the husband (and confirmed these results through clinical testing). So far, not so fantastic a story. Rates of non-paternity in the general population are traditionally said to be about 10%, although recent studies have suggested much lower rates. And the fact that the family discovered non-paternity through DTC genetic testing? Welcome to 2013.
The couple, it turns out, had had difficulty conceiving, and in 1991 had sought the help of Reproductive Medical Technologies, a fertility clinic associated with the University of Utah. Several times, clinicians there inseminated the wife with her husband's sperm. Alas, no pregnancies resulted. They decided to give artificial insemination one final try and—success. Some twenty-one years later, they reflected on their newfound knowledge of the husband's nonpaternity and figured that there must have been a mix-up in the clinic. They imagined the life now perhaps being lived by another 21-year-old, created from the husband's sperm and another artificially inseminated client. Unfortunate though they are, accidental mix-ups in fertility clinics are known to happen.
In this case, the family took its nonpaternity results beautifully in stride; the daughter knows that the man who raised her is her "real" dad, and he knows that she is his "real" daughter. Indeed, the family decided to go further and seek out their daughter's biological father—and perhaps the husband's biological daughter. To do so, they used the other two major DTC genetic genealogy companies, Family Tree DNA and AncestryDNA, to find close paternal relatives of the daughter. Searching for biological relatives through DTC genetic genealogy is increasingly common. Here's a great story about one adoptee's search, for instance, and only yesterday, I agreed to share my 23andMe profile with an adoptee looking for biological relatives. We're not quite at truth-stranger-than-fiction status yet.
The AncestryDNA testing yielded a predicted second cousin for the daughter, and the family made contact. The second cousin was at a loss to explain their genetic connection, except to note that her first cousin, an only child now deceased, had lived in Salt Lake City at the time and told the family that he'd been a sperm donor. When she shared his name—Thomas Ray Lippert—and an older picture of him, the husband and wife recognized him as Tom, who had worked at the front desk of the fertility clinic as well as in the back, as a technician. The wife
remembered [Tom] proudly displaying dozens of photos of babies behind his desk, boasting that he had helped all of their parents conceive. Looking at all of those beautiful babies and Tom’s confidence gave [the wife] hope that she and [the husband] could have the baby that they so desperately wanted as well. She never could have imagined how far Tom apparently would go to “help” couples conceive. [The husband] too remembered him and recalled thinking that Tom was a bit odd when he handed him the sample receptacle and the magazine.
Admittedly, discovering that someone in the fertility clinic substituted his sperm for the husband-client's is slightly more fantastical, but hardly unheard of in the real world. Tom's mother, still living, consented to genetic testing, which confirmed that Tom was indeed the daughter's biological father.
What happens next, however, reads like the kind of fantastical plot elements that would get a fiction manuscript tossed.
I wrote previously about a pending lawsuit to be filed by the ACLU on behalf of 54-year-old Jane Doe, a U.S. citizen who alleges that she was subjected to six hours of increasingly invasive cavity searches by U.S. Customs and Border Protection (CBP) agents and clinicians at the University Medical Center of El Paso as she attempted to return to the U.S. from Mexico via the Cordova Bridge in El Paso, Texas. That lawsuit has now been filed.
The facts alleged in the complaint, which are more or less those previously alleged in the media by Doe through her lawyer, are horrific. Here's the gist (Doe consented to none of the following searches, and none turned up any evidence of contraband):
Having found absolutely nothing incriminating, agents then gave her a choice: Retroactively sign a medical "consent" form and CBP will pick up the medical bill, or refuse to sign and be billed. Doe refused to sign and was later billed over $5,000 for her "treatment." She has not paid.
If true, this fact pattern suggests either that those involved are sadists or—far more likely—that they, like those who searched David Eckert and Timothy Young, are grossly overconfident in the evidentiary signal provided by a K-9 alert.
Under Florida v. Harris,
If a bona fide organization has certified a dog after testing his reliability in a controlled setting, a court can presume (subject to any conflicting evidence offered) that the dog’s alert provides probable cause to search. The same is true, even in the absence of formal certification, if the dog has recently and successfully completed a training program that evaluated his proficiency in locating drugs.
I have no earthly idea what a "bona fide" organization means, but my understanding is that there is wide variation in the standards employed by various training and certification organizations, some of which will "pass" a K-9 with a very high false positive rate. Among other things, K-9 responses have been shown to be significantly influenced by their handlers' beliefs about the presence of contraband. Or, on second thought, maybe not.
But even if we assume a very reliable K-9, and whatever we deem the predictive value of a positive test (i.e., an alert), it will be less than 100%. We should update our confidence in the presence of contraband when subsequent human searches yield nothing. And at some point in a series of false human searches following an initial K-9 alert, our updated belief should be considered insufficient to justify continued searches.
Agents and clinicians in this case — who had no reason to suspect Doe other than the K-9 alert combined with the base rate of drug smugglers among those who enter the U.S. at El Paso — behaved as if the reliability of K-9 alerts were 100%, or something very closely approaching it. Since these are the same principles that govern when it is appropriate to offer, and how to interpret the results of, medical screening tests, it's more than a little ironic that so many doctors seem willing to go to the mat for these K-9s and their handlers.
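The updating logic described above can be sketched with Bayes' rule. Every number below is hypothetical and chosen purely for illustration: we know neither the actual base rate of smugglers at this crossing nor this particular K-9's true error rates.

```python
def posterior_given_alert(prior, sensitivity, false_positive_rate):
    """P(contraband | K-9 alert), via Bayes' rule."""
    p_alert = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_alert

# Hypothetical: 1% of crossers carry contraband; the dog alerts on 90%
# of carriers but also on 10% of non-carriers.
prior = 0.01
p_after_alert = posterior_given_alert(prior, sensitivity=0.90,
                                      false_positive_rate=0.10)

# Each fruitless human search is (imperfect) evidence of absence.
# Hypothetical: a search finds hidden contraband 70% of the time
# it is actually present, and never "finds" contraband that isn't there.
search_sensitivity = 0.70
p = p_after_alert
for _ in range(3):  # three successive negative searches
    p_negative = (1 - search_sensitivity) * p + 1.0 * (1 - p)
    p = (1 - search_sensitivity) * p / p_negative

print(f"after K-9 alert: {p_after_alert:.3f}")            # ~0.083
print(f"after three fruitless searches: {p:.3f}")         # ~0.002
```

Even under these dog-friendly assumptions, a single alert leaves only about an 8% chance of contraband, and three empty searches drive that figure below the pre-alert base rate. That is the point: the initial alert cannot, by itself, justify an escalating series of invasive searches.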
Like many of you, I imagine, I'm always looking for brief clips that I can show my students to illustrate a point with a bit of humor. When I teach advance directives (and the difficulties in and drawbacks of drafting them), for instance, I show this clip from "The Comeback" episode of Seinfeld season 8. (The only downside is that each time I show it, fewer students have ever heard of Kramer and the gang.)
Well, those of you who teach the Lilly Ledbetter Act or related matters may wish to check out this short snippet from a 2012 TED talk (but going semi-viral now, thanks to Upworthy) by primatologist and Emory professor Frans de Waal. I myself plan to use it the next time I teach or talk about bounded self-interest (and in particular, preferences for fairness/inequality aversion). Even if your work has nothing to do with employment discrimination or behavioral economics, these two minutes are worth watching. Because cute Capuchin monkeys.
In the Frankfurtian sense of bullshit, that is. Or so it appears. Malcolm Gladwell is often hailed as bringing the insights of social science to the masses. But most social scientists (and many others) have long known that Gladwell plays fast and loose with that science. He cherry-picks data (often discussing studies with splashy findings while failing to acknowledge other—often larger, more recent—studies that failed to replicate the beguiling counter-intuitive finding). He fails to acknowledge the limitations of the studies with which he captivates his audience (such as their embarrassingly small sample sizes). And so on.
But failing to do the science justice does not, all by itself, make Gladwell a bullshitter. He might earnestly believe in the truth of what he says—say, in Outliers, about becoming an “expert” in any competition by spending 10,000 hours practicing the relevant skill—but simply lack the competence to carefully assess and communicate the literature. Or he might just as earnestly believe that the 10,000-hour rule he popularized is false, and aim to mislead his readers (for whatever reason).
For Frankfurt, neither scenario implicates bullshit. Instead, for him, the “essence of bullshit” is a “lack of connection to a concern with truth,” an “indifference to how things really are.” Whether someone is a bullshitter or not, then, depends not on any correspondence of their statements to objective truths but, rather, on the speaker’s state of mind. In several remarkable recent interviews—analyzed in this Slate article*—Gladwell talks about his writing (and live “performances”) in ways that suggest that he is primarily concerned with telling a good (read: captivating) story, and not especially concerned with the scientific truth of those stories. In other words, Gladwell is a bullshitter.
The only question, now, is whether his audience knows they’re being bullshitted.
* Disclosure: the Slate article is written by my husband. I wouldn’t link to his work, however, if I didn’t share his deep loathing for bullshit, whether in popular writing or (ahem) academia. Nor would I do so if I didn’t think that the Gladwell problem is related to (if distinct from) similar questions closer to home, such as when an academic may pursue advocacy that selects data and arguments in the service of a preselected conclusion and when, instead, an academic’s writing should come with an “implied warranty of scholarly integrity.”
Legal academics who work across disciplines sometimes find themselves in the uncomfortable position of explaining to their stunned colleagues the process by which second- and third-year law students, armed with author c.v.s, decide what gets published and where.
Well, get ready to get your schadenfreude on. For the past 10 months, John Bohannon, a contributing correspondent for Science magazine, has been conducting a sting of (other) science journals and their peer review processes. Much like the famed Sokal hoax, the sting involved submitting to 304 journals a bogus paper written by a fictitious researcher from a nonexistent institution. The paper described "the anticancer properties of a chemical that [the fictitious researcher] had extracted from a lichen," and according to Bohannon, "[a]ny reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately" and rejected it promptly. And yet, over half of the journals accepted the paper. Recall that the bogus paper purports to report on the discovery of the anticancer properties of lichen. Let the prospect of bogus cancer research published in peer reviewed medical journals sink in.
In late May, I wrote the following:
Yesterday, the Social Science Genetic Association Consortium, an international consortium that pools and conducts social science research on existing genome-wide association study (GWAS) data, and on whose Advisory Board I sit, published (online ahead of print) the results of its first study in Science. That paper — "GWAS of 126,559 Individuals Identifies Genetic Variants Associated with Educational Attainment" — like much human genetics research, has the potential to be misinterpreted in the lay, policy, and even science worlds. That's why, in addition to taking care to accurately describe the results in the paper itself (including announcing the small effect sizes of the replicated SNPs in the abstract), being willing to talk to the media (many scientists are not), and engaging in increasingly important "post-publication peer review" conversations on Twitter (yes, really) and elsewhere, we put together this FAQ of what the study does — and, just as important, does not — show. So far, our efforts have been rewarded with responsible journalism that helps keep the study's limits in the foreground.
I had no role in the GWAS itself; that credit goes to SSGAC’s extraordinarily meticulous scientists. I did, however, have a strong hand in the FAQs. And so I am really pleased that in a new editorial, the editors of Nature (not for nothing, Science’s main competitor) highlighted our FAQ as an example of best practices in behavioral genetics research and science communication. They write:
For clarity, scientists would do well to follow the example of the Social Science Genetic Association Consortium. In June, this group published a paper on genetic variants associated with educational attainment (C. A. Rietveld et al. Science 340, 1467–1471; 2013). Accompanying this was a nine-page Frequently Asked Questions document that, in plain, easy-to-understand language, addressed such questions as why the researchers did the study, what they found and what the implications of the work are — and are not (see go.nature.com/7mov2j). The document spelled out that the consortium had not found ‘the gene’ for educational attainment, that each genetic marker found has only a very small effect on length of schooling, and that any policy response based on that single study would be premature.
Scientists cannot be held responsible every time someone misinterprets their work. But simple steps such as these could help to prevent and address some of the potential distortions of behavioural genetics — and could help to ensure that society continues to support the work.
For more on taboo science—including IQ, race, violence, and sexuality—see Erika Check Hayden’s accompanying article, which discusses our Science GWAS in the IQ category and (elsewhere in the article) quotes Duke lawprof (and new Conspirator!) Nita Farahany. Now if we could only get popularizers of science to understand that their lay audience will rarely know that they are "oversimplifying" that science.
[Cross-posted at Bill of Health]