


September 29, 2013



alta charo

Cute, but not brilliant. First, an ethical randomized trial requires clinical equipoise, i.e., genuine uncertainty as to which arm of the study is best for the subjects. To the extent that supporters and opponents are sure of their opinions, they certainly would not see equipoise and therefore could not ethically agree to randomization. Perhaps the original proposer of this social experiment is one of the few who truly don't think they know the effect of the law.

Second, in addition to the oft-noted problem of randomizing among non-standardized entities (i.e., the California-Wyoming problem), there is the problem of blinding. If people know which state they are in, and can observe effects in other states while the experiment continues, then they can change their behaviors in reaction, completely undermining the original design.

And finally, this whole suggestion is premised on the belief that carefully constructed evidence will sway opinion and action on this topic. To date that has not been true: large sections of the very population the law will help -- the non- or under-insured -- believe that it will hurt them because of what they hear from some radio, TV, and internet sources. Only time, personal experience, and neighbors' anecdotes will change attitudes.

Thomas NZ

Or why not just compare health care in the US to the rest of the developed world...

Michelle Meyer

Hi Alta. You have a point about the lack of blinding, but plenty of RCTs are conducted and yield valuable fruit without double (or even single) blinding. Lack of blinding makes an RCT imperfect, but far from useless -- and usually far better than retrospective observational studies, which is what we're stuck with after we implement a policy on a national scale all at once (see, e.g., the inconclusive observational studies of menu calorie counts I discuss in my LA Times op-ed and accompanying blog post here). Declining to conduct an RCT because we can't blind subjects and/or investigators strikes me as making the perfect the enemy of the good.

As for equipoise, as you know, Charles Fried thought (actually, still thinks) that the only potentially morally relevant version of equipoise is the one that takes place at the level of the individual clinician: if my doc is in equipoise between intervention A and B, then (perhaps) it's ethical to flip a coin and randomize me to one of those options. Ben Freedman, noting that such a requirement would make clinical research nearly impossible, proposed a different version -- clinical equipoise -- which has come to dominate research ethics. Clinical equipoise finds morally meaningful equipoise at the level of the relevant professional community, e.g., oncologists in the aggregate being in equipoise between mastectomy and lumpectomy with radiation makes it ethical to run a trial in which patient-subjects consent to be randomized between the two options. So, if conventional research ethics is to be our guide, then I don't think the fact that individual citizens, lawmakers, employers, insurers, health care providers and other stakeholders have strong feelings about the likely effects of Obamacare one way or another is especially relevant.

What's relevant is equipoise in the aggregate, and that arguably exists here. I voted for Obama three times and I don't feel that Obamacare portends the end of civilization as we know it. But I don't think we know what Obamacare's effects will be. How will employers respond? Will sufficient numbers of individuals respond to their mandate by enrolling? Who will the winners and losers of Obamacare be and by how much will they win and lose relative to alternative mechanisms for healthcare access and delivery? And even: how important is health care *insurance* to health *care* and to what we really care about -- *health*? We're still trying to answer the latter question in the context of Medicaid coverage (see the important work of Katherine Baicker et al. in Oregon).

Yes, most of us have strong intuitions about the answers to some or all of these questions. But even the strongest, most plausible intuitions about the effects of welfare programs (and much else) can be brought up short by a good RCT (see, e.g., the results, published in the YLJ, of Jim Greiner's RCT of the effects of an offer of legal aid representation from HLS students, and some of the Oregon Medicaid experiment results). Obamacare, like most complex policies being newly implemented, is itself an experiment. The idea that we might conduct that experiment rigorously, and thereby gain valuable knowledge that will allow us to improve the quality of our healthcare systems, is an important one that is at the heart of learning healthcare systems, comparative effectiveness research, and so on.

On your final point, I'm not sure that that blogger's premise is necessarily that carefully constructed empirical evidence will sway public opinion on a large scale. More to the point, regardless of what's inside his head, that needn't be our hypothesis. One hopes that policy makers would pay attention to good data and have the intellectual integrity to change their position if the data warrant it. Yes, clearly some in each category (policy makers and public) will always be hell-bent on policy-based evidence (and of course much of the healthcare wars concern disputes about values rather than facts), but I'm not so cynical as to capitulate just yet to the inevitability of policy-by-anecdata of the sort you mention.

@Thomas: although this is a favorite game of those on both sides of the Obamacare wars, and although such comparisons certainly aren't meaningless, comparing U.S. health care systems to those of other countries, along with the respective health and other outcomes of their citizens, and then attempting to infer causation is the California-Wyoming problem Alta refers to (i.e., the problem of confounding variables) -- on steroids.
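The California-Wyoming problem can be made concrete with a small simulation. The sketch below is purely illustrative and assumes hypothetical numbers (a "policy effect" of 2 and a confounder, baseline health, with an effect of 5): when healthier populations are also the ones that adopt the policy, a naive cross-group comparison attributes the confounder's effect to the policy, while randomized assignment recovers something close to the true effect.

```python
import random

random.seed(0)

# Hypothetical, illustrative parameters (not real estimates).
TRUE_POLICY_EFFECT = 2.0   # assumed effect of the policy on the outcome
CONFOUNDER_EFFECT = 5.0    # assumed effect of baseline population health

def outcome(treated, baseline_health):
    """Outcome for one person: policy effect + confounder + noise."""
    noise = random.gauss(0, 1)
    return TRUE_POLICY_EFFECT * treated + CONFOUNDER_EFFECT * baseline_health + noise

N = 100_000

# Observational comparison: suppose healthier places happen to adopt
# the policy, so treatment status is entangled with baseline health.
obs_treated = [outcome(1, 1.0) for _ in range(N)]   # adopting states, healthier
obs_control = [outcome(0, 0.0) for _ in range(N)]   # other states, less healthy
obs_estimate = sum(obs_treated) / N - sum(obs_control) / N   # biased upward

# Randomized comparison: baseline health still varies from person to
# person, but assignment is independent of it, so it averages out.
rct_treated, rct_control = [], []
for _ in range(N):
    baseline = random.gauss(0.5, 0.5)
    if random.random() < 0.5:
        rct_treated.append(outcome(1, baseline))
    else:
        rct_control.append(outcome(0, baseline))
rct_estimate = (sum(rct_treated) / len(rct_treated)
                - sum(rct_control) / len(rct_control))

print(f"observational estimate: {obs_estimate:.2f} (true effect {TRUE_POLICY_EFFECT})")
print(f"randomized estimate:    {rct_estimate:.2f} (true effect {TRUE_POLICY_EFFECT})")
```

The observational estimate lands near the policy effect plus the confounder's effect, while the randomized estimate lands near the true effect alone; cross-country comparisons pile on many such confounders at once.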


