
December 14, 2015



Doug Richmond

As someone who has tried a lot of jury cases, this strikes me as horribly misguided.


I'd be extremely worried about it - it is a sort of half-witted poker-playing that does not take account of why people vary.

I don't play poker, but I have spoken with some pretty successful players. They all say the same thing - looking for tells that someone is lying takes place over the course of several games and requires memory, because every player's "tells" are different and specific to that player. A good professional poker player learns to wind back the game to remember each player's tells when bluffing (which is essentially lying) etc.

One problem I have had in the past with Asian witnesses, particularly Japanese, is that they tend to speak with a slightly higher-pitched voice and frequently will raise a hand to cover their mouth when talking. However, in western countries a higher pitch when speaking is considered an indication of lying (I call it the reverse Walter Cronkite effect), while facial touching and covering the mouth, which is polite in many Asian cultures, is considered a sure-fire indication of lying in western ones. I remember as a junior associate a very experienced partner teaching me the get-the-witness-to-the-bathroom rule before he/she is in the box - he explained that as a Justice Department lawyer he had had a bad experience with a "shifty witness." When he asked the witness after his testimony wtf had been going on, it turned out that the witness had wanted to go to the bathroom, but was too embarrassed to ask.

Vocal fill - good grief, the work I have had to do to teach people not to uhh, umm and like - it's a speech trait that can be taught out of people. Some people do it a lot, others don't. I had a conflict of laws professor who ummmed and uhhed so appallingly (and then restarted the sentence) that a class with him would drive you crazy - a fifteen-word sentence could take five minutes to deliver. He was a nice guy, and he was hardly lying about conflict of laws, but sheeesh.

Gesturing with both hands - what about the French, Italians and Spanish? They are incapable of not gesticulating with both. Did these researchers visit any of the ethnic communities of Brooklyn and Queens?

I would be exceptionally worried about this stuff being taken seriously - it isn't, is it?

Michelle Meyer

Oy. I couldn't figure out from the PR what they actually did, so I tracked down the paper, which was published in the conference proceedings in which this study was presented (it's unclear whether it was peer reviewed). In a separate post which Steve will probably have to fish out of Typepad spam purgatory, I'll provide the URL. Here's the key bit from the methods section:

"We considered three different trial outcomes that helped us to correctly label a certain trial video clip as deceptive or truthful: guilty verdict, non-guilty verdict, and exoneration. Thus, for guilty verdicts, deceptive clips are collected from a defendant in a trial and truthful videos are collected from witnesses in the same trial. In some cases, deceptive videos are collected from a suspect denying a crime he committed and truthful clips are taken from the same suspect when answering questions concerning some facts that were verified by the police as truthful. For the witnesses, testimonies that were verified by police investigations are labeled as truthful, whereas testimonies in favor of a guilty suspect are labeled as deceptive. Exoneration testimonies are collected as truthful statements."

As to whether bad science with potentially serious practical uses is taken seriously, I'm afraid it is. (That includes badly designed and/or unreplicated small-N, implausible studies cited in legal academics' work and sometimes by courts. Very troubling. As long as we're talking about law school reform, perhaps we should teach future lawyers and judges how to skeptically consume scientific literature.)

I note that no lawyers were involved in the making of this research. Even this explanation is a little short on details, but based on it, to say that their data likely contains a lot of noise is an understatement.

Michelle Meyer

Link to paper:

Deborah Merritt

I also looked at the underlying paper, because I have followed other research in this area and UM's summary sounded fishy. As Michelle's quote suggests, the research was a bit more nuanced than simply relying upon the jury verdict. But, as someone currently serving as a special prosecutor, I feel comfortable saying that police investigations can be equally unreliable (even on seemingly small matters).

Another concern with this study is that the researchers focused on publicly available trial clips--and seem proud of the fact that they included clips from high-profile trials. But witnesses in high-profile trials (especially ones where the cameras are going) may testify differently than witnesses in other cases. I wouldn't place much faith in this particular study.

All of that said, this is a very serious area of research that is turning up important findings. A key point about the computer programs being developed at centers like this is that they learn--much as poker players do. Good researchers feed computers videos that have been validated (in more reliable ways) as true or false, then ask computers to spot differences. The computer programs can both identify tells that humans might miss (because of their ability to compare vast amounts of data) and continue learning from their mistakes if the researchers give them feedback. Good researchers have also paid attention to cross-cultural differences.

There are two very intriguing things about this research: (1) As noted above, computer learning algorithms can unearth all sorts of useful information that complements human learning. We notice some things, but computers pick up others. (2) We know that humans are pretty poor at distinguishing truth from lies. That's a systemic problem in police investigations, security systems, and trials. The challenge is to figure out how to educate humans based on what the computers tell us and/or how to merge the two types of expertise for the best results.
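Deborah's description of how these systems learn - feed the computer clips validated as true or false, have it spot the differences, and correct it when it errs - can be sketched as a toy supervised classifier. Everything below is hypothetical: the feature names and numbers are invented for illustration, and this is a minimal nearest-centroid sketch in Python, not the UM team's actual method.

```python
# Hypothetical sketch of supervised "tell" learning. Each clip is
# reduced to a feature vector -- e.g. (hand-movement rate,
# gaze-contact ratio, vocal-fill rate) -- labeled truthful (0) or
# deceptive (1). A nearest-centroid rule classifies new clips;
# "feedback" is simply adding a corrected example to the pool.

def centroid(vectors):
    """Component-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class TellClassifier:
    def __init__(self):
        self.examples = {0: [], 1: []}  # label -> list of feature vectors

    def train(self, features, label):
        # Called both for initial training and for mistake feedback.
        self.examples[label].append(list(features))

    def predict(self, features):
        """Return the label whose centroid is closest to `features`."""
        cents = {lbl: centroid(vs) for lbl, vs in self.examples.items() if vs}
        return min(cents, key=lambda lbl: distance_sq(features, cents[lbl]))

# Toy usage with made-up feature values:
clf = TellClassifier()
clf.train((0.2, 0.5, 0.1), 0)   # clips validated as truthful
clf.train((0.3, 0.4, 0.2), 0)
clf.train((0.8, 0.7, 0.6), 1)   # clips validated as deceptive
clf.train((0.9, 0.6, 0.5), 1)
print(clf.predict((0.85, 0.65, 0.55)))  # prints 1: nearer the deceptive centroid
```

Real systems replace the hand-picked features with ones the model discovers itself, which is exactly the point Deborah makes about computers picking up tells that humans miss.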

Steve L.

I added an update to the original post with some thoughts about poker.



Lately, in civil matters, there are things called "videotaped depositions" (in the criminal context, interrogations are regularly videotaped), trials on remand that were originally videotaped, etc.

Lots of opportunities to study, though these materials may not, of course, always be available.


A different question for Steve:

In the UK, for example, after various reforms following scandals such as the Birmingham Six, where the police apparently made up incriminating statements given in interview as well as used the old techniques for extracting confessions, all police interviews are recorded (on special two-tape recording systems). Increasingly they are also video-recorded. At least anecdotally, this has led to higher conviction rates.

However, excluding instances where the interviewee collapses and announces "I done it guv', you got me dead to rights, clap the darbies on me and haul me up to the beak," do you have a concern about juries, magistrates and judges believing that they are lie detectors too - i.e., that they can unerringly tell if someone is lying?


By the way, the modern British criminal classes would never use that sort of language these days, whatever the late John Mortimer might have suggested - but a correct approximation would get caught by the spam and profanity filter.

Deborah Merritt

I see the difference between repeat encounters and one-off ones; defense attorneys and prosecutors usually know the other's negotiating strategy and mannerisms when bluffing. But many credibility determinations in law are made in one-off situations: trials, depositions, negotiations (when not with a repeat player), police interviews, etc. The question is whether computer analyses can tell us anything that aids credibility determinations in these one-off circumstances.

The biggest appetite for this research, I think, comes from airport security and other types of security surveillance. If computers analyze thousands of video images, and are given information about what subsequent searches yielded, can they effectively do initial screening and/or identify characteristics that human observers can learn to spot?

I find this research fascinating, partly because it challenges us to assess what we believe about our own capabilities. Why do we think a person is lying, smuggling explosives, or likely to steal? Are we as savvy about cultural differences as we think? Does a computerized security system do a better job than a loss prevention officer in overriding racial/gender/cultural biases when screening shoppers for signs of shoplifting?

One thing we don't have to worry about is the use of these systems in courtrooms. Judges emphatically reject almost anything that tries to interfere with the jury's determination of credibility. Except, of course, for prior criminal convictions. Those we routinely admit to suggest lack of credibility, despite the fact that no one (to my knowledge) has come up with sound evidence showing that people convicted of felonies are actually more likely (as a group) to lie on the stand than people who have not been previously convicted. The irony to this rule of evidence is that the prior conviction usually results from a plea bargain in which the person admitted their guilt in open court!

Steve L.

Great comment, Deborah. I can definitely see the possibility of using such software as a screening device - for example, to decide which luggage to search. But it would be very dangerous to use it for any sort of ultimate determination.
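The screening-only distinction can be made concrete with a sketch (not any real system - the scores and threshold below are invented): a model's score merely flags items for human review, and the flag carries no determination at all.

```python
# Hypothetical screening aid. A model assigns each item a risk score
# in [0, 1]; anything at or above the threshold is flagged for
# secondary (human) review. Nothing here decides guilt or deception --
# it only prioritizes where humans look first.

def screen(scores, threshold=0.7):
    """Return indices of items flagged for secondary review."""
    return [i for i, s in enumerate(scores) if s >= threshold]

# Invented scores for five bags:
bag_scores = [0.12, 0.91, 0.45, 0.77, 0.30]
flagged = screen(bag_scores)
print(flagged)  # prints [1, 3]: those two bags get a human look
```

The design point is that the threshold trades off false alarms against missed items, and that trade-off stays under human control rather than being an "ultimate determination" by the machine.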

Steven J. Harper

This study is quintessential "junk science": small and incomplete samples; dubious methodology; assumptions about the relationship between witnesses and trial outcomes; even assumptions about ultimate truth. Unfortunately, it's also an example of our society's efforts to develop quantitative (and therefore superficially appealing) metrics that provide illusory objectivity (and certainty) where it does not exist.

"Lying individuals moved their hands more. They tried to sound more certain. And, somewhat counterintuitively, they looked their questioners in the eye a bit more often than those presumed to be telling the truth, among other behaviors." Good grief.

The great danger is that anyone will think this study has any value at all.

Who runs the project? Answer: a "professor of computer science and engineering...leads the project with [an] assistant professor of mechanical engineering at UM-Flint." They also have two research fellows.

Who pays for this stuff? Answer: "The work was funded by the National Science Foundation, John Templeton Foundation and Defense Advanced Research Projects Agency." What a waste.


By the way, I have been through this issue before, now that I think about it, in the false-rape-accusation debate, when I made the point "who knows??"

All a "he says, she says" acquittal means is that the jury were not convinced beyond a reasonable doubt. As a social convention we treat the acquitted as innocent ... but would you let him date your daughter/sister/granddaughter/friend - assuming you had a veto? Maybe he's innocent, but ... hmmm ...

Using rape acquittals as meaning "she lied" is as much horseshit as the methodology in this situation.


