19 February 2013
How to Be a Lie Detector
Professor Glenn D Wilson
When Neville Chamberlain met Hitler in 1938 he came away with the impression that “here was a man who could be relied upon when he had given his word” (Ekman, 1992). This failure of reading body language had dire consequences, perhaps leading to the Second World War. Could we do better today? Modern research and techniques could help but the science of lie detection remains imperfect.
On average we lie two or three times a day, but few of these lies are sinister. Mostly they are just “white lies”, intended to spare the feelings of others - an essential part of social etiquette (cf. the Ricky Gervais film, The Invention of Lying). It is the big lies to our detriment, like Hitler’s denial of intending to invade Czechoslovakia, that we need to be wary of, and these can be more difficult to detect.
In Mediaeval times suspected witches who denied consorting with the devil were subjected to the flotation test. Dropped into water, they were found guilty and executed if they floated (since few people in those days knew how to swim). If they sank, they were declared innocent but often drowned. The use of waterboarding to interrogate suspected terrorists is reminiscent of this procedure.
The best known “lie detector” apparatus consists of a polygraph measuring autonomic stress responses such as skin conductance, respiration and heart rate, blood pressure and finger temperature. The idea is that these physiological indices are outside of conscious control. Of course, a baseline needs to be established using questions where the answer is known to be true or false before moving on to critical questions. The guilty knowledge paradigm looks for exaggerated reactions to items that would be of no special significance to an innocent party. Still, the polygraph can be faked and is not acceptable in most courtrooms around the world (Lewis & Cuppari, 2009) even though it may prompt the guilty to confess.
Various drugs have been tried as possible truth serums, particularly hypnotics like sodium pentothal and psychedelics such as LSD. In fact, they just make people more suggestible and produce “too much information”, most of it fantasy (cf. hypnotism and torture). Even if they worked, they are neither ethical nor legal in most civilised countries. However, that might not be the end of the matter. It has been suggested that oxytocin, which makes people more trusting, might one day be used to enhance a “good cop” bonding effect (Brown, 2006).
There is very little variation in people’s ability to detect lying in others (Bond & DePaulo, 2008). Overall, untrained people are about 53% accurate, which is only just better than chance, though a few individuals may be more astute. Training does not necessarily improve this performance; it may even diminish it by making people over-confident. Detecting lies does not seem to vary with intelligence or emotional ability. However, experienced liars tend to be better at spotting deception in others (Wright et al, 2012).
There is rather more variation in the skill of liars. Good liars tend to be high in Machiavellianism and Self-monitoring and are often good actors. Some succeed either because they lack emotions like guilt that might give them away or have come to “believe” their own lie. Psychopaths are a bad risk for recidivism but are more than twice as likely as their non-psychopathic counterparts to be granted parole following a parole interview (Porter & ten Brinke, 2010).
Faces with feminine features, such as a soft complexion, large eyes and mouth and a happy look, are judged more trustworthy than those with macho looks, such as thick, knitted eyebrows, a strong jawline and an angry aspect. This can affect jury decisions and court sentencing quite powerfully. Usually this is unfair, though the stereotypes are not entirely without foundation. Within limits, masculine features do go with less trustworthy behaviour (Stirrat, 2010).
Gestures called illustrators (e.g., spreading one’s hands when describing something big) suggest emotional involvement and reinforce what is being said. Lying tends to be associated with a reduction in illustrator gestures, particularly when compared with an individual’s own norm, but again this principle may break down when the stakes are high (Porter & ten Brinke, 2010). When hand gestures are inconsistent with what is being said, this is suggestive of deception.
Charles Darwin observed that fleeting emotions sometimes break through other expressions despite attempts to conceal them. Paul Ekman (2003) has researched facial expressions characteristic of various emotions and the way in which these may appear as inappropriate micro-expressions that can betray deception. Ekman gives as an example the testimony of Kato Kaelin in defence of O.J. Simpson. Although trying to charm the prosecutor, he gave glimpses of disgust and anger (cf. the TV appearance of murderer Tracie Andrews in England). Ekman’s work is the basis of the character “Dr Cal Lightman”, the body language expert who assists law enforcement in the TV series Lie To Me.
Porter and ten Brinke (2010) say micro-expressions have been prematurely accepted as a lie detection system without proper assessment of false positives (meaningless micro-expressions occurring in truthful statements). However, there is some experimental support for the idea. Hurley and Frank (2011) showed that liars were less able to control their eyebrows and smiling than people telling the truth, while Porter and ten Brinke (2008) found that people asked to respond to emotive images with inappropriate expressions did display contrary micro-expressions. Despite this emotional leakage, untrained observers were only slightly above chance in judging which emotional expressions were “correct”.
Expressions that are manufactured differ from those that are spontaneous and truly felt. For example, a false smile can be detected by a lack of involvement of the eyes (Ekman et al, 1988). A genuine smile includes a contraction of the muscles around the eyes producing “crow’s feet” wrinkles. False smiles are more likely to show the lower teeth, to include elements of negative emotions such as anger or disgust and to be asymmetrical (Gosselin et al, 2010). Whatever the differences, skilled actors (and others) are able to produce natural looking smiles by conjuring feelings of happiness.
There is a popular belief that liars avert their eyes in shame. This is an unreliable cue because it is easily overridden; today’s liars are just as likely to stare you straight in the face. Some proponents of “neurolinguistic programming” claim that lies are betrayed by the direction of eye movements as a person prepares to answer a question. The theory is that glances upward to the left indicate genuine memory recall, while glances up to the right suggest that the “creative” side of the brain is being called upon to “make up” a lie. Unfortunately, experimental evidence does not support this theory (Wiseman et al, 2012).
A more promising approach is that of computer analysis of eye movements following a “searching” (critical) question compared to a baseline derived from the same individual’s eye movements in response to neutral questions. This is reported to give 82% accuracy in separating liars from those telling the truth (Bhaskaran et al, 2011).
Emotional stress can be observed without a polygraph in body signs such as blushing, blinking, sweating and dry mouth (frequent sipping of water). These could indicate lying, but there are many other reasons for feeling stressed when giving a speech or under interrogation and these should be considered first. The same is true of self-comforting gestures such as folding arms across the chest and touching the face. Blink rates have actually been found to be lower in some people who are lying (another example of over-control). Also, as “Dr Cal Lightman” would emphasise, the absence of appropriate distress such as grief (especially in the upper face) may be equally telling.
Lip pressing is associated with lying (DePaulo & Morris, 2004), perhaps because it represents a fear of blurting out something that one would prefer the world not to know (“buttoning one’s lip”). Similarly, touching the nose is said to indicate lying, though it may only signal discomfort. Hirsch & Wolf (2001) dubbed this the Pinocchio Effect because they say it is due to erectile tissue in the nose causing irritation that one feels inclined to scratch. During his denial of sex with Monica Lewinsky, Bill Clinton was observed to touch his nose with greater frequency at moments later revealed to be untruthful.
Voice stress has been vaunted as a sign of deceit and many computer and mobile phone applications claim to be able to detect deception through micro-tremors in the voice when a person is lying. If true, this would be very useful because it could be applied to telephone conversations and tape-recorded interview responses. Unfortunately, scientific studies suggest that its validity is poor. For example, measures of drug users denying the use of drugs, validated against urine analysis, show voice stress to be little better than chance as an indicator of guilt (Harnsberger et al, 2009).
Thermal imaging has been used to detect fever at airports and is now being investigated as an aid to lie detection (Warmelink, 2011). When passengers lied about their travel plans, the skin temperature around their eyes rose significantly (compared with truth-tellers, whose temperature did not change). On this basis it was possible to identify 64% of truth-tellers and 69% of liars. However, interviewers’ judgements were more accurate (72% and 77%) and the thermal image data added nothing extra. The only advantage of the thermal camera, therefore, is that it can be automated.
Since lies originate in the brain, fMRI has been tested as a way of capturing them “at source”. The theory is that telling the truth just requires memory activity whereas telling a lie is more complicated. The liar must first suppress the truth and then generate an imaginary scenario, all of which will involve more frontal brain activity. Claims of 80-90% accuracy have been made for this technology. However, it is cumbersome, expensive and not currently considered practical for use in a courtroom (Spence, 2008; Langleben & Moriarty, 2012). Also, like the polygraph, it is not immune to cheating by self-stimulation to neutral stimuli (Ganis et al, 2011).
A unique case of neuroscientific evidence being used in a murder trial occurred in Mumbai (2008), when Aditi Sharma and her lover were found guilty of poisoning her fiancé. EEG evoked potentials (P300) were used in connection with a “guilty knowledge” paradigm. The judge agreed that this provided “experiential knowledge” of details of the crime which corroborated other evidence, and both were sentenced to life in prison.
Speech patterns change when people are lying. There is usually an increase in repetition, undue emphasis and hesitations (ums and ers). Liars tend to start their answers to a question later (unless well-prepared, in which case they may start even sooner than truth-tellers). The pace is usually slower, giving them more time to construct a story. They talk less, giving fewer facts that might be checked (particularly details close to the crime). In general, they come across as negative and uncooperative.
According to Vrij (2008; Vrij et al, 2011), a suspect’s choice of words and phrases betrays guilt better than body language. Interviewers do better to focus on what suspects say rather than on how they look. Open questions like “What did you do yesterday between 3 and 4pm?” encourage the suspect to talk so that inconsistencies can be exposed. Guilty people use oblique rather than direct denials and tentative words like “maybe” and “perhaps”. They say evasive things like “I’m trying to tell you the truth”. Michael Jackson repeatedly denied ever “hurting” children (but not “molesting” them). Bill Clinton denied “having sexual relations” with “that woman” (not “sex with Monica Lewinsky”). Expanded contractions (“I did not”, rather than “I didn’t”) may suggest deception, as does referring to a missing person in the past tense (“My wife was amazing”).
Computer software has been devised to diagnose deception by counting the frequency of certain word categories in a statement. The Linguistic Inquiry and Word Count (Pennebaker et al, 2007) is based on findings that liars use fewer personal pronouns (I, me) and exclusion terms (but, nor) than truth-tellers, and more negative emotion words (hate, sad). This is claimed to be 67% accurate (compared with 52% for human judges). Linguistic analysis of deceptive online dating profiles has yielded similar figures (Toma & Hancock, 2012). This is a promising approach but it needs further validation in realistic, forensic settings.
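The word-counting idea can be sketched in a few lines of code. The word lists below are tiny illustrative stand-ins (the actual LIWC dictionaries are far larger and proprietary), and the category names simply mirror those mentioned above; a real system would compare the resulting rates against norms from known truthful and deceptive statements.

```python
import re

# Tiny illustrative word lists -- stand-ins for the much larger LIWC
# dictionaries, covering only the categories named in the text above.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
EXCLUSIONS = {"but", "nor", "except", "without"}
NEGATIVE_EMOTION = {"hate", "sad", "angry", "awful", "worthless"}

def category_rates(statement: str) -> dict:
    """Return the per-100-word rate of each word category in a statement."""
    words = re.findall(r"[a-z']+", statement.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    def rate(category):
        return 100 * sum(w in category for w in words) / total
    return {
        "first_person": rate(FIRST_PERSON),
        "exclusions": rate(EXCLUSIONS),
        "negative_emotion": rate(NEGATIVE_EMOTION),
    }

# A statement rich in self-reference and exclusions versus one avoiding both.
print(category_rates("I went out, but only briefly, and I came straight home."))
print(category_rates("That person left the house and nothing happened there."))
```

On the findings cited above, the first (higher in personal pronouns and exclusion terms) would pattern with truth-telling and the second with deception, though any single statement is far too little evidence either way.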
Criteria-Based Content Analysis (CBCA) assesses the credibility of a statement by counting the number of characteristics of true stories that appear in it. These include amount of detail, unusual and superfluous details, embedding within a context of time and place, verbatim conversation, subjective feelings, self-deprecation, admission of memory lapses and spontaneous corrections. This system is already widely used but the various criteria need separate validation and it could be susceptible to coaching (Porter & ten Brinke, 2010).
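Mechanically, CBCA amounts to a checklist tally over judgements made by a trained rater. The sketch below is an illustrative assumption about that bookkeeping, not the validated CBCA protocol: the criterion labels are paraphrased from the list above, and real use involves rater training and contextual weighting rather than a bare count.

```python
# Paraphrased CBCA-style criteria; a trained rater marks each present/absent.
CBCA_CRITERIA = [
    "quantity of detail",
    "unusual details",
    "superfluous details",
    "contextual embedding (time and place)",
    "verbatim conversation",
    "subjective feelings",
    "self-deprecation",
    "admitted memory lapses",
    "spontaneous corrections",
]

def cbca_score(ratings: dict) -> int:
    """Count how many listed criteria a rater marked as present."""
    return sum(bool(ratings.get(c, False)) for c in CBCA_CRITERIA)

ratings = {"quantity of detail": True,
           "subjective feelings": True,
           "spontaneous corrections": True}
print(cbca_score(ratings))  # -> 3; higher totals suggest a more credible account
```

Because the score rises with whatever criteria are reported, a coached liar who deliberately inserts these features would inflate it, which is exactly the vulnerability Porter and ten Brinke point out.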
Vrij et al (2011) note that lying is more mentally taxing than telling the truth and that interventions which impose extra cognitive load make the lie more obvious. Unexpected questions may wrong-foot a suspect who has a carefully prepared story, and asking suspects to recall events in reverse order magnifies the difference between true and false accounts. Another ploy is to have suspects make sketches of a scene, including the positioning of people in their story. This provides further material that can be checked. The strategic use of evidence can also be revealing, e.g. holding back known evidence until later, when the suspect’s story might already have contradicted it. As already noted, torture and other coercive methods are ineffective, aside from being illegal and unethical.
In summary, many of the best-known methods of lie detection (including the polygraph, truth drugs, voice stress, thermal imaging and brain scanning) lack reliability. They are not currently admissible as evidence in Western courtrooms though they may be useful in prompting confessions, which then require independent verification. The reading of body language is also vexed. Many popular stereotypes turn out to be untrue and publicity given to research findings allows skilled liars to subvert them. Discovering the characteristics of liars not only makes us better at detecting them but, equally, it can make us better liars.
Bhaskaran, N. et al (2011) Lie to me: Deceit detection via online behavioural learning. Automatic Face and Gesture Recognition and Workshops. IEEE Xplore.
Bond, C.F. & DePaulo, B.M. (2008) Individual differences in judging deception: accuracy and bias. Psychological Bulletin, 134, 477-492.
Brown, D. (2006) Some believe truth serums will come back. Washington Post (Nov 20).
DePaulo, B.M. & Morris, W. (2004). In P. Granhag & L. Strömwall (Eds), The Detection of Deception in Forensic Contexts. NY: Cambridge University Press.
Ekman, P. (1992) Telling Lies: Clues to Deceit in the Marketplace, Politics and Marriage. NY, Norton.
Ekman, P. et al (1988) Smiles when lying. Journal of Personality and Social Psychology, 54, 414-420.
Ekman, P (2003) Emotions Revealed, New York: Holt.
Ganis, G. et al (2011) Lying in the scanner: Covert countermeasures disrupt deception detection by functional magnetic resonance imaging. Neuroimage, 55, 312-319, PMID: 21111834.
Gosselin, P. et al (2010) Children’s ability to distinguish between enjoyment and non-enjoyment smiles. Infant Child Development, 19, 297-312.
Harnsberger et al (2009) Stress and deception in speech: evaluating layered voice analysis. Journal of Forensic Science, 54, 642-650.
Hirsch, A. & Wolf, C. (2001) Practical methods for detecting mendacity: A case study. Journal of the American Academy of Psychiatry and Law, 29, 438-444.
Hurley, C.M. & Frank, M.G. (2011) Executing facial control during deception situations. Journal of Nonverbal Behaviour, 35, 119 (online).
Langleben, D.D. & Moriarty, J.C. (2012) Using brain imaging for lie detection: Where science, law and policy collide. Psychology, Public Policy and the Law, Sept 2012 (online).
Lewis, J.A. & Cuppari, M. (2009) The polygraph: The truth lies within. Journal of Psychiatry and Law, 37, 85-92.
Pennebaker, J.W. et al (2007) The Linguistic Inquiry and Word Count. (online).
Porter, S. & ten Brinke, L. (2008) Reading between the lies: Identifying concealed and falsified emotions in universal facial expressions. Psychological Science, 19, 508-514.
Porter, S. & ten Brinke, L. (2010) The truth about lies: what works in detecting high-stakes deception? Legal and Criminological Psychology, 15, 57-75.
Stirrat, M. (2010) Valid facial cues to cooperation and trust: Male facial width and trustworthiness. Psychological Science, 21, 349-354.
Spence, S.A. (2008) Playing Devil’s advocate. The case against fMRI lie detection. Legal and Criminological Psychology, 13, 11-25.
Toma, C.L. & Hancock, J.T. (2012) What lies beneath: The linguistic traces of deception in online dating profiles. Journal of Communication, 62, 78 (online).
Vrij, A. (2008) Nonverbal dominance versus verbal accuracy in lie detection. A plea to change police practice. Criminal Justice and Behaviour, 35, 1323-1336.
Vrij, A. et al (2008) A cognitive load approach to lie detection. Journal of Investigative Psychology and Offender Profiling, 5, 39-43.
Vrij, A. et al (2011) Outsmarting the liars: Toward a cognitive lie detection approach. Current Directions in Psychological Science, 20, 28-32.
Vrij, A. et al (2011) Lie detection: Misconceptions, pitfalls and opportunities for improvement. Psychological Science in the Public Interest, 11 (online).
Warmelink, L. et al (2011) Thermal imaging as a lie detection tool at airports. Law and Human Behaviour, 35, 40-48.
Wiseman, R. et al (2012) The eyes don’t have it: Lie detection and neurolinguistic programming. PloS ONE.
Wright, G.R.T. et al (2012) “You can’t kid a kidder”: Association between production and detection of deception in an interactive deception task. Frontiers in Human Neuroscience (online).
© Professor Glenn D Wilson 2013