From toddlers to criminal suspects, lying is an inherent human trait. Now, AI lie detectors may soon be used to separate fact from fiction.

Thanks to our senses, we get accurate data about the reality surrounding us. However, our biased judgment can mess with the idea of “truth”.

In an argument between two people, for example, each party's interpretation of events is likely to be completely different.

We can fool ourselves into believing a made-up version of events, then unintentionally present it to others as the truth.

In his book The Folly of Fools, evolutionary biologist Robert Trivers says that lying to oneself makes it easier to lie to others.

People lie by falsifying information and presenting it as true, or by omitting some facts and telling half-truths. We all do this for several reasons, but in the end, the act of lying is intended to manipulate and misdirect.

But how do you know whether you’re being lied to?

It’s a bit complicated.


Lying Is More Deceptive Than We Think

We all know that lying can be stressful, and that stress translates into several observable verbal and physical signs.

When lying, a person focuses more on what they're saying than on the nonverbal communication accompanying the lie. As a result, they let slip gestures, facial expressions, and other clues of varying subtlety that may give the lie away.

However, according to the results of a new study, it’s actually harder to spot a liar based on their body language than you think.

Psychologists at the University of Edinburgh (UK) devised an interactive game to see how easy it is to detect the common cues to lying.

The researchers divided participants into 24 pairs of speakers and listeners. Speakers were asked to lie at will, while listeners tried to spot the lies.

The researchers then compiled a record of 1,100 statements uttered by the speakers, coded against 19 verbal and nonverbal cues usually associated with lying, such as changes in speech rate and pitch, hesitation, shifts in eye gaze, blinking, and hand gestures.

Upon analyzing the results, the team of psychologists found that people are pretty good at detecting lying cues, but there’s a catch.

It turned out that many of the cues we associate with lying are actually “more likely to be used if the speaker is telling the truth.”

The findings suggest that we have strong preconceptions about the behavior associated with lying, which we act on almost instinctively when listening to others. However, we don’t necessarily produce these cues when we’re lying, perhaps because we try to suppress them.

These findings highlight the challenges of creating lie detectors. Polygraph tests, for example, are known to be beatable. But could an AI lie detector be a feasible alternative?

AI Lie Detectors

Right now, there’s no foolproof way to detect lies.

Lie detectors, or polygraphs, go a bit deeper in assessing truthfulness by analyzing physiological signals like blood pressure and heart rate.

However, conventional lie detectors remain limited and their accuracy is questionable.

Maybe AI will be able to tell lies from the truth.

Funded by the U.S. Department of Homeland Security, a team from San Diego State University and the University of Arizona developed AVATAR (Automated Virtual Agent for Truth Assessments in Real-Time), an AI lie detector system for border security.

This kiosk-like robot has an AI avatar agent that asks travelers standard questions, analyzes their responses in real-time, and can then flag suspicious individuals to human agents for additional screening.
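To make this concrete, here is a minimal, purely hypothetical sketch of how such a screening pipeline might flag a traveler: measure a few response features during the interview, score how far they deviate from a baseline, and refer high-scoring cases to a human agent. The feature names, baseline values, weights, and threshold below are all illustrative assumptions, not details of the actual AVATAR system.

```python
# Hypothetical sketch of a kiosk-style screening pipeline like AVATAR's.
# All feature names, weights, and thresholds are illustrative assumptions.

def risk_score(features, baseline, weights):
    """Score how far measured interview features deviate from baseline values."""
    score = 0.0
    for name, weight in weights.items():
        # Relative deviation from the expected baseline for this feature.
        deviation = abs(features[name] - baseline[name]) / baseline[name]
        score += weight * deviation
    return score

def flag_for_review(features, baseline, weights, threshold=0.5):
    """Refer the traveler to a human agent if the deviation score is high."""
    return risk_score(features, baseline, weights) >= threshold

# Illustrative population baselines and feature weights (assumed values).
baseline = {"pitch_var": 20.0, "response_delay_s": 1.0, "gaze_shifts": 5.0}
weights = {"pitch_var": 0.4, "response_delay_s": 0.4, "gaze_shifts": 0.2}

calm = {"pitch_var": 21.0, "response_delay_s": 1.1, "gaze_shifts": 5.0}
nervous = {"pitch_var": 35.0, "response_delay_s": 2.4, "gaze_shifts": 11.0}

print(flag_for_review(calm, baseline, weights))     # small deviations: not flagged
print(flag_for_review(nervous, baseline, weights))  # large deviations: flagged
```

A real system would of course learn its features and thresholds from data rather than hard-coding them, but the basic shape, score deviations and escalate only the outliers to humans, is the same.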

AVATAR is currently being tested at the U.S.-Mexico border, in Canada, and in the European Union.

The problem with creating a foolproof AI lie detector is that everyone lies differently. However, if introduced into airports and border crossings across the world, the sheer volume of sample data available may be enough to reduce the failure rates of these AI lie detectors to almost zero.

But then again, sometimes you may just want to trust your gut. There is evidence to suggest that humans have a capacity for unconscious lie detection, but even this may be something AI lie detectors could learn to replicate in the future.

Do you think an AI platform could ever be taught to lie proficiently?
