The science of statement analytics is being applied to AI as a more intuitive way to detect falsehoods. As lie-detecting AI gets more effective, only time will tell how people will react to this kind of technology. Is it necessary, or just an excuse to invade people's privacy?
Since the polygraph machine was invented in 1921, it has been a fascinating piece of technology. Although it has been a useful interrogation device that has helped solve numerous crimes, its use in pop-culture true-crime dramas and talk-show plot lines has made the polygraph a strange source of intrigue for the general public as well.
How can a machine really know if somebody is lying?
It is because of this question that polygraph results are often inadmissible in court. It seems that for every story of a polygraph's success, there is another about its inconsistency.
As technology becomes more advanced, and more intuitive, it's possible that we will soon be able to unlock the mysteries behind lies and how to detect them.
As AI advances, we will be able to go beyond physiological markers of lie detection like increased heart rate and nervousness. We can use textual clues found in written communication, as well as visual clues that people have learned to pick up on when communicating with one another.
Lying is one of the most human traits; an intuitive action needs an intuitive method of detection.
The Writing’s on The Wall
Statement analysis is the study of how people use their words, in order to discern whether they are creating falsehoods or telling the truth. There are a few consistent ways to tell if someone is lying based on their emails alone.
If someone sends you an email and does not use many personal pronouns, this is a telltale sign that they are lying to you.
“Just got home, missed the bus, had to walk 5 miles in the rain.” Unbelievable.
Using vague language is another sign of deception.
People lie all of the time, but they feel most comfortable lying by omission. People who purposefully leave out details when asked about specific projects are most likely trying to deceive.
Verb tense discrepancies in text are puzzling, yet a common trait of liars. "When people come up with lies, their brains often have a hard time keeping track of timelines. When fibbing about a past event, it's typical to accidentally switch to the present tense; that's because the lie is being created in the moment," says Vanessa Van Edwards of Entrepreneur.
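The textual cues above — sparse personal pronouns, vague wording, and tense switches — can be sketched as simple heuristics. This is a minimal illustration, not a validated statement-analysis model: the word lists, threshold, and tense patterns are assumptions chosen for demonstration.

```python
import re

# Illustrative word lists and threshold -- assumptions, not a validated model.
PRONOUNS = {"i", "me", "my", "we", "our", "mine"}
VAGUE_WORDS = {"somehow", "stuff", "things", "maybe", "basically", "later"}
PAST_TENSE = re.compile(r"\b\w+ed\b")          # crude regular past-tense match
PRESENT_TENSE = re.compile(r"\b(is|am|are|go|see|run)\b")  # tiny present-tense sample

def deception_cues(text: str) -> list[str]:
    """Return which of the three textual deception cues a statement triggers."""
    words = re.findall(r"[a-z']+", text.lower())
    cues = []
    # Cue 1: unusually few personal pronouns
    pronoun_rate = sum(w in PRONOUNS for w in words) / max(len(words), 1)
    if pronoun_rate < 0.05:
        cues.append("few personal pronouns")
    # Cue 2: vague language
    if any(w in VAGUE_WORDS for w in words):
        cues.append("vague language")
    # Cue 3: past and present tense mixed in one statement
    if PAST_TENSE.search(text.lower()) and PRESENT_TENSE.search(text.lower()):
        cues.append("tense discrepancy")
    return cues

# The pronoun-free example from above trips the first cue:
print(deception_cues("Just got home, missed the bus, had to walk 5 miles in the rain."))
# → ['few personal pronouns']
```

A real statement-analysis system would use far richer linguistic features, but the shape is the same: score a text against known markers of deception.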
Of course, while these methods are supported by research, none of them guarantees that a person is lying. What if they just have bad grammar and a pithy writing voice?
Retired Green Beret Sergeant Major turned entrepreneur Karl Erikson suggests that the only way to tell if someone is lying is to gather research. His solutions are helpful in professional and casual settings but do little to help when time is limited.
A Lie Detecting AI Kiosk
In conjunction with Canada, the U.S. has begun testing a lie-detecting AI. The Automated Virtual Agent for Truth Assessments in Real-Time (AVATAR) looks like a self-checkout kiosk with a floating virtual head that asks air travel passengers security questions like "do you have any meat or vegetables in your luggage?" alongside easier softball questions like "how was your flight?"
AVATAR asks a range of questions in order to gather information about how the passenger responds and then uses that information to discern whether or not the person is lying.
The system is able to “detect changes in the eyes, voice, gestures and posture to determine potential risk. It can even tell when you’re curling your toes.”
It is difficult for a human to monitor all of these changes when trying to detect falsehoods. Often, biases lead us astray. The technology driving AVATAR mixes polygraph-style physiological monitoring with statement analysis and body language analysis.
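One way to picture how a kiosk like AVATAR could combine eye, voice, gesture, and posture readings into a single decision is a weighted risk score. The channel names, weights, and threshold below are purely illustrative assumptions — AVATAR's actual model is not public.

```python
# Hypothetical per-channel weights (assumptions for illustration only).
WEIGHTS = {
    "eye_movement": 0.3,
    "voice_stress": 0.3,
    "gesture_change": 0.2,
    "posture_shift": 0.2,
}

def risk_score(signals: dict) -> float:
    """Weighted sum of per-channel anomaly scores, each in [0, 1]."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

def flag_for_review(signals: dict, threshold: float = 0.6) -> bool:
    # Passengers scoring above the threshold would be referred to a human agent.
    return risk_score(signals) >= threshold

calm = {"eye_movement": 0.1, "voice_stress": 0.2, "gesture_change": 0.1, "posture_shift": 0.1}
nervous = {"eye_movement": 0.8, "voice_stress": 0.9, "gesture_change": 0.6, "posture_shift": 0.7}

print(flag_for_review(calm), flag_for_review(nervous))
# → False True
```

The design point is triage, not a verdict: the machine narrows attention to higher-risk travelers, and a human agent makes the final call.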
With the U.S. continuing to increase border security, AVATAR seems like something that may be of use to security agents. However, there may be some backlash regarding the ethical concerns of the AVATAR system.
In terms of privacy, a lie-detecting AI could be a real breach. If a person submits to questioning, security agents already have the right to search that person's body and possessions. Now we see news of travelers' phones and social media history being required information at airports. In a world where everything is subject to search, what will it mean if they can search your mind?
With the good of Industry 4.0 innovation also comes the potential for misuse. What do you think? Is this technology a breach of personal privacy or a welcomed security upgrade?