Scientists have developed an AI that can detect deception in courtroom videos.
The system, dubbed the Deception Analysis and Reasoning Engine (DARE), is an autonomous AI that the researchers claim can detect deception in videos of courtroom trials. The team, from the University of Maryland and led by Larry Davis, head of the Center for Automation Research (CfAR), describes the system in a paper posted to arXiv that has yet to be peer-reviewed.
Spotting deception is a crucial part of courtroom trials. While people on the witness stand swear to tell the truth before giving their statements, many intentionally break that promise. Being able to spot those lies can make a world of difference for anyone awaiting vindication or incarceration through a trial.
“Deception is common in our daily lives,” the researchers wrote in the paper. “Some lies are harmless, while others may have severe consequences and can become an existential threat to society. For example, lying in a court may affect justice and let a guilty defendant go free. Therefore, accurate detection of a deception in a high stakes situation is crucial for personal and public safety.”
With that in mind, the team developed DARE to spot micro-expressions that people tend to show when lying, such as raising their eyebrows or tilting their head.
AI That Detects Deception Might Improve Justice System
According to the researchers, their AI was trained to identify five micro-expressions commonly associated with lying: raised eyebrows, frowning, protruding lips, lip corners turning up, and the head turning to the side. DARE can also analyze audio frequencies to reveal vocal patterns that may indicate whether a person is telling the truth.
“On the vision side, our system uses classifiers trained on low-level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction,” Davis and his colleagues explained.
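The quoted description suggests a two-stage pipeline: per-expression classifiers map low-level video features to micro-expression scores, and those scores then serve as the input features for a final deception classifier. The sketch below is an illustration of that general idea only, not the authors' code; the feature dimensions, synthetic data, and choice of logistic regression are all assumptions made for demonstration.

```python
# Illustrative two-stage pipeline (NOT the DARE implementation): stage 1
# predicts micro-expression scores from low-level visual features; stage 2
# uses those scores as high-level features for deception prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

MICRO_EXPRESSIONS = ["eyebrows_raised", "frown", "lips_protruded",
                     "lip_corners_up", "head_side_turn"]

# Synthetic stand-ins for low-level per-video visual features.
n_videos, n_features = 200, 64
X_low = rng.normal(size=(n_videos, n_features))

# Synthetic micro-expression annotations and deception labels (random here,
# purely to make the example runnable).
y_micro = rng.integers(0, 2, size=(n_videos, len(MICRO_EXPRESSIONS)))
y_deceptive = rng.integers(0, 2, size=n_videos)

# Stage 1: one binary classifier per micro-expression.
stage1 = [LogisticRegression(max_iter=1000).fit(X_low, y_micro[:, i])
          for i in range(len(MICRO_EXPRESSIONS))]

# Stage 2: stack predicted micro-expression probabilities into a
# high-level feature vector, then train the deception classifier on it.
X_high = np.column_stack([clf.predict_proba(X_low)[:, 1] for clf in stage1])
stage2 = LogisticRegression(max_iter=1000).fit(X_high, y_deceptive)

# Each video is now represented by one score per micro-expression.
print(X_high.shape)
```

In the paper's framing, the key observation is that the intermediate micro-expression predictions are themselves a useful representation: the final classifier reasons over interpretable behavioral cues rather than raw pixels.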
The team then used a set of training videos in which actors were instructed to either lie or tell the truth. After training on 15 videos, DARE spotted 92 percent of the micro-expressions in the final test video.
That made the system 11 percentage points better than human assessors, who picked up only 81 percent of the micro-expressions on the same task, an early indication that AI could outperform people at spotting liars in the courtroom.
“An interesting finding was the feature representation which we used for our vision module,” Bharat Singh, co-author of the study, told Futurism. “A remarkable observation was that the visual AI system was significantly better than common people at predicting deception.”
The team believes their AI could indeed be used in courtroom trials in the future. However, they don't see its applications as limited to the courtroom. Singh told Futurism:
“The goal of this project is not to just focus on courtroom videos but predict deception in an overt setting. We are performing controlled experiments in social games, such as Mafia, where it is easier to collect more data and evaluate algorithms extensively. We expect that algorithms developed in these controlled settings could generalize to other scenarios, also.”
While DARE was developed with good intentions, Raja Chatila, chair of the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems at the Institute of Electrical and Electronics Engineers, warned that the system must be used with caution.
“If this is going to be used for deciding […] the fate of humans, it should be considered within its limitations and in context, to help a human — the judge — to make a decision,” Chatila said.