The Artificial Lawyer reports that the European Union is testing an automated lie-detection system for use at its international borders. The technology “will use a digital avatar to interview travellers at border posts, ask them questions and then use facial expression ‘biomarkers’ based on previously taught patterns to decide if they are lying.” The six-month pilot of iBorderControl, as the software is called, will focus on questions relating to immigration, a major administrative challenge for the EU.
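The details of the system have not been published, but the general pattern described above is familiar from machine learning: a model is trained on labelled examples of facial-expression features and then used to score new answers. The sketch below is a purely illustrative toy, assuming invented feature vectors and a simple logistic-regression classifier; none of the names, features, or thresholds reflect the actual iBorderControl software.

```python
# Hypothetical illustration only: a toy "deception score" classifier trained on
# labelled facial-expression feature vectors. The real system's features, model
# and thresholds are not public; everything below is invented for clarity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each past interview yielded a vector of facial-expression "biomarkers"
# (e.g. micro-expression intensities); labels mark answers known to be false.
X_train = rng.normal(size=(200, 8))        # 200 past interviews, 8 features each
y_train = rng.integers(0, 2, size=200)     # 1 = deceptive answer, 0 = truthful

model = LogisticRegression().fit(X_train, y_train)

# At the border post, a new traveller's answer would be scored from its features.
new_answer_features = rng.normal(size=(1, 8))
deception_probability = model.predict_proba(new_answer_features)[0, 1]
print(f"Estimated probability of deception: {deception_probability:.2f}")
```

Even in this toy form, the point the article raises is visible: the output is a probability produced by previously learned patterns, not a finding of fact.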

A number of EU countries have signed on to the €4.5m project. The trial, which concludes in August 2019, is taking place in Greece, Latvia and Hungary, with administration based in Luxembourg.

As the Artificial Lawyer article points out, the system, if adopted, would hand decision-making power in a legal domain to machine-learning-based technology. “Given that lying at a border in an attempt to gain entry would likely constitute a criminal offence, then this software has important human rights and justice implications.” The author goes on to imagine other legal contexts in which similar approaches might be applied in future by police, courts, and other legal entities.

The prospect is both alarming and intriguing, and I recommend reading both the article and its comments. To my mind, the most important initial question may be this: which legal jurisdiction will be the first to admit, as evidence in court, assessments of human reliability or deceit produced by learning-based digital technology?

I would be very interested to hear your thoughts on this or any other matter related to the law, either in the comments section below or directly via email.