Between 2016 and 2019, the EU experimented with iBorderCtrl, an automated border control system that used machine learning for lie detection and facial recognition to measure the affective state of immigrants – to infer their personality and emotions from their facial features. State-of-the-art scientific research does not support the notion that emotions can be deduced from facial expressions. Yet here we are: applications like these, backed by government institutions, are making automated decisions that affect millions of lives.
AI can truly change the world – but we have a responsibility to ensure that these systems are developed free of bias, truly measure what we want them to measure, and do not discriminate against certain groups for the sake of efficiency. Regulations pushing for explainable and transparent AI are a welcome trend for 2022. Proposed first by the EU and then taken up by China, such rules will spread as more countries introduce oversight and accountability expectations for AI use. We want AI to amplify humanity’s best, not automate its worst.
Oyku Isik, Professor of Digital Strategy and Cybersecurity