Those were the words of US President Joe Biden in a robocall to registered Democrats in New Hampshire, discouraging them from voting in this year’s primary. Except they were not. The message was an audio deepfake, a digital manipulation of the president’s voice in what the New Hampshire attorney general’s office described as an “unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters.”
It’s a sure sign that, as digital tools become more sophisticated, our vigilance must rise with them – and that’s true for voters in high-stakes elections all over the world. It’s also true for executives protecting their organizations from cyber-attacks.
For today’s executives navigating an AI-amped-up world, I want to share three important points to keep in mind to stay safe from harm:
1. Know AI’s capabilities and strengths so they can’t be used against you (as in the above example).
2. Know AI’s shortcomings and weaknesses so they can’t be used against you (for example, biases in large language models).
3. Know that AI itself can be attacked and look out for that, too.
In all three areas, the common thread is to think like your potential enemies – anyone who would want to do your company harm – so you can prepare your best defense.