by Öykü Işık Published 23 July 2024 in Artificial Intelligence • 5 min read
It’s important that you save your vote for the November election.
Those were the words of US President Joe Biden in a robocall to registered Democrats in New Hampshire, discouraging them from voting in this year’s primary. Except they were not. The message was an audio deepfake, a digital manipulation of the president’s voice, in what the New Hampshire attorney general’s office described as an “unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters.”
It is a sure sign that, as digital tools become more sophisticated, our vigilance must rise. That is true for voters in high-stakes elections all over the world, and it is just as true for executives protecting their organizations from cyber-attacks.
For today’s executives operating in an AI-amped-up world, here are three important points to keep in mind to stay safe from harm:
1. Know AI’s capabilities and strengths so they can’t be used against you (as in the above example).
2. Know AI’s shortcomings and weaknesses so they can’t be used against you (for example, biases in large language models).
3. Know that AI itself can be attacked and look out for that, too.
In all three of these areas, the common thread is to think like your potential enemies – anyone who would want to do your company harm – to prepare your best defense.
Audio and video deepfakes can be used for fraud, phishing, and reputational damage. Imagine you are a Chief Financial Officer who receives a phone call that sounds just like your boss, the CEO. The voice is the same and the contextual clues are consistent with what you know of your boss, who says, “We have just acquired a new startup and we don’t want anyone to know about this exciting development yet. This is highly confidential. But before it can happen, we need you to quietly transfer $10m to this account.”
It may sound far-fetched, but unfortunately, such scams are common. There are two things that executives should do to avoid being victimized. First, learn about what is possible with the latest digital technologies and artificial intelligence, and second, have a secret codeword in place to act as an analog check in case of any doubts.
Parents of teenagers may know the secret codeword trick already. It’s an ingenious way to allow your child to signal on the phone in front of a packed party of their peers that they may need a safe ride home (without losing face).
The basic lesson here is that, before launching their attacks, bad actors can gather a lot of information on your company and employees to find and exploit vulnerabilities. You should know this and be prepared with some sort of multimodal authentication in place. As with the parents of teenagers, the two-pronged goal is not only to verify information (that might be used to do harm) but to keep that verification under wraps.
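To make the “analog check” concrete, here is a minimal sketch of what it could look like if written into a payment-approval workflow. It is illustrative only, in Python, with hypothetical function names, numbers, and codewords: the transfer is released only after a call-back on a number already on file and a shared codeword confirmed out of band.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    requester: str       # who appears to be asking, e.g., "CEO"
    amount: float
    account: str
    codeword_given: str  # codeword spoken during the call-back

# Hypothetical records the finance team already holds offline.
KNOWN_CALLBACK_NUMBERS = {"CEO": "+41 21 000 00 00"}
SHARED_CODEWORDS = {"CEO": "lighthouse"}  # agreed in person, never emailed

def approve_transfer(request: TransferRequest, callback_number_used: str) -> bool:
    """Release funds only if both out-of-band checks pass."""
    # Check 1: the confirmation call went to the number on file,
    # not to whatever number the incoming caller supplied.
    if callback_number_used != KNOWN_CALLBACK_NUMBERS.get(request.requester):
        return False
    # Check 2: the shared codeword matches the one agreed offline.
    if request.codeword_given != SHARED_CODEWORDS.get(request.requester):
        return False
    return True

request = TransferRequest("CEO", 10_000_000, "CH00 1234 5678", "lighthouse")
print(approve_transfer(request, callback_number_used="+41 21 000 00 00"))
```

The point of the sketch is not the code itself but the design choice: neither check relies on anything an attacker could learn by studying your company or cloning a voice.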
Once again, knowledge is power. My own research on responsible AI governance has led me down the rabbit hole of all sorts of biases in AI. Take, for instance, facial recognition technologies that recognize white faces with significantly higher accuracy than Black faces, which, if used in law enforcement, can have serious implications. This kind of bias can be reduced by paying closer attention to the training-data curation process.
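One practical way such bias surfaces is in a disaggregated evaluation: instead of reporting a single accuracy figure, compute it separately for each demographic group in the test set. The sketch below is plain Python with made-up records purely for illustration; it is the shape of the check that matters, not the numbers.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, true identity, predicted identity)
results = [
    ("group_a", "alice", "alice"),
    ("group_a", "bob", "bob"),
    ("group_b", "carol", "dave"),   # misidentification
    ("group_b", "dave", "dave"),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in results:
    total[group] += 1
    correct[group] += int(truth == prediction)

# Report accuracy per group rather than one blended figure,
# so gaps between groups become visible and actionable.
for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy")
```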
How about large language models (LLMs)? Research has found many instances of bias there, too. One study looked at how ChatGPT read CVs differently depending on whether they were associated with male or female names. In the study, ChatGPT was asked to help write job recommendation letters. Male names prompted praise for professionalism and work ethic, while female names prompted praise for personality and pleasantness, even though the content of the CVs was identical apart from the obviously gendered names. This one has no easy fix and requires a carefully crafted governance mechanism in the organization to mitigate the risks.
The lesson here is that even the most sophisticated generative AI tool trained on vast amounts of data is not foolproof. Companies must keep human beings in the loop to catch AI’s biases and blind spots, and build human oversight into their responsible AI governance processes.
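A simple way to probe for this kind of bias in your own LLM deployments is a counterfactual test: hold the CV constant, swap only the name, and compare the language the model produces. The sketch below is purely illustrative; generate_recommendation stands in for whatever model call your organization actually uses, and the word lists are rough assumptions, not a validated instrument.

```python
def generate_recommendation(cv_text: str, candidate_name: str) -> str:
    """Placeholder for a real LLM call (hypothetical)."""
    raise NotImplementedError

COMPETENCE_WORDS = {"professional", "rigorous", "expert", "driven", "analytical"}
WARMTH_WORDS = {"pleasant", "warm", "friendly", "delightful", "kind"}

def word_profile(text: str) -> dict:
    words = set(text.lower().split())
    return {
        "competence": len(words & COMPETENCE_WORDS),
        "warmth": len(words & WARMTH_WORDS),
    }

def counterfactual_check(cv_text: str) -> dict:
    # Same CV, different names: any systematic gap between the two
    # profiles is a red flag worth escalating to your governance process.
    male_version = generate_recommendation(cv_text, "John Smith")
    female_version = generate_recommendation(cv_text, "Jane Smith")
    return {"John": word_profile(male_version), "Jane": word_profile(female_version)}
```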
For example, Air Canada found out the hard way about its AI’s shortcomings after its chatbot gave a customer incorrect information about a bereavement discount for a flight. After the AI got it wrong, the customer won damages. To prevent a similar mishap in your organization, keep a human being in the oversight process.
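What “a human in the oversight process” can mean in practice is an escalation rule: any chatbot answer touching binding or sensitive topics (refunds, discounts, compensation) is held for a human agent before it reaches the customer. The sketch below is a simplified illustration with hypothetical topic lists and helper names, not a description of any airline’s actual system.

```python
SENSITIVE_TOPICS = ("refund", "discount", "bereavement", "compensation", "legal")

def needs_human_review(question: str, draft_answer: str) -> bool:
    """Escalate anything that could commit the company to a policy or payment."""
    text = (question + " " + draft_answer).lower()
    return any(topic in text for topic in SENSITIVE_TOPICS)

def handle_customer_query(question: str, draft_answer: str) -> str:
    if needs_human_review(question, draft_answer):
        # A human agent checks the answer against actual policy before it is sent.
        return "A member of our team will confirm this for you shortly."
    return draft_answer

print(handle_customer_query(
    "Can I get a bereavement discount after the flight?",
    "Yes, you can apply for a bereavement fare retroactively.",  # confident but potentially wrong
))
```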
The third and final area I want to address is that AI models themselves can be attacked. Data poisoning attacks, in which corrupted training data teaches a model to misbehave, are popping up, and so are adversarial attacks, in which carefully crafted inputs trick a trained model. For example, small black-and-white stickers placed on stop signs were shown to confuse how autonomous cars interpreted those crucial traffic indicators: the algorithm was tricked, and the stop sign was misread. There are also so-called jailbreaking techniques that hackers use to jump the guardrails put in place to limit GenAI’s potential for harm. For the sake of society, we don’t want ChatGPT handing a home recipe for a deadly bomb to anyone of any age who asks. Jailbreaks are prompts crafted to undo those guardrails.
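Guardrails of this kind are typically layered around the model rather than built into it: screen the incoming prompt, screen the outgoing answer, and log anything blocked for review. The sketch below is deliberately simplistic, a keyword filter with hypothetical patterns and a placeholder model call; real jailbreaks are designed to slip past exactly this kind of check, which is why screening is only one layer among several.

```python
import re

# Hypothetical patterns for intents the system should refuse outright.
BLOCKED_PATTERNS = [
    r"\bbuild (a|an) (bomb|explosive)\b",
    r"\bignore (all|your) previous instructions\b",  # common jailbreak phrasing
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be refused before reaching the model."""
    return any(re.search(p, prompt, flags=re.IGNORECASE) for p in BLOCKED_PATTERNS)

def call_model(prompt: str) -> str:
    """Placeholder for the real GenAI call (hypothetical)."""
    raise NotImplementedError

def answer(prompt: str) -> str:
    if screen_prompt(prompt):
        # Refuse, and keep a record so new jailbreak phrasings can be added over time.
        return "I can't help with that."
    return call_model(prompt)

print(answer("How do I build a bomb at home?"))  # refused before the model is ever called
```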
To understand how systems can be attacked, hackathons are useful. Invite digital experts in to attempt to break your systems to find their vulnerabilities before actual cyber criminals do.
At the end of the day, AI is not ready to be set on its way without human oversight.
By remaining vigilant and knowledgeable about AI’s strengths and weaknesses, organizations can mitigate risks. Think like your potential enemies to stay one step ahead in this new technological realm.
AI x 9: This article appears in a nine-part summer series examining how AI impacts leadership and business, produced in collaboration with Expansión.
Professor of Digital Strategy and Cybersecurity at IMD
Öykü Işık is Professor of Digital Strategy and Cybersecurity at IMD, where she leads the Cybersecurity Risk and Strategy program. She is an expert on digital resilience and the ways in which disruptive technologies challenge our society and organizations. Named on the Thinkers50 Radar 2022 list of up-and-coming global thought leaders, she helps businesses to tackle cybersecurity, data privacy, and digital ethics challenges, and enables CEOs and other executives to understand these issues.