
Trust me, I’m a robot: how to avoid the pitfalls of AI
The development and adoption of AI are fraught with potential hazards. Here’s how to avoid the pitfalls and ensure trust in your systems.
by Öykü Işık • Published December 5, 2024 in Artificial Intelligence • 3 min read
Audio and video deepfakes can be used for fraud, phishing, and reputational damage. Executives should do two things to avoid being victimized:
The basic lesson here is that, before launching their attacks, bad actors can gather a great deal of information about your company and employees to find and exploit vulnerabilities. Prepare for this by putting a form of multimodal authentication in place. The aim is twofold: to verify information that could be used to do harm, and to keep that verification process secret.
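One way to make the “verify, and keep the verification secret” idea concrete is a pre-shared secret and a challenge-response check, performed out of band from the call or message itself. The sketch below is purely illustrative: the secret value, function names, and wire-transfer scenario are assumptions, not a prescribed process. It uses Python’s standard hmac module so that only someone holding the secret can answer the challenge, no matter how convincing their voice or face appears.

```python
import hashlib
import hmac
import secrets

# Pre-shared secret, distributed out of band (for example, in person) and
# never sent over the channel an attacker could observe or deepfake.
SHARED_SECRET = b"rotate-me-regularly"  # placeholder value for illustration

def issue_challenge() -> str:
    """Generate a one-time random challenge to read out to the requester."""
    return secrets.token_hex(8)

def expected_response(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """The answer only someone holding the shared secret can compute."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify_requester(challenge: str, response: str) -> bool:
    """Compare the requester's answer to the expected one in constant time."""
    return hmac.compare_digest(expected_response(challenge), response)

# Example: a convincing 'CEO' on a video call asks for an urgent wire transfer.
challenge = issue_challenge()
print("Challenge read to requester:", challenge)
print("Genuine executive verified: ", verify_requester(challenge, expected_response(challenge)))
print("Deepfake without the secret:", verify_requester(challenge, "00000000"))
```

In practice the secret would live in a password manager or hardware token rather than in code, but the principle is the same: the check relies on something the attacker cannot have scraped from public information.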
Here, again, knowledge is power. There are all sorts of biases in AI (such as facial recognition technologies that recognize white faces with a higher degree of accuracy than black faces, and GenAI systems that read CVs differently depending on whether they are associated with male or female names). Fix such biases by:
The lesson here is that even the most sophisticated GenAI tool is not foolproof. Companies must keep humans in the loop to detect AI biases and blind spots, and must build human oversight into their responsible AI governance processes.
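To make the human-in-the-loop point concrete, here is a minimal, hypothetical audit sketch: given a screening model’s decisions and a sensitive attribute (here, the gender a CV is associated with), it compares selection rates across groups so a reviewer can spot the kind of disparity described above. The data and the four-fifths threshold are illustrative assumptions, not a prescribed method.

```python
from collections import defaultdict

# Hypothetical screening results: (group, model_selected)
decisions = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def selection_rates(records):
    """Fraction of candidates in each group that the model selects."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in records:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
print("Selection rates:", rates)

# Simple disparate-impact check (the four-fifths rule, used here only as an
# illustrative threshold): flag any group whose selection rate falls below
# 80% of the highest group's rate for human review.
best = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * best}
print("Groups flagged for human review:", flagged)
```

The point is not the specific metric but the habit: routinely measuring outcomes by group, and routing anything anomalous to a human decision-maker rather than letting the model’s output stand unexamined.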
We are now seeing data-poisoning attacks, in which AI models are tricked into behaving badly: an adversary intentionally compromises the training dataset used by an AI or machine-learning model in order to influence or manipulate it. Hackers are also using “jailbreaking” techniques to evade the guardrails put in place to limit GenAI’s potential for harm. Two measures are useful for understanding how AI systems can be attacked:
By remaining vigilant and knowledgeable about AI’s strengths and weaknesses, organizations can mitigate risks. Think like your potential enemies to stay one step ahead in this new technological realm.
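To see how the data-poisoning attack described above works in miniature, the sketch below trains a toy keyword-based spam filter twice: once on clean data, and once after an attacker has slipped mislabelled messages containing the word “invoice” into the training set. After poisoning, the filter starts blocking legitimate invoice emails. Everything here (the data, the model, the scenario) is an invented illustration, not a real system.

```python
from collections import Counter

def word_scores(messages):
    """Learn, for each word, the fraction of its training occurrences
    that appear in spam-labelled messages."""
    spam, total = Counter(), Counter()
    for text, is_spam in messages:
        for word in text.split():
            total[word] += 1
            spam[word] += int(is_spam)
    return {w: spam[w] / total[w] for w in total}

def classify(text, scores):
    """Flag as spam if the average spam score of known words exceeds 0.5."""
    known = [scores[w] for w in text.split() if w in scores]
    return (sum(known) / len(known) > 0.5) if known else False

clean_training = [
    ("win a prize now", True), ("click to win money", True),
    ("free money click now", True),
    ("project invoice attached", False), ("invoice for the project", False),
    ("team meeting agenda", False),
]

# An attacker who can write to the training set plants spam-labelled
# examples containing the word "invoice" to poison the model.
poisoned_training = clean_training + [
    ("invoice win prize", True), ("free invoice money", True),
    ("urgent invoice click", True),
]

message = "please pay this invoice"
print("Clean model flags invoice email:   ",
      classify(message, word_scores(clean_training)))
print("Poisoned model flags invoice email:",
      classify(message, word_scores(poisoned_training)))
```

Thinking like the attacker in this way, asking who can touch your training data and what they would gain from corrupting it, is exactly the vigilance the measures above are meant to build.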
Professor of Digital Strategy and Cybersecurity at IMD
Öykü Işık is Professor of Digital Strategy and Cybersecurity at IMD, where she leads the Cybersecurity Risk and Strategy program and co-directs the Generative AI for Business Sprint. She is an expert on digital resilience and the ways in which disruptive technologies challenge our society and organizations. Named on the Thinkers50 Radar 2022 list of up-and-coming global thought leaders, she helps businesses to tackle cybersecurity, data privacy, and digital ethics challenges, and enables CEOs and other executives to understand these issues.