
by Öykü Işık • Published December 5, 2024 • 3 min read
Audio and video deepfakes can be used for fraud, phishing, and reputational damage. Executives should do two things to avoid being victimized:
The basic lesson here is that, before launching their attacks, bad actors can gather a great deal of information about your company and employees to find and exploit vulnerabilities. Prepare for this by putting multimodal authentication in place. The aim is twofold: to verify information that could be used to do harm, and to keep that verification process secret.
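To make the idea concrete, here is a minimal sketch in Python of what an out-of-band verification step could look like when a high-value request arrives over a spoofable channel such as a voice or video call. The thresholds, channel names, and helper functions are illustrative assumptions, not a prescription from the article; in practice the call-back is a human step, not an API call.

```python
# Minimal sketch of multimodal (out-of-band) verification for high-risk requests.
# All names, channels, and thresholds are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Request:
    requester: str      # claimed identity, e.g. "CFO" on a video call
    action: str         # e.g. "wire transfer"
    amount_eur: float
    channel: str        # channel the request arrived on, e.g. "video_call"

# Pre-registered secondary contact channels, kept out of public view.
SECONDARY_CHANNELS = {
    "CFO": "known_mobile_number",
    "CEO": "known_mobile_number",
}

HIGH_RISK_THRESHOLD_EUR = 10_000  # illustrative threshold
SPOOFABLE_CHANNELS = {"video_call", "voice_call", "email"}

def confirm_via_secondary_channel(requester: str) -> bool:
    """Check whether the requester confirmed the request out of band."""
    channel = SECONDARY_CHANNELS.get(requester)
    if channel is None:
        return False  # no verified secondary channel on file -> block
    # In practice this is a human step: call back on `channel` and ask a
    # pre-agreed challenge question that is never shared in writing.
    # The sketch defaults to "not confirmed" so nothing passes silently.
    return False

def approve(request: Request) -> bool:
    # Any high-value request arriving on a spoofable channel must be
    # re-verified on a different, pre-registered channel.
    if request.amount_eur >= HIGH_RISK_THRESHOLD_EUR or request.channel in SPOOFABLE_CHANNELS:
        return confirm_via_secondary_channel(request.requester)
    return True

if __name__ == "__main__":
    req = Request(requester="CFO", action="wire transfer",
                  amount_eur=250_000, channel="video_call")
    print("Approved" if approve(req) else "Blocked pending verification")
```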
Here, again, knowledge is power. There are all sorts of biases in AI (such as facial recognition technologies that recognize white faces with a higher degree of accuracy than black faces, and GenAI systems that read CVs differently depending on whether they are associated with male or female names). Fix such biases by:
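Detecting such disparities starts with measuring them. The sketch below is a hedged illustration rather than a method from the article: it audits a hypothetical CV-screening model by comparing selection rates across groups. The sample data and the 0.8 cut-off (echoing the common "four-fifths" rule of thumb) are assumptions.

```python
# Minimal sketch of a per-group bias audit for an automated screening model.
# The records, group labels, and disparity threshold are illustrative only.

from collections import defaultdict

# Each record: (group, model_decision) where decision 1 = shortlisted.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

rates = {g: positives[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best if best else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f} (ratio to best {ratio:.2f}) -> {flag}")
```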
The lesson here is that even the most sophisticated GenAI tool is not foolproof. Companies must keep humans in the loop to detect AI biases and blind spots, and must build human oversight into their responsible AI governance processes.
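One way to operationalize that oversight is a simple review gate that holds sensitive or low-confidence AI output for a person to check before it is used. The Python sketch below is a minimal illustration under assumed thresholds and topic labels, not a description of any particular governance tool.

```python
# Minimal sketch of a human-in-the-loop gate for GenAI output.
# The confidence floor, topic list, and review step are illustrative assumptions.

SENSITIVE_TOPICS = {"hiring", "credit", "medical", "legal"}
CONFIDENCE_FLOOR = 0.85  # below this, a person must review before release

def needs_human_review(topic: str, model_confidence: float) -> bool:
    """Route output to a human when the topic is sensitive or the model
    is not confident enough to act autonomously."""
    return topic in SENSITIVE_TOPICS or model_confidence < CONFIDENCE_FLOOR

def release(output: str, topic: str, model_confidence: float) -> str:
    if needs_human_review(topic, model_confidence):
        # In practice this pushes the item into a review queue with an audit trail.
        return f"HELD FOR REVIEW ({topic}, confidence={model_confidence:.2f}): {output}"
    return output

if __name__ == "__main__":
    print(release("Candidate ranked 3rd of 40.", topic="hiring", model_confidence=0.97))
    print(release("Q3 summary draft.", topic="reporting", model_confidence=0.65))
```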
We are now seeing data-poisoning attacks where AI models are tricked into behaving badly. (This is a type of cyberattack in which an adversary intentionally compromises a training dataset used by an AI or machine-learning model to influence or manipulate it.) Hackers are also using “jailbreaking mechanisms” to evade the guardrails put in place to limit GenAI’s potential for harm. Two measures are useful to understand how AI systems can be attacked:
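One simple illustration of how poisoned data can be caught before it reaches a model is to screen newly contributed training samples for statistical anomalies and quarantine suspicious ones for human review. The Python sketch below is a minimal, assumed example: the data and the three-sigma cut-off are placeholders, and real defences layer provenance checks, anomaly detection, and red-team testing on top of this.

```python
# Minimal sketch of one illustrative defence against data poisoning:
# screening new training samples for statistical outliers before they are
# added to the training set. Data and cut-off are illustrative assumptions.

import statistics

# Existing, trusted feature values (one-dimensional for simplicity).
trusted = [0.9, 1.1, 1.0, 0.95, 1.05, 1.02, 0.98, 1.08]

# Newly contributed samples, possibly including poisoned points.
incoming = [1.01, 0.97, 9.5, 1.03]

mean = statistics.mean(trusted)
stdev = statistics.stdev(trusted)

for value in incoming:
    z = abs(value - mean) / stdev
    verdict = "QUARANTINE for human review" if z > 3 else "accept"
    print(f"sample {value}: z-score {z:.1f} -> {verdict}")
```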
By remaining vigilant and knowledgeable about AI’s strengths and weaknesses, organizations can mitigate risks. Think like your potential enemies to stay one step ahead in this new technological realm.
Professor of Digital Strategy and Cybersecurity at IMD
Öykü Işık is Professor of Digital Strategy and Cybersecurity at IMD, where she leads the Cybersecurity Risk and Strategy program and co-directs the Generative AI for Business Sprint. She is an expert on digital resilience and the ways in which disruptive technologies challenge our society and organizations. Named on the Thinkers50 Radar 2022 list of up-and-coming global thought leaders, she helps businesses to tackle cybersecurity, data privacy, and digital ethics challenges, and enables CEOs and other executives to understand these issues.