
by Öykü Işık • Published December 5, 2024 in Artificial Intelligence • 3 min read
Audio and video deepfakes can be used for fraud, phishing, and reputational damage. Executives should do two things to avoid being victimized:
The basic lesson here is that, before launching their attacks, bad actors can gather a great deal of information on your company and employees to find and exploit vulnerabilities. Prepare for this by putting some form of multimodal authentication in place. The aim is twofold: to verify any information or request that could be used to do harm, and to keep that verification process secret.
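As one illustration of what multimodal, out-of-band authentication can look like in practice, here is a minimal sketch; the pre-shared secrets, the registry, and the helper names are assumptions for the example, not anything prescribed in the article.

```python
# Minimal sketch of out-of-band verification for a high-risk request,
# e.g. a wire transfer asked for on a video call.

import hmac
import secrets

# Secrets registered in person or during onboarding, never shared over
# email, chat, or video -- the channels an attacker is most likely to study.
PRE_SHARED_SECRETS = {
    "cfo@example.com": b"registered-out-of-band",
}

def issue_challenge() -> str:
    """Create a fresh one-time challenge for a single sensitive request."""
    return secrets.token_hex(8)

def verify_response(requester: str, challenge: str, response: str) -> bool:
    """Check that the requester answered the challenge using the pre-shared secret."""
    secret = PRE_SHARED_SECRETS.get(requester)
    if secret is None:
        return False  # unknown requester: escalate to a human review
    expected = hmac.new(secret, challenge.encode(), "sha256").hexdigest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

# Usage: the approver issues a challenge, the requester answers it over the
# pre-agreed second channel, and the action proceeds only if it verifies.
challenge = issue_challenge()
```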
Knowledge is power here, too. There are all sorts of biases in AI (such as facial recognition technologies that recognize white faces with a higher degree of accuracy than black faces, and GenAI systems that read CVs differently depending on whether they are associated with male or female names). Fix such biases by:
The lesson here is that even the most sophisticated GenAI tool is not foolproof. Companies must keep human beings in the loop to detect AI biases and blind spots, and build human oversight into their responsible AI governance processes; a simple check of this kind is sketched below.
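As an illustration rather than a prescription from the article, here is a minimal, hypothetical sketch of a name-swap audit that a human reviewer might run against a CV-screening model; the `score_cv` placeholder, the name pairs, and the 0.05 gap threshold are all assumptions for the example.

```python
# Minimal, hypothetical sketch of a name-swap audit for a CV-screening model.

from typing import Callable

def name_swap_audit(score_cv: Callable[[str], float], cv_template: str,
                    name_pairs: list[tuple[str, str]],
                    max_gap: float = 0.05) -> list[dict]:
    """Flag name pairs whose substitution shifts the model's score by more
    than max_gap -- a sign the name, not the content, is driving the decision."""
    findings = []
    for name_a, name_b in name_pairs:
        gap = abs(score_cv(cv_template.replace("{NAME}", name_a))
                  - score_cv(cv_template.replace("{NAME}", name_b)))
        if gap > max_gap:
            findings.append({"pair": (name_a, name_b), "gap": round(gap, 3)})
    return findings

# Usage with a dummy scorer; replace the lambda with the real model's scoring call.
pairs = [("James", "Jane"), ("Michael", "Michelle")]
cv = "{NAME}\n10 years of supply-chain analytics experience ..."
print(name_swap_audit(lambda text: 0.7, cv, pairs))  # constant scorer: no findings
```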
We are now seeing data-poisoning attacks where AI models are tricked into behaving badly. (This is a type of cyberattack in which an adversary intentionally compromises a training dataset used by an AI or machine-learning model to influence or manipulate it.) Hackers are also using “jailbreaking mechanisms” to evade the guardrails put in place to limit GenAI’s potential for harm. Two measures are useful to understand how AI systems can be attacked:
By remaining vigilant and knowledgeable about AI’s strengths and weaknesses, organizations can mitigate risks. Think like your potential enemies to stay one step ahead in this new technological realm.
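To make the data-poisoning threat described above more concrete, here is a minimal, hypothetical sketch of how a team might screen incoming training data for suspicious, possibly label-flipped samples before retraining a model; the random data, features, and agreement threshold are illustrative assumptions, not the two measures referred to earlier.

```python
# Minimal sketch of one defensive habit against data poisoning: flag training
# samples whose labels disagree with their nearest neighbours, so a human can
# review them before the model is retrained.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_suspicious_samples(X: np.ndarray, y: np.ndarray, k: int = 5,
                            min_agreement: float = 0.5) -> np.ndarray:
    """Return indices of samples whose label disagrees with most of their
    k nearest neighbours -- a common symptom of label-flipping poisoning."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)           # idx[:, 0] is the sample itself
    neighbour_labels = y[idx[:, 1:]]    # labels of the k true neighbours
    agreement = (neighbour_labels == y[:, None]).mean(axis=1)
    return np.where(agreement < min_agreement)[0]

# Usage: review the flagged rows manually before they enter the training set.
X = np.random.rand(200, 8)
y = np.random.randint(0, 2, size=200)
print(flag_suspicious_samples(X, y))
```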
Professor of Digital Strategy and Cybersecurity at IMD
Öykü Işık is Professor of Digital Strategy and Cybersecurity at IMD, where she leads the Cybersecurity Risk and Strategy program and co-directs the Generative AI for Business Sprint. She is an expert on digital resilience and the ways in which disruptive technologies challenge our society and organizations. Named on the Thinkers50 Radar 2022 list of up-and-coming global thought leaders, she helps businesses to tackle cybersecurity, data privacy, and digital ethics challenges, and enables CEOs and other executives to understand these issues.