

by José Parra Moyano • Published September 10, 2025 in AI • 3 min read
Always ask for and verify the sources or references behind GenAI outputs.
Maintain a critical mindset toward GenAI outputs and validate with secondary sources, especially for cybersecurity-related tasks. Using your eyes and ears is still the best way to differentiate between AI-generated and genuine human content.
Look for user reviews and documented case studies that illustrate real-world performance and reliability.
Request clarity on the data sources and methodologies used to train the AI so you can thoroughly assess potential biases. Most biased outputs can be traced to training data sets that were not carefully curated and were unrepresentative of the group for which the output would be used.
Avoid investing in AI for its own sake; instead, focus on solving pain points where clear value can be demonstrated.
Utilize model cards to document how your team assesses and mitigates risks (e.g., bias and explainability) and make this information available to users; a minimal model-card sketch follows this list.
Conduct regular audits of AI performance and fairness to identify and address any issues proactively; a rough example of one such fairness check also follows this list. Bodies such as the European Data Protection Board and the Dutch-based ICT Institute have published helpful checklists on how to conduct such audits.
Define clear accountability structures and governance policies (such as ethics boards and external audits) around your AI systems to bolster user trust.
Educate your users about how AI works, including its limitations and the measures taken to ensure its reliability. Use clear, jargon-free communication to demystify AI and build trust.
Balance the rush to capture the commercial gains that AI offers against the paramount need for trust. Put in place strong systems that continuously challenge the trustworthiness of the AI you use and with which your customers will interact.
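
To make the model-card recommendation above concrete, here is a minimal sketch (in Python, with invented field names and illustrative values, not a prescribed standard) of how a team might record intended use, known limitations, bias assessment, and mitigations in a structured document that can be published alongside a model.

```python
# Minimal model-card sketch: assumed fields and illustrative values only.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data_summary: str                      # where the data came from and how it was curated
    known_limitations: list = field(default_factory=list)
    bias_assessment: str = ""                       # how bias was measured and on which groups
    explainability_notes: str = ""                  # how outputs can be interpreted or challenged
    mitigations: list = field(default_factory=list)

card = ModelCard(
    model_name="customer-support-assistant-v1",
    intended_use="Drafting replies to routine customer queries; not for legal or medical advice.",
    training_data_summary="Anonymized support tickets from 2022-2024, reviewed for representativeness.",
    known_limitations=["May hallucinate product details", "Weaker performance on non-English queries"],
    bias_assessment="Response quality compared across customer regions and languages.",
    explainability_notes="Each draft links to the retrieved source documents it was based on.",
    mitigations=["Human review before sending", "Quarterly fairness audit"],
)

# Publish the card alongside the model so users can inspect it.
print(json.dumps(asdict(card), indent=2))
```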
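And as a rough sketch of one check a regular fairness audit might include, the snippet below (again in Python, with made-up data and an assumed threshold) compares a model's positive-decision rate across two groups and flags a gap above the agreed tolerance. A real audit, such as those described in the checklists referenced above, would cover far more than this single metric.

```python
# One illustrative fairness check: compare positive-outcome rates across groups.
# Data and threshold are invented; a real audit would use logged production decisions.
from collections import defaultdict

decisions = [
    # (group, model_decision) -- 1 = favorable outcome, 0 = unfavorable
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print("Positive-decision rate by group:", rates)
print(f"Demographic parity gap: {gap:.2f}")

THRESHOLD = 0.2  # tolerance set by your governance policy
if gap > THRESHOLD:
    print("Flag for review: gap exceeds the agreed threshold.")
```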

Professor of Digital Strategy
José Parra Moyano is Professor of Digital Strategy. He focuses on the management and economics of data and privacy and how firms can create sustainable value in the digital economy. An award-winning teacher, he also founded his own successful startup, was appointed to the World Economic Forum’s Global Shapers Community of young people driving change, and was named on the Forbes ‘30 under 30’ list of outstanding young entrepreneurs in Switzerland. At IMD, he teaches in a variety of programs, such as the MBA and Strategic Finance programs, on the topics of AI, strategy, and innovation.
