
Can you rely on AI? Use our checklist to avoid the pitfalls 

Published 15 May 2025 in Artificial Intelligence • 6 min read

In a world of deepfakes, hallucinations, and bias, we offer practical guidance on how you can trust your AI systems.

From bias and copyright infringement to hallucinations, deepfakes, and false information, we are all familiar with the unhelpful (at best) and damaging (at worst) idiosyncrasies of AI that have overshadowed the development and adoption of one of history’s most significant technological advances.

For example, during the 2024 UK election, a video of former Prime Minister Rishi Sunak declaring plans to introduce compulsory military service in conflict zones appeared on social media. It was a deepfake; there were no such plans. And in 2018, Amazon was forced to scrap an AI-automated hiring tool after it discriminated against women due to male-oriented bias in its data.

Rogue AI can also have serious commercial consequences. In 2012, the financial services firm Knight Capital lost nearly half a billion dollars in less than an hour after a malfunctioning automated trading algorithm triggered unintended stock trades; as a result, the company had to be sold to a rival. Unforeseen problems may well emerge with the rapid development of AI agents – systems based on Large Language Models (LLMs) that can operate autonomously on the internet and other platforms.

For organizations and leaders, these risks and realities represent more than just eye-grabbing headlines. They can make the difference between success and failure in a future that will likely be shaped by those who harness AI most effectively and credibly. Trust is at the heart of the matter. Society is already anxious about the existential repercussions of AI, from mass job losses to the risks posed by autonomous models. Flaws in how AI operates add to these fears and threaten the basis of trust in technology to improve our lives.

What can businesses and executives do to encourage greater trust in AI – and their growing reliance on it – among their customers, policymakers, investors, employees, and wider society? Organizations and leaders must address two questions. Do you trust AI as a source of information for your work? And can you put your hand on your heart and say you trust the AI your organization is developing or using to deliver its services? For many of us, the answer is probably: “I can’t be certain.”

Given the vast number of cases where AI has failed the trust test and a lingering lack of faith in the technology, it’s crucial to have a robust “due diligence” process in place to minimize the risks. The nature and speed of AI’s advance may make this appear analog and cumbersome, but taking pains to adopt a responsible approach will likely spare you embarrassment, or much worse. Ultimately, it could differentiate your organization in the marketplace from others with weaker governance systems. Based on these two key questions, we’ve developed a practical checklist for you and your organization to strengthen faith in AI.

How can you trust AI?

1. Verify sources

Always ask for and verify the sources or references behind GenAI output. In 2024, an AI-powered chatbot created by the New York City authorities to help small business owners made headlines when it provided inaccurate advice, in some cases even suggesting that businesses could do things that would in fact break the law. This had negative reputational implications for the city and risked business owners making potentially harmful and illegal decisions. Verifying the bot’s advice and sources would have limited the damage.
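Where teams want to automate part of this verification, a first line of defense can be as simple as confirming that the sources a model cites actually exist before a human reviews them. The sketch below is illustrative only; the claim, URL, and workflow are assumptions, not a description of any particular tool.

```python
# A minimal sketch of a source-verification step for GenAI output.
# Assumes you have prompted the model to return each claim with a source
# URL; all names and URLs below are illustrative.
import urllib.request


def source_is_reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if a cited URL resolves. A crude first filter only:
    a live page still needs a human check that it supports the claim."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout):
            return True
    except Exception:
        return False


claims_with_sources = [
    ("Sidewalk cafes require permit X.", "https://example.gov/permits"),
]

for claim, url in claims_with_sources:
    verdict = (
        "source reachable - review content"
        if source_is_reachable(url)
        else "REJECT: cited source does not resolve"
    )
    print(f"{claim} -> {verdict}")
```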

2. Healthy skepticism (trust but verify)

Maintain a critical mindset toward GenAI outputs and validate with secondary sources, especially for cybersecurity-related tasks. Deepfakes are the fastest-growing social engineering vector in AI-enabled cyber-attacks. Using our eyes and ears is still the best way to differentiate between AI-generated and genuine human content.

3. User reviews and feedback

Look for user reviews and documented case studies that illustrate real-world performance and reliability. You can also tap into helpful monitoring tools such as the Galileo Hallucination Index, which measures the performance of leading LLMs.

4. Transparency in data sources

Request clarity on the data sources and methodologies used to train AI to better assess potential biases. Most biased outputs can be traced to training data sets that were not carefully curated and were unrepresentative of the populations on which the system would be used. Facial recognition systems are a case in point: a study by Joy Buolamwini and Timnit Gebru (Proceedings of Machine Learning Research, 2018) found an error rate of 0.8% for lighter-skinned men but 34.7% for darker-skinned women. By asking for transparency in data sources, you can check whether the training data set is fit for the purpose for which the AI is being trained.
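In practice, this kind of scrutiny comes down to disaggregated evaluation: measuring error rates per demographic group rather than relying on a single overall score, which is how disparities like the one above surface. A minimal sketch, with made-up data:

```python
# A minimal sketch of disaggregated evaluation - the approach behind the
# Buolamwini and Gebru findings: compute error rates per demographic group
# instead of one overall score. All data below is made up for illustration.
from collections import defaultdict

# (group, prediction_was_correct) pairs from a labelled evaluation set
results = [
    ("lighter-skinned men", True), ("lighter-skinned men", True),
    ("darker-skinned women", False), ("darker-skinned women", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group, n in totals.items():
    print(f"{group}: error rate {errors[group] / n:.1%} on {n} samples")
```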

How can you build trust in your AI?

1. Solve real problems

Erik Brynjolfsson of the Stanford Institute for Human-Centered AI has estimated that “billions of dollars are being wasted” by companies investing in AI with insufficient focus on generating value. Avoid investing in AI for its own sake; instead, focus on solving specific pain points where clear value can be demonstrated.

2. Model cards and transparency

Transparency matters. Use model cards to document how your team assesses and mitigates risks (e.g., bias and explainability) and make this information available to users. Model cards accompany machine learning models, giving guidance on how a model is intended to be used and how its performance was assessed. For example, a card can show how the model performs across a range of demographic groups to indicate possible bias.
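There is no single mandated format, but a model card can start as structured metadata shipped alongside the model. The sketch below is illustrative only; the fields and figures are assumptions, not a standard schema.

```python
# A minimal, illustrative model card as structured data. Real model cards
# are richer; every field and figure below is an example, not a mandated
# schema, and the model name is hypothetical.
model_card = {
    "model": "customer-support-classifier",
    "intended_use": "Routing inbound support tickets; not for HR decisions.",
    "training_data": "Anonymized support tickets, 2022-2024, English only.",
    "performance": {"overall_accuracy": 0.91},
    # Disaggregated metrics make possible bias visible to users:
    "performance_by_group": {
        "native English speakers": {"accuracy": 0.93},
        "non-native English speakers": {"accuracy": 0.84},
    },
    "known_limitations": ["Degrades on non-English text"],
}

for section, content in model_card.items():
    print(f"{section}: {content}")
```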

3. Continuous auditing

Implement regular audits of AI performance and fairness to proactively identify and address any issues. If you are unsure how to conduct this kind of audit, bodies such as the European Data Protection Board and the Dutch-based ICT Institute have published helpful checklists.
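At its core, such an audit can be a small recurring job that recomputes fairness metrics on fresh data and flags breaches of a threshold set by your governance policy. A minimal sketch, with an assumed metric and threshold:

```python
# A minimal sketch of a recurring fairness audit: recompute a disparity
# metric on fresh data and flag breaches of a governance threshold.
# The 5% threshold and error-rate metric are illustrative assumptions.

def audit_disparity(error_rates: dict[str, float],
                    max_gap: float = 0.05) -> list[str]:
    """Flag group pairs whose error-rate gap exceeds the policy threshold."""
    findings = []
    groups = list(error_rates)
    for i, a in enumerate(groups):
        for b in groups[i + 1:]:
            gap = abs(error_rates[a] - error_rates[b])
            if gap > max_gap:
                findings.append(f"{a} vs {b}: gap of {gap:.1%} exceeds {max_gap:.0%}")
    return findings

# Example run with made-up numbers from a weekly evaluation batch:
print(audit_disparity({"group A": 0.02, "group B": 0.11}) or "No findings")
```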

4. Accountability and governance

Define clear accountability structures and governance policies around your AI systems. This might include ethics boards or external audits to bolster user trust. For example, IBM has established an AI ethics board, while Fujitsu has set up an external advisory committee on AI ethics.

5. Education and communication

Educate your users about how AI works: its limitations and the measures taken to ensure its reliability. Clear, jargon-free communication can help demystify AI and build trust. Take Duolingo: when it introduced its conversation practice tool Duolingo Max, which is powered by ChatGPT, the language instruction company warned that AI-created responses may not always be accurate and encouraged learners to report errors. So, a starting point could be as simple as informing a customer that they are talking to a chatbot, not a human, and that there are limits to what it can help with – but that a person is on hand if needed.

While AI undoubtedly offers unprecedented possibilities for growth and operational improvements, organizations must balance the urgent need to build trust against the temptation to rush for commercial gains.

Putting in place strong systems that continuously challenge the trustworthiness of the AI that you use and expect your customers to interact with marks an important first step in enhancing faith in the technology and the businesses that increasingly rely on it.

Where to start? 

The independent International Organization for Standardization (ISO) has developed guidelines for managing the risks of using artificial intelligence. This framework offers a helpful starting point for organizations looking to establish safer systems and processes to build trust in this fast-moving technology.

Authors

Öykü Işık

Professor of Digital Strategy and Cybersecurity at IMD

Öykü Işık is Professor of Digital Strategy and Cybersecurity at IMD, where she leads the Cybersecurity Risk and Strategy program and co-directs the Generative AI for Business Sprint. She is an expert on digital resilience and the ways in which disruptive technologies challenge our society and organizations. Named on the Thinkers50 Radar 2022 list of up-and-coming global thought leaders, she helps businesses to tackle cybersecurity, data privacy, and digital ethics challenges, and enables CEOs and other executives to understand these issues.

José Parra Moyano

Professor of Digital Strategy

José Parra Moyano is Professor of Digital Strategy at IMD. He focuses on the management and economics of data and privacy and on how firms can create sustainable value in the digital economy. An award-winning teacher, he also founded his own successful startup, was appointed to the World Economic Forum’s Global Shapers Community of young people driving change, and was named on the Forbes ‘30 under 30’ list of outstanding young entrepreneurs in Switzerland. At IMD, he teaches on AI, strategy, and innovation in a variety of programs, including the MBA and Strategic Finance programs.
