How can you build trust in your AI?
1. Solve real problems
Erik Brynjolfsson of the Stanford Institute for Human-Centered AI has estimated that “billions of dollars are being wasted” by companies on AI, with insufficient focus on generating value. Avoid investing in AI for its own sake; instead, focus on solving specific pain points where clear value can be demonstrated.
2. Model cards and transparency
Transparency matters. Model cards accompany machine learning models to document how they are intended to be used and how their performance has been assessed. Use them to record how your team evaluates and mitigates risks such as bias and limited explainability, and make this information available to users. For example, a model card can report how a model performs across a range of demographic groups to flag possible bias.
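To make this concrete, the core contents of a model card can be captured in a simple structured record. The following sketch is purely illustrative: the model name, fields, and figures are hypothetical, not drawn from any real deployment.

```python
# A minimal, hypothetical model card for an imagined loan-screening classifier.
# All field names and numbers are illustrative assumptions.
model_card = {
    "model": "loan-screening-classifier v1.2",
    "intended_use": "Pre-screening of consumer loan applications; "
                    "final decisions require human review.",
    "out_of_scope": ["mortgage underwriting", "credit-limit setting"],
    "overall_accuracy": 0.91,
    # Per-group performance can surface possible bias at a glance.
    "performance_by_group": {
        "age_18_34": {"accuracy": 0.93},
        "age_35_54": {"accuracy": 0.92},
        "age_55_plus": {"accuracy": 0.86},
    },
    "known_limitations": "Trained on 2020-2023 data; may underperform "
                         "on applicant profiles unseen in training.",
}

# Flag groups whose accuracy falls notably below the overall figure.
gaps = {
    group: stats["accuracy"]
    for group, stats in model_card["performance_by_group"].items()
    if stats["accuracy"] < model_card["overall_accuracy"] - 0.03
}
print(gaps)
```

Even a lightweight record like this gives users and auditors a single place to check what the model is for and where its performance is weakest.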
3. Continuous auditing
Implement regular audits of AI performance and fairness to proactively identify and address any issues. If you are unsure how to conduct this kind of audit, bodies such as the European Data Protection Board and the Dutch-based ICT Institute have published helpful checklists.
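One check such an audit might run periodically is a demographic parity comparison on a batch of recent decisions. The sketch below is a simplified illustration: the sample records, group labels, and the 0.1 disparity threshold are all assumptions, and a real audit would cover more metrics.

```python
# Sketch of a recurring fairness check: compare approval rates across
# demographic groups and flag large gaps for human review.
# Records are hypothetical: (group, model_approved) pairs.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Return the fraction of positive decisions per group."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(records)
parity_gap = max(rates.values()) - min(rates.values())
needs_review = parity_gap > 0.1  # illustrative threshold; flag for follow-up
print(rates, parity_gap, needs_review)
```

Running a check like this on a schedule, and logging the results, turns fairness from a one-off assessment into an ongoing control.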
4. Accountability and governance
Define clear accountability structures and governance policies around your AI systems. This might include ethics boards or external audits to bolster user trust. For example, IBM has established an AI ethics board, while Fujitsu has set up an external advisory committee on AI ethics.
5. Education and communication
Educate your users about how AI works: its limitations and the measures taken to ensure its reliability. Clear, jargon-free communication can help demystify AI and build trust. Take Duolingo: when it introduced its conversation practice tool Duolingo Max, which is powered by ChatGPT, the language instruction company warned that AI-created responses may not always be accurate and encouraged learners to report errors. So, a starting point could be as simple as informing a customer they are talking to a chatbot, not a human, and that there are limits to what it can help with – but that a person is on hand if needed.
While AI undoubtedly offers unprecedented possibilities for growth and operational improvement, organizations must balance the urgent need to build trust against the temptation to rush for commercial gains.
Putting in place strong systems that continuously test the trustworthiness of the AI you use, and that you expect your customers to interact with, is an important first step in building faith in the technology and in the businesses that increasingly rely on it.
Where to start?
The independent International Organization for Standardization (ISO) has developed guidelines for managing risks around using artificial intelligence. This framework offers a helpful starting point for organizations looking to establish safer systems and processes to build trust in the fast-moving technology.