
How organizations navigate AI ethics

Published 5 October 2023 in Technology • 7 min read

While many organizations publicly commit to responsible AI principles, a gap often emerges when they try to put those principles into practice.

At this year’s CogX Festival in London in September, the British actor and broadcaster Stephen Fry warned against embracing artificial intelligence (AI) uncritically after discovering that his voice had been impersonated by machine learning.

While Fry cited a (comparatively) benign case of copyright infringement – his cloned voice was being used to narrate a history documentary – the unprecedented pace of AI development and adoption means that ethical concerns such as algorithmic bias, intellectual property rights, labor displacement, and privacy protection have become regular topics on the corporate agenda.

Online retailer Amazon had to shut down its AI recruiting tool after discovering that the system had taught itself not to select female candidates; Uber faced liability questions in 2018 after one of its self-driving cars hit and killed a pedestrian; and tech giants like IBM have stopped selling general-purpose facial recognition and analysis software over fears of potential misuse.

Spurred on by cases like these, many organizations have declared their adherence to responsible AI principles, resulting in over 250 commitments.

These can be boiled down to five core principles:

- Transparency: openness and clarity about how AI is used and the decisions it makes.
- Justice and fairness: ensuring that AI doesn’t discriminate for, or against, certain groups of people.
- Non-maleficence: AI should not cause harm.
- Accountability: clear assignment of responsibility for AI systems and their outcomes.
- Privacy: protection of individuals’ personal information in AI models.

Although progress has been slow, the true measure lies in how these principles are applied in practice. There are currently three widely recommended approaches to AI ethics application:

1. Setting up a governance structure to monitor and manage AI ethics

Two types of AI ethics governance predominate today: external advisory boards and internal review committees. These can bring a range of expertise together to analyze and assess complex AI problems effectively, but there are real challenges here, too.

External ethics advisory boards have had mixed success. Typically composed of outside experts hired on a part-time basis to review AI practices, such boards have proved difficult for many organizations to make work. Google’s Advanced Technology External Advisory Council (ATEAC) was shut down only a week after it was set up, following controversy over some of its members. Axon, a manufacturer of Taser weapons and body cameras, effectively discontinued its ethics advisory board after most of its members resigned when the organization ignored their advice.

One difficulty with an external advisory board approach is the degree of influence it can wield over the business it advises. Organizations are hesitant to give a right of veto to an advisory board largely composed of external figures.

Considerably more companies have established internal AI ethics committees. For instance, Microsoft’s AETHER (AI, Ethics, and Effects in Engineering and Research) committee provides guidance on the responsible development and deployment of AI across the company’s products and services.


The straightforward way such committees can systematically identify and help mitigate the risks of AI products, whether developed in-house or purchased externally, may explain their widespread adoption to date. At the same time, tensions can arise when committee decisions conflict with business priorities.

Sometimes, internal and external governance are combined. SAP’s AI governance model includes an external AI advisory board whose role is to ensure that the company’s AI activities comply not only with ethical norms and legal regulations but also with SAP’s own guiding principles for AI.

It reviews high-risk use cases and suggests improvements to SAP’s AI ethics policy. At the same time, SAP maintains an internal committee that works to ensure ethical considerations and compliance across its AI-driven operations. The internal committee works closely with the external advisory board and serves as the primary point of contact for SAP employees regarding AI ethics-related queries and concerns.

2. Launching education programs to build knowledge around AI ethics

In 2018, Deutsche Telekom became one of the first companies in the telecommunications sector to publish ethical AI guidelines. Since then, the company has turned its nine AI ethics principles into practice.

The company prioritized broad education and engagement on AI ethics within the company. An early initiative was to produce an AI ethics handbook, primarily designed for data scientists. This helped to translate the nine ethics principles into practical applications in the business sphere.

In addition, the company embarked on an extensive program of education and awareness. It organized a series of roadshows across its international offices, presenting AI use cases and hosting discussions on digital ethics. It conducted half-day training sessions for data science staff and developed a digital and AI ethics e-learning program that was incorporated into mandatory management compliance training. The company is currently working to provide all new hires with training on AI data ethics risks, further integrating ethical considerations into the fabric of the organization.

3. Establishing mechanisms to evaluate and verify the development of AI systems

While AI education and training programs provide a valuable foundation for responsible AI practices, they do not necessarily lead to responsible AI outcomes. Once broad education and engagement on AI have been established, some organizations are therefore moving to put clear measurement and enforcement mechanisms in place.

For example, Tieto, a leading Nordic IT services company, announced its commitment to trialing an internal ethics certification. With thousands of Tieto employees already having completed internal courses to improve their knowledge of AI, this certification takes their education a step further by ensuring that those working closely with AI solutions follow ethics guidelines.


Similarly, Telefónica, a Spanish telecommunications firm, operationalizes its responsible AI approach through a methodology called “Responsible AI by Design”. The operating model includes training and awareness activities on AI and ethics, available in three languages (Spanish, English, and Portuguese), accompanied by dedicated workshops and self-assessment questionnaires that every manager responsible for developing AI-based products and services is required to complete.

Despite these efforts, software developers often struggle to translate abstract AI principles into their daily work, with recent research suggesting that 79% of tech workers report a need for pragmatic resources to assist them in navigating ethical concerns.

The demand for resources that bridge the principles-to-practice gap in AI systems development has fostered a plethora of tools, methodologies, frameworks, and processes.

But while building better tools remains an admirable goal, a paradox emerges: the sheer volume of available tools and methodologies is itself a burden for organizations to navigate. Striking a balance between quantity and quality, between accessibility and expertise, and between principle and practice remains a key challenge.

Auditing is increasingly used to assess whether AI systems are developed in a way consistent with an organization’s affirmed principles. Likewise, certifications serve to verify compliance with specific requirements applicable to AI applications.

While both measures have been geared towards companies developing AI tools, those that use AI are also starting to adopt them. In 2022, for example, American Express, General Motors, Nike, and Walmart announced that they would adopt scoring criteria to help reduce bias in algorithmic tools used to make hiring and workforce decisions.

Are there other approaches?

Given that each of the three recommended approaches offers different strengths and weaknesses, a holistic approach is recommended. A robust AI education, for example, when combined with strong governance could lead to more effective and ethically aligned decision-making across all levels of the organization.

At the same time, we must remember that the field of AI ethics is constantly evolving. The Council of Europe, for example, is focused on strengthening business commitments to human rights. AI regulation, meanwhile, is under discussion in regions across the world, most recently with the EU’s Artificial Intelligence Act.

The importance of AI ethics for organizations is increasing, spurred on by the emergence of generative AI and large language model applications such as ChatGPT. There is a strong need for compliance as regulations tighten, but there is also increasing pressure from civil society for organizations to act responsibly and ethically.

Quick-fix approaches to AI ethics may bring short-term benefits, but sustainable benefits require a more comprehensive and coordinated approach.

We recommend combining strong internal and external governance with engaging educational programs across the organization. There is a need to integrate AI ethics into organizational processes so that ethical lapses and risks can be identified and rectified before they become embedded into processes or offerings.

Authors


Tomoko Yokoi

Researcher, Global Center for Digital Business Transformation, IMD

Tomoko Yokoi is an IMD researcher and senior business executive with expertise in digital business transformations, women in tech, and digital innovation. With 20 years of experience in B2B and B2C industries, her insights are regularly published in outlets such as Forbes and MIT Sloan Management Review.


Michael R. Wade

Professor of Innovation and Strategy at IMD

Michael R. Wade holds the Tonomus Professorship in Digital Business Transformation and is Director of IMD’s Global Center for Digital Business Transformation. He directs a number of open programs such as Leading Digital Business Transformation, Digital Transformation for Boards, Leading Digital Execution, and the Digital Transformation Sprint. He has written ten books and hundreds of articles, and hosts a popular management podcast. In 2021, he was inducted into the Swiss Digital Shapers Hall of Fame.
