Share
Facebook Facebook icon Twitter Twitter icon LinkedIn LinkedIn icon Email

Artificial Intelligence

Why leaders need an ethical framework for AI

Published 24 February 2025 in Artificial Intelligence • 7 min read

Business leaders need to ensure that their organizations consider all AI-related decisions and operations through an ethical lens, warns IMD’s Michael Wade.

Following an explosive entry onto global markets, PwC forecasts that, by 2030, AI will add over $15tn to the global economy annually, while McKinsey projects a figure of between $15.5tn and $22.9tn annually by 2040.

But amid all the urgent work businesses are doing to capitalize on AI’s potential, it’s easy to overlook questions about an entirely different form of value – or rather, values.

As AI becomes ever more prevalent, we should all be asking: how will AI impact human values? And how can we ensure its impact is positive?

Business leaders have the opportunity to use AI to promote positive human values, but they should also ensure that AI’s development and use are guided by an ethical governance framework.

Gen AI tools scored themselves just one out of 10 for negative human values like envy and jealousy – but eight for indifference and seven for dishonesty, deception, and manipulativeness

Does AI support human values?

We conducted some experimental research to evaluate AI’s position in relation to 10 values widely regarded as positive across different societies globally: honesty, integrity, respect, equality, freedom, justice, compassion, responsibility, kindness, and trustworthiness. Then, we asked ChatGPT and Gemini to evaluate AI’s consistency with those values.

The highest scores were just six out of 10 for respect and freedom. Honesty, equality, and kindness scored five. Compassion was at the bottom of the table, with a rating of three.

Next, we repeated the experiment with 15 negative human values, including greed, arrogance, and cruelty. The Gen AI tools scored themselves just one out of 10 for envy and jealousy – but eight for indifference and seven for dishonesty, deception, and manipulativeness.

Caveats apply. The research isn’t scientifically rigorous. But, for many people, including AI experts, these results ring true. Of course, humans aren’t perfect either. But with AI becoming ever more prominent, it’s increasingly urgent that leaders focus on aligning AI initiatives with positive human values.

“Some businesses are still in the starting blocks, while others have ethical codes that pre-date AI and require a refresh.”

A focus on ethical governance

The solution lies in defining a robust ethical approach to the governance around AI’s development and usage. An increasing number of executives recognize the importance of completing that task but, unfortunately, ethics remains a glaring gap for AI developers and businesses deploying the technology.

Some businesses are still in the starting blocks, while others have ethical codes that pre-date AI and require a refresh. Others realize that they must apply their ethical codes more rigorously.

Then there are those – including some AI firms – who have drawn up high-level agendas for the ethical use of AI that, upon closer examination, turn out to be superficial and fail to hold the business accountable for future actions. Similarly, some have created AI ethics advisory boards with external individuals filling the seats on a part-time basis. In large, complex corporations, that approach is unlikely to move the needle.

Businesses need a strong ethical framework that shapes the key AI-related activities within the organization, from communicating the new AI-assisted benefits to customers to making high-level technology investment decisions.

Of course, that end goal will be difficult to achieve. The issues involved are complex and the risks uncertain. It’s unchartered territory. Business has never had to confront questions quite like these. And, given the enormous potential benefits of using AI, it’s understandable that many leaders are wary of implementing restrictions overzealously.

Four reasons to implement an ethical approach to AI

There are four significant known problems with today’s AI tools that require careful, ethically-aware handling:

– Accuracy: From generating false statements or inventing information sources to producing vivid “hallucinations” (recent examples include Black Nazi soldiers and Native American Vikings), when AI is deployed at scale, the resulting inaccuracies could be very costly and damaging to a company – and its customers.

– DEI: Access to AI tools is an inclusivity issue. Although some tools are free, many require a significant upfront injection of capital. Moreover, once the tools are in place, workers must be competent to use them. Training can be another costly investment. Organizations need to think carefully about whether they will be able to offer all their employees unrestricted access to a new tech purchase.

– Bias: Because developers train AI on real-world data, it’s prone to replicating real-world bias. For example, the underrepresentation of women or minorities in health datasets has been shown to result in computer-aided diagnosis systems that are less effective for Black patients.

– Sustainability: Generating a ChatGPT response uses about 10 times as much energy as a Google search, according to one study. AI is leading to a huge consumption of energy and water. How can businesses reconcile AI adoption with their sustainability goals?

Each of these issues is complex and challenging. Together, they point to the need for organizations to develop a consistent ethical approach to AI.

The question of who is responsible for developing ethical governance standards on AI is critical

Who will shape ethical standards?

The question of who is responsible for developing ethical governance standards on AI is critical. On one hand, there are the tech firms developing AI systems. Industry players often talk about their efforts – but are they doing enough?

In one of the most encouraging initiatives, the AI developer Anthropic drew up a constitution for its platform, Claude. However, consider an industry leader such as OpenAI. Its charter offers a grand vision, but has that vision become distorted as the organization shifts from a non-profit to a for-profit basis?

On the other hand, if businesses don’t act, the likelihood grows that regulators will step in. The EU’s Artificial Intelligence Act is one of the first attempts to put AI-specific regulation in place on a global scale. It is unlikely to be the last. However, the global pace of AI regulation is slow and tends to be politically charged.

While companies adopting AI need to set their own governance standards, there are opportunities to shape the approach at an industry level, too. Organizations are likely to find they have much in common with commercial rivals, with a large proportion of one organization’s ethical AI governance code applicable to the operations of a rival. Sector leaders may also recognize common interests, such as reinforcing consumer trust and pre-empting regulatory intervention.

That could mean that industry bodies have a role to play as forums where businesses can collaborate on developing common basic standards or templates that firms can then tailor to their unique circumstances.

It is an approach that will take real leadership. The incentives currently favor exploiting AI’s capabilities to the maximum possible extent. On an organizational and industrial scale, self-regulation seems unlikely to be effective without strong regulatory oversight. Indeed, many organizations today are walking back commitments around DEI and online content moderation.

The most acute ethical questions about AI involve its effect on people, including the future division of labor between human and robotic workers.

A responsibility spread across the C-suite

Much of this agenda may seem to fall within the remit of the CEO or CTO. Yet, it is relevant to other senior executives, particularly CHROs.

The most acute ethical questions about AI involve its effect on people, including the future division of labor between human and robotic workers. Consequently, CHROs should be actively involved in developing ethical standards alongside other executives. Without HR input, AI governance standards could evolve with major blind spots.

Whether or not having a chief AI officer is advantageous or even necessary remains open to debate. A central function to drive AI may be useful, especially in the early stages, but the CEO – the role with the most clout to drive organizational change – should lead such an important agenda.

Ultimately, leaders should consider all decisions around AI through an ethical lens, in the same way that sustainability is now a key criterion in decision-making for many organizations.

Creating accountability for the human impact of AI

Ultimately, leaders should consider all decisions around AI through an ethical lens, in the same way that sustainability is now a key criterion in decision-making for many organizations. Leaders should integrate ethical considerations into KPIs, with incentives for strong ethical performance.

Some firms have opted to make public their approach to the ethical use of AI. This can send a powerful signal to stakeholders and the public that the organization is serious and committed. However, it also tends to invite greater scrutiny and more frequent challenges, which may or may not prove helpful. It is also more difficult to tweak policies already in the public domain. Leaders should, therefore, carefully consider the desired degree of exposure at all levels, including to employees and customers.

Although many executives have yet to grasp fully AI’s potential to impact human values, the picture is changing rapidly. Passive observation is not an option. Organizations must take the initiative and decide on a strong ethical approach to an AI-led world.

All views expressed herein are those of the author and have been specifically developed and published in accordance with the principles of academic freedom. As such, such views are not necessarily held or endorsed by TONOMUS or its affiliates.

Authors

Michael Wade - IMD Professor

Michael R. Wade

TONOMUS Professor of Strategy and Digital

Michael R Wade is TONOMUS Professor of Strategy and Digital at IMD and Director of the TONOMUS Global Center for Digital and AI Transformation. He directs a number of open programs such as Leading Digital and AI Transformation, Digital Transformation for Boards, Leading Digital Execution, Digital Transformation Sprint, Digital Transformation in Practice, Business Creativity and Innovation Sprint. He has written 10 books, hundreds of articles, and hosted popular management podcasts including Mike & Amit Talk Tech. In 2021, he was inducted into the Swiss Digital Shapers Hall of Fame.

Related

Learn Brain Circuits

Join us for daily exercises focusing on issues from team building to developing an actionable sustainability plan to personal development. Go on - they only take five minutes.
 
Read more 

Explore Leadership

What makes a great leader? Do you need charisma? How do you inspire your team? Our experts offer actionable insights through first-person narratives, behind-the-scenes interviews and The Help Desk.
 
Read more

Join Membership

Log in here to join in the conversation with the I by IMD community. Your subscription grants you access to the quarterly magazine plus daily articles, videos, podcasts and learning exercises.
 
Sign up
X

Log in or register to enjoy the full experience

Explore first person business intelligence from top minds curated for a global executive audience