Organizations must ensure they have a strategy for AI’s governance and ethical oversight. DE&I offers principles and practices that can inform decision-makers.
AI is fast becoming an integral part of the operating system for organizations, governments, and society. This rapid acceleration in adoption brings far-reaching, often unforeseen, negative consequences, frequently caused by a lack of responsible governance. We suggest that professionals leading AI strategies work with DE&I professionals to ensure human-centric principles are integrated into AI's use and upheld.
HR is among the most frequently discussed areas of AI application. Many processes and tasks will benefit from AI, from selecting the best candidate to ensuring that the selected individual has the support required to reach their potential in ways that deliver on the organization's mission and goals. AI tools are being deployed in candidate selection processes with increasing frequency. For instance, large organizations such as Unilever have long used HireVue, an AI-driven hiring platform, in recruitment. Yet using AI for individual assessment is the subject of heated debate. Indeed, HireVue faced a complaint in the US for unfair and deceptive practices driven by an opaque, proprietary algorithm that evaluated a job applicant's qualifications based on their appearance. As a result, HireVue dropped facial analytics and published a set of ethical AI principles, promising to follow those guidelines in future endeavors.
The HireVue story is considered by most to have a “happy ending”. Whether you agree or not, one thing is sure: AI has value to add and will be increasingly integrated into the future of recruitment, personalized assessment, and talent development. For these reasons alone, it is easy to imagine how ubiquitous AI will likely become in our working lives. It is, therefore, essential to recognize the need for responsible AI governance and innovation.
What if the algorithm used to screen resumes contains bias and rejects candidates from underrepresented groups? What happens when we outsource so many tasks to AI that junior staff miss vital opportunities to cut their teeth on straightforward problems before stepping into more demanding roles? What about the generative AI and large language models that have taken the world by storm? We know that the vast amounts of data used to train major LLMs do not fairly represent all cultures, genders, languages, and beliefs. If left unaddressed, gender, racial, and cultural biases in AI applications will have pervasive, often negative, and unintended repercussions. How are we addressing the commercial, social, and moral risks?
If AI is to help people with productivity and learning, people must be able to trust it. We must act now to ensure that bias and ethical flaws are detected and minimized so that AI can be an effective and trustworthy collaborator. And DE&I can help.
Do you have a strategy for responsible AI?
The DE&I profession has existed for several decades and, over that time, has developed robust, research-based practices and frameworks for effective implementation. Lessons learned from leaders in the field demonstrate that a sound strategy is required to make an impact instead of an array of activities rolled out randomly. In addition, successful DE&I strategies are designed and implemented with input from all regions, business units, and functions, as cultural aspects must be considered and embedded, partially or fully, into the strategic design.
This knowledge, acquired through experience, has resulted in the creation of “DE&I Councils”, which provide lateral strategic input in many leading organizations. These councils represent different segments of the organization and society and offer unique perspectives and observations to ensure that all voices are heard. Whichever DE&I strategy organizations devise, a direct link to their corporate values and beliefs is also a must, as these represent the moral and ethical compass across the organization.
Several Fortune 500 companies use the “DE&I house” model as a basis for integrating and implementing key principles and practices that underpin the required cultural transformation. This best practice model defines a mission-driven story and combines communication, education, metrics, and compliance across three pillars: talent recruitment and retention, development and mentoring, and inclusiveness and equity.
AI governance and ethical oversight must also operate at every level, from articulating specific principles to the lines of code within algorithms and software solutions, monitoring and reporting, and the skills needed to manage and lead AI. Given the proven success of this "house" model in embedding DE&I into systems, processes, and organizational culture, we recommend it as a useful framework for defining the ethical aspects of an AI strategy. In the remainder of this article, we offer a way to structure and frame a "responsible AI" house model to form the basis of an organization's strategic approach to managing the technology's ethical challenges, risks, and opportunities.
The responsible AI journey: data, design, and delivery
For most organizations, the big questions about AI revolve around three key phases: the data we use to train AI models, the design and development of AI models, and the delivery of AI solutions. As these three phases are critical for success, they should also represent the three pillars of the “AI ethical house model” that form the foundation for developing a robust, responsible AI strategy.
The first step in any organization's AI journey is curating the dataset that will be used to train the desired AI capability. Most examples of discriminatory AI relate to training datasets that are not sufficiently representative. For example, facial recognition algorithms used in law enforcement have failed to function correctly across all skin tones. Such systems are notorious for misidentifying people of color, causing many individuals to be mistakenly arrested because "the system got it wrong". A research initiative that dug into the root cause of this issue found that the training data was composed predominantly of white, Western male faces.
In this phase, we need diverse representation to ensure that training datasets are diverse, equitable, and inclusive, representing all demographic groups to minimize bias and ensure fairness. It is also essential that inclusive and transparent processes are established for data governance to reflect different perspectives and needs.
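A representation audit of this kind can be sketched in a few lines of code. The sketch below is illustrative only: the record structure, the "gender" attribute, and the 10% minimum-share threshold are assumptions for the example, not part of any standard or toolkit, and a real audit would cover many attributes and their intersections.

```python
from collections import Counter

def representation_audit(records, attribute, min_share=0.10):
    """Report each group's share of the dataset for one demographic
    attribute and flag groups falling below a minimum share.
    `records` is a list of dicts; the 10% threshold is illustrative."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "underrepresented": (n / total) < min_share}
        for group, n in counts.items()
    }

# Illustrative toy dataset: 1 of 20 records is labeled "female" (5% share),
# so that group is flagged as underrepresented against the 10% threshold.
data = [{"gender": "female"}] * 1 + [{"gender": "male"}] * 19
report = representation_audit(data, "gender")
```

A report like this is only a starting point; deciding what counts as adequate representation for a given use case remains a judgment for diverse governance teams.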
Precise requirements that articulate how the AI must perform, including technology test plans, are vital. A host of algorithms, referred to as frontier AI, can assist in automating the verification process.
The design and development phase
Once the training dataset has been prepared, the organization can design and develop the AI model. Before stepping into the design phase, proactive and documented steps must be taken to ensure that diverse angles and views are part of the process and present in the solution. An assessment should be conducted to identify whether any groups need to be added to the design teams; once identified, it is important to integrate these people and their perspectives. However, experience from DE&I tells us that simply adding diversity does not necessarily lead to different outcomes. Research shows that the representation of diverse groups only impacts decision-making and performance once it reaches 30% of the whole (McKinsey, 2020). It is therefore essential that teams make decisions in a working environment that is inclusive, where all members feel authorized to share divergent perspectives.
“The big questions revolve around three key phases: the data we use to train AI models, the design and development of AI models, and the delivery of AI solutions.”
At the design stage, it is vital to integrate mechanisms such as audits and assessments to detect and mitigate algorithm biases. AI systems must be designed with all end users in mind, considering diverse user experiences, accessibility needs, and cultural sensitivities. And ethical guidelines and principles must prioritize fairness, transparency, and accountability throughout the process.
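One simple example of the audit mechanisms mentioned above is a selection-rate comparison across demographic groups. The sketch below applies the widely used "four-fifths" rule of thumb (flagging a system when one group's selection rate falls below 80% of another's); the group labels and numbers are illustrative assumptions, and a production audit would use richer fairness metrics and statistical tests.

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected_bool) pairs from an AI screen.
    Returns the fraction selected per group."""
    totals, selected = {}, {}
    for group, sel in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if sel else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(outcomes, threshold=0.8):
    """Ratio of the lowest to the highest group selection rate; the 0.8
    threshold follows the common 'four-fifths' rule of thumb."""
    rates = selection_rates(outcomes)
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= threshold

# Illustrative audit: group_a selected at 50%, group_b at 30%.
# The ratio is 0.3 / 0.5 = 0.6, below 0.8, so the audit flags the system.
outcomes = ([("group_a", True)] * 25 + [("group_a", False)] * 25
            + [("group_b", True)] * 15 + [("group_b", False)] * 35)
rates, ratio, passes = disparate_impact(outcomes)
```

A flagged ratio is a prompt for investigation, not a verdict; the design team still needs to understand why the gap exists before deciding how to mitigate it.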
The delivery phase
The delivery phase is when your AI product is rolled out and put to use. Whether an organization is buying the product from an AI vendor or developing it in-house, AI innovations should be tested on various segments of society, gathering feedback from different stakeholders to monitor how the solution affects each demographic group. Does it recognize differences in gender, race, voice, accent, cultural setting, and more? What may well-meaning employees have overlooked because of time constraints, complexity, or budget pressures during production? Once live, track the impact of the product or service on various social segments through the lens of usability, mental health, the reduction of biases and stereotypes, well-being, and other essential elements.
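This kind of disaggregated monitoring can be as simple as tracking outcomes per demographic segment and comparing them. The sketch below assumes a classification-style product; the segment names (accents, in this example) and figures are purely illustrative.

```python
def accuracy_by_segment(results):
    """results: list of (segment, predicted, actual) tuples collected
    in production. Returns per-segment accuracy, so gaps between
    demographic groups become visible, plus the largest gap."""
    correct, total = {}, {}
    for seg, pred, actual in results:
        total[seg] = total.get(seg, 0) + 1
        correct[seg] = correct.get(seg, 0) + (1 if pred == actual else 0)
    acc = {s: correct[s] / total[s] for s in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Illustrative monitoring sample: a voice product is right 9/10 times for
# one accent but only 6/10 times for another, a 30-percentage-point gap.
results = ([("accent_a", 1, 1)] * 9 + [("accent_a", 1, 0)] * 1
           + [("accent_b", 1, 1)] * 6 + [("accent_b", 0, 1)] * 4)
acc, gap = accuracy_by_segment(results)
```

Reviewing such per-segment dashboards on a regular cadence gives the governance board concrete evidence of who the product is and is not serving well.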
Organizations should also strive for equitable access to AI solutions by identifying and addressing issues such as digital literacy and affordability that may disproportionately affect marginalized communities.
DE&I questions to ask your AI teams
To help organizations and decision-makers reflect on what steps and considerations are required across each of the three pillars of a responsible AI strategy, we recommend bringing together your AI and DE&I teams to create an internal “ethical board” to discuss a series of questions during each phase. This exercise should be repeated regularly, perhaps aligned with project and strategic timelines, so that your responsible governance and innovation processes remain on track.
By fostering a diverse representation of voices and perspectives, leaders can ensure that AI strategies and governance frameworks are sensitive to the needs and values of all communities. Equity considerations must guide the development and delivery of AI to mitigate biases and promote fairness. At the same time, inclusion entails creating spaces for marginalized groups to actively participate in shaping AI policies and practices. By incorporating tried and tested DE&I principles and insights into AI journeys, we can cultivate a more just and ethical approach to technological innovation that ultimately benefits all of society.
How can we ensure that human rights and ethical principles are upheld in the design and development phase?
How can we actively engage relevant stakeholders and inform them about the goals, limitations, and potential biases of AI systems to build transparency and trust?
How can we establish clear documentation and communication to disclose AI training data curation, model design, and system development to promote transparency?
What benefits do our AI solutions bring to stakeholders, users, and society? Do these benefits outweigh the potential risks? How do we test the impact of our product or service on different segments of society?
Do we systematically identify, assess, and mitigate risks associated with AI solutions, including ethics, privacy, security, and bias-related concerns?
How do we ensure ongoing monitoring, evaluation, and adaptation of our AI solutions to address emerging risks and unforeseen consequences?
IN FOCUS:
Responsible AI governance
As with any governance system, it is crucial to prioritize transparency, accountability, and inclusivity in AI.
As part of a responsible AI strategy, an "AI Board" can serve as a linchpin in ensuring inclusive datasets and the ethical development, delivery, and oversight of AI technologies. It is essential to consider who oversees this ethics board, who is held accountable for its results, and how its success is measured.
What form of governance do you choose for this board, and how are its actions reported and sponsored in your organization and potentially beyond, such as in a collaborative, cross-industry approach? In establishing an AI governance board, we recommend integrating the principles of DE&I at every stage of decision-making, from data to the design and delivery of AI solutions. We also recommend inviting seasoned, thought-leading DE&I professionals to sit on your AI Board.
Authors
Heather Cairns-Lee
Affiliate Professor of Leadership and Communication
Heather Cairns-Lee is Affiliate Professor of Leadership and Communication at IMD. She is a member of IMD’s Equity, Inclusion and Diversity Council and an experienced executive coach. She works to develop reflective and responsible leaders and caring inclusive cultures in organizations and society.
Öykü Işık
Professor of Digital Strategy and Cybersecurity at IMD
Öykü Işık is Professor of Digital Strategy and Cybersecurity at IMD, where she leads the Cybersecurity Risk and Strategy program. She is an expert on digital resilience and the ways in which disruptive technologies challenge our society and organizations. Named on the Thinkers50 Radar 2022 list of up-and-coming global thought leaders, she helps businesses to tackle cybersecurity, data privacy, and digital ethics challenges, and enables CEOs and other executives to understand these issues.
Sarah E. Toms
Chief Learning Innovation Officer
Sarah Toms is Chief Learning Innovation Officer at IMD where she leads the Learning Innovation and AI strategy. Sarah previously co-founded Wharton Interactive, an initiative at the Wharton School that has scaled globally. A demonstrated thought leader in the educational technology field, she is fueled by a passion to find and develop innovative ways to make every learning environment active, engaging, more meaningful, and learner-centric. Sarah is an AWS Education Champion, and has been on the Executive Committee of Reimagine Education for 8 years. She has spent more than 25 years working at the bleeding edge of technology, and was an entrepreneur for over a decade, founding companies that built global CRM, product development, productivity management, and financial systems. In addition, Sarah is coauthor of The Customer Centricity Playbook, the Digital Book Awards 2019 Best Business Book.
Josefine van Zanten
Chief Equity, Inclusion & Diversity Officer, IMD
Josefine has spent most of her global career as an HR executive in Fortune 500 organizations; as a Senior Vice President, she led departments of D&I, Culture Change, and Leadership and Organizational Development. Her experience spans industries including HP (IT), Royal Dutch Shell (oil and gas), Royal DSM (life sciences and chemicals), and Holcim (construction). She is currently the Chief Diversity, Equity & Inclusion (DE&I) Officer at IMD and works as a Senior Advisor, EI&D, with global organizations.