
Why DE&I cannot be outsourced to AI

Published 4 June 2024 in Human Resources • 9 min read

Artificial intelligence has a significant role to play in improving DE&I, but leaders can’t rely on it to solve all their diversity and inclusion challenges — especially when it comes to supporting LGBTQ+ and disabled employees.

The onward march of AI as a business tool continues, and HR is falling into step. The launch of GPT-4o in May 2024 is just the latest phase in the rapid evolution of what has become an increasingly valuable tool for HR leaders.

AI’s appeal for HR lies primarily in the huge productivity gains it can deliver. But a key secondary factor is the promise that AI can bring not just quicker, but better outcomes — when properly designed — particularly in terms of eliminating human bias and helping organizations achieve their DE&I objectives.

Yet, it is increasingly clear that there are important limits to AI’s applicability in this field. AI relies on data, but for some aspects of DE&I, including addressing LGBTQ+ and disability-related issues, the data is either unavailable or inherently unreliable. At this point in the evolution of AI-enhanced processes, good outcomes still necessitate human involvement.

Therefore, CHROs and senior leaders cannot afford to become reliant on AI to drive progress on their DE&I objectives. Instead, they must combine carefully managed AI system implementation with a reinvigorated approach to existing DE&I initiatives.


How AI can help on the road to Diversity, Equity, and Inclusion

AI adoption has brought substantial benefits for organizations in relation to their workforces. The primary focus to date has been on recruitment, where AI can have a huge impact — especially for large corporations receiving high volumes of job applications. AI can rapidly sift through applications and shortlist candidates.

Moreover, hiring managers can instruct AI to provide shortlists that incorporate diversity and equity across relevant characteristics, such as a 50/50 gender balance or representation of the ethnicities prevalent in the local market (provided this latter data is collected).
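To illustrate how such a constraint might work in practice, here is a minimal sketch in Python. It is not any vendor's actual interface: the candidate fields, model scores, and the 50% cap are assumptions made purely for illustration.

```python
# Minimal sketch of constraint-aware shortlisting. This is NOT a real vendor
# API; the Candidate fields, model scores, and 50% cap are illustrative only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float   # e.g., a model-derived suitability score
    gender: str    # self-reported; real data may be missing or undisclosed

def balanced_shortlist(candidates, size=4):
    """Pick top-scoring candidates while keeping any single gender
    at or below half of the shortlist (a simple 50/50-style constraint)."""
    ranked = sorted(candidates, key=lambda c: c.score, reverse=True)
    cap = size // 2
    picked, counts = [], {}
    for c in ranked:
        if len(picked) == size:
            break
        if counts.get(c.gender, 0) < cap:
            picked.append(c)
            counts[c.gender] = counts.get(c.gender, 0) + 1
    return picked  # may be shorter than `size` if the pool is too skewed

pool = [
    Candidate("A", 0.91, "man"),   Candidate("B", 0.88, "woman"),
    Candidate("C", 0.86, "man"),   Candidate("D", 0.84, "man"),
    Candidate("E", 0.80, "woman"), Candidate("F", 0.79, "woman"),
]
print([c.name for c in balanced_shortlist(pool)])  # ['A', 'B', 'C', 'E']
```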

In this respect — and for areas such as talent planning — AI is an incredibly powerful tool. However, leaders need to recognize that it has limitations.


Tread carefully

As we have written elsewhere, concerns center on three key areas: The data used to train AI, the development of the algorithms used, and the deployment of AI solutions.

Data is especially significant in a DE&I context. The data used to train AI for HR solutions often contains significant biases: Samples have not offered an accurate cross-section of local or (for multinationals) global society and inevitably tend to focus on white male candidates.

AI must be trained to move away from idealizing the historical white male stereotype, which reinforces the system of privilege that enabled its inception. It is imperative that organizations ensure the data used to train AI systems is diverse in terms of age, ethnicity, gender, nationality, and any other relevant and identifiable dimension.
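As a rough illustration of what such a check could look like, the sketch below compares group shares in a training dataset against a chosen reference population. The attribute name, reference shares, and 20% tolerance are assumptions for the example, not a standard.

```python
# Illustrative representation check on training data; not a specific audit
# tool. The attribute name, reference shares, and tolerance are assumptions.
from collections import Counter

def representation_gaps(records, attribute, reference_shares, tolerance=0.2):
    """Flag groups whose share of the data falls more than `tolerance`
    (relative) below their share of the reference population."""
    counts = Counter(r[attribute] for r in records if r.get(attribute))
    total = sum(counts.values())
    flags = {}
    for group, ref_share in reference_shares.items():
        share = counts.get(group, 0) / total if total else 0.0
        if share < ref_share * (1 - tolerance):
            flags[group] = {"data_share": round(share, 3), "reference": ref_share}
    return flags

training_rows = [
    {"gender": "man"}, {"gender": "man"}, {"gender": "man"},
    {"gender": "man"}, {"gender": "woman"}, {"gender": "woman"},
]
print(representation_gaps(training_rows, "gender", {"woman": 0.5, "man": 0.5}))
# {'woman': {'data_share': 0.333, 'reference': 0.5}}
```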


The dangers of going too fast

If AI systems in HR are to operate effectively, it is crucial to ensure the availability of bias-free data. New regulations will only make such safeguards more important.

The EU’s AI Act is the stand-out example. It is likely to be just the first step on the long road to effective AI regulation, but it nevertheless sends an important message: AI can have both negative and positive impacts, and leaders must understand the implications of each.

An important aspect of this awareness is developing an understanding of how AI outputs can be monitored. This is not straightforward. Businesses adopt AI tools to make processes more efficient and less time-consuming; if a tool is meant to save 10 hours on a particular task, a manual checking process that adds back five is evidently counterproductive.

Yet, it is evident that going too quickly has inherent dangers. Corporations need to take AI-related risks seriously and ensure they allow for vital checks on AI-generated outputs. Critical questions need to be asked: “Does this make sense? Am I getting broad input? What — or, more importantly, who — is missing from these results?”
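One way to operationalize those questions without adding hours of manual review is a lightweight automated audit of AI-generated outcomes. The sketch below compares selection rates across groups, loosely inspired by the well-known "four-fifths" rule of thumb; the data shape and the 0.8 threshold are illustrative assumptions, not a compliance test.

```python
# Lightweight audit of AI screening outcomes; illustrative only, not legal
# advice. Data shape and the 0.8 threshold are assumptions for the sketch.
def selection_rates(outcomes, group_key="group", passed_key="advanced"):
    totals, passed = {}, {}
    for o in outcomes:
        g = o[group_key]
        totals[g] = totals.get(g, 0) + 1
        passed[g] = passed.get(g, 0) + (1 if o[passed_key] else 0)
    return {g: passed[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (a rough screen, not a conclusion)."""
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r < best * threshold}

decisions = [
    {"group": "A", "advanced": True},  {"group": "A", "advanced": True},
    {"group": "A", "advanced": False}, {"group": "B", "advanced": True},
    {"group": "B", "advanced": False}, {"group": "B", "advanced": False},
]
rates = selection_rates(decisions)
print(adverse_impact_flags(rates))  # {'B': 0.5} -> group B advances at half the rate of A
```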

The second part of the solution is that leaders must increase pressure on AI providers to give assurances about the effectiveness and security of their AI solutions, from the training data through to long-term system operation. In terms of DE&I, that means being clear that thorough and thoughtful steps have been taken to eliminate biases of all kinds right from the start.

AI is exciting, but if corporations fail to put in place adequate DE&I checks and balances, we could soon feel the impact of its drawbacks more than its benefits.


When data is unavailable

AI needs data to function. However, there are aspects of DE&I for which employers’ ability to collect and hold relevant data is limited. LGBTQ+ status and disability stand out, as both rely on self-disclosure. One reason is the law: In many countries, it is illegal to collect data relating to sexual orientation, for example. Where data is available, it is likely to be based on self-reporting, which carries its own challenges; it may be incomplete and unreliable (sometimes highly so).

Leaders need to remember that these legal limits exist for good reason, especially in the case of LGBTQ+ individuals. In many countries around the world, same-gender sexual activity is considered a crime and, in some instances, punishable by prison — or even death. In others, social norms challenge the rights of LGBTQ+ community members. Even within the EU, the progressive momentum of the early 2000s has been rolled back in some states. For these reasons, it is understandable that employees are not required to disclose information about their sexual orientation or gender identity to employers.

As a result, before asking AI to help choose candidates, organizations must compensate for the absence of quantitative HR data on these aspects of diversity by gathering qualitative information in partnership with employees. Some of the most effective approaches emerge from collaboration between leaders and employee resource groups (ERGs). ERGs can present experiences and opinions from their communities to leaders to help them understand how these individuals experience working life in the organization. This information, in turn, helps leaders remove hurdles and open up opportunities.

This requires leaders to commit to personal involvement with underrepresented communities in the workplace, sponsoring diverse individuals, and becoming an ally and an advocate for their advancement in the corporation. On a day-to-day basis, leaders must demonstrate respect and dignity to all employees, regardless of age, gender, sexual orientation, disability, nationality, and race, and they must value everyone’s contribution. Leaders should attend and sponsor ERGs, not in a “fixing” capacity, but to listen, learn, and understand how the same organization can be experienced differently based on who you are. Above all, they have to show humility.


Blending in-person engagement and AI-driven insight

None of this is to discount the advantages of AI, which can offer insights into other dimensions of diversity to compensate for a lack of data. AI might uncover a problem with the retention of one group where HR does have data — and extrapolate to suggest potentially related hurdles facing under-represented groups that are not part of the HR database. That analysis can then be discussed with ERGs. The issues identified may never have occurred to busy leaders, but a far-reaching AI data analysis can quickly make hurdles clear.

When AI presents leaders with this kind of analysis, they need to use it to improve conditions, processes, and behaviors. This can require delicate consideration and further investigation. For example, data might show that a company is losing many women aged 33–35. A logical conclusion may be that women in this category are considering starting a family, and they perceive the company to be unsupportive — in addition to facing many outside factors that make this life stage particularly demanding. The company may not have clear data on how this affects lesbian women, but leaders can engage via ERGs to discuss the perspectives of women who self-identify as being in that category. They most likely have similar concerns, though potentially with additional challenges, such as whether they will be negatively affected if they reveal their sexuality in a work context or how society perceives and supports their maternity choices.
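A simplified sketch of the kind of analysis described above follows: attrition rates are computed per age and gender segment, and unusually high segments are surfaced for discussion with ERGs. The field names, five-year age bands, and thresholds are hypothetical, chosen only to make the example concrete.

```python
# Simplified attrition analysis by segment; field names, age bands, and
# thresholds are hypothetical. Output is a prompt for dialogue, not a verdict.
from collections import defaultdict

def attrition_by_segment(employees, left_key="left_last_year"):
    seg = defaultdict(lambda: [0, 0])  # (gender, age band) -> [leavers, headcount]
    for e in employees:
        lo = (e["age"] // 5) * 5
        key = (e["gender"], f"{lo}-{lo + 4}")
        seg[key][1] += 1
        seg[key][0] += 1 if e[left_key] else 0
    return {k: leavers / headcount for k, (leavers, headcount) in seg.items()}

def outliers(rates, company_rate, factor=2.0):
    """Segments losing people at more than `factor` times the company rate."""
    return {k: round(r, 2) for k, r in rates.items() if r > company_rate * factor}

staff = [
    {"gender": "woman", "age": 34, "left_last_year": True},
    {"gender": "woman", "age": 33, "left_last_year": True},
    {"gender": "woman", "age": 34, "left_last_year": False},
    {"gender": "man",   "age": 34, "left_last_year": False},
    {"gender": "man",   "age": 41, "left_last_year": False},
    {"gender": "woman", "age": 42, "left_last_year": False},
]
print(outliers(attrition_by_segment(staff), company_rate=0.2))
# {('woman', '30-34'): 0.67}
```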

Sensitively exploring the issues in dialogue with an employee group or individuals in private can help employers reach a new, more nuanced level of inclusivity.


What can organizations do now?

There are four key actions for CHROs to take to ensure AI solutions support DE&I.

Increase pressure on AI providers:

Employers need to ask potential suppliers tougher questions when procuring AI solutions for use in HR systems. What are the guarantees that the issues of bias, hallucination, and error, prevalent in such technology in the past, have been ironed out? And have providers anticipated future issues and safeguarded against them?

Press for assurances on data:

Organizations need to extract assurances about AI training data. Key questions to ask include: “Does the data make sense?” “Is this what we were looking for?” “Does it reflect diversity and inclusion in the ways that matter to us?” “Is it aligned with our DE&I strategy and ambitions?” Chief DE&I officers or DE&I counsel should be involved in procurement and set-up discussions.

Improve governance via AI boards:

Typically, governance practices around AI are either weak or non-existent. Everyone is learning as they go, and there is little accountability despite the substantial risks involved with AI. A group from IMD has proposed that companies create “AI boards” to oversee AI use, with members including seasoned and thought-leading DE&I professionals. Accountable bodies could also be established at the country, industry, or supplier level to develop and enforce common standards.

Adopt a partnership approach to drive change:

As regulation emerges, businesses need to understand how it will work in practice. In the European context, the EU’s AI Act represents a significant change. The best route forward involves cooperation between businesses, governments, and academics. Together, these three groups can create a powerful and positive momentum for DE&I initiatives. 

AI has much to offer in the DE&I context. Its impact has already been significant. Yet, without a robust approach to monitoring and assuring AI systems, from training stages through to live operation, it seems likely that problems will emerge that could outweigh the benefits. Harnessed correctly as part of an ever-evolving DE&I framework designed by CHROs and other HR leaders to benefit their human workforces, AI can be a powerful tool for realizing a more diverse, equitable, and inclusive future for business.

Authors

Josefine van Zanten

Chief Equity, Inclusion & Diversity Officer, IMD

Josefine has been active as an HR executive for most of her global career, working in Fortune 500 organizations; as a Senior Vice President, she led departments of D&I, Culture Change, and Leadership and Organizational Development. Her experience spans various industries, including HP (IT), Royal Dutch Shell (Oil and Gas), Royal DSM (Life Sciences and Chemicals), and Holcim (Construction). She is currently the Chief Diversity, Equity & Inclusion (DE&I) Officer at IMD and works as a Senior Advisor, EI&D, with global organizations.
