Share
Facebook Facebook icon Twitter Twitter icon LinkedIn LinkedIn icon Email

Artificial Intelligence

Who is GenAI leaving out, and does it matter? 

Published 12 November 2024 in Artificial Intelligence • 7 min read

More than 80% of organizations will be using GenAI by 2026. Addressing bias in these systems must become a priority to avoid acute organizational and societal risks, says a forthcoming white paper by IMD, Microsoft, and Ringier. 

From writing business plans and crafting personalized customer experiences to designing entirely new products and turbo-driving scientific research, Generative AI (GenAI) is transforming the way human beings make decisions and solve problems. And it’s happening at lightning speed and at scale.

More than 80% of the world’s organizations are expected to be using GenAI tools in production environments by 2026. Meanwhile, the gains in productivity will likely boost global GDP by $7tn in the next decade.

Yet even as innovation races ahead, unlocking creative potential and seemingly endless possibilities for organizations, industries, and economies, important questions need to be asked about the potential risks, among them: Who is AI designed to serve? And who might it be leaving out? Because alongside its power and promise, GenAI still has striking limitations.

It’s well-known that GenAI is vulnerable to bias. It inherits bias and fairness issues in the real world that are reflected and embedded in its data and design. Left unchallenged, these issues can seriously undermine the reliability and benefits of GenAI output. Worse, they have the potential to widen real-world gaps in representation, access, inclusion, and opportunity.

Ask an LLM for its opinion on Black people and the output will be typically positive. Ask the same model for its thoughts on people using African American English – a dialect spoken by Black Americans – and it will generate responses like “ignorant” or “aggressive.”

Diversity bias in GenAI

Diversity bias relates to unfair representation or treatment that favors or discriminates against characteristics like gender, race or ethnicity, socioeconomic status, or physical ability.

In GenAI, these fairness issues can be present in training datasets that reflect current and historical societal biases, in discriminatory algorithmic decisions during the modeling phase, and in system outputs that perpetuate stereotypes – outputs that are used and deployed by teams or organizations. Then there is the AI development process itself.

AI engineering as a field is still dominated by a relatively heterogeneous demographic subset: economically and educationally privileged white men. This creates a lack of diversity in perspective, preferences, and worldview that can impair developers’ ability to prioritize and integrate the needs of other groups or profiles, or to spot fairness issues when they arise. And they arise with alarming frequency.

Large Language Models (LLMs) in GenAI have been shown to produce gender bias. In one study by UNESCO, women were up to four times more likely to be associated with prompts like “home,” or “family.” Conversely, men’s or male-sounding names were things like “business” or “career.” Another experimental study found that GenAI models were three to six times more likely to assign occupations based on gender stereotypes. Here the LLM was told that a doctor had called a nurse because she was late. When asked who was late, the model typically decided that “she” must be the nurse. Meanwhile, GenAI image models typically generate “men” as authoritative, middle-aged, and neutral in expression. “Women,” on the other hand, are more often depicted as young, smiling, and more submissive in demeanor. And that’s not all.

Ask an LLM for its opinion on Black people and the output will be typically positive. Ask the same model for its thoughts on people using African American English – a dialect spoken by Black Americans – and it will generate responses like “ignorant” or “aggressive.” This points to a more covert, deeply ingrained bias at large within these systems that can be harder to detect – and to address.

But addressing diversity bias needs to be a priority. It needs to happen systematically and at key technical, procedural, organizational, and cultural inflection points in the design, development, and deployment of GenAI.

“If organizations and institutions fail to identify, manage, and mitigate diversity bias, there is a real risk of exclusion for entire cohorts within the communities that they serve.”

What are the risks of failing to address diversity bias?

If organizations and institutions fail to identify, manage, and mitigate diversity bias, there is not only a real risk of exclusion for entire cohorts within the communities that they serve. Companies also risk damaging their brand reputation, leaving their customers dissatisfied and making suboptimal decisions that do not reflect the complexity of their markets.

A proprietary IMD, Microsoft, and Ringier survey finds that executives are concerned that bias in GenAI systems poses a risk to brand reputation. They are also worried about customer dissatisfaction, diminished competitiveness, and suboptimal decision-making.

Diversity bias is a real risk to organizations and communities.

The reinforcement of stereotypes and discrimination against groups of people based on gender, race, socioeconomic profile, or any other marker of demographic identity has huge ramifications, particularly for sectors like healthcare, finance, law, and education. Problems tied to inaction also stack up at the organizational level.

As businesses increasingly turn to GenAI tools to accelerate efficiency and productivity, unchecked system bias can hamper the design and improvement of products and services that should meet the needs of diverse customer bases. In addition, the automation of things like hiring processes and customer services hinges on balanced and non-discriminatory systems to be even minimally successful. Diversity bias can erode trust among employees as well as clients, and failure to address it exposes companies to stringent regulatory penalties and repercussions – a risk that can only intensify as GenAI continues to reshape the way that we work and do business.

Diversity bias is a real risk to organizations and communities. It corrodes fairness, stalls innovation, squanders opportunity and diminishes trust and good faith. The question is: how do we address it? What can decision-makers do to identify, manage, and mitigate ‘bias in the machine’?

There is a need for responsible AI principles that align with organizational values

Addressing diversity bias in GenAI

IMD has teamed with Microsoft Switzerland and Ringier’s EqualVoice Initiative to pinpoint the sources and risks of diversity bias in GenAI systems and to set out key insights and recommendations for organizations to proactively address and mitigate the harms.

Our forthcoming white paper, “Addressing Diversity Bias in GenAI,” will be presented at Davos in January 2025. The paper leverages our own diverse expertise as well as proprietary survey research to shed light on the scale of the problem and effective measures to contain and manage it – measures that organizations across all sectors can enact.

Among the insights that we share is the need for an intertwined approach focusing on people, process, and technology, with values and principles at the core. Responsible AI principles that align with organizational values and that should form the basis of a strong governance framework and processes. Among other things, this means proactively addressing bias, selecting diverse data sources, and involving human expertise at every stage of model development and every loop of AI operations. It also means breaking silos and putting into place diverse, multidisciplinary councils and teams working in collaboration across the organization, bringing together tech and diversity, equity and inclusion expertise with transparency and shared accountability.

This calls for participatory and inclusive leadership; leadership that empowers an open culture and psychological safety so that people can speak up and out, contribute unique perspectives, challenge assumptions, and deploy critical thinking. It also calls for personalized and continuous education and training on diversity bias both for developers and users as our context continues to shift and change.

Awareness is critical, but so too are the technical tools, skills, aptitudes, and inclusive mindsets needed to address the issue.

Organizations are cognizant of the risk of diversity bias. The results of a survey led by IMD, Microsoft, and Ringier find that 72% of executives are concerned about diversity bias in GenAI. However, awareness is not enough. The same survey reveals that just 35% of organizations are proactively addressing the issue.

Working towards responsible AI calls for a sense of shared accountability.

Addressing diversity bias in GenAI hinges on people, process, and technology. It means:

  1. Prioritizing a robust and responsible AI framework. By embedding principles like fairness, transparency, and accountability, you establish a foundation of trust and demonstrate ethical leadership.
  2. Focusing on empowering and diversifying your teams and bringing in DE&I expertise. Diverse, multidisciplinary AI teams in a psychologically safe environment enhance critical thinking, reveal hidden biases, and ensure your technology serves a broad spectrum of perspectives, ultimately making your products stronger and more inclusive.
  3. Ensuring that ongoing bias training and education reaches across the organization, reaching developers and users, to drive awareness and actively question AI-generated results, remain vigilant to bias, and commit to continuous improvement.

Working towards responsible AI calls for a sense of shared accountability. This is essential to building and shaping GenAI in a way that earns trust, respects values, and benefits us all.

Authors

Alexander Fleischmann

Alexander Fleischmann

Equity, Inclusion and Diversity Research Affiliate

Alexander received his PhD in organization studies from WU Vienna University of Economics and Business researching diversity in alternative organizations. His research focuses on inclusion and how it is measured, inclusive language and images, ableism and LGBTQ+ at work as well as possibilities to organize solidarity. His work has appeared in, amongst others, Organization; Work, Employment and Society; Journal of Management and Organization and Gender in Management: An International Journal.

Related

X

Log in or register to enjoy the full experience

Explore first person business intelligence from top minds curated for a global executive audience