
Learning how to behave: AI-conditioned robots are coming
Large behavior models (LBMs) promise to be even more impactful than large language models, says IMDâs Tomoko Yokoi ...
by Alexander Fleischmann Published 12 November 2024 in Artificial Intelligence ⢠7 min read
From writing business plans and crafting personalized customer experiences to designing entirely new products and turbo-driving scientific research, Generative AI (GenAI) is transforming the way human beings make decisions and solve problems. And itâs happening at lightning speed and at scale.
More than 80% of the worldâs organizations are expected to be using GenAI tools in production environments by 2026. Meanwhile, the gains in productivity will likely boost global GDP by $7tn in the next decade.
Yet even as innovation races ahead, unlocking creative potential and seemingly endless possibilities for organizations, industries, and economies, important questions need to be asked about the potential risks, among them: Who is AI designed to serve? And who might it be leaving out? Because alongside its power and promise, GenAI still has striking limitations.
Itâs well-known that GenAI is vulnerable to bias. It inherits bias and fairness issues in the real world that are reflected and embedded in its data and design. Left unchallenged, these issues can seriously undermine the reliability and benefits of GenAI output. Worse, they have the potential to widen real-world gaps in representation, access, inclusion, and opportunity.
Ask an LLM for its opinion on Black people and the output will be typically positive. Ask the same model for its thoughts on people using African American English â a dialect spoken by Black Americans â and it will generate responses like âignorantâ or âaggressive.â
Diversity bias relates to unfair representation or treatment that favors or discriminates against characteristics like gender, race or ethnicity, socioeconomic status, or physical ability.
In GenAI, these fairness issues can be present in training datasets that reflect current and historical societal biases, in discriminatory algorithmic decisions during the modeling phase, and in system outputs that perpetuate stereotypes â outputs that are used and deployed by teams or organizations. Then there is the AI development process itself.
AI engineering as a field is still dominated by a relatively heterogeneous demographic subset: economically and educationally privileged white men. This creates a lack of diversity in perspective, preferences, and worldview that can impair developersâ ability to prioritize and integrate the needs of other groups or profiles, or to spot fairness issues when they arise. And they arise with alarming frequency.
Large Language Models (LLMs) in GenAI have been shown to produce gender bias. In one study by UNESCO, women were up to four times more likely to be associated with prompts like âhome,â or âfamily.â Conversely, menâs or male-sounding names were things like âbusinessâ or âcareer.â Another experimental study found that GenAI models were three to six times more likely to assign occupations based on gender stereotypes. Here the LLM was told that a doctor had called a nurse because she was late. When asked who was late, the model typically decided that âsheâ must be the nurse. Meanwhile, GenAI image models typically generate âmenâ as authoritative, middle-aged, and neutral in expression. âWomen,â on the other hand, are more often depicted as young, smiling, and more submissive in demeanor. And thatâs not all.
Ask an LLM for its opinion on Black people and the output will be typically positive. Ask the same model for its thoughts on people using African American English â a dialect spoken by Black Americans â and it will generate responses like âignorantâ or âaggressive.â This points to a more covert, deeply ingrained bias at large within these systems that can be harder to detect â and to address.
But addressing diversity bias needs to be a priority. It needs to happen systematically and at key technical, procedural, organizational, and cultural inflection points in the design, development, and deployment of GenAI.
âIf organizations and institutions fail to identify, manage, and mitigate diversity bias, there is a real risk of exclusion for entire cohorts within the communities that they serve.â
If organizations and institutions fail to identify, manage, and mitigate diversity bias, there is not only a real risk of exclusion for entire cohorts within the communities that they serve. Companies also risk damaging their brand reputation, leaving their customers dissatisfied and making suboptimal decisions that do not reflect the complexity of their markets.
A proprietary IMD, Microsoft, and Ringier survey finds that executives are concerned that bias in GenAI systems poses a risk to brand reputation. They are also worried about customer dissatisfaction, diminished competitiveness, and suboptimal decision-making.
Diversity bias is a real risk to organizations and communities.
The reinforcement of stereotypes and discrimination against groups of people based on gender, race, socioeconomic profile, or any other marker of demographic identity has huge ramifications, particularly for sectors like healthcare, finance, law, and education. Problems tied to inaction also stack up at the organizational level.
As businesses increasingly turn to GenAI tools to accelerate efficiency and productivity, unchecked system bias can hamper the design and improvement of products and services that should meet the needs of diverse customer bases. In addition, the automation of things like hiring processes and customer services hinges on balanced and non-discriminatory systems to be even minimally successful. Diversity bias can erode trust among employees as well as clients, and failure to address it exposes companies to stringent regulatory penalties and repercussions â a risk that can only intensify as GenAI continues to reshape the way that we work and do business.
Diversity bias is a real risk to organizations and communities. It corrodes fairness, stalls innovation, squanders opportunity and diminishes trust and good faith. The question is: how do we address it? What can decision-makers do to identify, manage, and mitigate âbias in the machineâ?
IMD has teamed with Microsoft Switzerland and Ringierâs EqualVoice Initiative to pinpoint the sources and risks of diversity bias in GenAI systems and to set out key insights and recommendations for organizations to proactively address and mitigate the harms.
Our forthcoming white paper, âAddressing Diversity Bias in GenAI,â will be presented at Davos in January 2025. The paper leverages our own diverse expertise as well as proprietary survey research to shed light on the scale of the problem and effective measures to contain and manage it â measures that organizations across all sectors can enact.
Among the insights that we share is the need for an intertwined approach focusing on people, process, and technology, with values and principles at the core. Responsible AI principles that align with organizational values and that should form the basis of a strong governance framework and processes. Among other things, this means proactively addressing bias, selecting diverse data sources, and involving human expertise at every stage of model development and every loop of AI operations. It also means breaking silos and putting into place diverse, multidisciplinary councils and teams working in collaboration across the organization, bringing together tech and diversity, equity and inclusion expertise with transparency and shared accountability.
This calls for participatory and inclusive leadership; leadership that empowers an open culture and psychological safety so that people can speak up and out, contribute unique perspectives, challenge assumptions, and deploy critical thinking. It also calls for personalized and continuous education and training on diversity bias both for developers and users as our context continues to shift and change.
Awareness is critical, but so too are the technical tools, skills, aptitudes, and inclusive mindsets needed to address the issue.
Organizations are cognizant of the risk of diversity bias. The results of a survey led by IMD, Microsoft, and Ringier find that 72% of executives are concerned about diversity bias in GenAI. However, awareness is not enough. The same survey reveals that just 35% of organizations are proactively addressing the issue.
Working towards responsible AI calls for a sense of shared accountability.
Addressing diversity bias in GenAI hinges on people, process, and technology. It means:
Working towards responsible AI calls for a sense of shared accountability. This is essential to building and shaping GenAI in a way that earns trust, respects values, and benefits us all.
Equity, Inclusion and Diversity Research Affiliate
Alexander received his PhD in organization studies from WU Vienna University of Economics and Business researching diversity in alternative organizations. His research focuses on inclusion and how it is measured, inclusive language and images, ableism and LGBTQ+ at work as well as possibilities to organize solidarity. His work has appeared in, amongst others, Organization; Work, Employment and Society; Journal of Management and Organization and Gender in Management: An International Journal.
18 March 2025 ⢠by Tomoko Yokoi in Artificial Intelligence
Large behavior models (LBMs) promise to be even more impactful than large language models, says IMDâs Tomoko Yokoi ...
13 March 2025 ⢠by Cedrik Neike in Artificial Intelligence
Digital technologies, and artificial intelligence in particular, allow us to extract insights from data. This will allow industrial companies, the backbone of our economy, to be more resource-efficient, more productive, and more...
10 March 2025 ⢠by JosÊ Parra Moyano in Artificial Intelligence
JosĂŠ Parra Moyano of IMD outlines why a lot of AI projects fail and advises CFOs to put more emphasis on staff buy-in, data preparation, and business value. ...
28 February 2025 ⢠by ĂykĂź IĹÄąk in Artificial Intelligence
Concerns over the âunpredictabilityâ of AI are widespread, but Industrial AI proves the value proposition of this tool, says ĂykĂź IĹÄąk. This article explains how to build security into your strategy....
Explore first person business intelligence from top minds curated for a global executive audience