For society, this can be problematic. In 2024, UNESCO published research showing that GenAI systems associate women with terms such as “home,” “family,” and “children” four times more often than they do men, while male-sounding names are linked to terms such as “career” and “executive.” Also in 2024, EqualVoice used GenAI image generators to test for bias and found a consistent link between prompts and stereotypes: “CEO giving a speech” produced images of men 100% of the time, and 90% of those men were white. The prompt “businesswoman” yielded images of women who were uniformly young and conventionally attractive; again, 90% were white.
While this is troubling at a societal level, bias in GenAI also poses a serious threat to individual organizations.
Businesses using the technology to map market segments, design and produce products, and engage with customers risk suboptimal risk management and decision-making unless they introduce effective measures, checks, and balances to mitigate bias. Innovation and growth can be undermined, and opportunities missed, when organizations fail to integrate diverse user needs, priorities, and perspectives. Brand reputation and loyalty suffer when organizations fail to uphold ethical standards and societal values. And as GenAI becomes more deeply integrated into business and society, laws and guidelines governing its use will inevitably intensify; organizations found wanting in this regulatory context are likely to face increasingly stringent financial and operational consequences.