
by Heather Cairns-Lee in Artificial Intelligence
AI may become one of the most significant leadership opportunities for women in decades. Its impact will depend on how capability, governance, and leadership are built around it.
"If underrepresented communities aren’t using AI at the same rate, they don’t just miss out on productivity gains. Their insights and lived experiences won’t be reflected in the next generation of AI systems." – Su-Mei Thompson, CEO at Media Trust and Member of the IMD Supervisory Board
Artificial intelligence may be one of the most consequential leadership challenges organizations will face in the coming decade. As generative AI tools move rapidly from experimentation to everyday business use, the deeper question is no longer simply how organizations adopt the technology, but who shapes it.
That question was at the center of IMD’s Women leading in the age of AI event, which brought together Media Trust CEO and IMD Supervisory Board member Su-Mei Thompson, senior executive and board advisor Caroline Creven Fourrier, and independent consultant and author of The Visual Guide to AI Raquel Roses.
The stakes are already visible. The perspectives that influence how AI systems are designed, governed, and deployed will increasingly determine how organizations hire, allocate work, evaluate performance, and make strategic decisions. If participation in that process is narrow, the risk is not simply that existing inequalities persist – it is that they become embedded in the systems structuring the future of work.
That tension was immediately visible at the event, when participants were asked to share the first word that came to mind when thinking about AI and women’s leadership. Responses ranged from “opportunity” and “potential” to “bias,” “underrepresentation,” and “uncertainty” – capturing the central dilemma facing organizations today: AI represents both extraordinary possibility and real, present risk.
One signal of that risk is already visible. Women are around 20% less likely than men to use generative AI tools, according to a synthesis of global studies analyzing adoption patterns across more than 140,000 individuals. They represent around 22% of AI professionals globally and hold fewer than 14% of senior AI leadership roles – demonstrating that underrepresentation persists even at the highest levels of AI development and governance.
When participants were asked whether AI would accelerate or slow women’s career progression in their organizations, the responses were sharply divided – reflecting the same mix of hope and concern, and underscoring how unsettled the question remains. AI could open new pathways to leadership or reinforce existing inequalities, depending on how organizations choose to deploy it.
Early adopters develop expertise, shape use cases, and define the governance structures that guide implementation. If adoption remains uneven, and if the governance conversations happen without sufficiently diverse voices, those shaping AI systems will also be those most advantaged by them. The question of who participates in AI adoption is therefore not merely a representation issue. It is a strategic one.
Lower rates of AI adoption among women are frequently interpreted as hesitation or a lack of confidence. Yet this framing may overlook an important dynamic. Caution in the face of powerful technologies can reflect judgment rather than resistance, and the panel argued that organizations would do well to understand the difference.
Roses made the case vividly. When preparing for the panel, she had deliberately chosen not to ask an AI tool what she should say. When a colleague suggested she simply put the question to ChatGPT, her instinct was to resist: she wanted to form her own view first. When she did eventually ask, the response was, in her words, “quite plain and quite standard.”
The contrast illustrated what is lost when speed displaces thinking. “We should reframe this caution as necessary,” she said – a caution that applies not just to women, but to how humanity engages with technology of this power.
Creven Fourrier identified a related risk of “cognitive outsourcing” – the tendency to put questions into AI systems and accept what comes back without applying genuine critical reasoning. AI tools, she noted, are designed to be affirming: they tell users their questions are excellent, their instincts correct, their conclusions sound. That flattery, she warned, is precisely what makes uncritical adoption dangerous.
“You’re not looking objectively at what is being provided as an answer,” she said. The discipline of scrutinizing AI outputs, rather than accepting them because they arrive with confidence, is not a weakness. It is a leadership capability organizations will increasingly depend on.
Women’s approach to AI – asking which tasks genuinely benefit from automation, scrutinizing outputs rather than accepting them, preserving space for independent thought – should be recognized as an asset: one that brings greater discernment to governance, a sharper eye for the assumptions embedded in automated systems, and a stronger instinct for where human judgment remains irreplaceable.

“Inclusion in AI development and adoption becomes more than a question of fairness. It becomes a question of system quality.”
AI systems do not emerge in isolation. They are shaped by the data used to train them, the teams that build them, and the organizations that deploy them. When those inputs lack diversity, the consequences can extend far beyond representation.
Thompson drew on her organization’s work with underrepresented communities to articulate the systemic risk as a vicious circle. “The groups least represented in today’s data become the most overlooked or misrepresented in tomorrow’s algorithms, which then reinforces bias, erodes trust, and widens the gap further.”
Previous research has found that algorithms used to detect liver disease were nearly twice as likely to miss the condition in women as in men. Recruitment tools trained on historical data have been shown to discriminate against female applicants, candidates who wear headscarves, and those with names that signal minority backgrounds. Around 42% of global companies already use predictive AI systems in recruitment, despite repeated expert warnings about their discriminatory potential. These are not hypothetical future risks. They are present realities, already operating inside organizations.
These outcomes are rarely the result of deliberate bias. More often, they emerge from systems trained on incomplete data and designed without a sufficiently broad range of perspectives. Inclusion in AI development and adoption, therefore, becomes more than a question of fairness. It becomes a question of system quality. Technologies shaped by diverse experiences are more likely to identify blind spots early, serve broader user groups, and perform reliably across contexts.
Thompson also pointed to an action that organizations can take. “Organizations are collectively spending billions on AI tools. They can use that purchasing power to drive systemic change by requiring suppliers to disclose the training data used and the diversity of their development teams.”
Boards and leadership teams can demand greater transparency from suppliers about training data risks and responsible use and should insist that the diversity of development teams is treated as a procurement consideration, not an afterthought.
Organizations often approach AI adoption as a technical initiative. But its implications extend far beyond technology teams. Creven Fourrier was clear that AI capability needs to be positioned not as a technical skill but as a strategic leadership competency, one that organizations must build equitably, across functions, levels, and demographics, rather than concentrating on pockets of early adopters.
That means being intentional about who receives access to training and tools, who participates in pilot projects, and who is present in the governance conversations that set the rules for how AI is used. As Creven Fourrier put it, “It’s not just about using AI tools. It’s about being involved in defining the strategies around them.”
She also connected training with trust, arguing that the two barriers most commonly cited by organizations – lack of training and ethical concerns – are in fact inseparable. You need to use AI tools to understand them, she argued, and you need to understand them to know what concerns are legitimate and what safeguards are required. “The two go together,” she said.
When participants were asked about the biggest barriers to equitable AI adoption in their organizations, lack of training was the most commonly cited response, followed by unclear governance policies, ethical concerns, and a lack of psychological safety to experiment. The pattern suggests that the gap is organizational rather than individual – a reflection of how institutions structure access, experimentation, and confidence around new technologies.
Evidence suggests that relatively small interventions can make a significant difference. Research from Google found that even a few hours of targeted training can produce a marked increase in women’s AI adoption, particularly when supported by peer mentoring, visible role models, and a culture that gives permission to experiment. The barriers are not insurmountable. The interventions that work are known. What is required is the organizational will to implement them.
The norms established now – around governance, accountability, training, and participation – will shape how these technologies evolve over the next decade.
The risks of inaction are stark. The World Economic Forum estimates that gender equity is still 123 years away. AI is already reshaping labor markets and organizations. Women, underrepresented in AI development but overrepresented in roles most exposed to automation, face disproportionate risk unless organizations act with intention now.
The scale of the challenge demands collective action. No single company, however well-intentioned, can resolve this alone. What is required is collaboration – governments, technology companies, businesses, media, academia, and nonprofits – coalescing around shared commitments: making AI systems safe and accountable, building institutional trust, advancing women into AI leadership and design, and using procurement and investment power to raise the floor across the industry as a whole. These are the four strategic pillars of an initiative that IMD, Media Trust, and Code For Good Now are promoting to ensure women have a role in building the AI systems of tomorrow, as Su-Mei Thompson put forward. With analysts estimating that more than $5tn will be invested globally in AI infrastructure over the next five years, the moment to shape those commitments is now.
The question is not whether AI will transform leadership – it will. The more important question is whether that transformation will be equitable. Organizations that act now, with intention and inclusivity, have the opportunity to shape something genuinely different. Those that wait may find themselves looking back and realizing that this was the moment they failed to seize.

Chief Inclusion Officer, Roche
Caroline Creven Fourrier is a globally recognized leader in diversity, equity, and inclusion (DEI) with more than 15 years of experience shaping inclusive workplace cultures across international organizations. She currently serves as Chief Inclusion Officer at Roche, where she drives strategies to foster belonging, psychological safety, and performance through diverse thinking.

Lead Consultant & Founder, Alpha Impact

CEO at Media Trust and Member of the IMD Supervisory Board
Su-Mei Thompson is CEO at Media Trust, a UK-based nonprofit organization that works with the media and tech sectors to further their CSR and DEI goals. After senior management roles with Disney, the Financial Times, and Christie’s, her career in the nonprofit sector began in 2007 when she became CEO of The Women’s Foundation in Hong Kong, a leading NGO known for its impactful Girls Go Tech and mentoring programs for women. She has spearheaded Media Trust’s initiatives to boost the digital capabilities of charities in partnership with Google, Meta, and TikTok, including providing free AI essentials training for charities and talent from under-represented groups. Besides running Media Trust, Thompson has been a Commissioner of both the Hong Kong and UK equalities regulators and serves on a number of boards, including the Supervisory Board at IMD.

Affiliate Professor of Leadership and Communication
Heather Cairns-Lee is Affiliate Professor of Leadership and Communication at IMD. She is a member of IMD’s Equity, Inclusion and Diversity Council and an experienced executive coach. She works to develop reflective and responsible leaders and caring inclusive cultures in organizations and society.