
The Interview

The real story of AI’s rise from winters to workplaces

November 14, 2025 • by Didier Bonnet in The Interview

AI has leapt from obscurity to ubiquity, reshaping work and society. But its flaws run deep, forcing leaders to balance its promise with its dangerous limitations.

AI’s journey since the term was coined in the 1950s has been far from straightforward, with at least two major “winters” during which the AI community and the wider press doubted that any meaningful progress would ever be made.

Much of this pessimism was unwarranted, however, particularly during the AI winter of the 1990s and early 2000s. As Oxford University’s Nick Bostrom pointed out, AI was already being widely deployed in the technologies of the day, but because it was rarely labeled as AI, there was a sense that the field had stalled.

Into the spotlight

The AI we’re surrounded by today has burst into the spotlight. But Professor Michael Wooldridge, Head of Computer Science at the University of Oxford, recalls the “dark times” when the field was often “regarded as kind of slightly suspect” and “not entirely serious.” Even the foundational technology driving the current boom, neural networks, was “regarded as a kind of almost dead field.”

For those who operated during previous AI winters, the landscape today is barely recognizable, with Gartner estimating that $1.5tn will be invested in AI in 2025 alone. This has coincided with a tangible buzz and genuine excitement, driven by the democratization of the technology and the accessible, conversational nature of the tools hitting the market. It’s an interface that Wooldridge believes “really feels like the AI that we always thought we were going to get.” Access is immediate, and the conversational ability is dazzling.

This ease of use masks some key vulnerabilities, however, because the outputs produced by AI aren’t based on any kind of search for truth but rather on a calculation of plausibility.

According to Professor Wooldridge, the AI is not “thinking, ‘What is the right answer to this question?’… It’s trying to produce the most plausible-sounding answer. That’s it. It has no conception of what the truth is.”

The core paradox

This is the central paradox with AI as it currently stands. It’s a technology that is capable of getting things wrong “in very, very plausible-sounding ways,” which makes it dangerous to use in critical applications.

What’s more, this isn’t a bug that’s simple to fix, but rather something that is “baked into the technology at a very, very deep level.” As Wooldridge puts it, this is “a very strange situation to find ourselves in”: a powerful tool that is “useful, but not always correct.”

This is crucial for leaders to understand. While Large Language Models (LLMs) have attracted most of the hype in the last few years, they are only a small part of a much bigger picture. Narrow AI, such as vision recognition in medical diagnostics, continues to provide immense and reliable value.

Such applications tend to get far less coverage than Artificial General Intelligence (AGI). A machine capable of doing everything a human can do remains far off, and Wooldridge highlights the profound gulf between conversational AI and real-world competence:

“We’ve already mentioned we have AI that you can ask about quantum mechanics… And yet we don’t have AI that could come into your home… Locate the kitchen, tidy the kitchen, load the dishwasher.”

That simple act of domestic competence, easily performed by a minimum-wage worker, remains beyond today’s AI. Wooldridge proposes a useful marker for true AGI: an intelligence that “could do anything that a minimum wage human worker could do.” That still feels like a long way off.

AI in the workplace

Nonetheless, AI is still likely to have a profound impact on the workplace. While there have been fears about widespread displacement, for the vast majority of white-collar workers, AI is not an immediate replacement but a productivity boost. It’s a tool that offloads “routine and boring and dull stuff” and allows employees to focus on tasks requiring human insight. Wooldridge provides a compelling example: using an LLM to generate a draft PowerPoint from a report in 30 minutes, saving him an afternoon’s work.

There is a definite process of “erosion” in many roles, however, which is perhaps the first genuine impact on employment. For instance, jobs that focus on generating generic marketing copy, website text, or bespoke graphics (like those found on the gig economy site Fiverr) are being displaced. Similarly, while high-level enterprise programmers are safe, the large number of developers who do routine tasks, such as writing a script to query a database, will find their work increasingly automated.

Economic theory suggests that as the cost of code generation drops, demand will expand, but the nature of competition is changing drastically. The potential for “job creation through these technologies” will likely manifest in new roles and a greater volume of automated work.

Navigating the “new normal”

The challenge for both leaders and society more broadly is to prevent the dumbing down of the workforce. Various studies have shown that over-reliance on AI can dull our critical faculties. A better and more sustainable approach is to use AI to augment our own thinking rather than relying on the technology to provide easy, but flawed, answers.

For leaders, educators, and users alike, the new rules are simple:

  • Never pass off AI-generated content as your own.
  • Always be prepared to check AI-generated content.
  • Be highly suspicious of citations, as LLMs frequently “give you spurious references which sound very plausible.”

While Klarna CEO Sebastian Siemiatkowski grabbed headlines when he suggested that his company could become fully automated, the reality quickly dawned on him. AI isn’t about replacing human decision-makers but rather augmenting them. It’s becoming increasingly apparent that success will depend less on the power of the technology than on how we adapt our organizations, our education systems, and our internal processes to benefit from AI while acknowledging its deep, inherent flaws.

Expert

Michael Wooldridge

Head of Computer Science at the University of Oxford and Co-programme Director for AI at the Alan Turing Institute

With over 30 years in the field, Michael Wooldridge has published more than 450 scientific papers and nine books on AI, with translations in eight languages. His research has earned over 86,000 citations, and he holds an h-index of 104—an indicator of his wide-reaching influence in academia and beyond.

His numerous accolades include the Lovelace Medal from the British Computer Society (2020), the AAAI Outstanding Educator Award (2021), and the European AI Distinguished Service Award (2023). He holds prestigious fellowships from the ACM, AAAI, EurAI, and Academia Europaea.

Authors

Didier Bonnet

Professor of Strategy and Digital Transformation

Didier Bonnet is Professor of Strategy and Digital Transformation at IMD and program co-director for Digital Transformation in Practice (DTIP). He also teaches strategy and digital transformation in several open programs such as Leading Digital Business Transformation (LDBT), Digital Execution (DE) and Digital Transformation for Boards (DTB). He has more than 30 years’ experience in strategy development and business transformation for a range of global clients.
