
AI on the brink: how close are we to losing control?

Published 4 November 2024 in Artificial Intelligence • 7 min read

As AI advances at a breakneck pace, IMD’s new AI Safety Clock warns we’re nearing a critical tipping point. With regulation lagging, can we keep AI under control before it’s too late?

The clock is ticking down to a moment when artificial intelligence could slip beyond our control. IMD’s new AI Safety Clock has been set to 29 minutes to midnight, reflecting the growing threat posed by uncontrolled artificial general intelligence (UAGI), autonomous systems that function without human oversight and may pose serious dangers.

This clock serves as a stark reminder that we are nearing a crucial point in AI development, where rapid advancement, paired with insufficient regulation, is pushing us toward dangers that could drastically affect society and business.

But how is this timeline calculated? What are the real dangers, and how can governments and companies work together to mitigate these risks?

The countdown to out-of-control AI begins

Introduced in October 2024, the AI Safety Clock assesses the risks of UAGI. The aim is to inform the public, policymakers, and business leaders about these risks, thereby promoting the safe development and use of AI. 

The clock’s time is calculated through a methodology that weighs several key factors: the sophistication of AI models, the strength of regulatory frameworks, and the extent to which the technology interacts with the physical world, notably critical infrastructure.

Reaching this number involves tracking developments in AI models: how they perform against human intelligence and the speed at which they are becoming more capable. In a nutshell: AI models are moving rapidly on both fronts.

We also look at how autonomous these systems are. For instance, if an AI remains under human control, the risk is lower. But if it becomes independent, the danger is exponentially magnified. The classic doomsday scenario is when AI gains the ability to make decisions on its own, without oversight. 

But perhaps the most alarming factor in our methodology is the connection of AI to the physical world. If AI systems begin controlling critical infrastructure, such as power grids or military systems, the consequences could be catastrophic. Much like nuclear weapons reshaped geopolitics, uncontrolled superintelligence could be just as world-altering.

We also factor regulation into the clock. Each time meaningful guardrails are put in place, the clock moves away from midnight. For instance, the vetoing of an AI safety bill in California last month moved us closer to midnight, while Europe’s AI Act helped push the clock back.
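To make the weighted-factors idea concrete, here is a minimal sketch of how such a composite index could be computed. IMD has not published the clock’s exact formula; the factor names, weights, and the mapping from a risk score to minutes-to-midnight below are all illustrative assumptions, not the actual methodology.

```python
# Hypothetical sketch of a composite "AI Safety Clock" reading.
# IMD has not published its actual methodology; the factors, weights,
# and mapping to minutes-to-midnight here are illustrative assumptions.

def minutes_to_midnight(
    sophistication: float,  # capability of frontier models, 0 (weak) to 1 (human-level+)
    autonomy: float,        # degree of independent decision-making, 0 to 1
    physical_reach: float,  # integration with infrastructure/weapons, 0 to 1
    regulation: float,      # strength of guardrails, 0 (none) to 1 (robust)
) -> int:
    """Map risk factors to a clock reading: 60 = safest, 0 = midnight."""
    # Risk rises with capability, autonomy, and physical-world reach;
    # effective regulation offsets part of that risk.
    risk = 0.35 * sophistication + 0.30 * autonomy + 0.25 * physical_reach
    risk -= 0.20 * regulation
    risk = min(max(risk, 0.0), 1.0)  # clamp to [0, 1]
    return round(60 * (1 - risk))   # minutes remaining before midnight

# Example: capable but still-supervised models, growing physical reach,
# patchy regulation -- yields a reading of 29 minutes to midnight.
print(minutes_to_midnight(0.80, 0.60, 0.50, 0.35))  # 29
```

The point of the sketch is the structure, not the numbers: capability, autonomy, and physical-world integration push the clock forward, while meaningful regulation pushes it back.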

The risks of uncontrolled AGI

Since OpenAI’s chatbot ChatGPT burst onto the scene in late 2022, a new wave of groundbreaking generative AI launches has taken the world by storm. Advocates say this new wave of AI could shift consumer behavior as profoundly as the internet and mobile phones did.

But what happens if AGI is no longer under human control? The risks are vast.

  1. Misinformation and manipulation: AI-driven misinformation campaigns are already influencing public opinion and have played a role in elections. The risk is that deepfakes and AI-generated disinformation could destabilize democracies, skew markets, and manipulate societies in ways we may not even realize until it’s too late. Examples include Russian meddling in the 2020 US presidential election. 
  2. Weaponization and infrastructure sabotage: There’s a real fear that AI could be used in military operations or to control key infrastructure; it’s already piloting autonomous drones. An AI system could, in theory, gain control of vital systems like national power grids or water supplies. The risks here are self-evident: an out-of-control AI in charge of critical infrastructure would be disastrous.
  3. Economic manipulation and job loss: The possibility, albeit still distant, of AI manipulating markets and transactions also poses a significant risk to global economies. On top of that, the rapid automation of tasks, especially in areas like manufacturing and logistics, could lead to large-scale job displacement, a reality we’re already seeing on the horizon.

Although UAGI hasn’t yet arrived in full force, there are already signs of its potential to do harm. Chatbots, for example, have already influenced people’s moral judgments, according to studies. Misinformation powered by AI is on the rise, and this technology is only becoming more sophisticated. Consider the deepfake of Russian president Vladimir Putin declaring peace with Ukraine that circulated on social media.

The military application of AI is another pressing concern. Countries are developing AI-driven weapons systems, such as autonomous drones, and the fear is that they might eventually operate beyond human control. In the wrong hands, these technologies could spark conflicts or cause untold damage to societies. A scary prospect indeed. 


Can we regulate AI in time?

One of the biggest obstacles in managing UAGI risk is the fragmented approach to regulation. While the EU has been proactive with its AI Act, other regions lag behind. In the US, for instance, there’s no nationwide AI legislation, with efforts often led by individual states. Recent attempts, like California’s proposed AI safety bill, have been vetoed out of fear that regulating too strictly could stifle innovation or push tech companies out of the state. 

California Governor Gavin Newsom said recently that the legislation could “curtail the very innovation that fuels advancement in favor of the public good.”

SAP CEO Christian Klein, meanwhile, cautioned EU policymakers against over-regulating artificial intelligence, saying recently that it could weaken Europe’s global standing and widen the gap with the US. “I’m totally against regulating the technology, it would harm the competitiveness of Europe a lot,” he told the FT. 

For stronger regulation to overcome such opposition, international cooperation is essential. One option would be a global body like the International Atomic Energy Agency (IAEA), which oversees nuclear technology. Such an organization could audit AI systems and ensure they adhere to global safety standards.

What governments and companies must do

There are several concrete steps that governments and corporations can take to mitigate the risks of UAGI:

  1. Implement robust AI regulations: Governments need to enact well-thought-out AI laws that encourage responsible development without stifling innovation. Coordination on a global scale is vital to prevent companies from “regulation shopping” in countries with looser rules.
  2. Corporate responsibility and governance: Companies should implement internal controls and safety measures, ensuring their AI technologies are developed with safeguards in place from the beginning. Having risk experts on AI development teams from the outset can help prevent the creation of dangerous or unregulated systems.
  3. International oversight: A global AI monitoring body would help ensure that AI is being developed safely and that companies and countries stick to international standards. Such a body could audit AI systems and enforce compliance with safety protocols, much like the IAEA does for nuclear technology.

The AI Safety Clock is not intended to incite panic, but it does serve as a warning. While the clock is ticking, there is still a window of opportunity to steer AI development in the right direction – but the time to act is now. If we want to avoid a future where UAGI operates beyond our control, we need governments, corporations, and global institutions to step up and work together. The goal is not to stop innovation, but to make sure it’s intelligent, ethical, and safe. 

All views expressed herein are those of the author and have been specifically developed and published in accordance with the principles of academic freedom. These views are not necessarily held or endorsed by TONOMUS or its affiliates.

Authors

Michael R. Wade

TONOMUS Professor of Strategy and Digital

Michael R. Wade is TONOMUS Professor of Strategy and Digital at IMD and Director of the TONOMUS Global Center for Digital and AI Transformation. He directs a number of open programs, including Leading Digital and AI Transformation, Digital Transformation for Boards, Leading Digital Execution, Digital Transformation Sprint, Digital Transformation in Practice, and Business Creativity and Innovation Sprint. He has written 10 books and hundreds of articles, and has hosted popular management podcasts including Mike & Amit Talk Tech. In 2021, he was inducted into the Swiss Digital Shapers Hall of Fame.
