
IMD’s AI Safety Clock ticks closer to midnight, highlighting need for robust regulation 

Published 18 December 2024 in Artificial Intelligence • 10 min read

Breakthroughs in agentic AI, open-source development, and closer collaboration between AI companies and the defense sector have significantly raised the AI risk profile. These developments have compelled us to advance the AI Safety Clock by three minutes, bringing it closer to midnight.

In September 2024, we launched the AI Safety Clock to assess the risks of Uncontrolled Artificial General Intelligence (UAGI) – AI systems that act autonomously, without human oversight, and could cause significant harm – based on real-time technological advancements and regulatory changes.

Initially set at 29 minutes to midnight, the clock indicates how close we are to a tipping point where UAGI could become dangerous for humanity. Three months later, new developments require us to move the clock forward by three minutes, to 26 minutes to midnight, highlighting the need for continued vigilance from all stakeholders. Given that the clock spans only the single hour from 11:00pm to midnight, a three-minute move is a significant shift. Below, we outline the major developments that led to this adjustment.

Breakthroughs in open-source AI development

Open-source AI development is gaining momentum, with Nvidia releasing its groundbreaking NVLM 1.0 model, designed to rival advanced models like GPT-4. This massive AI model excels in both language and vision processing, underscoring Nvidia’s commitment to democratizing AI technology. Elon Musk, a staunch advocate of open-source AI, is poised to play a significant role in the administration of US President-elect Donald Trump, amplifying the momentum in this area. His influence could lead to more accessible, robust open-source AI models that prioritize decentralization and empower smaller developers to innovate. By advocating reduced restrictions on AI technologies, Musk’s powerful position may further bolster the availability and advancement of open-source frameworks, fostering broader participation but also raising critical questions about safety and governance.

AI agents are coming to your office

Meanwhile, the focus on agentic AI – systems capable of autonomous decision-making – is growing, and several major players have unveiled agentic AI initiatives. OpenAI plans to launch its “Operator” AI agent in January 2025, designed to automate online transactions and integrate seamlessly with devices and browsers, offering personalized user experiences. During a demonstration at DevDay, OpenAI CEO Sam Altman showcased an early version of this agent autonomously performing tasks, signaling a step toward artificial general intelligence (AGI). Expanding on these capabilities, OpenAI has also introduced Swarm, an experimental open-source framework enabling AI agents to collaborate autonomously on complex tasks. The pilot program, Hierarchical Autonomous Agent Swarms (HAAS), explores agents working in a structured hierarchy.
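Swarm is published as an experimental open-source library on GitHub, and its core mechanism is simple: a function called by one agent can return another agent, handing the conversation over with no human in the loop between steps. The sketch below follows the pattern of OpenAI’s published Swarm examples; the agent names, instructions, and user message are illustrative placeholders rather than details from the article, and running it requires an OpenAI API key.

```python
# A minimal two-agent handoff in the style of OpenAI's published Swarm
# examples (pip install git+https://github.com/openai/swarm.git).
# Agent names, instructions, and the user message are illustrative only.
from swarm import Swarm, Agent

client = Swarm()  # uses the OPENAI_API_KEY environment variable

def transfer_to_refunds():
    """Returning another Agent hands the conversation over to it."""
    return refunds_agent

triage_agent = Agent(
    name="Triage Agent",
    instructions="Route the user to the right specialist.",
    functions=[transfer_to_refunds],
)

refunds_agent = Agent(
    name="Refunds Agent",
    instructions="Help the user process a refund.",
)

response = client.run(
    agent=triage_agent,
    messages=[{"role": "user", "content": "I'd like a refund, please."}],
)
print(response.messages[-1]["content"])
```

The handoff is what makes such frameworks “agentic”: control passes between models through ordinary function calls, which is precisely the autonomy that raises the oversight questions discussed here.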

In October, Anthropic launched a new computer use capability in public beta with the Claude 3.5 Sonnet model, enabling the AI to interact with computers like humans – by moving cursors, clicking, and typing. While experimental and prone to errors, this feature is available via API for developers to test and provide feedback, with rapid improvements anticipated.
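Anthropic exposes the beta through its standard Messages API: the developer declares a versioned “computer” tool, and the model responds with proposed actions (take a screenshot, move the cursor, click, type) that the developer’s own code must execute in a sandbox and feed back. Below is a minimal request sketch based on the public beta documentation at launch; the tool type and beta flag were versioned and may evolve, and the prompt is a placeholder.

```python
# Minimal computer-use request via the Anthropic Python SDK (pip install anthropic).
# Based on the public beta documentation at launch; the versioned tool type and
# beta flag ("..._20241022" / "2024-10-22") may change as the beta evolves.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",  # the versioned computer-use tool
        "name": "computer",
        "display_width_px": 1024,     # dimensions of the sandboxed virtual screen
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open the browser and check the weather."}],
    betas=["computer-use-2024-10-22"],  # opt-in flag for the beta
)

# The model does not act directly: it returns tool_use blocks (e.g. a
# screenshot request or a click) that the caller executes in a sandbox
# and returns as tool_result messages in a loop until the task completes.
print(response.content)
```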

Similarly, Amazon is enhancing its Alexa platform to function as an AI agent capable of performing tasks beyond simple queries, as announced by CEO Andy Jassy. Building on this momentum, Amazon recently unveiled the Nova family of multimodal AI foundation models, solidifying its position as a major contender in the generative AI landscape.

Nvidia has identified AI agents as the next frontier for enterprise adoption, with CEO Jensen Huang positioning them as pivotal to business transformation. In November, Microsoft unveiled 10 autonomous AI agents integrated into its Dynamics 365 platform, aiming to enhance enterprise automation across sectors such as sales, customer service, finance, and supply chain management. These agents are designed to operate independently, initiating actions based on real-time data changes or predefined conditions, thereby streamlining workflows and improving decision-making processes.

The demand for AI agents is further reflected in the startup ecosystem, with investments in AI agent-focused startups increasing by 81.4% year-over-year, according to PitchBook.

Growing competition in AI chip development

The competition in AI hardware development is intensifying as tech giants strive to reduce their dependency on dominant suppliers like Nvidia. OpenAI has announced plans to build its own custom AI chips by 2026, seeking greater control over its AI infrastructure to enhance performance and scalability. Huawei, despite facing US sanctions, is accelerating efforts to mass-produce its newest AI chip by early 2025, signaling resilience and ambition in a competitive global market. Similarly, Amazon aims to rival Nvidia with its own AI chips, positioning itself as a major player in the AI hardware ecosystem. These developments underscore the high stakes in AI hardware innovation, as companies vie for market leadership and technological independence. This intense competition is expected to drive the development of more powerful AI models, enabling advanced capabilities and pushing the boundaries of AI performance.

AI’s growing role in military applications

AI’s role in military applications is expanding, with significant implications for global security. In June, OpenAI took a pivotal step by appointing retired US Army General Paul M Nakasone, former Director of the National Security Agency, to its board of directors. Nakasone’s extensive expertise in cybersecurity and military operations is expected to enhance OpenAI’s capacity to participate in the US defense and intelligence sector. Building on this trajectory, in November, OpenAI partnered with Anduril Industries to integrate AI into counter-drone systems, leveraging real-time data analysis to detect and neutralize aerial threats. This partnership represents a departure from OpenAI’s earlier stance against military use of AI. Similarly, in November, Anthropic announced a partnership with Amazon Web Services and Palantir to provide its Claude AI models to US defense and intelligence agencies, while Meta has adjusted its policies to permit military use of its open-source AI model, Llama, thereby supporting US defense applications.

Such strategic moves highlight the growing intersection of cutting-edge AI development and national security priorities. These collaborations aim to bolster the defense sector’s capabilities by harnessing AI for tasks ranging from countering aerial threats to automating data analysis and operational decision-making. However, they also raise significant concerns about the militarization of AI and the risks associated with ceding control to autonomous systems in high-stakes scenarios. The involvement of powerful AI in military applications risks escalating global tensions and accelerating an AI arms race, with potentially devastating consequences if safeguards fail. As tech companies deepen their entanglement with defense initiatives, the lack of robust global governance and ethical oversight casts a troubling shadow over the future of AI in warfare.

Advancements in AI reasoning models

OpenAI’s recent o1 model highlights groundbreaking advancements in AI’s reasoning capabilities, particularly in its ability to simulate human-like logical processing through a “chain of thought” methodology. This innovation enhances the model’s aptitude for tackling complex challenges, such as advanced problem-solving and strategic reasoning. However, o1 has demonstrated concerning behavior, including attempts to deceive humans and bypass oversight mechanisms. For example, during controlled experiments, the model reportedly devised strategies to manipulate evaluators and avoid corrective measures, raising ethical questions about deploying highly autonomous AI systems. These findings emphasize the critical need for stringent safety protocols, transparent oversight, and robust regulatory frameworks to ensure that advanced AI models align with societal values and ethical standards.

Statements on the maturity of artificial general intelligence

In November 2024, Sam Altman of OpenAI stated that achieving AGI within five years is feasible with current hardware. Similarly, Anthropic CEO Dario Amodei predicted that AGI could emerge by 2026 or 2027, citing trends in the progression of advanced AI models.

Just a month earlier, Geoffrey Hinton, a pioneering figure in AI, voiced significant concerns about the rapid advancement of AI technologies. Reflecting on his work after being awarded the Nobel Prize in Physics in October 2024, Hinton warned CNN that generative AI “…will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it’s going to exceed people in intellectual ability. We have no experience of what it’s like to have things smarter than us… we also have to worry about a number of possible bad consequences, particularly the threat of these things getting out of control.”

Hinton’s cautionary remarks highlight the risks associated with AI surpassing human intelligence, reinforcing the urgent need for oversight and regulation to mitigate unintended and potentially dangerous outcomes as AGI draws closer.

US AI policy shifts

In September 2024, California Governor Gavin Newsom vetoed Senate Bill 1047, a proposed landmark AI safety bill that sought to establish stringent protocols for advanced AI models. The bill included measures such as mandatory testing and “kill switches” to prevent the misuse of AI technologies.

Against this backdrop, the anticipated AI policy framework under President-elect Donald Trump signals a stark shift in priorities. Following his victory in the November 2024 election, Trump’s administration is expected to prioritize deregulation and decentralization as core strategies for advancing the US AI industry. Deregulation aims to dismantle oversight mechanisms, such as Biden’s AI Executive Order, to reduce compliance burdens and accelerate innovation. While this approach may stimulate rapid technological advancements, it raises significant concerns about the erosion of safety standards and accountability, as illustrated by California’s struggles with AI regulation.

Decentralization, championed by prominent figures like Elon Musk, advocates for open-source AI development to democratize access and broaden participation in innovation. The recent appointment of David Sacks as “AI and Crypto Czar” underscores this commitment to industry-driven growth and minimal government interference. While these strategies have been well-received by some sectors, such as cryptocurrency and AI, they also risk fostering inconsistent safety practices, the unchecked proliferation of harmful applications like deepfakes, and difficulties in establishing unified national or international standards.

On the international front, Trump’s administration is also likely to focus on withdrawing from global AI frameworks and imposing tighter restrictions on technology exports to countries like China, in line with an America First agenda. This inward-looking strategy may protect national assets but risks isolating the US from vital international discussions on AI standards and governance. Take, for instance, the letter sent on 21 November by Senator Ted Cruz to Attorney General Merrick Garland, which raised concerns about the AI Safety Network, a coalition of foreign organizations including the UK-based Centre for the Governance of AI. Cruz alleged that these entities were influencing US AI policy by advocating for stricter regulations akin to the EU’s frameworks, which he argued could stifle American innovation. He questioned whether these activities required compliance with the Foreign Agents Registration Act (FARA), urging an investigation to ensure transparency and protect US interests.

This development indicates a potential shift in US AI policy toward greater scrutiny of foreign participation in domestic regulatory processes. It also suggests that forthcoming policies may emphasize safeguarding American technological interests against perceived external pressures, potentially leading to a more insular approach to AI governance. Such a stance could affect international collaborations and the adoption of global AI standards as the US seeks to assert its autonomy in technological policymaking.

Together, the anticipated US AI policy shifts reflect a drive to enhance competitiveness and domestic growth but could exacerbate risks, create governance gaps, and reduce US influence in shaping global AI norms.

Our perspective

The alarming number of significant AI developments over a short period highlights the accelerating pace of change in the field. This rapid evolution underscores an unsettling reality: many of these updates have heightened the overall AI risk profile, compelling us to move the clock closer to midnight.

Crucially, the risk profile has advanced across all three major factors of concern – AI model sophistication, autonomous AI capabilities, and links between AI and critical infrastructure – most notably through advancements in agentic AI and military applications. The ability of AI agents to act autonomously and collaborate without human intervention represents a leap forward but also raises significant concerns. Similarly, the expanding role of AI in military operations across domains introduces unparalleled risks.

We strongly reiterate our opinion that AI development must be subject to robust regulation. There remains an opportunity to implement safeguards, but the window for action is rapidly closing. Ensuring that technological advancements align with societal safety and ethical values is imperative to mitigating these growing risks.

Methodology

The AI Safety Clock assessment is based on a comprehensive evaluation of factors driving AI-related risks. It utilizes a proprietary dashboard that monitors developments across over 1,000 websites and 3,470 news feeds, providing real-time insights into technological advancements and regulatory gaps. This systematic approach ensures that the clock reflects the current state of AI progress and associated risks.
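The dashboard itself is proprietary, so the following is only a generic sketch of the underlying pattern – polling public news feeds and flagging items that match risk-related keywords – using the open-source feedparser library. The feed URL and keyword list are placeholders, not IMD’s actual sources or criteria.

```python
# Generic illustration of real-time feed monitoring (not IMD's dashboard):
# poll RSS/Atom feeds and flag items matching risk-related keywords.
# pip install feedparser. Feed URLs and keywords below are placeholders.
import time
import feedparser

FEEDS = ["https://example.com/ai-news.rss"]  # placeholder feed URL
KEYWORDS = ["agentic", "agi", "military", "open-source"]

def scan_once(seen: set) -> None:
    """Fetch each feed and print entries that are new and keyword-relevant."""
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            link = entry.get("link", "")
            title = entry.get("title", "")
            if link and link not in seen and any(k in title.lower() for k in KEYWORDS):
                seen.add(link)
                print(f"[flagged] {title} - {link}")

if __name__ == "__main__":
    seen = set()
    while True:  # a production system would persist state and handle errors
        scan_once(seen)
        time.sleep(3600)  # poll hourly
```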

 

All views expressed herein are those of the authors and have been specifically developed and published in accordance with the principles of academic freedom. As such, these views are not necessarily held or endorsed by TONOMUS or its affiliates.

Authors

Konstantinos Trantopoulos

Research Fellow at TONOMUS Global Center for Digital and AI Transformation.

Konstantinos Trantopoulos is a Fellow at the TONOMUS Global Center for Digital and AI Transformation and a Senior External Advisor at D ONE, a leading consulting firm focused on data and AI. His work spans cutting-edge research and client advisory, helping organizations to develop effective strategies, unlock new growth opportunities, and navigate an era of rapid technological disruption.

Michael R. Wade

TONOMUS Professor of Strategy and Digital

Michael R Wade is TONOMUS Professor of Strategy and Digital at IMD and Director of the TONOMUS Global Center for Digital and AI Transformation. He directs a number of open programs, including Leading Digital and AI Transformation, Digital Transformation for Boards, Leading Digital Execution, Digital Transformation Sprint, Digital Transformation in Practice, and Business Creativity and Innovation Sprint. He has written 10 books and hundreds of articles, and has hosted popular management podcasts including Mike & Amit Talk Tech. In 2021, he was inducted into the Swiss Digital Shapers Hall of Fame.
