The countdown to out-of-control AI begins
Introduced in October 2024, the AI Safety Clock assesses the risks of uncontrolled artificial general intelligence (UAGI). The aim is to inform the public, policymakers, and business leaders about these risks, thereby promoting the safe development and use of AI.
The clock’s time is calculated through a methodology that weighs several key factors. These include the sophistication of AI models, the state of regulatory frameworks, and the ways the technology interacts with the physical world, i.e., infrastructure.
Reaching this number involves tracking developments in AI models, how they perform against human intelligence, and the speed at which they’re becoming more capable. In a nutshell: AI models are moving rapidly on both fronts.
We also look at how autonomous these systems are. If an AI remains under human control, the risk is lower; if it acts independently, the danger is dramatically magnified. The classic doomsday scenario is AI gaining the ability to make decisions on its own, without human oversight.
But perhaps the most alarming factor in our methodology is the connection of AI to the physical world. If AI systems begin controlling critical infrastructure, such as power grids or military systems, the consequences could be catastrophic. Much like nuclear weapons reshaped geopolitics, uncontrolled superintelligence could be just as world-altering.
We also factor regulation into the clock’s time. Each time meaningful guardrails are put in place, the clock moves away from midnight. For instance, the vetoing of an AI safety bill in California last month moved us closer to midnight, while Europe’s AI Act helped push the clock back.
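To illustrate how factors like these could combine into a single clock reading, here is a minimal sketch in Python. The weights, scores, and 60-minute scale are purely hypothetical assumptions for illustration; the clock’s actual methodology is not published as a formula.

```python
# Hypothetical sketch only: the factor names, weights, and scores below
# are illustrative assumptions, not the AI Safety Clock's real inputs.

FACTORS = {
    # factor name: (weight, risk score from 0.0 = safe to 1.0 = critical)
    "model_sophistication": (0.35, 0.7),
    "autonomy":             (0.25, 0.5),
    "physical_integration": (0.25, 0.4),
    "regulatory_gaps":      (0.15, 0.6),
}

def minutes_to_midnight(factors, scale=60):
    """Map a weighted average risk score onto a 0-60 minute countdown.

    Higher aggregate risk means fewer minutes remaining before midnight.
    """
    risk = sum(weight * score for weight, score in factors.values())
    total_weight = sum(weight for weight, _ in factors.values())
    return round(scale * (1 - risk / total_weight))

print(minutes_to_midnight(FACTORS))  # minutes remaining under these toy inputs
```

New regulation would lower the `regulatory_gaps` score, nudging the result away from midnight; tighter coupling to infrastructure would raise `physical_integration` and move it closer.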