AI Safety Clock
The IMD AI Safety Clock is a tool designed to evaluate the risks of Uncontrolled Artificial General Intelligence (UAGI) – autonomous AI systems that operate without human oversight and could cause significant harm.
Our mission is to evaluate and communicate these risks to the public, policymakers, and business leaders, helping ensure the safe development and use of AI technologies.
As AI rapidly advances, the risks increase
The IMD AI Safety Clock takes a systematic approach to evaluating AI’s progress, tracking technological and regulatory developments in real time.
The closer the clock gets to midnight, the greater the risk posed by AI
“As AI advancements push us closer to midnight, effective regulations have the potential to slow down or even reverse the clock’s progress,” explains Michael Wade, TONOMUS Professor of Strategy and Digital and Director of the TONOMUS Global Center for Digital and AI Transformation.
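To make the clock metaphor concrete, here is a minimal sketch of how a reading could be derived from weighted risk factors, with regulation acting as a counterweight. The factor names, weights, and mapping are illustrative assumptions, not IMD’s actual scoring model.

```python
# Illustrative sketch only: factor names, weights, and the mapping to minutes
# are assumptions for explanation, not the IMD AI Safety Clock's methodology.

from dataclasses import dataclass


@dataclass
class RiskFactors:
    sophistication: float  # 0.0-1.0, capability of frontier AI systems (assumed factor)
    autonomy: float        # 0.0-1.0, ability to act without human oversight (assumed factor)
    regulation: float      # 0.0-1.0, strength of effective regulation (assumed factor)


def minutes_to_midnight(f: RiskFactors, total_minutes: int = 60) -> int:
    """Map an aggregate risk score to a clock reading.

    Higher capability and autonomy push the clock toward midnight;
    stronger regulation pulls it back, which is how a reading can
    move in either direction over time.
    """
    raw_risk = 0.5 * f.sophistication + 0.5 * f.autonomy
    dampened = raw_risk * (1.0 - 0.4 * f.regulation)  # regulation slows or reverses progress
    dampened = min(max(dampened, 0.0), 1.0)
    return round(total_minutes * (1.0 - dampened))


# Example: capable, fairly autonomous systems under weak regulation
print(minutes_to_midnight(RiskFactors(sophistication=0.7, autonomy=0.5, regulation=0.2)))
```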
Our methodology
We’ve built a proprietary dashboard that tracks real-time information from over 1,000 websites, 3,470 news feeds, and expert reports. This advanced tool, combined with manual desk research, provides comprehensive and up-to-date insights into AI developments across technology and regulation.
Our methodology blends quantitative metrics with qualitative insights and expert opinions, delivering a multifaceted view of AI risks. By leveraging automated data collection and continuous expert analysis, we ensure a balanced, in-depth understanding of the evolving AI landscape.
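As a rough illustration of the automated data-collection step, the sketch below polls news feeds and keeps items that mention AI-related terms. It assumes Python with the third-party feedparser package; the feed URLs and keywords are placeholders, and this is not the IMD dashboard itself.

```python
# Minimal sketch of automated feed monitoring (not the actual IMD dashboard).
# Assumes the third-party `feedparser` package; feed URLs and keywords are placeholders.

import feedparser

FEEDS = [
    "https://example.com/ai-news.rss",      # placeholder feed URL
    "https://example.org/tech-policy.rss",  # placeholder feed URL
]

KEYWORDS = ("artificial intelligence", "AGI", "AI regulation", "AI safety")


def collect_ai_items(feed_urls=FEEDS, keywords=KEYWORDS):
    """Poll each feed and keep entries whose title or summary mentions a keyword."""
    matches = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(k.lower() in text for k in keywords):
                matches.append({"title": entry.get("title", ""), "link": entry.get("link", "")})
    return matches


if __name__ == "__main__":
    for item in collect_ai_items():
        print(item["title"], "-", item["link"])
```

In practice, items gathered this way would still be reviewed by analysts, which matches the blend of automated collection and expert judgment described above.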
Explore our news and research on the topic of artificial intelligence.
If you have any questions about the AI Safety Clock or would like to connect with the team behind the research, fill out the form and we’ll get back to you as soon as possible.