

by Michael R. Wade and Konstantinos Trantopoulos • Published March 16, 2026 in Artificial Intelligence • 10 min read
The IMD AI Safety Clock has moved two minutes forward to 23:42, just 18 minutes to midnight, highlighting unresolved tension between rapid capability expansion and meaningful oversight. The period from October 2025 to March 2026 has seen notable advances across the three dimensions tracked by the AI Safety Clock: sophistication, autonomy, and execution.
We launched the AI Safety Clock in September 2024 to assess the risk of uncontrolled artificial general intelligence (UAGI): AI systems that act autonomously, without human oversight, in ways that could cause significant harm. The clock's setting reflects real-time technological advancements and regulatory changes.
Initially set at 29 minutes to midnight, the clock indicates how close we are to a tipping point where UAGI could be dangerous for humanity.
Since then, large language models have grown more capable, more efficient, and more widely available, with four major frontier models released in a single 25-day span. Autonomous AI agents moved from experimental prototypes to mainstream enterprise deployments across major platforms, with Microsoft, Google, and GitHub all launching agent orchestration tools. AI systems became physically embodied in robots, embedded in critical infrastructure, and increasingly integrated into military applications. Meanwhile, the regulatory landscape diverged sharply: the European Union began enforcing the world’s first comprehensive AI law with substantial penalties, while the Trump administration pursued an aggressive military AI strategy, declaring the Pentagon an “AI-first warfighting force” and pressuring leading AI companies to remove safety guardrails from their frontier models.
The pace of foundation model releases since October 2025 has been relentless. In just 25 days, four major labs shipped flagship models. xAI released Grok 4.1 on 17 November, claiming the top position on LMArena’s leaderboard with strong real-time reasoning and live data integration. Google followed a day later with Gemini 3, a “thinking” model deeply integrated with its developer ecosystem, excelling in code, multimodal reasoning, and agent-based workflows. Anthropic released Claude Opus 4.5 on 24 November, excelling in coding, agentic workflows, and real-world software engineering, followed by Claude Opus 4.6 in February 2026 with a one-million-token context window and enhanced safety testing. OpenAI shipped GPT-5.2 on 11 December, targeting professional knowledge work with stronger reasoning, long-context performance, and fewer hallucinations. Google’s Gemini 3.1 Pro, previewed in February 2026, surpassed its predecessor with record benchmark scores, leading Humanity’s Last Exam and the APEX-Agents evaluation. The competitive intensity was such that OpenAI’s Sam Altman issued an internal “code red” memo after Gemini 3 topped leaderboards and Claude gained enterprise market share in coding applications.
Chinese AI labs mounted a formidable challenge, building on the momentum of the January 2025 “DeepSeek Moment” that shocked global markets. DeepSeek released V3.2 in December 2025, matching GPT-5 performance at reportedly 90% lower training costs, with features designed specifically for agentic workflows. DeepSeek V4 is expected imminently, with reports indicating it may have been trained on Nvidia’s Blackwell chips in an apparent violation of export controls. By the end of 2025, Chinese open-source models had surpassed those from any other country in downloads on Hugging Face. Moonshot AI’s Kimi K2 Thinking, a one-trillion-parameter open-source model released in November 2025, outperformed GPT-5 and Claude Sonnet 4.5 on reasoning and agent benchmarks. Alibaba unveiled Qwen3.5 in February 2026 with enhanced multimodal and agentic capabilities, while Weibo’s VibeThinker-1.5B proved that even small models can outperform far larger ones on math and coding tasks when trained with novel methods, all on a post-training budget of just $7,800.
Safety research matured alongside capability advances. Guide Labs launched Steerling-8B in February 2026, an open-source LLM with token-level training data traceability for enterprise compliance. Anthropic released a new AI constitution for Claude to improve transparency and alignment, while OpenAI published gpt-oss-safeguard, open-weight reasoning models for customizable safety classification. However, risks remain. Anthropic’s own models showed “glimmers of self-reflection,” detecting injected thoughts in neural states, a finding suggesting emerging self-monitoring capabilities that raise concerns about hidden reasoning and potential deception. Researchers at Tenable uncovered seven vulnerabilities in ChatGPT that could allow attackers to steal user data through indirect prompt injections, underscoring that sophistication and security do not advance in lockstep.

“The US–China chip rivalry intensified: the White House blocked Nvidia’s most advanced Blackwell chips from sale to China, while later approving exports of up to 35,000 chips to Saudi Arabia and the UAE.”
The AI chip landscape featured massive deals and geopolitical maneuvering. Meta signed agreements worth over $100bn with AMD and a multi-billion-dollar deal with Google for TPU access. Samsung partnered with Nvidia to build an AI Mega Factory with 50,000 GPUs, while Nvidia committed to supply over 260,000 Blackwell GPUs to South Korea. The US–China chip rivalry intensified: the White House blocked Nvidia’s most advanced Blackwell chips from sale to China, while later approving exports of up to 35,000 chips to Saudi Arabia and the UAE, signaling the emergence of “compute diplomacy” in which chip access becomes a tool for securing geopolitical alignment. China responded by banning foreign AI chips from state-funded data centers, adding Huawei and Cambricon chips to official procurement lists. ByteDance began developing its own “SeedChip” in partnership with Samsung, and Baidu unveiled two new AI chips for inference and training. Vertical integration is becoming a strategic imperative: companies that control both models and silicon will have decisive advantages.
The most defining shift of this period is that agentic AI crossed from experimentation to mainstream deployment. Microsoft embedded AI agents directly in the Windows 11 taskbar and enhanced 365 Copilot with autonomous Researcher with Computer Use capabilities running in secure sandboxes. Google launched Workspace Studio in December 2025, an AI automation hub enabling users to build powerful agents for Gmail, Drive, and Chat using natural language, no scripting required. GitHub launched Agent HQ, enabling developers to orchestrate multiple AI coding agents from OpenAI, Anthropic, Google, and others in parallel. Oracle introduced role-based AI agents across its Fusion Cloud Applications, and DHL deployed AI agents to automate logistics communications.
“A UN report found women’s jobs face disproportionately higher AI risk, with 4.7% of women’s employment at the highest risk of automation globally versus 2.4% for men.”
Institutional scaffolding for agentic AI formed rapidly. OpenAI co-founded the Agentic AI Foundation under the Linux Foundation alongside Anthropic, Block, and other partners, contributing to the AGENTS.md specification for interoperable agent deployment. NIST launched an AI Agent Standards Initiative in February 2026 to establish governance, security, and risk frameworks. At Davos 2026, leaders warned that AI agents risk becoming “insider threats” and urged zero-trust, least-privilege access frameworks.
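The AGENTS.md specification mentioned above is, at its core, a plain markdown file placed in a repository that coding agents read for project-specific instructions. A minimal, hypothetical example (the commands and conventions below are illustrative, not taken from the specification itself):

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install`.

## Testing
- Run `npm test` before committing; all tests must pass.

## Conventions
- Use TypeScript strict mode.
- Do not edit generated files under `dist/`.
```

Because the format is ordinary markdown rather than a rigid schema, the same file can be read by agents from different vendors, which is the interoperability goal the Agentic AI Foundation is pursuing.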
AI’s impact on employment became tangible. Jack Dorsey’s Block cut 40% of its staff, over 4,000 people, explicitly citing AI efficiencies. Amazon eliminated 14,000 middle-management roles, suggesting AI’s disruption may first target corporate managers who handle planning, reporting, and decision-making. A UN report found women’s jobs face disproportionately higher AI risk, with 4.7% of women’s employment at the highest risk of automation globally versus 2.4% for men, rising to 9.6% in high-income countries. Computer scientist and Nobel laureate Geoffrey Hinton warned that tech giants’ profits fundamentally depend on replacing human labor.

“Nearly 90% of all humanoid robots sold globally in 2025 were Chinese, with six of the highest-selling companies in the sector from China.”
AI moved beyond screens into physical systems. Google’s Gemini Robotics now enables physical robots to perform adaptive, dexterous tasks, including object manipulation and even origami. Alibaba launched RynnBrain, a robotics-focused “physical AI” model for perception and real-world interaction. SoftBank and Nvidia invested $1bn in Skild AI at a $14bn valuation, backing a robot-agnostic intelligence platform designed to power diverse robotic systems.
Humanoid robotics moved from demonstration to deployment. Tesla is converting its Fremont production lines to Optimus manufacturing, with a dedicated facility under construction at Giga Texas targeting up to 10 million units per year by 2027. CEO Elon Musk projects a target price under $30,000 once production scales. Figure AI introduced Figure 03 in October 2025, featuring a redesigned sensory suite with tactile sensors sensitive enough to detect three grams of pressure. By February 2026, Figure had placed its fleet on 24/7 duty, and Toyota had integrated Digit humanoids into its operations. Nearly 90% of all humanoid robots sold globally in 2025 were Chinese, with six of the highest-selling companies in the sector from China. At CES 2026, Boston Dynamics unveiled its all-new Electric Atlas, an enterprise-grade humanoid for material handling and order fulfillment.
Nokia and Nvidia formed a $1bn partnership to develop AI-native 5G-Advanced and 6G networks. The US Department of Transportation plans to use AI to draft federal regulations and is expanding agentic AI capabilities for operations. A majority of energy experts in one industry report said AI is essential for energy transformation.
AI weaponization entered a dangerous new phase. In January 2026, Defense Secretary Pete Hegseth issued a sweeping AI Acceleration Strategy declaring the Pentagon would become an “AI-first warfighting force” and that “2026 will be the year we emphatically raise the bar for Military AI Dominance.” The Pentagon awarded contracts worth up to $200m each to OpenAI, Google, Anthropic, and xAI, while Elon Musk’s xAI agreed to deploy its Grok model on classified military networks without usage restrictions. In late February, the Pentagon issued an ultimatum to Anthropic: remove all safety guardrails for military use by Friday or face contract termination, designation as a supply chain risk, and potential invocation of the Defense Production Act. Anthropic’s CEO Dario Amodei refused, declaring the company “cannot in good conscience accede” to demands permitting mass domestic surveillance and fully autonomous weapons, capabilities he described as “simply outside the bounds of what today’s technology can safely and reliably do.” Shortly afterwards, Anthropic announced it was replacing its Responsible Scaling Policy with a more flexible Frontier Safety Roadmap with nonbinding commitments, a striking shift for a company founded over safety concerns.
On the battlefield, Ukraine scaled drone production from 2.2 million units in 2024 to 4.5 million in 2025, with AI-enabled autonomous navigation raising strike success rates from 10–20% to 70–80%. Germany-based Helsing has delivered thousands of AI-equipped loitering munitions to Ukraine and is developing Europa, an autonomous fighter jet drone slated for 2029. A February 2026 study by Kenneth Payne, Professor of Strategy at King’s College London, revealed that when frontier AI models were placed in simulated nuclear crisis scenarios, they deployed tactical nuclear weapons in 95% of games, showing no sense of horror at nuclear escalation and treating battlefield nuclear weapons as routine tools.
“One signal to watch is the emergence of fully autonomous agent systems operating continuously across organizations with minimal human oversight.”
The governance landscape diverged sharply between the world’s two largest democratic blocs. The EU AI Act moved from statute to enforcement, with prohibited AI practices, including social scoring, real-time biometric surveillance in public spaces, and manipulative AI, becoming enforceable across all 27 member states, with penalties up to €35m ($40.1m) or 7% of global revenue. Finland became the first member state with full enforcement powers in December 2025. Notably, the Act explicitly exempts AI systems used exclusively for military, defense, or national security purposes, a carve-out that has drawn criticism from analysts who argue it leaves a governance vacuum precisely where the risks are highest.
The United States moved in the opposite direction. President Trump revoked Biden-era AI safety executive orders on his first day in office, framing AI deregulation as a national security imperative. In December 2025, a further executive order sought to preempt state-level AI regulation entirely, directing the Attorney General to create a litigation task force to challenge state AI laws.
The next move of the AI Safety Clock will likely depend less on the release of yet another powerful AI model and more on whether certain thresholds are crossed.
One signal to watch is the emergence of fully autonomous agent systems operating continuously across organizations with minimal human oversight. Another is whether frontier models begin to demonstrate longer-term strategic planning or coordination with other AI systems, expanding their ability to act independently in complex environments.
Equally important is the growing role of AI in military decision-making and critical infrastructure, where mistakes or misuse could have far-reaching consequences.
Ultimately, the question for the coming year is whether governance, security, and oversight mechanisms can mature as quickly as the technology itself does. The answer may determine whether the AI Safety Clock stabilizes or moves closer to midnight.

Professor of Strategy and Digital
Michael R. Wade is Professor of Strategy and Digital at IMD and Director of the Global Center for Digital and AI Transformation. He directs a number of open programs, including Leading Digital and AI Transformation, Digital Transformation for Boards, Leading Digital Execution, Digital Transformation Sprint, Digital Transformation in Practice, and Business Creativity and Innovation Sprint. He has written 10 books and hundreds of articles, and hosts popular management podcasts including Mike & Amit Talk Tech. In 2021, he was inducted into the Swiss Digital Shapers Hall of Fame.

Advisor and Research Fellow at IMD
Konstantinos Trantopoulos is an Advisor and Fellow at IMD, working with executives, boards, and investors on strategy, growth, and organizational performance. His work helps companies develop new business, drive profitability, and unlock value through AI and emerging technologies. His insights have appeared in Harvard Business Review, MIT Sloan Management Review, California Management Review, MIS Quarterly, Το Βήμα, and Forbes. He is also the co-author of Twin Transformation, available on Amazon.
