

by Michael Yaziji • Published February 18, 2026 in Artificial Intelligence • 7 min read
The velocity problem is permanent: AI evolves in six- to 12-month cycles while organizations plan in three- to five-year cycles, creating a fundamental mismatch that is difficult to resolve. Traditional planning approaches – both centralized control and chaotic adoption – fail to address this reality.
Radical federalism as a solution: Push AI decisions to individual departments while maintaining minimal central coordination. Give teams budget autonomy to determine their optimal human–AI mix, hold them accountable for outcomes rather than processes, and let them iterate at domain-appropriate speeds.
Reward configuration creativity: Success requires fundamentally changing incentives. Managers should be celebrated for achieving better outcomes with smaller teams and sophisticated AI, rather than being judged by headcount.
A Fortune 500 company’s legal team recently discovered that its marketing department had been using ChatGPT for six months, despite a company-wide ban on generative AI tools. The CMO’s defense was simple: “We asked IT for approval in January. They said they’d evaluate options and get back to us in Q3. We couldn’t wait that long.”
Meanwhile, the same company’s procurement team had spent three months getting central approval for an AI contract analysis tool that, by the time of implementation, had been superseded by two newer versions with different capabilities.
This isn’t a story about rogue employees or bureaucratic IT. It’s a scenario that is increasingly common across organizations – if not today, then certainly in the near future.
The fundamental problem is speed mismatch, or what I call the velocity problem. Think of it as three clocks running at different speeds: AI capabilities turn over every six to 12 months; individual departments absorb those changes at their own domain-specific rates, from monthly in marketing to annually in operations; and the organization as a whole plans on a three- to five-year cycle.
When these clocks run at such different speeds, it’s no surprise that traditional organizational design fails to keep pace. A traditional planning cycle – evaluate, approve, implement – takes long enough that the tool being rolled out has already been superseded, as the procurement team above discovered.
This isn’t a temporary adjustment period that will settle back into our usual pace of operations. This is it. This is the new normal.

When faced with this speed of change, organizations typically try one of two approaches, both of which predictably fail.
The first approach revolves around central planning. A firm might appoint a Chief AI Officer, say, to design the perfect AI strategy across the company. The problem is that this person would need to simultaneously understand AI applications in marketing (monthly changes), legal (quarterly shifts), finance (a different vendor ecosystem), IT (different risk profiles), and operations (annual updates). Given how fast things move and how much each domain differs, this is at best a herculean task, and at worst an impossible one. As a result, the central planning team becomes a bottleneck rather than a value-add.
The second approach is chaotic adoption. In this scenario, every team does whatever it wants, with the (again, predictable) lack of coordination that follows. An accounting team solves a problem that finance still struggles with, but the two never talk. The company pays for five different AI tools that do similar things. Nothing works together. No one learns from anyone else.
The third way, which I refer to as radical federalism, is built on first principles: push adaptive capacity to the unit level while maintaining minimal coordination centrally. Each department designs its own human–AI configuration based on its specific variance, velocity, and local knowledge. Payroll might leverage AI as a wholesale substitute. Marketing might pursue human–AI collaboration. Manufacturing might limit AI to predictive maintenance.
This approach builds on management insights that were prescient but early. Bartlett and Ghoshal’s “differentiated networks” addressed the challenges of geographic complexity. Today’s challenge is temporal – different parts of the organization live in different technological moments at different clock speeds. When compared to finance, for example, marketing is effectively three months into the future.
This temporal fracturing, combined with high variance and local knowledge requirements, makes federalism not just useful but necessary.
The operating model follows four concrete principles.
1. Budget autonomy
What it means: Give departments budgets and let them decide their optimal mix of people, AI subscriptions, and hybrid approaches.
Why it matters: Payroll might replace 80% of its work with AI and operate with two specialists. Marketing might use AI as a creative collaborator across a 15-strong team. Manufacturing might limit AI to predictive maintenance. The optimal configuration varies so dramatically by department that central prescription is impossible.
2. Accountability for outcomes, not processes
What it means: The center defines what to deliver, not how to deliver it.
Why it matters: Only the accounts payable team knows which invoice exceptions require human judgment. Only the sales team knows which email drafts need review. Local knowledge about automation potential exceeds what any central planner could know.
3. Interoperability, not standardization
What it means: Different departments can use different AI platforms as long as data exchanges cleanly.
Why it matters: Marketing might need creative AI tools, legal might need contract analysis, and operations might need predictive maintenance. Each has different vendors, different risk profiles, and different update cycles. Forcing one platform across all domains sacrifices effectiveness for false uniformity. (A sketch of what a clean data-exchange contract might look like follows the fourth principle below.)
4. Domain-appropriate reorganization speed
What it means: Each department reorganizes at the pace its domain requires.
Why it matters: Marketing might feel AI changes monthly whereas facilities management might shift annually. Forcing them to reorganize to the same schedule unnecessarily holds fast movers back or destabilizes slow movers.
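
To make “data exchanges cleanly” concrete, here is one way the third principle could work in practice: the center publishes a minimal shared contract that every departmental AI tool must emit, while everything behind that contract stays local. This is a minimal sketch, assuming a TypeScript-based integration layer; the AIWorkProduct type, its fields, and the vendor name are hypothetical illustrations, not any particular platform’s API.

    // Hypothetical sketch: a minimal shared contract for data exchange
    // between departmental AI tools. All names are illustrative.

    // The center defines WHAT crosses department boundaries...
    interface AIWorkProduct {
      department: string;                              // e.g., "marketing", "legal"
      producedBy: string;                              // vendor-agnostic tool identifier
      contentType: "draft" | "analysis" | "forecast";
      payload: string;                                 // the deliverable itself
      humanReviewed: boolean;                          // hook for outcome accountability
      producedAt: string;                              // ISO 8601 timestamp
    }

    // ...while each department decides HOW to produce it. Marketing
    // might wrap its own creative tool without asking permission:
    function fromMarketingTool(draft: string): AIWorkProduct {
      return {
        department: "marketing",
        producedBy: "creative-ai-vendor-a",            // hypothetical vendor
        contentType: "draft",
        payload: draft,
        humanReviewed: true,
        producedAt: new Date().toISOString(),
      };
    }

The point of the sketch is the asymmetry: the interface is the only thing the center owns, so marketing and legal can run entirely different vendors on entirely different update cycles without breaking each other.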
This model fails unless incentives change. Traditional managers maximize headcount because power and prestige are often measured that way. Federal managers, by contrast, need to be rewarded for configuration creativity – achieving better outcomes regardless of the human–AI mix.
The successful 2026 manager might run a department of two humans and sophisticated AI, delivering what previously required 20 people. That should be celebrated and rewarded, not quietly demoted for managing a “small team.”
CHROs should stop trying to create uniform job architectures across the company. Instead, design incentive systems that reward configuration creativity. Protect “deliberate inefficiencies” where humans need to be kept in the loop to maintain skill development, even when AI could do it at a lower cost.
The CTO role shifts from standardizing tools to building robust interfaces and guardrails. Create platforms that federated units can build upon. Focus on data security and system compatibility, not which AI vendor everyone uses.
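
As a rough illustration of that shift, the guardrails themselves can be expressed as a small, centrally owned policy that any department’s tooling is checked against, with vendor choice left entirely open. This is a speculative sketch, not a reference implementation; the CentralGuardrails type, its fields, and the check function are assumptions made up for illustration.

    // Hypothetical sketch: central guardrails as policy, not platform.
    // The center constrains data handling and auditability, never vendor choice.
    interface CentralGuardrails {
      allowedDataClasses: Array<"public" | "internal" | "confidential">;
      piiAllowed: boolean;                 // may personal data reach the tool?
      requiresHumanReview: boolean;        // ties back to outcome accountability
      auditLogRequired: boolean;           // every AI output must be traceable
    }

    // Any department can register any vendor, provided it passes the same checks.
    function vendorIsAcceptable(
      vendorDataClasses: string[],
      guardrails: CentralGuardrails
    ): boolean {
      // Acceptable only if every data class the vendor touches
      // falls within what the center permits.
      return vendorDataClasses.every((c) =>
        guardrails.allowedDataClasses.some((allowed) => allowed === c)
      );
    }

The design choice mirrors the article’s argument: the center invests in the checks and the interfaces, and treats everything above them as an experimental zone.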
CEOs need to explicitly abandon the promise of future coherence. Be crystal clear about the few non-negotiables (ethics, security, financial controls) versus the many experimental zones (tool selection, task allocation, team structure). Model comfort with internal variance as a strategic capability, not organizational failure.
Most organizations are unable to make this shift. Their DNA assumes things change slowly, that central planners have the best information, and that consistency equals excellence. They pursue comprehensive AI strategies to achieve beautiful uniformity while competitors achieve messy adaptation.
Organizations that embrace federalism won’t look efficient by traditional metrics. They’ll have redundancy. They’ll have incompatibility. They’ll have different departments operating in seemingly contradictory ways.
But organizational theory tells us that when variance is high, and change is faster than planning cycles, adaptation beats optimization every time. The question isn’t, “How should we adopt AI?” but “Can our organization accept that the world changes too fast and varies too much for coherence to be achievable?”

Michael Yaziji is an award-winning author whose work spans leadership and strategy. He is recognized as a world-leading expert on non-market strategy and NGO-corporate relations and has a particular interest in ethical questions facing business leaders. His research includes the world’s largest survey on psychological drivers, psychological safety, and organizational performance and explores how human biases and self-deception can impact decision making and how they can be mitigated. At IMD, he is the co-Director of the Stakeholder Management for Boards training program.
