
When AI evolves quarterly, but your organization plans annually

Published February 18, 2026 in Artificial Intelligence • 7 min read

Organizations should let departments choose the right balance between humans and AI, and focus on outcomes, not rigid processes, to keep pace with rapid technological change.

Rapid read:

The velocity problem is permanent: AI evolves in three- to nine-month cycles while organizations plan in three- to five-year cycles, creating a fundamental mismatch that is difficult to resolve. Traditional planning approaches – both centralized control and chaotic adoption – fail to address this reality.

Radical federalism as a solution: Push AI decisions to individual departments while maintaining minimal central coordination. Give teams budget autonomy to determine their optimal human–AI mix, hold them accountable for outcomes rather than processes, and let them iterate at domain-appropriate speeds.

Reward configuration creativity: Success requires fundamentally changing incentives. Managers should be celebrated for achieving better outcomes with smaller teams and sophisticated AI, rather than being judged by headcount.

A Fortune 500 company’s legal team recently discovered that its marketing department had been using ChatGPT for six months, despite a company-wide ban on generative AI tools. The CMO’s defense was simple: “We asked IT for approval in January. They said they’d evaluate options and get back to us in Q3. We couldn’t wait that long.”

Meanwhile, the same company’s procurement team had spent three months getting central approval for an AI contract analysis tool that, by the time of implementation, had been superseded by two newer versions with different capabilities.

This isn’t a story about rogue employees or bureaucratic IT. It’s a scenario that is increasingly common across organizations – if not today, then certainly in the near future.

Why this keeps happening

The fundamental problem is speed mismatch, or what I call the velocity problem. Think of it as three clocks running at different speeds:

  • AI technology evolves in three- to nine-month cycles (new models, new capabilities, new vendors).
  • Organizations plan in three- to five-year cycles (budgets, headcount, systems).
  • Humans adapt at their own varied pace.

When these clocks run at such different speeds, it’s no surprise that traditional organizational design fails to keep pace. A traditional planning cycle might look like this:

  • January: Roles designed based on current AI capabilities
  • March: New models automate half of these tasks
  • June: Automation needs different oversight than initially planned
  • September: Entire categories of work shift from human to AI
  • December: Planning for next year using already obsolete assumptions

This isn’t a temporary adjustment period that will settle back into our usual pace of operations. This is it. This is the new normal.


The two failed responses

When faced with this speed of change, organizations typically try one of two approaches, both of which predictably fail.

The first approach is central planning. Firms typically appoint a Chief AI Officer to design the perfect AI strategy across the company. The problem is that this person would need to simultaneously understand AI applications in marketing (monthly changes), legal (quarterly shifts), finance (a different vendor ecosystem), IT (different risk profiles), and operations (annual updates). Given how fast things move and how much each domain differs, this is at best a herculean task and at worst an impossible one. As a result, the central planning team becomes a bottleneck rather than a value-add.

The second approach is chaotic adoption. Every team does whatever it wants, with the predictable lack of coordination that follows. An accounting team solves a problem that finance still struggles with, but the two never talk. The company pays for five different AI tools that do similar things. Nothing works together. No one learns from anyone else.


A third way

The third way, which I refer to as radical federalism, is built on first principles: push adaptive capacity to the unit level while maintaining minimal coordination centrally. Each department designs its own human–AI configuration based on its specific variance, velocity, and local knowledge. Payroll might leverage AI as a wholesale substitute. Marketing might pursue human–AI collaboration. Manufacturing might limit AI to predictive maintenance.

This approach builds on management insights that were prescient but early. Bartlett and Ghoshal’s “differentiated networks” addressed the challenges of geographic complexity. Today’s challenge is temporal – different parts of the organization live in different technological moments, at different clock speeds. Compared to finance, for example, marketing is effectively living three months in the future.

This temporal fracturing, combined with high variance and local knowledge requirements, makes federalism not just useful but necessary.

The operating model follows four concrete principles.

1 – Budget autonomy, not headcount

What it means: Give departments budgets and let them decide their optimal mix of people, AI subscriptions, and hybrid approaches.

Why it matters: Payroll might replace 80% of its work with AI and operate with two specialists. Marketing might use AI as a creative collaborator across a 15-strong team. Manufacturing might limit AI to predictive maintenance. The optimal configuration varies so dramatically by department that central prescription is impossible.

2 – Outcome accountability, not process compliance

What it means: The center defines what to deliver, not how to deliver it.

Why it matters: Only the accounts payable team knows which invoice exceptions require human judgment. Only the sales team knows which email drafts need review. Local knowledge about automation potential exceeds what any central planner could know.

3 – Interface standardization, not tool standardization

What it means: Different departments can use different AI platforms as long as data exchanges cleanly.

Why it matters: Marketing needs creative AI tools, legal needs contract analysis, and operations needs predictive maintenance. Each has different vendors, different risk profiles, and different update cycles. Forcing one platform across all domains sacrifices effectiveness for false uniformity.

4 – Local iteration speed

What it means: Each department reorganizes at the pace its domain requires.

Why it matters: Marketing might feel AI changes monthly whereas facilities management might shift annually. Forcing them to reorganize to the same schedule unnecessarily holds fast movers back or destabilizes slow movers.

Getting incentives right

This model fails unless incentives change. Traditional managers maximize headcount because power and prestige are often measured that way. Federal managers, by contrast, need to be rewarded for configuration creativity – achieving better outcomes regardless of the human–AI mix.

The successful 2026 manager might run a department of two humans and sophisticated AI, delivering what previously required 20 people. That manager should be celebrated and rewarded, not quietly demoted for managing a “small team.”

What this means for leaders

CHROs should stop trying to create uniform job architectures across the company. Instead, design incentive systems that reward configuration creativity. Protect “deliberate inefficiencies” where humans need to be kept in the loop to maintain skill development, even when AI could do it at a lower cost.

The CTO role shifts from standardizing tools to building robust interfaces and guardrails. Create platforms that federated units can build upon. Focus on data security and system compatibility, not which AI vendor everyone uses.

CEOs need to explicitly abandon the promise of future coherence. Be crystal clear about the few non-negotiables (ethics, security, financial controls) versus the many experimental zones (tool selection, task allocation, team structure). Model comfort with internal variance as a strategic capability, not organizational failure.


The uncomfortable truth

Most organizations are unable to make this shift. Their DNA assumes that things change slowly, that central planners have the best information, and that consistency equals excellence. They pursue comprehensive AI strategies aimed at beautiful uniformity while competitors achieve messy adaptation.

Organizations that embrace federalism won’t look efficient by traditional metrics. They’ll have redundancy. They’ll have incompatibility. They’ll have different departments operating in seemingly contradictory ways.

But organizational theory tells us that when variance is high and change is faster than planning cycles, adaptation beats optimization every time. The question isn’t, “How should we adopt AI?” but “Can our organization accept that the world changes too fast and varies too much for coherence to be achievable?”

Author

Michael Yaziji

Michael Yaziji is an award-winning author whose work spans leadership and strategy. He is recognized as a world-leading expert on non-market strategy and NGO-corporate relations and has a particular interest in ethical questions facing business leaders. His research includes the world’s largest survey on psychological drivers, psychological safety, and organizational performance and explores how human biases and self-deception can impact decision making and how they can be mitigated. At IMD, he is the co-Director of the Stakeholder Management for Boards training program.
