

What AI velocity really requires from CHROs

Published March 5, 2026 in Artificial Intelligence • 8 min read

CHROs have a responsibility to manage both the speed and direction of AI adoption, which will involve value calls and unavoidable trade-offs.

Rapid read:

  • Velocity has two dimensions. Managing AI adoption means controlling both speed (how fast you move) and direction (what you’re optimizing for – shareholder value, employee dignity, competitive position).
  • Most organizations treat this as a speed problem. But direction is about unavoidable trade-offs: efficiency, employee well-being, and competitive advantage don’t always align.
  • CHROs must make explicit value judgments about whose interests take priority rather than promising impossible win-wins – particularly around preserving capability development versus maximizing short-term efficiency.


In a companion piece, I argued for radical federalism as an organizational model for AI adoption. This approach pushes decisions to the edge while maintaining minimal central coordination. The article addressed how to manage AI’s velocity problem. But there’s a prior question that determines whether any organizational system succeeds or fails: What are we optimizing for?

This isn’t a purely academic question. It’s the most practical one facing CHROs today: get the structure right but the values wrong, and you’ve built an efficient machine that moves fast in harmful directions.


The real weight of your decisions

As a CHRO at a major organization, your velocity decisions shape your workforce directly and influence industry practices more broadly. When you decide how fast to adopt AI, you’re determining displacement rates, adaptation timeframes, and market signals about responsible adoption. When you decide what direction to move, you’re choosing what kind of organization to build and what work means in your context.

You don’t make these choices alone, but you do shape the menu of options. Your job is to surface the trade-offs to the CEO and board, propose a stance, and ensure that people and capability implications are treated as first-order strategic issues rather than afterthoughts.

These decisions come with trade-offs, not clear right answers.

It’s not just about speed

The velocity problem I described previously has two dimensions that are often conflated but need to be managed separately: speed and direction.

Speed is about pace. How fast do we move? This is where the mismatch problem becomes concrete. AI capabilities advance every three to nine months with new models and vendors. Your organizational systems operate on three- to five-year cycles. So, by the time you’ve assessed impact, established guardrails, trained people, and adapted processes, the technology has shifted multiple times. And your people operate on highly varied timelines, needing time to develop new mental models, build new skills, and process anxiety about what augmentation really means when it starts to look like replacement.

Direction is about purpose. What goals are we moving toward? This is where the more challenging questions reside. Are we maximizing shareholder value? Are we preserving employee dignity? Are we maintaining our competitive position? Are we democratizing expertise? When these goals conflict, and they almost certainly will, whose interests take priority?

The mistake many organizations make is treating velocity purely as a speed problem. They ask, “How do we move faster?” when they should first be asking, “Should we move faster here, and if so, toward what end?”


The trade-offs you can’t avoid

There isn’t always a win-win. The consulting promise that AI will simultaneously maximize efficiency, well-being, and competitive advantage is often false; when you pretend otherwise, you still make trade-offs – you just make them by default rather than by choice.

  • Your finance team could use AI to automate 80% of accounts payable (AP) work, dramatically improving efficiency. But those junior AP roles are how people learn the business, develop judgment about exceptions, and build relationships across the organization. Eliminate them, and you’ve saved money today, but you may be destroying your talent pipeline for tomorrow.
  • Your customer service AI could resolve most inquiries faster and cheaper than humans, but speed isn’t all that customers in distress need. Instead, they need empathy, judgment, and someone who can break the rules when the rules are wrong. If you optimize for efficiency and solve the wrong problem, you can degrade both loyalty and brand trust.

Similarly, your organization could implement AI-powered performance monitoring that identifies struggling employees earlier and more accurately than managers ever could. The system might be technically superior yet practically destructive if people experience it as surveillance rather than support.

These aren’t edge cases or science fiction. These are the daily decisions CHROs face, and they require explicit choices about what matters most when everything can’t be maximized simultaneously.


A framework for trade-off decisions

To make these trade-offs more concrete, evaluate each AI adoption decision through three lenses:

  • Time horizon. What happens at three months, three years, and 10 years? Automating junior roles might deliver clear efficiency gains in the next quarter while creating a capability crisis further down the line.
    Ask: What looks good this quarter but leaves us weaker in the future?
  • Stakeholder impact. Map consequences for current and future employees, customers, shareholders, and society. If benefits and harms are unevenly distributed, you need an explicit stance on whose interests take precedence.
    Ask: Who pays the cost of this efficiency gain?
  • Reversibility. If this turns out to be wrong, how hard is it to undo? You can switch vendors or tweak workflows relatively easily. You cannot quickly rebuild trust, culture, or lost expertise.
    Ask: If this destroys trust or capability, what would it take to rebuild it – and how long would it take?

When evaluating any AI use case, score it on each dimension. High-stakes, hard-to-reverse decisions that affect multiple stakeholders demand more deliberation and more explicit value judgments, regardless of competitive pressure.

When to use the gas and when to use the brake

Once you are clear on direction, speed decisions follow. The gas and brake are not just about pace; they are tactical tools for maintaining your chosen direction. Sometimes moving fast is the most values-consistent choice. Sometimes deliberate slowness is.

You should accelerate when speed creates broadly shared benefits, rather than merely extracting value. That includes:

  • Democratizing expertise that was previously reserved for a small group of executives.
  • Eliminating genuine drudgery and freeing people for work that uses judgment, creativity, and relationships.
  • Enabling earlier detection of burnout, disengagement, or conflict so you can intervene before problems escalate.

You should brake when speed destroys what cannot be rebuilt. That includes:

  • Maintaining junior roles even when AI can technically perform much of the work; practical wisdom develops through lived experience, not just training modules.
  • Giving people time to understand what is happening and why, because trust and legitimacy, once lost, are extraordinarily costly and slow to restore.
  • Preserving human expertise in systems that are complex, safety-critical, or not fully understood, even if AI performs better on average. When edge cases arise, you will need people who can exercise independent judgment, not operators who have forgotten how to do so.

Braking decisions are harder to defend because they amount to deliberate inefficiency in cultures that reflexively worship efficiency. Yet deliberate inefficiency is sometimes the responsible choice. The task is not to eliminate inefficiency but to distinguish the inefficiency that preserves what matters from the inefficiency that is simply waste.


What this means in practice

The organizational model described in the companion piece – radical federalism with budget autonomy, outcome accountability, and local iteration speed – only works if you have first answered the directional questions.

Before you give departments autonomy to configure their human–AI mix, you need clarity on what outcomes they are accountable for. Is payroll optimizing purely for cost reduction, or are they also responsible for maintaining capability development? Is marketing free to eliminate all junior roles, or must they preserve a pathway for future creative directors to learn the craft?

Before you let teams iterate at their own pace, you need explicit guidelines on when to apply the brake. Which capabilities are so critical that you will maintain human expertise even when it looks inefficient? Which transitions require slower rollouts and more communication because trust is at stake, not just productivity?

Before you celebrate the manager who delivers the same outputs with sophisticated AI and two people instead of 20, you need to ask: What happened to skills development in that department? Where will future leaders come from if we have eliminated the roles in which they used to learn? Are we optimizing for this quarter’s efficiency at the expense of the next decade’s capability?

In practical terms, this means the CHRO’s role in AI adoption has at least three non-delegable elements:

  • Define non-negotiable outcomes. Articulate what must be preserved, even when inefficient – such as trust, development pipelines, and certain forms of human judgment.
  • Codify braking conditions. Specify where local autonomy is constrained by central values: which roles, processes, and capabilities cannot be fully automated without senior review.
  • Align incentives. Ensure managers are not punished for protecting long-term capability and trust simply because they declined the fastest or cheapest option.

Without this work, radical federalism degenerates into unmanaged fragmentation. With it, local experimentation happens inside a clear, value-informed frame.


The honest work ahead

There are hard constraints you cannot wish away.

  • You cannot eliminate velocity mismatches; the speed gap between technology, organizations, and people is structural, not a temporary phase.
  • You cannot avoid hard trade-offs. Efficiency and well-being will sometimes conflict, and promising otherwise simply means you are making choices without admitting them.
  • You cannot escape pressure to move faster than responsibility allows. Competitors are experimenting, boards are asking questions, and the technology is improving regardless of your readiness.

What you can do is refuse to let these choices be made by default. You can insist that AI decisions are explainable, defensible, and aligned with what your organization claims to stand for. You can protect the conditions for long-term capability and trust, even when that means defending deliberate inefficiency in the short term.

Radical federalism provides the how – a way to push decisions to the edge while maintaining minimal coherence at the center. But before any organizational model can work, you must answer the why and the what for. That is the uncomfortable, essential work that only CHROs and their peers in the C-suite and the board can do.

Authors

Michael Yaziji

Michael Yaziji is an award-winning author whose work spans leadership and strategy. He is recognized as a world-leading expert on non-market strategy and NGO-corporate relations and has a particular interest in ethical questions facing business leaders. His research includes the world’s largest survey on psychological drivers, psychological safety, and organizational performance and explores how human biases and self-deception can impact decision making and how they can be mitigated. At IMD, he is the co-Director of the Stakeholder Management for Boards training program.
