

by Michael Yaziji • Published March 5, 2026 in Artificial Intelligence • 8 min read
In a companion piece, I argued for radical federalism as an organizational model for AI adoption. This approach pushes decisions to the edge while maintaining minimal central coordination. The article addressed how to manage AI’s velocity problem. But there’s a prior question that determines whether any organizational system succeeds or fails: What are we optimizing for?
This isn’t purely academic philosophy. It’s the most practical question facing CHROs today: if you get the structure right but the values wrong, you’ve built an efficient machine that moves fast in harmful directions.

As a CHRO at a major organization, your velocity decisions shape your workforce directly and influence industry practices more broadly. When you decide how fast to adopt AI, you’re determining displacement rates, adaptation timeframes, and market signals about responsible adoption. When you decide what direction to move, you’re choosing what kind of organization to build and what work means in your context.
You don’t make these choices alone, but you do shape the menu of options. Your job is to surface the trade-offs to the CEO and board, propose a stance, and ensure that people and capability implications are treated as first-order strategic issues rather than afterthoughts.
These decisions come with trade-offs, not clear right answers.
The velocity problem I described previously has two dimensions that are often conflated but need to be managed separately: speed and direction.
Speed is about pace. How fast do we move? This is where the mismatch problem becomes concrete. AI capabilities advance every three to nine months with new models and vendors. Your organizational systems operate on three- to five-year cycles. So, by the time you’ve assessed impact, established guardrails, trained people, and adapted processes, the technology has shifted multiple times. And your people operate on highly varied timelines, needing time to develop new mental models, build new skills, and process anxiety about what augmentation really means when it starts to look like replacement.
Direction is about purpose. What goals are we moving toward? This is where the more challenging questions reside. Are we maximizing shareholder value? Are we preserving employee dignity? Are we maintaining our competitive position? Are we democratizing expertise? When these goals conflict, and they almost certainly will, whose interests take priority?
The mistake many organizations make is treating velocity purely as a speed problem. They ask, “How do we move faster?” when they should first be asking, “Should we move faster here, and if so, toward what end?”

“Your customer service AI could resolve most inquiries faster and cheaper than humans. But speed isn’t all that customers in distress actually need.”
There isn’t always a win-win. The consulting promise that AI will simultaneously maximize efficiency, well-being, and competitive advantage is often false; when you pretend otherwise, you still make trade-offs – you just make them by default rather than by choice.
Similarly, your organization could implement AI-powered performance monitoring that identifies struggling employees earlier and more accurately than managers ever could. The system might be technically superior yet practically destructive if people experience it as surveillance rather than support.
These aren’t edge cases or science fiction. These are the daily decisions CHROs face, and they require explicit choices about what matters most when everything can’t be maximized simultaneously.
High-stakes, hard-to-reverse decisions that affect multiple stakeholders demand more deliberation and more explicit value judgments, regardless of competitive pressure.
To make these trade-offs more concrete, evaluate each AI adoption decision through three lenses: how high the stakes are, how hard the decision is to reverse, and how many stakeholders it affects.
When evaluating any AI use case, score it on each dimension. The higher a use case scores across all three lenses, the more deliberation and the more explicit value judgment it demands, regardless of competitive pressure.
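For teams that want to operationalize this triage, the three lenses can be sketched as a simple scoring rubric. This is a minimal illustration, not a prescribed instrument: the 1–5 scale, the additive scoring, and the threshold below are all assumptions you would calibrate to your own context.

```python
from dataclasses import dataclass


@dataclass
class AIUseCase:
    """An AI adoption decision, scored 1 (low) to 5 (high) on each lens."""
    name: str
    stakes: int                # severity of harm if the decision goes wrong
    irreversibility: int       # difficulty of rolling the decision back
    stakeholder_breadth: int   # how many groups the decision affects

    def deliberation_score(self) -> int:
        # Simple unweighted sum; equal weighting is an assumption.
        return self.stakes + self.irreversibility + self.stakeholder_breadth

    def needs_explicit_value_judgment(self, threshold: int = 10) -> bool:
        # High-stakes, hard-to-reverse, multi-stakeholder decisions demand
        # more deliberation, regardless of competitive pressure.
        return self.deliberation_score() >= threshold


# Example: the performance-monitoring scenario from the article,
# with illustrative scores.
monitoring = AIUseCase(
    name="AI-powered performance monitoring",
    stakes=4,
    irreversibility=3,
    stakeholder_breadth=5,
)
print(monitoring.needs_explicit_value_judgment())  # prints True (12 >= 10)
```

The value of even a crude rubric like this is that it forces the trade-off conversation to happen before deployment, rather than by default afterward.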
Once you are clear on direction, speed decisions follow. The gas and brake are not just about pace; they are tactical tools for maintaining your chosen direction. Sometimes moving fast is the most values-consistent choice. Sometimes deliberate slowness is.
You should accelerate when speed creates broadly shared benefits rather than merely extracting value.
You should brake when speed destroys what cannot be rebuilt.
Braking decisions are harder to defend because they amount to deliberate inefficiency in cultures that reflexively worship efficiency. Yet deliberate inefficiency is sometimes the responsible choice. The task is not to eliminate inefficiency but to distinguish the inefficiency that preserves what matters from the inefficiency that is simply waste.

The organizational model described in the companion piece – radical federalism with budget autonomy, outcome accountability, and local iteration speed – only works if you have first answered the directional questions.
Before you give departments autonomy to configure their human–AI mix, you need clarity on what outcomes they are accountable for. Is payroll optimizing purely for cost reduction, or are they also responsible for maintaining capability development? Is marketing free to eliminate all junior roles, or must they preserve a pathway for future creative directors to learn the craft?
Before you let teams iterate at their own pace, you need explicit guidelines on when to apply the brake. Which capabilities are so critical that you will maintain human expertise even when it looks inefficient? Which transitions require slower rollouts and more communication because trust is at stake, not just productivity?
Before you celebrate the manager who delivers the same outputs with sophisticated AI and two people instead of 20, you need to ask: What happened to skills development in that department? Where will future leaders come from if we have eliminated the roles in which they used to learn? Are we optimizing for this quarter’s efficiency at the expense of the next decade’s capability?
In practical terms, this means the CHRO’s role in AI adoption has at least three non-delegable elements: defining the outcomes departments are accountable for, setting explicit guidelines for when to brake, and protecting long-term capability development.
Without this work, radical federalism degenerates into unmanaged fragmentation. With it, local experimentation happens inside a clear, value-informed frame.
There are hard constraints you cannot wish away.
What you can do is refuse to let these choices be made by default. You can insist that AI decisions are explainable, defensible, and aligned with what your organization claims to stand for. You can protect the conditions for long-term capability and trust, even when that means defending deliberate inefficiency in the short term.
Radical federalism provides the how – a way to push decisions to the edge while maintaining minimal coherence at the center. But before any organizational model can work, you must answer the why and the what for. That is the uncomfortable, essential work that only CHROs and their peers in the C-suite and the board can do.

Michael Yaziji is an award-winning author whose work spans leadership and strategy. He is recognized as a world-leading expert on non-market strategy and NGO-corporate relations and has a particular interest in ethical questions facing business leaders. His research includes the world’s largest survey on psychological drivers, psychological safety, and organizational performance and explores how human biases and self-deception can impact decision making and how they can be mitigated. At IMD, he is the co-Director of the Stakeholder Management for Boards training program.
