

by Faisal Hoque, Pranay Sanklecha, and Paul Scade • Published April 1, 2026 in Artificial Intelligence • 10 min read
In 2023, an investigation by ProPublica revealed that Cigna, one of America’s largest health insurers, had built a system that denied certain insurance claims without any human review of the patient files. First, an algorithm flagged mismatches between patient diagnoses and a list of approved procedures. When a patient’s doctor had ordered a procedure that was not on the approved list, these claims were routed for denial. Cigna’s medical directors – physicians employed specifically to exercise their clinical judgment by reviewing claims – signed off on the algorithm’s decisions in batches. One doctor denied over 60,000 claims in a single month. On average, physicians spent just 1.2 seconds on each case. “We literally click and submit,” one former Cigna doctor told ProPublica. “It takes all of 10 seconds to do 50 at a time.”
The goal of the system was clear. Paying humans to carefully assess and judge claims is expensive. Allowing an algorithm to make the decision instead is much faster and brings with it significant cost savings. Once the company believed the algorithm could do the job effectively, the only reason for retaining humans in the decision-making loop was regulatory compliance.
The logic behind Cigna’s system is one that many companies would happily apply to their entire middle management layer: replace the expensive humans with machines that can make the same rule-based judgments at a far lower cost. This is not a hypothetical threat. Gartner estimates that half of middle management positions could disappear at many companies by the end of this year as AI is deployed more widely. Many companies are now looking to flatten their hierarchies as they explore AI-driven automation, and middle management roles already account for a growing share of white-collar layoffs.
The reason middle managers seem so vulnerable to algorithmic replacement is that their roles are often viewed through a mechanistic lens. There is a long-standing distinction in management thinking between leadership and management. On this view, leaders make choices that drive change: they set direction, define strategy, and determine what the organization should become. Managers, by contrast, are responsible for the systematic execution of those choices. Their job is to take what leadership has decided and make it happen – translating strategy into operations, coordinating resources, ensuring compliance.
However one judges the distinction between leadership and management, there is no doubt that many management functions are conceived of as essentially mechanical. It is easy to think of the ideal manager as pursuing the perfectly optimal path toward a pre-determined goal. And if this is indeed how the management function works, then the conclusion seems inescapable: to the extent that algorithmic systems can execute tasks more quickly, consistently, and cheaply than humans, the human manager becomes redundant.
This view is profoundly misguided – and not merely as a matter of theory. The reduction of management to the optimally efficient performance of mechanical tasks is ethically dangerous. Moreover, it is frequently bad for business. The middle management role often involves making the kind of judgment that cannot be performed by an algorithm. These judgments carry distinctive ethical responsibilities and leaders who fail to recognize this risk hollowing out both the effectiveness and the moral integrity of their organizations.

“Many managerial decisions are not optimization problems; they are judgment problems.”
Many managerial decisions are not optimization problems; they are judgment problems. The view that management is a mechanical activity – one that can be performed by algorithmic systems without meaningful loss – rests on an assumption that is false. This is the assumption that the optimal execution of strategy is a fully determined process that can, in principle, be specified completely in advance. There are at least two reasons why this assumption fails, one practical and one ethical.
The practical reason is that many management decisions involve weighing considerations that cannot be measured precisely or objectively and that cannot be compared to one another using some universal scoring system. In these situations, humans use their faculty of judgment. When a manager decides how to deliver critical feedback to an employee, they are balancing a highly complex web of interrelated considerations: the employee’s need to hear the truth, their emotional state that day, the relationship the manager hopes to maintain, the signal the conversation sends to the rest of the team, and a whole host of other factors. There is no formula for making such a decision. It is not a measurement problem that better data could solve; it is intrinsic to the situation that such factors must be weighed through judgment, not calculation.
The ethical reason stems from a basic fact: management decisions frequently have human consequences. When a customer service policy affects whether a vulnerable person receives help, when a staffing decision determines whether someone keeps their job, or when an algorithmic recommendation is applied to a real human case – these are not merely operational matters. They affect the lives and welfare of real human beings.
As such, these decisions fall into the realm of ethics – and ethical choices cannot and should not be outsourced to algorithms. They must remain the privilege and the responsibility of human beings. As an IBM training manual put it nearly 50 years ago: “A computer can never be held accountable, therefore a computer must never make a management decision.” This principle extends to any decision that affects human lives.
This point applies even to those who believe, as Milton Friedman did, that a corporation should be bound only by legal constraints and some minimal, poorly defined notion of ‘ethical custom.’ Ultimately, it doesn’t matter whether you think your company has moral obligations that extend beyond the most basic – what matters is that your customers, employees, and regulators do, and they will act accordingly if you fail to meet them. The algorithm cannot anticipate every situation in which applying its logic will provoke public outrage, regulatory scrutiny, or legal liability. Human judgment is required not only to do the right thing, but also to recognize when doing the wrong thing will be costly – and those two considerations are not always separable in practice.
The fact that middle management has an unavoidably ethical dimension means that it is crucial to understand the ethical obligations that come with the role.
Some of the ethical obligations that middle managers have apply to them because they apply to all humans: don’t lie, don’t steal, don’t harm others unnecessarily. These are clearly important, but they are also relatively straightforward to specify. We will simply note that middle managers have these duties, and then turn to the more interesting question of the special duties middle managers have by virtue of the position they occupy.
What are special duties, and how do they differ from the general duties of all human beings? Consider the example of a judge. While in the courtroom, she has a duty to be impartial; once she is at home, her role as a mother demands partiality and preference for the interests of her children. The special duties flow from the position itself: the role creates the duty.
Just like judges or parents, middle managers have certain special duties that arise from their specific roles. We can think of those duties in terms of three broad categories.
The thread that connects the special duties of middle management may be summed up in one word: agency. Much more than the traditional view realizes, and perhaps much more than even many middle managers realize, effective and ethical middle management requires independent thought, judgment, and action – even when the easier option would be to do nothing.
Leaders must recognize that middle managers are not mechanical executors of strategy. They are co-creators of it. Middle management is the layer where abstract principles become concrete action, and where the organization’s conscience resides. If leaders design systems that treat managers as button-pushers – optimizing for speed and mechanical obedience to rules above all else – they will hollow out both the effectiveness and the ethical integrity of the organization. Instead, leaders must design systems that support and enhance the agency of middle management.
Concretely, this means:
When leaders fail to treat middle management in this way, change efforts are highly unlikely to succeed. Organizational change requires middle managers to do more than implement new processes – it requires them to interpret the changes needed and apply them in a thousand particular situations, to adapt the new processes to local realities as they emerge minute by minute, and to bring their teams along with them. Middle managers who are treated as cogs lose the adaptive capacity that change requires.
The pressure to automate the middle layer will only intensify as AI systems grow more capable – which makes it all the more important that leaders understand what is at stake.
“The organizations that answer these questions honestly – and design accordingly – will be the ones that harness AI’s power without losing their moral compass.”
AI will transform middle management – and some of that transformation is overdue. There is no reason to preserve human involvement in tasks that are genuinely mechanical, and no virtue in retaining inefficiency in the name of job preservation. But leaders must make hard-nosed distinctions between the parts of the middle management function that can be handed to algorithms and the parts that cannot. As machines take over the routine – the scheduling, the processing, the pattern-matching – what remains is precisely what matters most: the judgment that completes strategy, the contextual sensitivity that reads shifting situations, the conscience that asks whether what can be done should be done.
Every organization deploying AI must ask: are we building systems that preserve the space for judgment, or are we engineering it out? Are our middle managers genuinely in the loop, or are they simply there to absorb blame when something goes wrong? The organizations that answer these questions honestly – and design accordingly – will be the ones that harness AI’s power without losing their moral compass. The ones that don’t will learn, as Cigna did, that humans in the loop are only as good as the loop allows them to be.

Executive Fellow at IMD and founder of SHADOKA and NextChapter
Faisal Hoque is a transformation and innovation leader with over 30 years of experience driving sustainable innovation, growth, and transformation for global organizations, including Mastercard, American Express, GE, PepsiCo, JPMorgan Chase, IBM, Northrop Grumman, the US Department of Defense, and the Department of Homeland Security. He is the founder of SHADOKA and NextChapter, among other companies, and is a three-time winner of Deloitte’s Technology Fast 50 and Fast 500 awards. Hoque is a best-selling and award-winning author of 11 books, including the USA Today and LA Times bestsellers Reimagining Government (2026) and Transcend (2025), a Financial Times book of the month that was named a “must-read” by the Next Big Idea Club. His 2023 book Reinvent was published in association with IMD and became a #1 Wall Street Journal bestseller. His research and thought leadership have been recognized globally; he also serves as a judge for MIT’s IDEAS Social Innovation Program.

Founder of The Philosophy Practice and partner at SHADOKA
Pranay Sanklecha is a philosopher, writer, and management consultant focusing on the intersection of technology, ethics, and practical leadership. Formerly an academic philosopher at the University of Graz, Sanklecha’s research on intergenerational justice includes a book published with Cambridge University Press. He now works with businesses to design and implement philosophy-led frameworks that deliver practical value. He is the founder of The Philosophy Practice and a partner at SHADOKA.

Honorary Fellow at the University of Liverpool and a partner at SHADOKA
Paul Scade is an historian of ideas and an innovation and transformation consultant. His academic work focuses on leadership, psychology, and philosophy, and his research has been published by world-leading presses, including Oxford University Press and Cambridge University Press. As a consultant, Scade works with C-suite executives to help them refine and communicate their ideas, advising on strategy, systems design, and storytelling. He is an Honorary Fellow at the University of Liverpool and a partner at SHADOKA.
