
by José Parra Moyano • Published 10 March 2025 in Artificial Intelligence • 6 min read
CFOs are pivotal to businesses’ efforts to navigate the AI-led transformation of industries. With their enterprise-wide oversight and guiding hand in developing a financial strategy, CFOs are uniquely positioned to manage AI adoption, balancing innovation with measurable outcomes.
Generative AI (GenAI) – including the coming wave of AI agents – can optimize operations, enhance decision-making, and drive revenue growth. Barclays Investment Bank analysts estimate that AI agents could independently execute any of around seven billion tasks, enabling system-wide productivity gains.
However, realizing these benefits requires more than simple technical implementation. CFOs must ensure that AI initiatives align with overarching business goals, avoiding divergent, siloed efforts and driving competitive advantage. If finance leaders can approach implementation strategically, AI has the potential to deliver significant returns on investment.
But many organizations are struggling to turn AI’s promise into meaningful results. A large share of AI projects underperform against their objectives – not just traditional AI, but also GenAI, and likely the AI agents set to emerge this year.
To navigate these complexities, organizations should focus on three essential dimensions of AI adoption: business value, data, and people. Together, these elements form a value-data-people framework, a structure for conceptualizing the critical questions that decision-makers must address. What value does the organization aim to create with AI? Does it have access to the required data? And how will employees and stakeholders perceive and adapt to the changes?
By keeping these considerations in focus, CFOs can better prioritize resources, mitigate risks, and increase the likelihood of long-term success.
The first dimension of the framework challenges organizations to articulate the value they intend to create with AI. While this may seem obvious, many struggle to provide a straightforward answer when asked about the specific problem they are trying to solve.
Rather than adopting AI for its own sake, successful organizations focus on solving measurable challenges – such as using algorithms to improve sales performance. Consider, for example, a salesperson who uses AI to predict which clients to approach and thereby increases annual revenue from $1m to $1.3m – a specific use case that demonstrates tangible business value.
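To make this kind of use case concrete, the sketch below shows how a simple client-scoring model might be prototyped. It is a minimal illustration on synthetic data: the features, figures, and choice of a logistic-regression model are assumptions for demonstration, not a description of any particular firm’s system.

```python
# Minimal, illustrative lead-scoring sketch on synthetic (assumed) data.
# Idea: rank clients by predicted likelihood of buying so a salesperson
# focuses outreach on the most promising accounts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features per client: past purchases, recent website visits, firm size.
n_clients = 1000
X = np.column_stack([
    rng.poisson(3, n_clients),        # past purchases
    rng.poisson(5, n_clients),        # recent website visits
    rng.normal(50, 15, n_clients),    # firm size (employees)
])
# Synthetic label: whether the client converted in the last sales cycle.
logits = 0.4 * X[:, 0] + 0.2 * X[:, 1] + 0.01 * X[:, 2] - 3.5
y = rng.random(n_clients) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score and rank clients: the sales team approaches the top of this list first.
scores = model.predict_proba(X_test)[:, 1]
top_clients = np.argsort(scores)[::-1][:10]
print("Ten most promising clients (indices):", top_clients)
print("Hold-out accuracy:", round(model.score(X_test, y_test), 3))
```

The point is not the specific model but the workflow: a measurable prediction (which clients to approach) tied to a measurable outcome (revenue per salesperson).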
Companies that thrive with AI adoption tend to take a focused, pragmatic approach. They prioritize solving specific, manageable problems, accumulating small wins, and avoiding costly failures. Incremental successes not only improve outcomes but also build internal momentum for more ambitious initiatives. The paradox here is that if CFOs focus on funding initiatives aimed at solving tangible problems, they will indirectly be contributing to the generation of knowledge within the organization. This approach could even spark the cultural evolution needed to utilize AI to tackle larger, more complex problems further down the line.
The second dimension of successful AI adoption focuses on data. Commentators often summarize this aspect with the mantra “garbage in, garbage out.” AI’s effectiveness depends on the quality and accessibility of data, yet organizations often lack the volume, diversity, or structure required for effective AI training. The key is to think about data in terms of access rather than just ownership: you may ‘own’ data yet be unable to use it for lack of consent, while other data you do not own may still be accessible to you.
Data collaboration platforms enable organizations to train AI models while safeguarding privacy. These systems work by sending algorithms to where the data is stored, rather than moving data into the organization to train the tool. This ensures personal information remains securely stored at its source without impeding analysis.
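A common pattern behind such platforms is federated learning, in which the model travels to each data holder, trains locally, and only the updated parameters – never the raw records – are returned and aggregated. The sketch below is a minimal illustration of federated averaging on synthetic data; the three parties, the linear model, and all numbers are assumptions for demonstration and are far simpler than a production platform.

```python
# Minimal federated-averaging sketch: the algorithm travels to the data,
# and only model weights (not raw records) are shared with the coordinator.
# Parties, data, and model are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_local_data(n, true_w):
    """Each party holds its own records; these never leave the party."""
    X = rng.normal(size=(n, len(true_w)))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, steps=20):
    """Run a few gradient steps on local data and return updated weights only."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

true_w = np.array([1.5, -2.0, 0.5])
parties = [make_local_data(200, true_w) for _ in range(3)]  # e.g., three hospitals

w_global = np.zeros(3)
for _ in range(10):
    # Each party refines the current global model on its private data ...
    local_weights = [local_update(w_global.copy(), X, y) for X, y in parties]
    # ... and the coordinator averages the returned weights (federated averaging).
    w_global = np.mean(local_weights, axis=0)

print("Federated estimate:", np.round(w_global, 2))
print("True coefficients :", true_w)
```

The design choice worth noting is that the coordinator only ever sees model weights, so personal records stay at their source while the shared model still benefits from every party’s data.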
Such platforms range from proprietary services offered by private companies to open-source solutions used by organizations or consortia. The widespread use of such platforms underlines the growing recognition of the value of securely tapping into shared or sensitive data. Importantly, these tools can address a critical challenge for AI development: the lack of high-quality training data. For instance, hospitals and pharmaceutical companies can collectively train algorithms to support enhanced diagnostics or treatments without sharing raw data.
In more complex B2B environments, where regulations or privacy concerns prevent companies from using customer data to train AI models, these platforms allow firms to train algorithms while rigorously respecting privacy, facilitating both compliance and innovation.
By maintaining privacy while enabling insights, data collaboration platforms are unlocking new possibilities across industries, from healthcare to autonomous vehicles, while navigating the growing complexities of data regulation.
The third dimension – people – often determines whether an AI initiative succeeds or fails. AI can be seen as a threat, particularly due to concerns about job displacement. This reflects wider public anxiety. A Pew Research Center study found growing concern about the role of AI in public life, with 52% of US respondents saying they feel more concerned than excited about the increased use of AI.
Organizations must address these fears head-on, emphasizing that AI can enhance human capabilities and empower people to do more. The key message is that even with AI in place, human expertise will remain essential, especially as new challenges emerge. It is important to manage how employees perceive this transition – if they see AI as a threat, they may resist or even undermine initiatives.
Successful AI initiatives focus on communication and change management, recognizing that the wrong perception of AI can significantly increase the risk of failure. CFOs and other executives must engage stakeholders early and often, gaining buy-in and ensuring alignment and trust throughout the transition process.
The value-data-people framework is intended to offer clear guidance for CFOs and other decision-makers tasked with navigating AI’s complexities. Before approving an AI initiative, CFOs should ask three key questions: What value does the initiative aim to create? Does the organization have access to the required data? And how will employees and stakeholders perceive and adapt to the change?
If no clear answers are forthcoming to these questions, organizations may need to rethink their approaches. Without the right data, the focus should shift to acquiring or gaining access to it rather than just pressing ahead. Likewise, if employees do not support the initiative, the organization must be prepared for a complex and costly change management process.
By implementing AI initiatives using a value-data-people framework, organizations can enhance their chances of success while mitigating risks. However, measuring impact is essential – successful organizations are those that systematically track outcomes. For CFOs, any major AI investment should include a clear plan and budget for ongoing evaluation. With the right strategy and a commitment to measuring impact, AI can become a powerful driver of productivity, innovation, and long-term growth.
Professor of Digital Strategy
José Parra Moyano is Professor of Digital Strategy. He focuses on the management and economics of data and privacy and how firms can create sustainable value in the digital economy. An award-winning teacher, he also founded his own successful startup, was appointed to the World Economic Forum’s Global Shapers Community of young people driving change, and was named on the Forbes ‘30 under 30’ list of outstanding young entrepreneurs in Switzerland. At IMD, he teaches in a variety of programs, such as the MBA and Strategic Finance programs, on the topics of AI, strategy, and innovation.