Four imperatives to help demystify AI use cases

Published 8 April 2025 in Artificial Intelligence • 12 min read

A good AI use case results from a ‘matching exercise’ where value is found at the intersection of data sets and business problems and opportunities. This can be hard to achieve, but these guidelines will help you ask the right questions and avoid common pitfalls.

With AI fever in overdrive, everyone is searching for winning AI use cases – business applications that provide competitive insights or productivity breakthroughs to improve performance. However, the process can feel like solving a giant jigsaw puzzle without a picture on the box: a lot of trial and error is needed, probably along with substantial investments in technology and capabilities.

What constitutes a use case, however, is far from clear. A business executive told us it was “an industry-specific application of AI tools to increase efficiency or improve revenue.” In contrast, a technology vendor described it as “a proven application of our AI technology and competencies that has been successfully deployed in several customer environments.” Perspective is important.

Our research has led us to conclude that a good AI use case results from a “matching exercise” where value is found at the intersection of data sets and business problems/opportunities. That’s not easy. Many companies we interviewed still struggle with data quality, readiness, and aggregation. On the business side, problems are hard to describe: the context and content are specific, and both change over time. We often observed a “language gap” between business executives and their data science counterparts. When designing AI use cases, language matters.

So, what’s the answer to designing good AI use cases? The matching process between a dataset and a business problem/opportunity is rarely a one-off. It is highly iterative, progresses with learning, and takes time. There are four imperatives to follow when designing an AI use case. They may not guarantee that your AI implementation will be 100% successful, but at least at the design stage of your use case, they will help you avoid some common pitfalls.

Imperative 1: Match your problem/opportunity with the right type of AI initiative

As we said, language matters, and we found overlapping definitions of the types of AI initiatives undertaken within organizations. AI initiatives differ in length, complexity, level of uncertainty and risk, and outcome, so it pays to be clear at the outset.

AI experiments are small-scale, time-bound activities to test a hypothesis or explore a specific question. The goal is to validate (or invalidate) the original assumption without heavy investment. An example could be testing a machine-learning algorithm to gauge whether it can detect fraud patterns in historical data. If the result is positive, the outcome of an experiment should be to proceed to a more structured initiative such as a proof of concept or a pilot.
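
As an illustration of how small such an experiment can be, here is a minimal sketch of a fraud-detection test on historical data; the file name, feature columns, and model choice are assumptions for illustration only, not a prescribed method.

```python
# Minimal sketch of an AI experiment (illustrative only): can a simple model
# detect known fraud patterns in historical data? File name and columns are
# hypothetical assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

transactions = pd.read_csv("historical_transactions.csv")  # hypothetical dataset
features = transactions[["amount", "hour_of_day", "merchant_category", "country_code"]]
features = pd.get_dummies(features, columns=["merchant_category", "country_code"])
labels = transactions["is_fraud"]  # known outcomes from past investigations

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, stratify=labels, random_state=42
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# A clearly-better-than-chance AUC would support moving on to a POC or pilot.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.2f}")
```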

AI proofs of concept (POCs) or pilots are focused initiatives to prove the feasibility of an AI application under controlled conditions. They require more time than an experiment and usually involve a subset of real data and testing with operational systems (for example, testing whether an AI chatbot can accurately answer customer support queries using a small dataset). If technical feasibility is proven, the outcome is usually to validate performance, usability, and scalability before putting the system into operation.

AI projects are structured and well-defined efforts that follow a clear methodology (e.g., agile development). They take months or even years, depending on their complexity. An example would be an industrial company deciding to roll out a company-wide AI-driven predictive maintenance system.

Where do AI use cases fit in?

AI use cases are specific scenarios or problems where AI is applied to validate business-oriented opportunities and create value from AI deployment. Use cases are the starting point and guide the direction of experiments, pilots, and POCs. They provide the context and criteria against which these initiatives are designed and evaluated. Developing successful AI initiatives is a highly iterative process. Use cases guide the matching exercise between a dataset and a business problem/opportunity and usually lead to an experiment. Experiments test the hypotheses that underpin the use case. Once validated, experiments lead to POCs and, in turn, successful POCs lead to scaled pilots. Pilots inform the broader deployment strategy, and the successful ones become full-blown AI projects operationally deployed across the enterprise. The successful outcome of a use case is, therefore, a full-scale AI project.

The business context should drive the development of an AI use case – for example, when a new, potentially transformative technology becomes available (e.g., GenAI) or when a larger business case is required to justify a potentially costly transformation. The business needs a measurable validation of outcomes from a limited scope to secure funding. Use cases help define where the real value pools are in the organization and steer the implementation of AI strategies.

What criteria make a successful AI use case?

In our research, we found that successful AI use cases display specific characteristics, including:

  • An iterative matching between a business problem/opportunity and a given data set and/or AI model.
  • A way to test and validate remaining hypotheses and assumptions of where value can be created.
  • In most cases, a focus on industry and specific domains.
  • A medium-term timeline, e.g., three to nine months, with a specific cut-off point for a go/no-go decision.
  • Clear milestones and KPIs to measure the expected outcome.
  • Championing by a senior executive who is accountable for success and can become an advocate for the scaling phase.

For example, in our interviews, a pharma industry executive described a machine-learning use case aimed at optimizing clinical trial site selection. A machine learning model was developed using historical trial data, patient demographics, and site performance metrics, iteratively matching the business challenge with relevant datasets. The project objective was to validate hypotheses about reducing site selection errors through retrospective and prospective testing over a six-month timeline. Clear milestones and KPIs, such as recruitment speed improvements and adherence to timelines, ensured measurable outcomes, with a midpoint decision determining whether to move to a scaled project. The initiative was championed by a senior executive accountable for trial success, ensuring alignment and advocacy for broader adoption of the use case if successful. The iterative, hypothesis-driven approach demonstrated the potential of this AI use case to deliver significant value to the organization and to be incorporated into operations company-wide.

Imperative 2: Define your matching dimensions

Common wisdom dictates that use cases should start with a business problem/opportunity and work back to the data required to solve it. With AI, it’s more “chicken and egg”: sometimes you start with a business problem/opportunity and sometimes with a data set. A common and elastic technology backbone is important but should never be the starting point. The content of a business problem/opportunity is often narrow, and the context matters. Good business problem definitions need to be specific, relevant, objective, and quantifiable (with AI, data will be at the core of the solution).

For example, a healthcare executive described a use case where the early business problem was defined as: “We want to leverage AI to make our hospital admission process more efficient.” The executive admitted that such a definition was unlikely to get the company far as it had no specific problem area, context, success metrics, or indication of the data source. The company iterated on the definition and restated it as: “We want to lower the rate of patient readmission by identifying individuals at high risk of returning within 30 days and ensuring proper follow-up care, with the objective of reducing readmission rates by 10% and improving patient outcomes.” This redefinition started a fruitful matching exercise. The team began by looking at electronic health records, patient demographics, treatment plans, and historical readmission data, and then applied an AI model on top of those datasets.
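
As a rough illustration of what applying an AI model on top of such datasets might look like, here is a minimal sketch of a 30-day readmission risk model; the file name, feature columns, and risk threshold are illustrative assumptions, not details from the interview.

```python
# Minimal, illustrative sketch of a 30-day readmission risk model.
# File name, columns, and the risk threshold are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

records = pd.read_csv("discharge_records.csv")  # hypothetical extract of EHR + demographics
X = records[["age", "num_prior_admissions", "length_of_stay",
             "num_medications", "has_chronic_condition"]]
y = records["readmitted_within_30_days"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Flag patients above an assumed risk threshold for proactive follow-up care.
risk = model.predict_proba(X_test)[:, 1]
high_risk = X_test[risk > 0.3]
print(f"{len(high_risk)} of {len(X_test)} discharged patients flagged for follow-up")
```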

Existing or accessible data sets can also be a good starting point. When powered with AI, useful patterns can be uncovered from data sets that can lead to assumptions or insights into a business problem/opportunity.

For instance, a credit card company was looking at potential applications of AI in credit card fraud. The company applied unsupervised machine learning to large volumes of transaction logs without a pre-defined question (context). The AI system uncovered a cluster of transactions originating from different merchant categories and regions that consistently showed suspicious timing anomalies and unusual card usage sequences (pattern identification). The pattern suggested the potential existence of a coordinated fraud ring operating across multiple merchants (hypothesis/insight). From this data insight, the company was able to define a use case and develop a targeted fraud detection model to proactively flag and block these sophisticated attack vectors.
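
The executive did not specify the technique, but a minimal sketch of this kind of unsupervised pattern discovery might look like the following; the file name, feature choices, and clustering parameters are illustrative assumptions.

```python
# Illustrative sketch of unsupervised pattern discovery in transaction logs.
# File name, feature choices, and DBSCAN parameters are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

logs = pd.read_csv("transaction_logs.csv")  # hypothetical export of card transactions

# Cluster on behavioural features only (amount, timing between card uses,
# position in the day's usage sequence), with no pre-defined question.
behaviour = logs[["amount", "seconds_since_prev_txn", "txn_index_in_day"]]
scaled = StandardScaler().fit_transform(behaviour)
logs["cluster"] = DBSCAN(eps=0.5, min_samples=25).fit_predict(scaled)

# Dense clusters (label >= 0) that span many merchant categories and regions
# with unusual timing are candidates for the kind of coordinated-fraud
# hypothesis described above.
spread = (
    logs[logs["cluster"] >= 0]
    .groupby("cluster")[["merchant_category", "region"]]
    .nunique()
)
print(spread.sort_values("merchant_category", ascending=False).head())
```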

Unfortunately, matching datasets and business problems/opportunities does not work like matching individuals on a dating site. Both the business side and the data side have dynamic characteristics and evolve. Datasets are not static: they exhibit complementarities where the value of the data increases when meshed with other data with complementary attributes (e.g., the weather conditions in which a machine is used). Equally, business problems/opportunities evolve as economic conditions, markets, competition, and customer needs and behaviors change (e.g., increasingly health-conscious consumers seeking organic foods, transparency in sourcing, nutritional data, and sustainable production methods).

In addition, both datasets and business problems/opportunities have knowns and unknowns that will need to be identified. For example, a dataset covers a specific timeframe, but data patterns might change if we look further into our historical archives. A business opportunity might be based on today’s privacy regulatory environment, but regulatory changes may affect its feasibility.

So, under those circumstances, how do we start the matching exercise?

First, the business problem and the data need to be assessed and qualified regardless of the starting point. The key criteria for a business problem/opportunity are its feasibility (can we deliver the outcome?) and its impact (will the outcome substantially affect performance?). The key criteria for the data side are findability (can we find the reliable data needed to effectively inform the decision or action?) and accessibility (can we economically access the data?). Second, you will need to iterate to properly qualify your matching dimensions. For example, could the feasibility of a business problem be improved if we were to change or adapt our workflow? Or could we find proxy data or public data that would inform the decision with a sufficiently high confidence level?
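
To make these four criteria concrete, here is a minimal, purely illustrative sketch of how a team might record candidate use cases and spot where to iterate first; the 1-5 scoring scale and the example entries are assumptions, not part of our research.

```python
# Illustrative sketch (assumptions: a 1-5 scoring scale and example entries);
# the article does not prescribe a specific scoring method.
from dataclasses import dataclass

@dataclass
class CandidateUseCase:
    name: str
    feasibility: int    # Can we deliver the outcome? (1-5)
    impact: int         # Will the outcome substantially affect performance? (1-5)
    findability: int    # Can we find reliable data to inform the decision? (1-5)
    accessibility: int  # Can we economically access the data? (1-5)

    def weakest_dimension(self) -> str:
        scores = {
            "feasibility": self.feasibility,
            "impact": self.impact,
            "findability": self.findability,
            "accessibility": self.accessibility,
        }
        return min(scores, key=scores.get)

candidates = [
    CandidateUseCase("Reduce 30-day readmissions", 4, 5, 3, 4),
    CandidateUseCase("Detect coordinated card fraud", 3, 4, 5, 3),
]
for c in candidates:
    # The weakest dimension is where the next iteration should focus, e.g.,
    # adapting a workflow (feasibility) or finding proxy data (findability).
    print(f"{c.name}: iterate on '{c.weakest_dimension()}' first")
```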

A deep understanding of your matching dimensions is critical to setting up the use case and will increase your chances of success.

Imperative 3: Iterate your matching exercise

Once you’re clear on the data and the business dimensions that will define your AI use case, you can start matching and iterating. Don’t have business teams defining business problems/opportunities and then passing them on to the data science team (or vice versa). Build joint teams from the start. Pairing data science with deep domain and process knowledge drives better results.

The matching exercise is built around three phases:

  1. Assumptions. Clearly articulate the underlying assumptions for both dimensions. For the business problem, this can be a list of conditions under which the business problem occurs (What would need to happen?). For the data side, it can be a first set of variables that characterize the dataset and what other data sources may be used to complement the AI use case. Ideally, both sets of assumptions will already hint at a solution path: Can the business problem be solved with the available data, and with what AI model?
  2. Validation. This phase is about increasing the confidence level of the business and the data assumptions. Validating the business problem/opportunity usually involves in-depth qualitative interviews with key process owners or business stakeholders. For the data side, it is about ensuring the datasets exist, are accessible, and have sufficient quality to support the use case.
  3. Insight. This phase is where the close matching between the two dimensions occurs. The mix of data science and domain knowledge expertise will drive the insights and the value. Can we sustainably solve the problem or exploit the opportunity with this matching construct? Do we need organizational and workflow adjustments to solve the business side? Do we need more/different data to increase the confidence level? Do we have the right AI model/algorithm? The answers to these questions will drive the feedback loop and the level of iteration required. Value drivers become clearer as we refine the matching, allowing for a first outline of the business benefits that the use case can drive.

Graph 1: Iterative framework for AI use case

The use-case execution process should have a defined end. A common reason for failure is when organizations let use cases morph into other forms of experimentation or other versions of project-related work. Set a timeframe, clear metrics, and a well-defined outcome. Organizational learning is key to further AI progress.

Imperative 4: Plan for scaling your AI use cases early

Use cases should be essential components of your AI strategy. The successful ones will increase your certainty about where the real AI-driven business value is in your business. Alone, however, they will not drive ROI. The business case is realized when the application is scaled, not during the use-case or pilot phase. Early on, you should ask the following questions to ensure the transition from a successful use case to a transformative project:

  • Will the scaled use case contribute significantly to our strategic objectives (e.g., customer experience, operational excellence, employee experience, business model, etc.)?
  • Is the use-case solving a repeatable business issue (i.e., not a one-off) worth solving with a long-term AI solution?
  • Do we have the resources and budget capacity to scale? (e.g., financial planning, vendor and partner management, etc.)
  • Do we have a scalable technical foundation? (e.g., modular and cloud-native architectures, machine learning ops practices, the robustness of the data pipeline, etc.) What underlying standards across the use case portfolio should be applied (common data and technology layer)?
  • Do we have high-quality and sustainable data sources for the long term? (e.g., data governance and quality controls, flexible data partnerships, etc.)
  • Can we manage security, compliance, and ethics at scale? (e.g., regulations, data privacy, ethical guardrails, etc.)
  • Do we have the skills and organizational readiness to deploy at scale? (e.g., cross-functional collaboration, talent, and training plans)
  • Will there be clear budgetary responsibility and a business owner who will be accountable for embedding the AI solution into the business process landscape?

Most organizations run multiple use cases in parallel. This is fine if it follows certain rules and has clear internal governance. Focus your AI strategy on a portfolio of use cases enabled by a common technology and data backbone.

When scalability is treated early as a core objective rather than an afterthought, AI use cases can transition from exploration to production and pave the way for AI deployment that will deliver meaningful business value over the long run.

Business problems/opportunities and datasets can be a marriage made in heaven for your AI strategy. Successful use cases are at the heart of finding business value. But, as we’ve seen, it is a long, iterative, organizationally complex, and structured journey. As with Spotify’s recommendation algorithm, the more your “organizational algorithm” exercises the use-case iteration muscle, the faster and better the matching will be. Leadership, business transformation, and accountability matter to success. To paraphrase Steve Jobs: “If you look closely, most AI overnight successes took a long time.”

Authors

Didier Bonnet

Professor of Strategy and Digital Transformation

Didier Bonnet is Professor of Strategy and Digital Transformation at IMD and program co-director for Digital Transformation in Practice (DTIP). He also teaches strategy and digital transformation in several open programs such as Leading Digital Business Transformation (LDBT), Digital Execution (DE) and Digital Transformation for Boards (DTB). He has more than 30 years’ experience in strategy development and business transformation for a range of global clients.

Achim Plueckebaum

Achim Plueckebaum is an Executive-in-Residence at IMD. He is a global, entrepreneurial senior executive with strong experience in the life sciences industry, combining a highly successful CIO and business-leader digital/data career track, with additional experience in management and startup consulting and finance/M&A. Achim holds a master’s degree in information systems from Stevens Institute of Technology, USA and an MBA from the University of Giessen, Germany, and Napier University, Edinburgh, Scotland.
