How to demonstrate the impact of executive education
In the wonderfully quirky BBC radio show The Hitchhiker's Guide to the Galaxy, a supercomputer called DEEP THOUGHT is created to find the answer to "the ultimate question of life, the universe and everything". After many millions of years of careful calculation, DEEP THOUGHT presents the answer to this monumental and frustratingly vague question as... forty-two.
No one understands what it means, of course, and so some philosophers who have built their careers on predicting DEEP THOUGHT's answer to the question of life, the universe and everything demand clarification. To which DEEP THOUGHT responds (and I paraphrase): "you won't understand the answer I've given until you really understand the question you've asked."
So it is with measuring the impact of executive education.
We are frequently asked how we measure the impact of what we do. The question once had a financial dimension to it, as if, like DEEP THOUGHT, we could provide a number that reflected a direct correlation between our work and a financial outcome. That is, "if you invest x amount in executive education you can expect y dollars/euros/yuan in return". Smart and targeted executive development should maximize the odds in favour of business success, but it can't guarantee an outcome. So these days, the question of impact usually reflects an admirable desire to prove that executive development really does add value, even if that value can't always be measured with numbers.
It's a fair question. Most companies make investments only when they have a good sense of the payback. They expect a clear demonstration of impact. So how do we measure the returns we get from executive education? How do we demonstrate impact?
DEEP THOUGHT hints at the answer: a meaningful response to the question of impact starts with a precise definition of the question. That is, the clearer the objectives for an executive education initiative, the easier it is for us to measure our progress against those objectives. Since our clients want a rich variety of outcomes from their work with us, it is not always easy to settle on a small and precise list of goals. But settling on that list is a prerequisite to putting in place meaningful measurements.
What kinds of outcomes are our clients after, and how do we measure them?
At the risk of simplifying the rich and complex benefits our clients expect from their executive development work, I'll list six examples of the types of outcomes that many of our clients hope to achieve. For each one, I'll include at least one example of how we measure progress. But please note that the list is neither exhaustive nor mutually exclusive. Most clients want more than two or three of the outcomes I mention here, and many work hard to achieve outcomes that I've left off this list. Also note that while each objective listed here deserves at least a chapter of explanation, for the sake of efficiency and space I'll limit my comments to only a few sentences.
A stronger (or different) culture
Many organizations use executive development as a culture-building or culture-changing exercise. Through their development work they espouse certain values and behaviours, and they reinforce how culture can be both an important differentiator to the business and a powerful influencer over the decisions executives make.
Culture is difficult to measure, but some excellent tools have been created to evaluate the contributors to a high-performance culture. These tools can also be used to track perceived changes in culture over time. One tool, the Denison Culture Survey, uses feedback from executives to evaluate the relative strengths and weaknesses of an organization's culture on the dimensions of Mission (Strategic Direction & Intent, Goals & Objectives, Vision), Adaptability (Creating Change, Customer Focus, Organizational Learning), Involvement (Empowerment, Team Orientation, Capability Development), and Consistency (Core Values, Agreement, Coordination & Integration). We often use the Denison Culture Survey to set a baseline for an organization's culture at the beginning of our work, and to re-evaluate it when the culture-change process is complete. These measurements help us understand if our work is moving the organization in the right direction.
Stronger internal networks
Whether it's a primary or secondary priority, almost all our clients expect their work with us to build stronger relationships amongst their executives. We use the Network Balance Sheet, created by IMD colleagues Bill Fischer, Jim Pulcrano and Janet Shaner, to help us map the breadth and depth of connections amongst participant groups before and after we work with them. This tool tells us how connected the participants are at the beginning of the learning process, and how connected they become once we engage them in our typically highly interactive and participative learning experiences. We have found the network balance sheet analysis to be a particularly powerful way for project teams to evaluate the strength of the networks they need to get their work done.
Retaining key talent
Retention is sometimes a secondary consideration when it comes to creating company-specific programs, but for one financial services client it was the exclusive reason for developing a multi-module, multi-location executive education initiative for high-potential leaders. The program content was diverse and inspiring, and the learning process energetic and engaging, but the client's main goal was not learning. It simply wanted to demonstrate to these important talents that they were critical to the organization and therefore worthy of the attention they were being given.
Retention is rarely one of our clients' primary objectives, which is unfortunate because it is one of the easiest to measure. In the case of our financial services client, the learning and development team analysed retention rates in the target group before our program and at different milestones afterwards to evaluate the extent to which these talents were more likely to stay with the organization because of their experience on the program.
Preparing executives for greater responsibilities
Helping executives strengthen their capabilities so that they can take on bigger roles is the core task of talent management. It is also the primary objective of much of our company-specific program work. In most cases, this work is designed to enable a targeted group of executives to develop the skills they need to succeed in their next roles. And in most cases our clients have given a great deal of thought to which skills are most in need of building.
We usually counsel our clients to focus on a narrow set of skills when we create these capability-building initiatives. Our impact measurement work tests how these important skills develop over the course of the program. When the skills being developed are primarily behavioural, we use "before-and-after" psychometric tools to measure perceived changes in behaviour. Custom-built or off-the-shelf multi-rater feedback instruments are particularly handy for this kind of measurement.
We also counsel our clients to use post-program surveys of the participants and their line managers to help us identify not only how skills have developed over the life of the program, but also what aspects of the organization's culture, structures and processes might be preventing the executives from building even stronger muscles.
Building organizational capabilities
A happy development in the world of organizational learning is the growing enthusiasm with which organizations use formal development programs to build new organizational capabilities that are driven by changes in strategy. These new capabilities are critical to the organization's ability to win in the market. We call them strategic capabilities.
Two things differentiate strategic capabilities from the normal leadership capabilities associated with talent development. First, our clients consider the development of strategic capabilities as crucial to the strategy execution process, and so the initiatives we develop to build these capabilities are deployed rapidly, covering as many leadership layers as possible in a short timeframe. Second, rapid deployment requires focus, and so it is unusual for a client to work on the development of more than one strategic capability at a time.
Because building strategic capabilities is as much about winning in the market as it is about organizational learning, business-focused action learning is usually a key part of the process. Accordingly, we measure impact by tracking the action learning work. For example, a client who developed new strengths in its innovation processes measured improvements in the speed with which it brought innovations to market. When the same client worked with us to build a strategic capability in product deployment, we measured impact by evaluating its success in deploying new products through to its global markets.
Achieving a specific business goal
Our work with CEOs and line-leaders is usually aimed at helping them achieve specific business results. In some cases, we have helped business unit teams execute growth strategies in certain markets. In other cases we have enabled functions to improve their contribution to the organization's success: redesigning the supply chain or building global brands, for example. Our Booster work with project teams has a similar objective. It helps project teams accelerate their progress towards achieving the project goal.
In all of these cases, learning is a secondary outcome. The primary program objective is the achievement of a business goal. And so we measure our impact by measuring our progress against the objective: for example, the extent to which the project team has reached its goal after going through a Booster process with an IMD team, or the success of product launches that were identified as critical to the organization's growth strategy during our Must-win battle work. In some ways, these are the easiest types of programs when it comes to measuring impact. The desired outcomes are concrete, and so too is our contribution to meeting them.
In The Hitchhiker's Guide to the Galaxy, DEEP THOUGHT is eventually commissioned to clarify what the "ultimate question of life, the universe and everything" really means. But it can't. So instead it designs another computer whose sole purpose is to define the question. After ten million years of work, this computer is mistakenly destroyed... a mere five minutes from completing its task. A sad outcome for those who designed the computer, and happily not the outcome we can expect when it comes to the question of measuring the impact of executive education. How do we measure impact? In many ways. It all depends on the specific outcomes you are trying to achieve.
Michael Stanford is Executive Director of IMD's Custom Program business, and serves on the Boards of the University Consortium for Executive Development (UNICON) and the International Consortium for Executive Development Research (ICEDR). He has been at IMD for 18 years.