
The AI productivity illusion: Fixing the blind spots to make genuine gains

Published January 26, 2026 in Artificial Intelligence • 12 min read • Audio available

Confusing efficiency with productivity mistakes speed for direction and execution for value. Hamilton Mann explains how to avoid the pitfalls in your AI transformation.

The race for the productivity gains promised by the deployment of AI, particularly generative AI (GenAI), is the new gold rush. There’s no shortage of deployment stories that seem promising. For example, a global B2B software company rolled out a “GenAI productivity transformation” for its 2,800 go-to-market employees. In theory, everything looked perfect: instant draft campaigns for marketing teams, call summaries and proposal generators for the sales department, and leadership dashboards lit up with the chart everyone wanted to see. The result? More output, faster.

But look closer and the picture appeared less bright. In less than six weeks, email volume to prospects had tripled. Sales representatives could spin up tailored sequences in minutes, so they often did. Unsubscribes and soft bounces began to creep up, and sales teams reported spending more time skimming AI-written drafts than crafting relevance. Response rates dipped, then kept dipping, but the activity graph still appeared heroic. Topline KPIs favored volume over usefulness, so the decline was hidden in plain sight.

Product marketing began pushing “helpful” explainers with every feature launch – none were technically incorrect, but most were unnecessary. A growing long-tail of “zombie” assets (rarely read but often duplicated) cluttered channels and muddied priorities.

The brand team quietly added a new control step to review tone and claims. Legal added another. Average time-to-send increased. Work-in-progress piled up between handoffs, and rework cycles expanded.

Downstream, the sales ops queue ballooned. Reps copied AI-drafted proposals with the right numbers but the wrong assumptions, which meant deal desks and engineers spent evenings reconciling “fast” documents with actual capacity and scope. Local speed upstream created systemic drag downstream.

Quarterly business reviews improved in look but not in substance: beautiful slides, thin judgment. Managers noticed they were coaching less and curating more, triaging a feed of machine-made outputs to find the few that mattered. Core skills – probing, reframing, and cross-team sense-making – atrophied. Customer-facing teams reported spending more time validating, prioritizing, and explaining. That validation effort went unmeasured and was invisible in the “efficiency” story.

A quiet change crept in: people began to accept the model’s framing as the default. If the assistant summarized a call around price objections, the discussion gravitated there, even when the real issue was trust or fit. The tool set the pace and humans followed. Meeting notes started to mirror the assistant’s categories, narrowing the space for alternative hypotheses.


Nothing catastrophic happened. Revenue didn’t fall, and there was even an inflection point that could be read as an early ROI-positive signal of the transformation underway. A closer look at the metrics, however, showed that part of this “lift” stemmed from how it was being counted: throughput was measured as assets per FTE and messages sent (not outcomes), and automation-generated outputs were tallied the same way as human-crafted ones. Denominators shifted (review time was excluded and downstream queues ignored), making like-for-like comparisons impossible. Win–loss interviews, by contrast, revealed a pattern in which prospects felt “blanketed, not understood” compared with the previous survey.
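The counting problem can be illustrated with a toy calculation (all numbers are hypothetical, not the company’s data): when AI-generated assets are tallied the same way as human-crafted ones, “throughput per FTE” soars even as the outcome metric declines.

```python
# Illustrative sketch with hypothetical numbers: naive throughput counts
# every asset equally and ignores review time, so it rises even when
# the outcome metric (replies per message) falls.

def throughput_per_fte(human_assets, ai_assets, ftes):
    """Naive throughput: every asset counts equally, review time excluded."""
    return (human_assets + ai_assets) / ftes

def outcome_rate(responses, messages_sent):
    """Outcome metric: replies per message sent."""
    return responses / messages_sent

# Before the GenAI rollout
before_tp = throughput_per_fte(human_assets=400, ai_assets=0, ftes=100)   # 4.0
before_out = outcome_rate(responses=120, messages_sent=4_000)             # 3.0%

# After the rollout: triple the volume, flat replies
after_tp = throughput_per_fte(human_assets=300, ai_assets=900, ftes=100)  # 12.0
after_out = outcome_rate(responses=120, messages_sent=12_000)             # 1.0%

print(f"throughput: {before_tp:.1f} -> {after_tp:.1f} assets/FTE")
print(f"response rate: {before_out:.1%} -> {after_out:.1%}")
```

The dashboard metric triples while the outcome metric drops by two-thirds – the decline hidden in plain sight.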

By quarter three, leader sentiment and employee experience had diverged.
Executive dashboards celebrated an apparently promising 6% “productivity gain” in certain functions, while outcome measures such as win rate and cycle time to qualified pipeline remained flat or declined. In practice, internal retrospectives repeatedly named the same friction points:

  • Overproduction of low-value materials.
  • Bottlenecks where the rest of the system couldn’t absorb faster speeds.
  • Shrinking use of core human skills (hard questioning, cross-team sense-making, etc.).
  • A transformation program grafted onto legacy processes that were never redesigned to host it.
  • Longer cycle times in legal and sales ops.
  • More “fast” proposals needing correction.
  • Managers spending more time validating and prioritizing.

Defining genuine productivity gain

The setbacks described above are not specific to a particular industry or profession; they stem from a set of beliefs we hold about work. Chief among these is the technologist’s assumption that every technical advance drives efficiency and a net productivity gain that guarantees optimal organizational performance. This assumption ignores usage dynamics, dependency effects, and indirect consequences for skills, rhythms, human decision-making, and collective production logics.

There is no equals sign between efficiency gain and productivity gain, nor a systematically positive causal link between them. One can exist without the other, or even to the detriment of the other. Efficiency is local, functional, and operational; it becomes a strategic illusion when detached from human, social, or organizational purpose. 

Productivity, on the other hand, is systemic, collective, and purpose-driven. In certain tightly structured and well-aligned systems, such as high-precision manufacturing or algorithmic trading, efficiency gains can translate directly into productivity gains. Yet in most complex, human-centered organizations, the chain from efficiency to productivity is mediated by cultural, cognitive, and systemic factors that can just as easily neutralize or reverse the expected benefits. Confusing productivity and efficiency mistakes speed for direction and execution for value.

Five blind spots must be acknowledged to unleash sustainable productivity gains from AI transformation programs:

Each department designs its own human–AI configuration based on its specific variance, velocity, and local knowledge.

1 – Unit gains are not net productivity

Leaders often mistakenly assume that automating unit tasks with AI aggregates into linear productivity gains. In reality, micro-efficiency triggers composition effects, cognitive externalities, and rebound use. As AI accelerates, what was occasionally helpful becomes structurally overused. People regularly offload thinking, devalue intermediate skills, and align decisions with algorithmic frames – a classic automation bias that erodes analysis, judgment, and independent reasoning. Performance then falters where ambiguity and interpretation matter most. This shift causes a slow but deep erosion of the human faculties associated with productive effort: analyzing problems, confronting ideas in teams, forming judgments without algorithmic assistance, producing autonomous reasoning, or simply facing the discomfort of doubt.

Evidence from automation contexts shows that users often over-rely on automated suggestions, even when they are incorrect, a cognitive phenomenon widely known as automation bias. It’s not enough for machines to work faster for the organization to function better. Humans must retain the capacity, will, and discernment to engage in the tasks that give meaning to what is produced. Without that, the unit gain becomes a disguised collective loss.

What to do: Embed “human-in-the-loop” safeguards by ensuring that critical decision points always require human review and rationale, especially in contexts of uncertainty or ambiguity. Reinforce this with continuous skills development through targeted training programs. In parallel, evaluate AI deployments with expanded KPIs that track not only task-level efficiency but also indicators of human capacity retention, such as decision-making quality, cross-team knowledge exchange, and resilience in novel situations. This ensures that gains in speed do not come at the expense of the competencies that sustain long-term performance.


2 – Neither automation nor autonomy is immune to unproductive pseudo-work

Many people assume that any task AI can perform is worth performing, but this confuses execution capacity with strategic or organizational necessity. As AI becomes more efficient, the range of functions it can deliver at high speed expands dramatically, and with it the likelihood of rapid, large-scale reproduction. Unfortunately, this includes tasks with no real utility, no measurable impact, or even detrimental effects on collective functioning. The result is pseudo-work that looks cheap and responsive yet adds no value.

Low marginal cost fuels redundant summaries, contextless reports, and incessant automation that floods attention channels. Humans have to perform cognitive triage and act as filters for automated overproduction, thereby raising techno-stress and shifting, rather than reducing, the workload. This phenomenon remains structurally invisible in classical productivity metrics. Since unproductive AI-generated tasks aren’t necessarily costly, they aren’t perceived as losses. Yet they consume invisible resources, including attention, strategic clarity, cognitive bandwidth, and collective motivation. Their cost is diffuse but cumulative.

What to do: Before deploying AI for any task, benchmark outputs against the expected definition of “done” and enforce a strategic relevance filter, monitoring its use over time. This means defining clear criteria for what constitutes a valuable output and ensuring that AI is applied only to tasks that meet them. In addition, broaden productivity metrics to capture the hidden costs of validation, filtering, and cognitive triage. Performance evaluations should account for both visible outputs and the invisible resource drain AI may impose, continually re-testing the assumed productivity ROI.
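A strategic relevance filter can be as simple as an explicit gate a task must pass before AI generation is approved. The sketch below is hypothetical – the criteria names are illustrative, not a prescribed checklist – but it shows the principle: make the value test explicit and binary, so pseudo-work fails it visibly.

```python
# Hypothetical sketch of a "strategic relevance filter": a task must meet
# every criterion before AI generation is approved. The criteria below are
# illustrative assumptions, not a standard checklist.

RELEVANCE_CRITERIA = (
    "has a named audience that requested or needs the output",
    "maps to a measurable outcome, not just an activity count",
    "downstream team has capacity to review and act on it",
)

def approve_ai_task(task: dict) -> bool:
    """Approve AI generation only if every relevance criterion is met."""
    return all(task.get(criterion, False) for criterion in RELEVANCE_CRITERIA)

# A "zombie" feature explainer nobody asked for fails the gate.
zombie_asset = {
    RELEVANCE_CRITERIA[0]: False,
    RELEVANCE_CRITERIA[1]: False,
    RELEVANCE_CRITERIA[2]: True,
}
print(approve_ai_task(zombie_asset))  # False
```

The design choice matters more than the code: unmet criteria default to False, so a task is rejected unless someone has explicitly argued its value.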

3 – Local speed may lead to system bottlenecks

More productivity isn’t always better. In interdependent systems, hyper-optimizing a task can exceed what downstream teams can effectively absorb, leading to saturation and broken workflows. AI magnifies this by generating instant, large-scale output that misaligns with cadence, capacity, and priorities. A 2024 Upwork Research Institute survey illustrates the gap between leadership expectations and employee experience: 96% of executives anticipated productivity gains from AI, while 77% of employees said it had increased their workload. Around four in 10 employees spend more time reviewing or moderating AI-generated content, 71% report burnout, and 65% feel heightened pressure from productivity expectations.

Local efficiency becomes a negative externality. When a tool moves faster than the collective culture in which it is embedded, it desynchronizes interaction norms, time anchors, and shared thresholds of acceptability. The ecosystem becomes incoherent: some accelerate while others resist; some produce without context while others absorb without direction. The result is a global loss of productive coherence, where hyper-efficiency in one area degrades the flow and clarity of the whole. Productivity is a function of the ecosystem in which it operates; it is never absolute.

What to do: Model AI integration as a networked process, mapping interdependencies before deployment to anticipate where speed mismatches might occur. This involves stress-testing workflows under AI-accelerated conditions, aligning upstream and downstream capacities, and defining absorption thresholds – the maximum pace at which different teams, systems, or partners can process new outputs without loss of quality or clarity. All processes are then redesigned accordingly. Measuring performance at the ecosystem level, rather than at isolated task nodes, ensures that gains in one area do not quietly erode value elsewhere. 
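The absorption-threshold idea can be sketched with a toy queue model (the rates are hypothetical): once an AI-accelerated upstream team produces faster than a downstream team can process, work-in-progress grows without bound rather than levelling off.

```python
# Toy model with hypothetical rates: below the absorption threshold the
# downstream queue stays empty; above it, backlog grows every day.

def wip_after(days, arrival_per_day, capacity_per_day):
    """Work-in-progress left in the downstream queue after `days` days."""
    backlog = 0
    for _ in range(days):
        backlog = max(0, backlog + arrival_per_day - capacity_per_day)
    return backlog

# Below the threshold: downstream absorbs everything, no queue forms.
print(wip_after(days=20, arrival_per_day=8, capacity_per_day=10))   # 0

# AI triples upstream output; downstream capacity is unchanged.
print(wip_after(days=20, arrival_per_day=24, capacity_per_day=10))  # 280
```

The point of the exercise: the “absorption threshold” is the downstream capacity, and every unit of upstream speed beyond it converts directly into backlog, not value.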

People should not be slaves to machines, but the relationship should be viewed as a partnership. The cook is happy to welcome a bit of extra help in the 1950s film, ‘Forbidden Planet’

4 – Legacy processes are not adapted for new tech by default

Productivity gains from AI are not necessarily accessible without profound transformation of organizational settings and underlying norms. Many of AI’s potential productivity gains are not simply “waiting to be harvested” by adding the technology into existing workflows. In legacy settings, new tools are often absorbed by old routines, thereby muting their impact. Teams tend to reinterpret AI’s role so that it fits within established habits, norms, and comfort zones – deploying it in ways that reinforce rather than challenge the status quo. This adaptation-by-containment neutralizes much of the transformative potential, creating a facade of adoption without meaningful performance improvement. AI becomes a new tool for old processes, rather than a catalyst for reimagining how value is created.

Moreover, every organization operates within a dense mesh of implicit principles, conventions, and procedures embedded in its culture. These invisible frames shape the logic of task sequencing, decision-making authority, validation loops, and quality thresholds. Without questioning these foundational stances, AI is forced to operate within constraints designed for a pre-AI reality.

What to do: Pair AI adoption with a structured “process deconstruction audit” before deployment. This involves systematically mapping workflows, identifying legacy bottlenecks, and questioning implicit norms that limit AI’s transformative scope. To do so, cross-functional design labs should be established to prototype AI-enabled workflows from scratch. Adoption metrics should shift from “time saved per task” to “systemic performance uplift” – ensuring that productivity is measured in terms of collective, end-to-end impact, not just local acceleration.  


5 – Work is not made by tangible capacities only

Many people believe that productivity is an exclusive, mechanistic relationship between output and tangible inputs. This view reduces the complexity of human work to an exchange of labor, capital, and materials for outputs, overlooking the intricate interplay of intangible inputs that drive sustainable value creation. It assumes that optimizing tangible resources – automating tasks, replacing labor with machines, or improving efficiency in material use – will inevitably lead to a linear increase in productivity, regardless of the human qualities and context required to achieve meaningful results.

However, reducing productivity to an equation of tangible input versus output ignores the profound role played by intangible resources (including critical thinking, judgment, leadership, emotional intelligence, cultural sensitivity, passion, and love) in driving long-term, sustainable outcomes. In the real world, not all that is measured gets managed, and not all that gets managed is measured. Routine, low-intangibility tasks may be automated, but most productivity remains dependent on intangible tasks. Productivity is not just about doing more with less – it also involves ensuring that the human skills necessary for the collective work ecosystem remain preserved and nurtured.

What to do: Weight roles not only by task type but by the intangible human capacities they require (e.g., empathy, contextual judgment, and cultural fluency) and design performance metrics to track and reward these capacities, ensuring they remain cultivated alongside efficiency gains. Protect high-intangible-value workflows from full automation, instead using AI in augmentation mode to free humans from low-value tasks, allowing them to invest more time in uniquely human contributions. True progress lies not in what AI is able to do, but in what humans deliberately choose not to delegate or to abandon. 


The essential shift from productivity to purpose

Current AI, even in agentic form, has no intrinsic hierarchy of values, no genuine understanding of what deserves to be done, nor of when it is ethically, morally, or socially right not to act. It is performance without orientation, fast without judgment, available without responsibility – in short, intelligent without integrity. Investment and research should focus more on Artificial Integrity to counter the blind spots that undermine the definition of what is beneficial.

As long as AI is designed as an amplifier of efficiency without a systemic social compass – let alone the equally vital ethical and moral one – it will contribute to productivity to a limited extent while also sustaining the illusion of productivity without ensuring its positive effects.

AI may appear technically remarkable when viewed from a fixed point, but it becomes unstable and potentially undesirable across the end-to-end spectrum of the messy entanglements inherent to any human organization. Real progress will not lie in making AI ever more powerful at mimicking human cognitive intelligence, but in designing systems capable of functional integrity, of knowing why they act, in what context, with what limits, and for what collective ends.

Authors

Hamilton Mann

Author of Artificial Integrity

Hamilton Mann is an AI researcher, originator of the concept of Artificial Integrity, and best-selling author of Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future. He lectures at INSEAD and HEC Paris and mentors at the MIT Priscilla King Gray (PKG) Center. Recognized globally for his expertise, he was inducted into the Thinkers50 Radar in 2024 and honored in 2025 with the Thinkers50 Distinguished Achievement Award in Digital Thinking for his substantial contributions to leadership in digital transformation and responsible business, and for his work on harnessing AI for positive change.
