
What AI velocity really requires from CHROs
CHROs must navigate AI adoption carefully, balancing speed and direction while making trade-offs that protect people, skills, and long-term value...

by Hamilton Mann Published January 26, 2026 in Artificial Intelligence • 12 min read
The race for the productivity gains promised by the deployment of AI, particularly generative AI (GenAI), is the new gold rush. There's no shortage of deployment stories that seem promising. For example, a global B2B software company rolled out a "GenAI productivity transformation" for its 2,800 go-to-market employees. In theory, everything looked perfect: instant draft campaigns for marketing teams, call summaries and proposal generators for the sales department, and leadership dashboards lit up with the chart everyone wanted to see. The result? More output, faster.
But on closer inspection, the picture was less bright. In less than six weeks, email volume to prospects had tripled. Sales representatives could spin up tailored sequences in minutes, so they often did. Unsubscribes and soft bounces began to creep up, and sales teams reported spending more time skimming AI-written drafts than crafting relevance. Response rates dipped, then kept dipping, but the activity graph still appeared heroic. Topline KPIs favored volume over usefulness, so the decline was hidden in plain sight.
Product marketing began pushing "helpful" explainers with every feature launch; none were technically incorrect, but most were unnecessary. A growing long tail of "zombie" assets (rarely read but often duplicated) cluttered channels and muddied priorities.
The brand team quietly added a new control step to review tone and claims. Legal added another. Average time-to-send increased. Work-in-progress piled up between handoffs, and rework cycles expanded.
Downstream, the sales ops queue ballooned. Reps copied AI-drafted proposals with the right numbers but the wrong assumptions, which meant deal desks and engineers spent evenings reconciling "fast" documents with actual capacity and scope. Local speed upstream created systemic drag downstream.
Quarterly business reviews improved in look but not in substance: beautiful slides, thin judgment. Managers noticed they were coaching less and curating more, triaging a feed of machine-made outputs to find the few that mattered. Core skills (probing, reframing, and cross-team sense-making) atrophied. Customer-facing teams reported spending more time validating, prioritizing, and explaining. Validation effort went unmeasured and was invisible in the "efficiency" story.
A quiet change crept in: people began to accept the model's framing as the default. If the assistant summarized a call around price objections, the discussion gravitated there, even when the real issue was trust or fit. The tool set the pace and humans followed. Meeting notes started to mirror the assistant's categories, narrowing the space for alternative hypotheses.
Nothing catastrophic happened. Revenue didn't fall, and it was even possible to detect an inflection point that could be read as an early ROI-positive signal of the transformation underway. However, a closer look at the metrics showed that part of this "lift" stemmed from how it was counted: throughput was measured as assets per FTE and messages sent (not outcomes), and automation-generated outputs were tallied the same way as human-crafted ones. Denominators shifted (review time excluded, downstream queues ignored), making like-for-like comparisons impossible. Meanwhile, win-loss interviews revealed a pattern: compared with the previous survey, prospects felt "blanketed, not understood."
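To see how denominator shifts of this kind manufacture a "lift," consider a toy calculation. All numbers here are invented for illustration; they are not the company's figures:

```python
# Hypothetical illustration: how metric definitions can manufacture a
# "productivity gain" that outcome data does not support.

def throughput_per_fte(assets: int, ftes: float) -> float:
    """Naive metric: assets produced per full-time-equivalent employee."""
    return assets / ftes

# Before the GenAI rollout: 400 human-crafted assets, 100 FTEs.
before = throughput_per_fte(400, 100)        # 4.0 assets per FTE

# After: AI-generated drafts are counted identically to human work,
# while the 20 FTE-equivalents now spent reviewing AI output are
# excluded from the denominator.
after_naive = throughput_per_fte(520, 100)   # 5.2, a headline "+30%"

# Like-for-like: include review effort in the denominator and count
# only outputs that produced any downstream outcome (say 70% do).
after_adjusted = throughput_per_fte(int(520 * 0.7), 100 + 20)  # about 3.0

print(before, after_naive, round(after_adjusted, 2))
```

Same activity, three different "productivity" numbers: the apparent gain or loss is entirely a function of what the numerator counts and what the denominator hides.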
By quarter three, leader sentiment and employee experience had diverged.
Executive dashboards celebrated an apparently promising 6% "productivity gain" in certain functions, while outcome measures such as win rate and cycle time to qualified pipeline remained flat or declined. In practice, internal retrospectives repeatedly named the same friction points:

"Confusing productivity and efficiency mistakes speed for direction and execution for value."
The setbacks described above are not specific to a particular industry sector or profession; they stem from a set of beliefs we hold about work, chiefly the technologist's assumption that every technical advance drives efficiency and a net productivity gain that guarantees optimal organizational performance. This assumption ignores usage dynamics, dependency effects, and indirect consequences for skills, rhythms, human decision-making, and collective production logics.
There is no equals sign between efficiency gain and productivity gain, nor a systematically positive causal link between them. One can exist without the other, or even at the expense of the other. Efficiency is local, functional, and operational; it becomes a strategic illusion when detached from human, social, or organizational purpose.
Productivity, on the other hand, is systemic, collective, and purpose-driven. In certain tightly structured and well-aligned systems, such as high-precision manufacturing or algorithmic trading, efficiency gains can translate directly into productivity gains. Yet in most complex, human-centered organizations, the chain from efficiency to productivity is mediated by cultural, cognitive, and systemic factors that can just as easily neutralize or reverse the expected benefits. Confusing productivity and efficiency mistakes speed for direction and execution for value.
Five blind spots must be acknowledged to unleash sustainable productivity gains from AI transformation programs:
Each department designs its own human-AI configuration based on its specific variance, velocity, and local knowledge.
Leaders often mistakenly assume that automating unit tasks with AI aggregates into linear productivity gains. In reality, micro-efficiency triggers composition effects, cognitive externalities, and rebound use. As AI accelerates, what was occasionally helpful becomes structurally overused. People regularly offload thinking, devalue intermediate skills, and align decisions with algorithmic frames, a classic automation bias that erodes analysis, judgment, and independent reasoning. Performance then falters where ambiguity and interpretation matter most. This shift causes a slow but deep erosion of human faculties associated with productive effort, such as analyzing problems, confronting ideas in teams, forming judgments without algorithmic assistance, producing autonomous reasoning, or simply facing the discomfort of doubt.
Evidence from automation contexts shows that users often over-rely on automated suggestions, even when they are incorrect, a cognitive phenomenon widely known as automation bias. It's not enough for machines to work faster for the organization to function better. Humans must retain the capacity, will, and discernment to engage in the tasks that give meaning to what is produced. Without that, the unit gain becomes a disguised collective loss.
What to do: Embed "human-in-the-loop" safeguards by ensuring that critical decision points always require human review and rationale, especially in contexts of uncertainty or ambiguity. This sets out a landscape of continuous skills development that should be reinforced through targeted training programs. In parallel, AI deployments must be evaluated with expanded KPIs that track not only task-level efficiency but also indicators of human capacity retention, such as decision-making quality, cross-team knowledge exchange, and resilience in novel situations. This will ensure that gains in speed do not come at the expense of the competencies that sustain long-term performance.
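The core of such a safeguard is simple to state precisely: no AI output is released without a named reviewer and a recorded rationale, and ambiguous cases are blocked for escalation rather than auto-sent. The sketch below illustrates the logic only; all names, thresholds, and the `release` function are hypothetical, not a reference to any real system:

```python
# Minimal sketch of a human-in-the-loop release gate (all names and
# thresholds are invented for illustration).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    model_confidence: float  # 0..1, as reported by the generating system

def release(draft: Draft,
            reviewed_by: Optional[str],
            rationale: Optional[str],
            confidence_floor: float = 0.8) -> bool:
    """Return True only if the draft may be sent to a customer."""
    if reviewed_by is None or not rationale:
        return False  # no human sign-off and rationale, no release
    if draft.model_confidence < confidence_floor:
        return False  # ambiguous case: escalate instead of auto-sending
    return True

risky = Draft("Proposal v1", model_confidence=0.65)
solid = Draft("Proposal v2", model_confidence=0.90)
print(release(risky, "a.reviewer", "checked scope"))   # blocked: low confidence
print(release(solid, None, None))                      # blocked: no sign-off
print(release(solid, "a.reviewer", "checked scope"))   # released
```

The point of making the gate explicit is that the rationale field becomes auditable data: it is where the organization can later measure whether human judgment is still being exercised or merely rubber-stamped.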
Low marginal cost fuels redundant summaries, contextless reports, and incessant automation that floods attention channels.
Many people assume that no task AI can perform is unproductive, but this confuses execution capacity with strategic or organizational necessity. As AI becomes more efficient, the range of functions it can deliver at high speed expands dramatically, increasing the likelihood of their rapid and large-scale reproducibility. Unfortunately, this includes tasks with no real utility, no measurable impact, or even those that are detrimental to collective functioning. All of this creates pseudo-work that looks cheap and responsive yet adds no value.
Low marginal cost fuels redundant summaries, contextless reports, and incessant automation that floods attention channels. Humans have to perform cognitive triage and act as filters for automated overproduction, thereby raising techno-stress and shifting, rather than reducing, the workload. This phenomenon remains structurally invisible in classical productivity metrics. Since unproductive AI-generated tasks aren't necessarily costly, they aren't perceived as losses. Yet they consume invisible resources, including attention, strategic clarity, cognitive bandwidth, and collective motivation. Their cost is diffuse but cumulative.
What to do: Before deploying AI to any task, benchmark outputs against the expected definition of "done," and enforce and monitor a strategic relevance filter. This means defining clear criteria for what constitutes a valuable output and applying AI only to tasks that meet them. In addition, broaden productivity metrics to capture the hidden costs of validation, filtering, and cognitive triage. Performance evaluations should account for both visible outputs and the invisible resource drain AI may impose, thereby recursively challenging the assumed productivity ROI.
More productivity isn't always better. In interdependent systems, hyper-optimizing a task can exceed what downstream teams can effectively absorb, leading to saturation and broken workflows. AI magnifies this by generating instant, large-scale output that misaligns with cadence, capacity, and priorities. To illustrate, a 2024 Upwork Research Institute survey found a large gap between leadership expectations and employee experiences. Most executives (96%) anticipated productivity gains from AI, while 77% of employees said it had increased their workload. Around four in 10 spend more time reviewing or moderating AI-generated content, 71% report burnout, and 65% feel heightened pressure from productivity expectations. Local efficiency becomes a negative externality. When a tool moves faster than the collective culture in which it is embedded, it desynchronizes interaction norms, time anchors, and shared thresholds of acceptability. The ecosystem becomes incoherent: some accelerate while others resist, and some produce without context while others absorb without direction. The result is a global loss of productive coherence, where hyper-efficiency in one area degrades the flow and clarity of the whole. Productivity is a function of the ecosystem in which it operates and is never absolute.
What to do: Model AI integration as a networked process, mapping interdependencies before deployment to anticipate where speed mismatches might occur. This involves stress-testing workflows under AI-accelerated conditions, aligning upstream and downstream capacities, and defining absorption thresholds: the maximum pace at which different teams, systems, or partners can process new outputs without loss of quality or clarity. All processes are then redesigned accordingly. Measuring performance at the ecosystem level, rather than at isolated task nodes, ensures that gains in one area do not quietly erode value elsewhere.
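The absorption-threshold idea can be made concrete with a toy queue model. The numbers below are assumed for illustration only: when upstream production exceeds downstream review capacity, the backlog does not plateau, it compounds week over week.

```python
# Toy model (assumed numbers) of an absorption threshold: upstream
# produces AI-accelerated drafts faster than downstream teams can
# review them, so work-in-progress grows instead of becoming value.

def backlog_after(weeks: int, produced_per_week: int,
                  absorbed_per_week: int) -> int:
    """Work-in-progress left in the downstream queue after `weeks`."""
    backlog = 0
    for _ in range(weeks):
        backlog = max(0, backlog + produced_per_week - absorbed_per_week)
    return backlog

# Within the absorption threshold: the queue stays empty.
print(backlog_after(6, produced_per_week=40, absorbed_per_week=50))   # 0

# AI triples upstream output; downstream capacity is unchanged.
print(backlog_after(6, produced_per_week=120, absorbed_per_week=50))  # 420
```

The local metric ("drafts produced") triples, while the ecosystem metric (items actually absorbed into decisions) is still capped at 50 per week; everything above the threshold is pure queue growth.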

Productivity gains from AI are not necessarily accessible without profound transformation of organizational settings and underlying norms. Many of AI's potential productivity gains are not simply "waiting to be harvested" by adding the technology into existing workflows. In legacy settings, new tools are often absorbed by old routines, thereby muting their impact. Teams tend to reinterpret AI's role so that it fits within established habits, norms, and comfort zones, deploying it in ways that reinforce rather than challenge the status quo. This adaptation-by-containment neutralizes much of the transformative potential, creating a facade of adoption without meaningful performance improvement. AI becomes a new tool for old processes, rather than a catalyst for reimagining how value is created. Moreover, every organization operates within a dense mesh of implicit principles, conventions, and procedures embedded in its culture. These invisible frames shape the logic of task sequencing, decision-making authority, validation loops, and quality thresholds. Without questioning these foundational stances, AI is forced to operate within constraints designed for a pre-AI reality.
What to do: Pair AI adoption with a structured "process deconstruction audit" before deployment. This involves systematically mapping workflows, identifying legacy bottlenecks, and questioning implicit norms that limit AI's transformative scope. To do so, cross-functional design labs should be established to prototype AI-enabled workflows from scratch. Adoption metrics should shift from "time saved per task" to "systemic performance uplift," ensuring that productivity is measured in terms of collective, end-to-end impact, not just local acceleration.
Routine, low-intangibility tasks may be automated, but most productivity remains dependent on intangible tasks.
Many people believe that productivity is an exclusive, mechanistic relationship between output and tangible inputs. This mechanistic view reduces the complexity of human work to an exchange of labor, capital, and materials for outputs, while overlooking the intricate interplay of intangible inputs that drive sustainable value creation. It assumes that optimizing tangible resources, such as automating tasks, replacing labor with machines, or improving efficiency in material use, will inevitably lead to a linear increase in productivity, regardless of the human qualities and context required to achieve meaningful results.
However, reducing productivity to an equation of tangible input versus output ignores the profound role played by intangible resources (including critical thinking, judgment, leadership, emotional intelligence, cultural sensitivity, passion, and love) in driving long-term, sustainable outcomes. In the real world, not all that is measured gets managed, and not all that gets managed is measured. Routine, low-intangibility tasks may be automated, but most productivity remains dependent on intangible tasks. Productivity is not just about doing more with less; it also involves ensuring that the human skills necessary for the collective work ecosystem remain preserved and nurtured.
What to do: Weight roles not only by task type but by the intangible human capacities they require (e.g., empathy, contextual judgment, and cultural fluency) and design performance metrics to track and reward these capacities, ensuring they remain cultivated alongside efficiency gains. Protect high-intangible-value workflows from full automation, instead using AI in augmentation mode to free humans from low-value tasks, allowing them to invest more time in uniquely human contributions. True progress lies not in what AI is able to do, but in what humans deliberately choose not to delegate or to abandon.
Real progress will not lie in making AI ever more powerful at mimicking human cognitive intelligence, but in designing systems capable of functional integrity, of knowing why they act, in what context, with what limits, and for what collective ends.
Current AI, even in agentic form, has no intrinsic hierarchy of values, no genuine understanding of what deserves to be done, nor of when it is ethically, morally, or socially right not to act. It is performance without orientation, fast without judgment, available without responsibility; in short, intelligent without integrity. Investment and research should focus more on Artificial Integrity to counter the blind spots that undermine the definition of what is beneficial.
As long as AI is designed as an amplifier of efficiency without a systemic social compass (let alone the equally vital ethical and moral one), it will contribute to productivity only to a limited extent while sustaining the illusion of productivity without ensuring its positive effects.
AI may appear technically remarkable when viewed from a fixed point, but it becomes unstable and potentially undesirable across the end-to-end spectrum of the messy entanglements inherent to any human organization. Real progress will not lie in making AI ever more powerful at mimicking human cognitive intelligence, but in designing systems capable of functional integrity, of knowing why they act, in what context, with what limits, and for what collective ends.

Author of Artificial Integrity
Hamilton Mann is an AI researcher, originator of the concept of Artificial Integrity, and best-selling author of Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future. He lectures at INSEAD and HEC Paris and mentors at the MIT Priscilla King Gray (PKG) Center. Recognized globally for his expertise, he was inducted into the Thinkers50 Radar in 2024 and honored in 2025 with the Thinkers50 Distinguished Achievement Award in Digital Thinking for his substantial contributions to leadership in digital transformation and responsible business, and for his work on harnessing AI for positive change.
