Infrastructure Imperative  ·  Companion Analysis
Agentic AI  ·  Enterprise Architecture

What McKinsey Got Right, and the Layer They Didn’t Reach

A McKinsey partner named the 90% failure rate this week. The three conditions they prescribe are accurate as far as they go. None of them touch the infrastructure layer underneath the agent — and that omission is precisely where the 90% lives.

April 2026  ·  Tom M. Gomez  ·  Luminity Digital  ·  5 Min Read

A McKinsey partner put a useful number on the table this week: 90 percent of companies attempting agentic AI transformations see no real financial benefit. The diagnosis that follows — wait-and-see leadership, talent gaps, missing execution governance — is accurate as far as it goes. It doesn’t go far enough.

The three success conditions McKinsey names are entirely organizational. Know where to embed agents. Redesign human-agent collaboration. Build execution governance. These are real requirements. But not one of them addresses the infrastructure layer underneath the agent, and that omission is precisely where the 90% lives.

90%

of companies attempting agentic AI transformations see no real financial benefit, per McKinsey research. The prescribed remedies are all organizational. The root cause is architectural.

The Transition That Most Architecture Hasn’t Made

Enterprises spent the better part of the last decade building data infrastructure for cognitive AI. The architectural contract was straightforward: ingest data, transform it, surface predictions or recommendations, and route the output to a human who decides what to do next. The human was always in the loop — not as a governance preference, but as a structural requirement. The infrastructure didn’t need to support autonomous action because no autonomous action was expected of it.

That contract doesn’t hold for agentic AI.

An agent doesn’t surface a recommendation and wait. It determines what is actionable, assesses what it is permitted to do, executes, and moves to the next decision. The substrate it runs on has to support all of that — not just retrieval, but operational context. Not just permissions managed at the application layer, but authorization encoded in the substrate itself. Not just data that is accurate, but data that is auditable by design, because agents make consequential decisions at velocity with no human review step between retrieval and action.

Most enterprise data infrastructure was never built to that specification. It was built for a different principal — the analyst, the decision-support tool, the dashboard consumer. Cognitive AI fit that infrastructure because cognitive AI was still serving the same principal. Agentic AI doesn’t.
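The contract shift described above can be pictured as two small flows side by side. This is purely an illustrative sketch: every name here (Record, CognitiveFlow, AgenticFlow, allowed_actions) is a hypothetical stand-in, not a real platform API.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """A substrate record. Names are illustrative assumptions."""
    value: object
    provenance: str                                     # written at ingest, not reconstructed later
    allowed_actions: set = field(default_factory=set)   # authorization encoded in the substrate itself

class CognitiveFlow:
    """Cognitive contract: surface a recommendation, then wait for a human."""
    def run(self, record: Record) -> str:
        # The human closes the loop; the infrastructure's job ends here.
        return f"Recommend review of {record.value}"

class AgenticFlow:
    """Agentic contract: determine, check permission, execute, continue."""
    def run(self, record: Record, action: str):
        if action not in record.allowed_actions:
            # Permission is checked against the substrate, not an app layer,
            # and the refusal carries provenance so it is auditable by design.
            return ("refused", record.provenance)
        return ("executed", action, record.provenance)
```

In the first flow the record needs nothing beyond a value, because a human supplies context and authority. In the second, provenance and permissions must travel with the record, because no human sits between retrieval and action.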

Cognitive AI Infrastructure: Insight-Grade

Built for Human Principals

Surfaces predictions and recommendations that a human reviews and acts on. The human closes the loop. Authorization lives at the application layer. Audit trails are retrospective, activated when a human wants to understand a result.

Excellent for its intended purpose. The wrong load-bearing architecture for autonomous decision-making.

Agentic AI Infrastructure: Decision-Grade

Built for Autonomous Actors

Supports determination, permission assessment, and execution without human review between steps. Authorization encoded at the substrate level. Provenance and audit are continuous by design, not assembled after the fact.

The substrate closes the loop the agent requires. A different architectural contract entirely.

What the Gap Actually Looks Like

Decision-grade infrastructure has to pass five architectural tests — tests that most enterprise data platforms were never designed to meet, because they were optimized for the cognitive AI workload, not the agentic one.

  • 01 Discoverability by autonomous actors: not queryable by analysts who bring schema knowledge, but traversable by agents that arrive without pre-loaded context and cannot ask clarifying questions.
  • 02 Contextual richness at the point of consumption: provenance, confidence, and operational state encoded in the substrate itself, not assembled downstream by a human interpreter.
  • 03 Action-orientation: exposes what is actionable, not just what is true. An agent that retrieves data but cannot determine what it is permitted to do with it is operating on insight infrastructure.
  • 04 Permission-native architecture: authorization at the substrate layer, not enforced by the application sitting above it. A precondition for alignment-grade governance that the harness layer cannot compensate for.
  • 05 Provenance and auditability by design: continuous and structural, not retrospective. Post-hoc reconstruction is not sufficient when agents make consequential decisions at velocity.

These are architectural tests, not a feature checklist. An enterprise can have best-in-class data engineering and still fail every one of them, because that engineering was optimized for a different workload. We’re working through how the major data platforms actually measure against these criteria, and what the gap looks like in practice, in an analysis we’ll be publishing shortly.
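The five tests above could be sketched as checks against a single substrate record. Every field and function name here is an assumption for illustration, not a real schema; the point is only that each test maps to something the record must carry at write time rather than something reconstructed downstream.

```python
from dataclasses import dataclass, field

@dataclass
class SubstrateRecord:
    """Hypothetical record shape; field names are illustrative assumptions."""
    uri: str = ""                    # 01: addressable without prior schema knowledge
    description: str = ""            # 01: self-describing, for agents that cannot ask questions
    confidence: float = -1.0         # 02: context encoded at the point of consumption
    allowed_actions: set = field(default_factory=set)  # 03 + 04: actionability, substrate-level permission
    provenance: list = field(default_factory=list)     # 05: continuous lineage, written at ingest

def decision_grade_gaps(rec: SubstrateRecord) -> list:
    """Return which of the five tests a record fails."""
    gaps = []
    if not (rec.uri and rec.description):
        gaps.append("01 discoverability")
    if not (0.0 <= rec.confidence <= 1.0):
        gaps.append("02 contextual richness")
    if not rec.allowed_actions:
        gaps.append("03 action-orientation / 04 permission-native")
    if not rec.provenance:
        gaps.append("05 provenance by design")
    return gaps
```

A record that passes returns an empty list; a bare value with no provenance or permissions fails most tests at once, which is the argument in miniature: the gap is structural, visible per record, not something a governance layer above can patch.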

Governance built on top of a substrate that can’t carry the weight of autonomous action is coordination theater. The harness enforces what the substrate can’t structurally support — and it fails silently until a consequential decision exposes the gap.

Why This Week’s Number Matters

The 90% failure rate isn’t a strategy failure. It’s an infrastructure mismatch showing up as organizational friction. Companies read the symptoms — slow deployment, inconsistent agent behavior, governance overhead, POC results that don’t survive production — and conclude the technology isn’t ready. In most cases, the technology isn’t the problem. The substrate is.

McKinsey’s execution governance condition points in the right direction. But governance built on top of a substrate that can’t carry the weight of autonomous action is coordination theater. The harness layer can only enforce what the substrate makes structurally enforceable. When the substrate wasn’t designed for autonomous actors, the harness compensates for a gap it was never designed to fill — and the 90% is what that compensation failure looks like at scale.

The Argument

Organizational readiness is necessary. Infrastructure fitness is what makes it sufficient. The transition from cognitive to agentic AI is not a strategy problem with an infrastructure dimension — it is an infrastructure problem that manifests as strategy failure when the substrate goes unexamined.

The Infrastructure Imperative prologue laid out the cognitive-to-agentic transition as a data architecture problem, not a deployment problem. McKinsey’s number is what that mismatch looks like at scale. The next question — which platforms are built for this, and which are scaffolding dressed as substrate — is one we’re working through now.

The Infrastructure Gap — From Cognition to Agency

The prologue to the Infrastructure Imperative series establishes why the cognitive-to-agentic transition is an architectural problem before it is an organizational one.

Read the Prologue
