
Earning the AI Build

From basecamp to higher-altitude enterprise AI. A Luminity Digital perspective on product-model discipline in the age of the enterprise platform — where most AI initiatives stall, and what the climb above that altitude actually requires.

Practitioner's Essay · April 2026 · Tom M. Gomez · Luminity Digital · 6 min read
This essay sits alongside The Infrastructure Imperative, the Great Compression dispatches, and the Substrate or Scaffolding series as a practitioner’s bearing. Where those works diagnose structural conditions in the enterprise AI stack, this one names the operating discipline required to work inside those conditions — and the altitude at which that discipline begins to matter.

Marty Cagan’s April 16, 2026 piece — Build to Learn vs Build to Earn — restates a distinction that enterprise AI has, at scale, stopped observing. There are two ways to develop a product. One is about output: roadmap, spec, design, build. The other is about outcomes: frame the problem, discover a solution with evidence, then deliver it. Cagan’s shorthand, borrowed from engineering culture: discovery is where you build to learn; delivery is where you build to earn.

Read against what enterprises are actually doing with AI right now, this distinction is not a productivity tip. It is the diagnostic. The enterprises stalled at basecamp are running project-model AI programs — output-focused, feature-driven, usage-measured. The enterprises climbing above it are running product-model AI programs — outcome-focused, discovery-earned, decision-tied. The distance between those two modes is not technical. It is measured in altitude.

The Data, Triangulated

The most-cited figure of the year comes from MIT Project NANDA’s State of AI in Business 2025. Only five percent of enterprise GenAI pilots reach production with measurable P&L impact. Tens of billions of dollars of enterprise spending are producing no measurable return for roughly ninety-five percent of organizations. The methodology has been scrutinized, and the precise number deserves appropriate care. The broader pattern is corroborated.

BCG’s Build for the Future 2025 study of 1,250 organizations reports that sixty percent generate no material AI value, and only five percent — the cohort BCG calls “future-built” — create substantial value at scale. McKinsey’s State of AI in 2025, surveying nearly 2,000 organizations across 105 countries, finds that eighty-eight percent of companies use AI in at least one function, yet only thirty-nine percent see EBIT impact, most of it under five percent. Roughly two-thirds of respondents remain in experimentation or piloting.

Three studies, three methodologies, one pattern. Enterprise AI adoption is broad. Enterprise AI value capture is narrow.

MIT’s Diagnosis — Read Against the Frame

The failure mode is not models. It is not infrastructure. It is not talent. MIT’s authors call it the learning gap — the fact that deployed systems do not retain context, adapt to workflow, or integrate into how work actually gets done. The pilots run. They demo well. They do not compound.

The Basecamp Problem

The altitude metaphor fits this moment better than any other. At basecamp, everyone looks prepared. The gear is unpacked. The team photo is taken. Pilots are running. Token dashboards are filling up. A chief AI officer has been named. The press release has gone out.

And then the climb starts. Most teams turn back at basecamp because the endurance required above it is architectural, organizational, and cultural — not technical. You cannot buy your way out of the learning gap. You cannot spec your way past the operating-model gap. You cannot project-manage your way through probabilistic systems that behave differently in production than they did in the demo. The cognition-to-agency gap is the structural form of this problem. The altitude metaphor is its operational form.

BCG’s recent Enterprise as Code thesis makes the same point from a different angle: enterprises have to move from intuition to specification, capturing operating logic in forms both people and systems can reason about. Governance and controls have to live inside that logic from day one, not arrive later as afterthoughts. This is discovery-grade work. It does not happen on a sprint schedule.

Earning the Build

Cagan’s frame becomes more powerful when applied recursively. Before we build to earn with AI, we build to learn with AI.

That means using Claude, and models like it, first as instruments of discovery — to interrogate the problem, stress-test hypotheses, surface architectural gaps, probe failure modes — and only then, if the evidence holds, committing AI to the delivery path. This is not a framework. It is a gate.

AI does not earn its place in the build by being available. It earns its place by surviving the discovery that precedes it.

Luminity Digital — The Practitioner’s Principle

We call this earning the build. It is how Luminity Digital operates. It is the principle we think separates enterprises climbing above basecamp from those still unpacking their gear at the trailhead.

Four Principles of Altitude

Our public method at luminitydigital.com begins with Decision Architecture — inventory the decisions that drive the business, design the information flows that serve those decisions, and map AI capabilities to specific decision points rather than to generic use cases. That discipline is visible on our site. It already operates above the line where most enterprise AI consulting sits. The altitude does not end there. Four architectural principles shape the next climb, described here as principles rather than artifacts — the artifacts belong in a different conversation.
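
The artifacts belong in a different conversation, but a toy sketch can still fix the shape of the first step. Here is a minimal illustration in Python of what a decision inventory entry holds; every name in it (DecisionPoint, Cadence, the sample record) is invented for this essay, not drawn from any engagement.

```python
from dataclasses import dataclass, field
from enum import Enum


class Cadence(Enum):
    REAL_TIME = "real-time"
    DAILY = "daily"
    QUARTERLY = "quarterly"


@dataclass
class DecisionPoint:
    """One entry in a decision inventory: a business decision, not a generic use case."""
    name: str                 # the decision itself, e.g. "approve supplier credit limit"
    owner: str                # the accountable role, never a system
    cadence: Cadence          # how often the decision is actually made
    inputs: list[str] = field(default_factory=list)  # information flows serving the decision
    ai_capability: str | None = None  # mapped only where AI has earned the decision point


inventory = [
    DecisionPoint(
        name="approve supplier credit limit",
        owner="Head of Procurement",
        cadence=Cadence.DAILY,
        inputs=["payment history", "current exposure report"],
        ai_capability="risk-scoring assistant",
    ),
]
```

The structure is the argument: a decision, an accountable owner, the flows that serve it, and an AI capability mapped only where one has earned the mapping.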

Substrate over stack. The durable capability is not a collection of features bolted to an LLM. It is a composable substrate of discovery-grade capabilities that can be recombined as model providers, frameworks, and protocols evolve. Stack thinking locks the enterprise into what exists today. Substrate thinking prepares it for what exists in eighteen months.

Loose coupling as an architectural commitment. The model layer will shift. The framework landscape will consolidate and fragment. Protocols — MCP, A2A, and their successors — will standardize unevenly. Enterprises that tightly couple to any single element of this stack are building fragility into the foundation. Loose coupling is the structural answer to what we have described elsewhere as the Great Compression — the ongoing absorption of middleware and vertical SaaS layers into the platform substrate. You cannot architect against that pressure with a monolith.
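
In code, loose coupling is a seam. The sketch below, with invented names (ModelProvider, VendorAdapter, summarize), shows the single model-layer surface the rest of a substrate should be allowed to see; it is an illustration of the principle, not a prescription.

```python
from typing import Protocol


class ModelProvider(Protocol):
    """The only model-layer surface the rest of the substrate may depend on."""
    def complete(self, prompt: str, *, max_tokens: int) -> str: ...


class VendorAdapter:
    """One adapter behind the seam; swapping model vendors touches only this class."""
    def complete(self, prompt: str, *, max_tokens: int) -> str:
        # The vendor SDK call lives here and nowhere else.
        raise NotImplementedError("wire the vendor SDK of the moment here")


def summarize(provider: ModelProvider, document: str) -> str:
    # Business logic depends on the protocol, never on a concrete vendor class.
    return provider.complete("Summarize:\n" + document, max_tokens=512)
```

When the model layer shifts, the adapter changes and the substrate does not. That is the whole commitment.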

Decision Architecture is the anchor, not the ceiling. The Decision Inventory is where the method starts. What happens after a decision has been architected, executed, and traced — how that trace becomes organizational intelligence that compounds over time — is where the real enterprise moat lives. The Substrate or Scaffolding series traces what decision-grade substrate actually requires. BCG’s recent framing of a shared AI platform with “freedom within a frame” — data connectors, orchestration, model management, tiered autonomy — describes the external shape of this altitude. The interior work is harder, more proprietary, and more interesting.

Security designed in, not bolted on. Once Decision Architecture has evolved and the substrate has been shaped, the security wrapper closes around the build — at design, before the first prototype, not after. Adversarial conditions do not arrive in production; they were always in the specification, waiting to be modeled. The protocols now extending enterprise AI into production environments open attack surfaces faster than governance practices can follow, and the retrofitted response — patch, harden, monitor — is a basecamp posture. The altitude posture is to fence the perimeter during design, with threat models anchored in the OWASP Top 10 for LLMs, the NIST AI Risk Management Framework, and MITRE ATLAS, so that what gets built is adversarially conditioned from the first line.
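
One way to picture the design-time posture: the threat model exists as a typed record before any prototype does. In the sketch below, only the framework references (the OWASP Top 10 for LLMs) are real; the surfaces and controls are invented for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ThreatEntry:
    """A design-time threat-model row, written before the first prototype exists."""
    surface: str   # where the adversary meets the system
    threat: str    # the threat, anchored to a public framework where possible
    control: str   # the control specified at design time, not retrofitted


threat_model = [
    ThreatEntry(
        surface="user-supplied documents entering the context window",
        threat="prompt injection (OWASP LLM01)",
        control="input isolation plus a tool-call allowlist",
    ),
    ThreatEntry(
        surface="agent-initiated tool calls into production systems",
        threat="excessive agency (OWASP Top 10 for LLMs)",
        control="tiered autonomy with human approval above the lowest tier",
    ),
]
```

The table is small; the posture is the point. Every row predates the build it constrains.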

The Climb

Discovery-grade AI use, in practice, follows a sequence we apply to every substantive engagement.

One — investigate. We interrogate the problem with Claude as a thinking partner: hypotheses, reference architectures, edge cases, failure modes. This is deliberate, probing, question-led work. We are not prompting for output. We are stress-testing the problem frame.

Two — surface the gaps. Some are gaps in the model’s knowledge. Some are gaps in the problem’s formulation. Some are gaps only a practitioner with field experience notices — the kind that do not show up in an eval harness because they live in how enterprises actually operate, not in how they are modeled.

Three — validate the gaps. Is this real, or is it an artifact of framing? Evidence first, not vibe first.

Four — apply practitioner discipline. This is where enterprise architecture reasserts itself. We ask the practitioner’s question: what best practices, patterns, and architectural commitments fill this gap? The gap dictates the practice, not the other way around. The Harness Imperative is one expression of this — what agents actually need around them, versus what frameworks ship with.

Five — decide whether AI has earned the build. Only now, and only if the evidence supports it, do we commit AI to the delivery path: tied to a specific outcome, under specific guardrails, with specific observability, at a specific decision point. A sketch of that gate follows below.

The sequence is repeatable. It is also slow in ways basecamp teams find uncomfortable. That is the point. The oxygen is thinner up here, and the decisions made at altitude compound for years.
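
The gate in step five can be written down. A minimal sketch, with invented field names, of what earning the build means as a record rather than a slogan:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class BuildGate:
    """Step five as a record: AI enters the delivery path only if every field is concrete."""
    outcome: str                                         # the specific outcome the build is tied to
    guardrails: list[str] = field(default_factory=list)  # the specific controls it runs under
    observability: str = ""                              # how its behavior is traced in production
    decision_point: str = ""                             # the inventory entry it serves


def earned(gate: BuildGate) -> bool:
    # The gate fails closed: any empty commitment sends the work back to discovery.
    return all([gate.outcome, gate.guardrails, gate.observability, gate.decision_point])
```

Nothing about the record is sophisticated. What is sophisticated is refusing to build while a field is empty.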

The Governance Seat

Governance is not security. Security is adversarial conditioning — threat models, controls, guardrails, the discipline that fences the perimeter before the first prototype. Governance is the layer above it: the decision rights, accountability, and risk posture that determine who decides what AI can do, who is accountable when it does, what policies apply, how risk is accepted, and how regulatory obligation is met. These two disciplines intertwine constantly, but they are not the same, and collapsing them together is one of the ways enterprise AI programs become illegible to the boards that are supposed to govern them.

Governance sits with the client. The board, the executive committee, the Chief Risk Officer, the General Counsel, the Chief Compliance Officer. Luminity does not own that seat and does not pretend to. Enterprise AI governance is not something a consultancy can deliver; it is something an enterprise lives. What Luminity provides is the guide — translating board-level risk posture into architectural reality, making traceability a structural output of the Decision Architecture rather than a reporting exercise grafted on afterward, and ensuring that the decisions architected into the Decision Inventory are the decisions the governance function actually needs visibility into. Governance, auditability, and human oversight — from day one, not bolted on later.
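
What traceability as a structural output can look like, in a deliberately small sketch with invented names: the trace is a record emitted by the decision path itself, so the governance function reads what actually happened rather than a report assembled after the fact.

```python
import json
import time
from dataclasses import asdict, dataclass


@dataclass(frozen=True)
class DecisionTrace:
    """Emitted by the decision path itself whenever an architected decision executes."""
    decision_point: str   # which inventory entry fired
    actor: str            # human, agent, or both
    inputs_digest: str    # a hash of the information flows consulted
    outcome: str          # what was decided
    timestamp: float      # when it was decided


def emit(trace: DecisionTrace) -> str:
    # Structural output: governance consumes the trace as produced by the system,
    # not as reconstructed afterward for a reporting cycle.
    return json.dumps(asdict(trace))


print(emit(DecisionTrace(
    decision_point="approve supplier credit limit",
    actor="risk-scoring assistant + Head of Procurement",
    inputs_digest="sha256:0000",  # placeholder digest for the sketch
    outcome="approved at reduced limit",
    timestamp=time.time(),
)))
```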

Governance, security, and assurance converge at the emerging class of agent-specific standards. ISO/IEC 42001 validates the AI management system at the organizational baseline. The NIST AI Risk Management Framework structures operational practice. The EU AI Act defines the regulatory floor in one jurisdiction. And AIUC-1 — the first agent-specific, technically testable, insurance-enabling certification, grounded in quarterly adversarial evaluation — is where architectural work, governance commitments, and adversarial discipline become legible to the enterprise, the board, and the market. We have written elsewhere about what the Q2 2026 AIUC-1 update gets structurally right. Guiding clients from ISO 42001 baseline through AIUC-1 attestation is not a specialty service. It is the default shape of enterprise AI advisory work at altitude.

Where the Firm Is Headed

Luminity Digital is building for this altitude deliberately. We bring more than twenty-five years of field-tested experience across SaaS, cloud, and enterprise security to the work of composing reliable enterprise AI stacks — and we are building the firm in a shape that fits the substrate providers whose research discipline actually earns enterprise trust. The enterprises climbing past basecamp will need partners who have done this altitude before. We intend to be one of them.

MIT’s own data makes the partnership case in the language of evidence. Enterprises that partner with specialized vendors succeed roughly twice as often as those attempting to build AI capability internally. The reason is not mysterious. Partnerships concentrate the discovery-grade discipline. They import altitude experience. They keep the climb honest.

The Essay’s Closing Move

Earning the AI build is how we got here. Higher altitude is where the work now lives.

Where the Work Continues

The Infrastructure Imperative, the Substrate or Scaffolding series, and the Great Compression dispatches together develop the structural and practical vocabulary this essay draws on. Start at the prologue of any of them — the corpus is designed to be entered from more than one direction.

