Guided Determinism Comes Up Short  ·  Post 2 of 3
Enterprise AI Infrastructure

What the Protocol Doesn’t Carry

MCP is a transport standard. Transport confidence — tool calls completing, trust rules inherited, data flowing — is a real and valuable condition. It is not substrate fitness. These are different things at different architectural depths, and the distance between them is where most deployments will fail.

April 2026 · Tom M. Gomez · 9 min read

In Post 1 of this series, we established that Guided Determinism — Salesforce’s term for the capability to define fixed handoff rules while LLMs handle the reasoning in between — governs downstream of context reception. It assumes a substrate condition it cannot verify. This post names that condition precisely, defines the term that separates it from connectivity, and shows why the most sophisticated data platform available to enterprise architects carries a specific architectural conflict the protocol cannot resolve.

Model Context Protocol is an open standard developed by Anthropic for connecting data sources and tools to AI models. Its design is architecturally honest: MCP defines how context flows into an agent’s reasoning environment. It does not define what makes that context decision-grade. This is not a limitation of MCP — it is precisely what a transport standard should be. Anthropic has never claimed otherwise. The confusion belongs entirely to the platforms and deployment programs that treat MCP connectivity as a proxy for substrate readiness.

MCP’s adoption has been rapid and legitimate. As an open standard, it has become the connectivity layer of choice across enterprise AI tooling — from Salesforce’s Agent Fabric to Anthropic’s own Claude integrations to hundreds of third-party server implementations. When a platform claims MCP integration, what it is claiming is accurate: the transport layer works. Tool calls complete. Trust rules propagate. Data flows from source to context window. That is a real architectural contribution and a genuine precondition for agentic deployment. It is also the point where most deployment programs stop asking questions.

Naming the Condition That Connectivity Confirms

There is a specific condition that MCP connectivity verifies, and it deserves a name. Transport confidence is the state an enterprise reaches when the integration architecture is working: tool calls are completing, authentication is functioning, trust rules are being inherited, data is flowing from connected sources into the agent’s context. Transport confidence is necessary. It is not sufficient. And the gap between transport confidence and substrate fitness is where most deployments encounter the conditions Gartner is counting in its 60% failure projection.

Luminity Digital — Working Definition

Transport Confidence

The state an enterprise reaches when its MCP integration architecture is operational: tool calls completing, authentication functioning, trust rules propagating, data flowing from connected sources into the agent’s reasoning environment. Transport confidence confirms the plumbing works. It does not confirm what the plumbing is carrying — whether the data delivered is current, logically consistent, structurally fit for machine reasoning, or decision-bound at the moment of consumption.

Transport confidence is a necessary precondition for agentic deployment. It is not a sufficient one. The gap between transport confidence and substrate fitness is the architectural distance this post is built around.

The distinction matters because every enterprise that has deployed MCP connectivity and declared itself agentic-ready has confirmed transport confidence. The audit trail is clean. The integration is operational. The governance layer is configured. None of that verification reaches the substrate condition beneath it. An agent operating with full transport confidence can still reason from data that is stale relative to the decision it must make, logically inconsistent across retrieval points, missing the operational state that would make an action binding, or structured for how a human analyst navigates a report rather than how a machine must reason to a conclusion.

Transport Confidence: What MCP Connectivity Verifies

What the integration architecture confirms when it is working correctly.

  • Tool calls are completing successfully
  • Authentication and trust rules are propagating
  • Data is flowing from connected sources
  • The agent’s context window is receiving input
  • Access controls are being enforced at the transport layer

Necessary — not sufficient.

Substrate Fitness: What Substrate Fitness Verifies

What the data architecture must satisfy before the agent reasons over it.

  • Retrieved data reflects current operational state
  • Context is logically consistent across retrieval points
  • Operational state is first-class and transactionally bound
  • Data structures support machine reasoning, not human navigation
  • Context is decision-bound at consumption time

Prerequisite — not verified by MCP.

The Snowflake Case

Snowflake is the appropriate structural example here — not because it is a weak platform, but because it is an exceptional one. As a cognitive substrate — built for analytical workloads, governed data sharing, high-performance query execution across petabyte-scale data — Snowflake represents the current ceiling of what enterprise data architecture has achieved. The enterprise has invested heavily to get here. The data is accessible. The governance is mature. The query performance is exceptional. And when Snowflake is connected via MCP, transport confidence will be achieved. The integration will work.

What Snowflake was not designed to be is an agentic substrate. The architectural conflict is specific and structural, not a matter of configuration or version. Operational state — the current status of a contract, the live inventory count, the real-time position of a pending transaction — is not first-class and transactionally bound to data access in an analytical platform. Snowflake’s architecture optimizes for query-time computation over stored data. That is precisely the right design for analytical workloads, and precisely the wrong design for the consumption model agentic reasoning requires: data that is atomically retrievable and decision-bound at the moment the agent acts on it.

The Architectural Conflict — Precisely Stated

Snowflake’s query model separates the state of the data from the act of querying it. For analytics, this is correct: you query historical data, aggregate it, derive insight. For agentic reasoning, this creates a gap: the agent retrieves a representation of state at query time, which may not reflect operational reality at action time. Connecting Snowflake via MCP does not change this structural condition. It surfaces it — confidently, at inference speed, at the moment the agent needs to act. The gap is not in the transport. The gap is in what the transport carries.
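The query-time/action-time gap can be made concrete with a small sketch. Assume a hypothetical operational store whose every read carries a version token, so an agent can make its action conditional on the state it reasoned over still being current. An analytical snapshot carries no such token, which is why the same guard cannot be expressed against it. All names here (`OperationalStore`, `act_if_current`) are illustrative inventions, not part of MCP or of Snowflake’s API.

```python
from dataclasses import dataclass


@dataclass
class Snapshot:
    value: dict      # the state the agent reasoned over at query time
    version: int     # token binding the snapshot to a point in operational state


class OperationalStore:
    """Toy store where every read carries a version token (illustrative only)."""

    def __init__(self, value: dict):
        self._value = value
        self._version = 0

    def read(self) -> Snapshot:
        return Snapshot(dict(self._value), self._version)

    def write(self, value: dict) -> None:
        self._value = value
        self._version += 1

    def act_if_current(self, snap: Snapshot, action) -> bool:
        # Compare-and-set guard: the action is decision-bound to the snapshot
        # the agent reasoned over. A stale snapshot is rejected instead of
        # being executed confidently at inference speed.
        if snap.version != self._version:
            return False
        action(self._value)
        return True


store = OperationalStore({"inventory": 5})
snap = store.read()                       # agent retrieves state at query time
store.write({"inventory": 0})             # operational reality moves on
ok = store.act_if_current(snap, lambda s: None)
print(ok)  # False: the guard catches the query-time/action-time gap
```

The point of the sketch is the `version` field: an analytical query result has no equivalent token to check at action time, so staleness is invisible rather than rejected.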

This is why the Substrate or Scaffolding series draws a precise distinction between gaps that are extensible and gaps that are architectural conflicts. Some conditions can be layered in — a cache layer, a streaming feed, a purpose-built operational store that feeds into the analytical platform. Those are engineering problems with known solutions. Other conditions require the substrate to become something it was never designed to be. That is an architectural conflict, and no amount of MCP connectivity resolves it. The one-sentence verdict from that series applies directly here: a sophisticated cognitive substrate is insufficient as an agentic substrate.

The Five Tests Transport Confidence Cannot Run

The Substrate Fitness Criteria developed across the Substrate or Scaffolding series provide five specific architectural tests that determine whether a data substrate is decision-grade under agentic AI. Each criterion names a condition that transport confidence cannot verify. Each represents a gap that MCP connectivity will surface rather than resolve.

Criterion 1 — Discoverability. Can the agent locate what it needs without human-mediated search logic? A substrate optimized for human analysts — with UI-layer metadata, search tools designed for navigation, and catalog structures that assume a human will interpret ambiguity — is not the same as a substrate where data is atomically locatable by a machine reasoning agent. Transport confidence says the query returned results. Discoverability says whether those results were the right ones for the agent’s decision context.

Criterion 2 — Permissioning. Are access controls operationally embedded but logically separate from the data itself, such that an agent’s permission scope can be verified and bounded without restructuring the underlying architecture? MCP inherits trust rules from the connected system. What it cannot verify is whether the permission model of that system is fit for per-session, per-action capability scoping — the access model agentic reasoning requires.
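The per-session, per-action scoping the criterion asks for can be sketched minimally: a session may only narrow the system’s standing grants, never widen them, and each action is checked against the narrowed set. The class and field names below are hypothetical, chosen for illustration, and are not MCP constructs.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Capability:
    resource: str
    action: str        # e.g. "read", "write"


@dataclass
class SessionScope:
    """Per-session capability set, narrowed from the system's standing grants."""

    granted: frozenset = field(default_factory=frozenset)

    def narrowed(self, *caps: Capability) -> "SessionScope":
        # An agent session may only shrink its scope, never widen it:
        # intersection with the requested capabilities, not union.
        return SessionScope(self.granted & frozenset(caps))

    def permits(self, cap: Capability) -> bool:
        return cap in self.granted


standing = SessionScope(frozenset({
    Capability("contracts", "read"),
    Capability("contracts", "write"),
}))
session = standing.narrowed(Capability("contracts", "read"))
print(session.permits(Capability("contracts", "write")))  # False: write was scoped out
```

A permission model that cannot express this narrowing operation is the condition the criterion flags, and it is invisible to transport-layer verification.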

Criterion 3 — Contextual Richness. Is the data atomically retrievable and decision-bound at consumption time? This is the Snowflake conflict in its sharpest form. A query that returns a snapshot of state is not the same as data that is decision-bound — carrying with it the operational context that makes an action based on that data binding and current. Transport confidence confirms the snapshot arrived. Contextual richness determines whether the snapshot was decision-grade when it did.

Criterion 4 — Action-Orientation. Is operational state first-class and transactionally bound to data access? An agent that must recommend or execute an action — not report on past state — needs data architecture where the operational condition being acted on is inseparable from the access event. Analytical platforms separate these by design. That design choice is architecturally correct for analytics. It is an architectural conflict for agentic action.

Criterion 5 — Provenance. Can the agent verify where data came from, when it was last authoritative, and whether it has been superseded? Provenance is the substrate condition that makes audit trails meaningful. Transport confidence confirms what was retrieved. Provenance determines whether what was retrieved was the authoritative version at the moment the agent acted on it.
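The five criteria above can be sketched as a preflight checklist that runs before any MCP server is configured. The criterion names come from this series; the predicate bodies are placeholder stand-ins for the real architectural assessments an architect would supply.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Criterion:
    name: str
    question: str
    check: Callable[[dict], bool]   # predicate supplied per substrate (placeholder here)


# The five Substrate Fitness Criteria; the lambda bodies are illustrative only.
CRITERIA = [
    Criterion("Discoverability",
              "Can the agent locate what it needs without human-mediated search logic?",
              lambda s: s.get("machine_locatable", False)),
    Criterion("Permissioning",
              "Can permission scope be verified per session and per action?",
              lambda s: s.get("scoped_permissions", False)),
    Criterion("Contextual Richness",
              "Is data atomically retrievable and decision-bound at consumption time?",
              lambda s: s.get("decision_bound", False)),
    Criterion("Action-Orientation",
              "Is operational state first-class and transactionally bound to access?",
              lambda s: s.get("transactional_state", False)),
    Criterion("Provenance",
              "Can the agent verify origin, authority, and supersession?",
              lambda s: s.get("provenance_tracked", False)),
]


def substrate_fitness(substrate: dict) -> Dict[str, bool]:
    """Run all five criteria. Transport confidence is deliberately not among them."""
    return {c.name: c.check(substrate) for c in CRITERIA}


# An analytical platform connected over MCP: the transport works, but the
# substrate conditions the protocol cannot verify remain unmet.
analytical = {"machine_locatable": True, "scoped_permissions": True}
report = substrate_fitness(analytical)
print(all(report.values()))  # False: transport confidence alone does not pass the gate
```

The structural point is in what the function does not check: nothing about tool calls, authentication, or data flow appears here, because those are transport-layer facts that a fitness gate takes as already established.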


Consider the gap between the 91% of organizations that acknowledge that a reliable data foundation is essential for AI success and the 55% that believe they actually have one. That perception gap is not a technology-awareness problem. It is what transport confidence looks like from the inside: the integration is working, the governance is configured, the data is flowing — and the substrate condition beneath it has not been verified against any of the five criteria above.

What the Protocol Was Designed to Do

This argument should not be read as a critique of MCP. It is the opposite. MCP is architecturally honest: it solves the transport problem it was designed to solve, at the layer it was designed to operate. Anthropic’s decision to release it as an open standard, rather than a proprietary integration layer, is the correct architectural choice — it enables interoperability across the enterprise AI stack without creating vendor dependency at the transport layer. The Great Compression series documented how model providers have systematically absorbed every middleware function between foundation models and enterprise workloads. MCP, as an open standard, is a structural counterweight to that dynamic: it allows enterprises to connect substrate to model without ceding the connectivity layer to any single provider.

What MCP cannot do — by design and by honest scope — is determine whether the substrate being connected is fit for the reasoning it will be asked to support. That determination requires a different kind of assessment, operating at a different architectural depth. It requires asking, before any MCP server is configured, whether the data architecture being connected passes the five fitness criteria. That question is not in any current deployment program. It is not a transport question. It belongs in the assessment that runs before the transport layer is configured — and it is the test the Substrate Inventory Checklist was built to run.

Transport confidence confirms the plumbing works. Substrate fitness confirms what the plumbing is carrying. These are different tests. Only one of them is currently being run.

Post 2 Argument

MCP is a transport standard. Transport confidence — the state where integration is working and data is flowing — is necessary and not sufficient. The five Substrate Fitness Criteria are the tests transport confidence cannot run. Every enterprise that has deployed MCP connectivity and declared agentic readiness has confirmed the former. Almost none has verified the latter.

Transport Confidence Is Not the Gate. Substrate Fitness Is.

The Luminity Substrate Inventory Checklist assesses your data architecture against the five fitness criteria before MCP connectivity is configured. It is the test that belongs in every deployment plan and currently appears in none.

Schedule a Substrate Assessment
