The Context Graph Reality Check  ·  Part 3 of 3
Enterprise AI  ·  Practitioner Response

What the a16z Framework Gets Right — And What It Assumes You Already Have

Jason Cui and Jennifer Li published the clearest investor-level diagnosis of enterprise AI context failure to date. The five-step framework is sound. The upstream condition it depends on — and doesn’t name — is Decision Intelligence.

March 2026  ·  Tom M. Gomez  ·  Luminity Digital  ·  14 min read

This is Part 3 of The Context Graph Reality Check. Part 1 established that metadata platforms and context graph platforms are architecturally distinct problems — and that no connector roadmap closes that gap. Part 2 addressed what has to be true about how an organization makes decisions before a context graph has anything meaningful to capture. This post turns to the investor-level conversation that has shaped how the entire category is being framed — and names the one thing it gets precisely right, and the one structural condition it assumes without naming.

On March 10, Jason Cui and Jennifer Li published a piece on the a16z blog titled Your Data Agents Need Context. It is the best investor-level diagnosis of enterprise AI context failure I have read. The thesis is direct and correct: AI agents deployed in the enterprise fail not because the models are weak but because the models are blind. They lack the business context that any competent human would carry into a conversation — the definitions, the exceptions, the tribal knowledge that accumulates inside an organization over years.

The five-step framework Cui and Li propose — make data accessible, build context via LLM, refine with human expertise, expose via API or MCP, maintain continuously — is a coherent and well-sequenced foundation. I am not writing this post to dispute it. I am writing it because the framework, precisely because it is right about the problem, reaches a conclusion that stops just short of naming the condition that makes the solution work.

That condition is Decision Intelligence. It is not a gap in the framework’s logic. It is a gap in the framework’s assumed starting conditions — and for most enterprises, those starting conditions do not currently exist.

The Five Steps, Read from a Practitioner’s Position

The framework deserves a precise reading before it is critiqued. Cui and Li anchor the argument in a concrete failure: an agent asked to answer “what was revenue growth last quarter?” cannot do so reliably because it does not know which table is authoritative, which definition of revenue applies, which fiscal quarter the question means, or which geographic perimeter is in scope. That failure is real. Every enterprise AI team working with data agents has encountered it.
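To make the ambiguity concrete, here is a minimal sketch of the context an agent would need resolved before that question is answerable at all. Every table name, definition, and calendar detail below is a hypothetical stand-in, not a claim about any real schema:

```python
# Hypothetical context for the metric "revenue" -- the four things the
# agent in Cui and Li's example is missing.
metric_context = {
    "metric": "revenue",
    "authoritative_table": "finance.rev_recognized_v3",  # not the lookalike tables
    "definition": "recognized revenue, net of refunds and FX adjustments",
    "fiscal_calendar": "FY ends Jan 31; 'last quarter' means fiscal Q3, not calendar Q3",
    "scope": "consolidated global, excluding the divested EMEA hardware unit",
}

def resolve(question: str, context: dict) -> str:
    """Without every key below, the agent guesses; with them, it can query."""
    required = ("authoritative_table", "definition", "fiscal_calendar", "scope")
    missing = [k for k in required if k not in context]
    if missing:
        return f"cannot answer reliably: missing {missing}"
    return f"SELECT ... FROM {context['authoritative_table']} ..."

print(resolve("what was revenue growth last quarter?", metric_context))
```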

The five steps address that failure systematically. Each one is correctly ordered relative to the others. The most important concession in the piece — the one that makes the argument honest rather than just optimistic — comes at Step 3, where Cui and Li acknowledge that refining context with human expertise is the hardest step. They write that the most important context is the kind that no one has ever written down: the implicit rules, the conditional exceptions, the tribal knowledge that lives in the heads of the people who have been doing the work for years.

That acknowledgment is the hinge. Step 3 is where the framework is most honest about what it requires. It is also the step where the framework’s assumed starting conditions come into sharpest relief.

Step 1 · Make your data accessible. Connect to where data lives. No argument here — data that cannot be reached cannot inform decisions. Verdict: sound.
Step 2 · Build context automatically via LLM. Extract semantic structure from existing data systems and organizational content. Efficient and necessary — but limited to what was documented somewhere first. Verdict: the first gap opens.
Step 3 · Refine with human expertise. Cui and Li call this the hardest step. They are right. This is also the step that treats human context as a curation input — and does not address the full scope of what human context actually is. Verdict: the hinge.
Step 4 · Expose via API or MCP. Context that cannot be queried by agents at runtime has no operational value. Agreed. But this is downstream of the problem. Verdict: sound.
Step 5 · Maintain continuously. Context decays. Agreed. But maintenance cannot close a gap that was never structured in the first place. Verdict: treats divergence as decay.

The Gap at Step 3: Human Expertise Is Not Human Decision Context

Step 3 frames human involvement as a curation exercise — experts verify, correct, and supplement what the LLM extracted from existing data and documentation. The assumption is that the most important context was documented somewhere, and human experts are the mechanism for catching what the extraction missed or got wrong.

That assumption is false for the specific context that matters most to AI agents operating in enterprise workflows.

The decisions that AI agents most need context about — the pricing exception that became an informal precedent, the compliance override granted on a call that was never minuted, the policy interpretation a VP communicated through Slack that then propagated informally through the team — were not documented. They were decided. The distinction is load-bearing. Documentation is a deliberate act of recording. A decision is the act itself. Most enterprise decisions produce no structured artifact at all.

LLM extraction recovers what was expressed. It cannot recover what was decided but never said, what was understood but not articulated, or what became precedent through repetition rather than declaration.

Those gaps are not a retrieval problem. They are an organizational design problem.

Step 3’s human refinement process addresses the expressed context that automated extraction misread or missed. It does not address the structural gap between what was expressed and what was decided. That gap is not closed by more careful human review. It is closed — if it is closed at all — by changing how decisions are made so that they produce structured artifacts in the first place.

This is the distinction the framework does not reach. It proposes a sophisticated pipeline for capturing and surfacing organizational context. It does not propose a discipline for ensuring that the most consequential organizational context exists in a form the pipeline can process.

Decay Versus Divergence — Why Step 5 Cannot Fix a Structural Absence

Step 5 is right that context decays. Data definitions drift. Team structures change. Policies are revised. The context graph built today is not the correct context graph for six months from now. Continuous maintenance is not optional.

But decay and divergence are different problems, and conflating them leads to a maintenance strategy that addresses one while leaving the other structurally untouched.

Decay — What Step 5 Addresses

Stale context. A definition that was correct becomes incorrect as the business changes. A table that was authoritative is superseded. A policy that was current is revised.

Decay is a maintenance problem. The context existed in structured form, became outdated, and needs to be refreshed. Step 5’s continuous maintenance pipeline addresses this correctly. Maintenance fixes this.

Divergence — What Step 5 Does Not Address

Missing context. The context graph believes the organization decided X. The organization actually decided Y — in a Slack approval, a verbal exception, a meeting with no minutes. The gap widens with every undocumented decision.

Divergence is a structural absence. The context was never captured because the decision process was never designed to produce structured artifacts. No maintenance pipeline refreshes what was never ingested. Closing it requires upstream discipline.

The organizations that will get the most from Step 5’s continuous maintenance are the ones that arrived at the context graph with structured, queryable decision artifacts accumulated from an upstream discipline. For those organizations, maintenance keeps a good graph current. For organizations that lack the upstream discipline, maintenance keeps a gap from growing while the divergence problem compounds beneath it.
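The asymmetry compresses into a few lines of code. A maintenance loop can re-validate and refresh the entries it holds; a decision that never produced an entry is invisible to it. The staleness rule and function names here are illustrative assumptions, not a description of any product’s pipeline:

```python
from datetime import datetime, timedelta

def still_valid(entry: dict) -> bool:
    # hypothetical staleness rule: entries re-verified within 90 days pass
    return datetime.now() - entry["verified_at"] < timedelta(days=90)

def refresh(entry: dict) -> dict:
    # hypothetical refresh: re-extract the definition, re-verify, restamp
    return {**entry, "verified_at": datetime.now()}

def maintain(graph: dict) -> None:
    # Decay: the entry exists but went stale -- maintenance fixes this.
    for key, entry in graph.items():
        if not still_valid(entry):
            graph[key] = refresh(entry)
    # Divergence never enters this loop: a decision made in Slack or on an
    # unminuted call produced no entry, so there is nothing here to
    # re-validate. Iterating over `graph` cannot recover what was never
    # ingested.
```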

The Upstream Condition the Framework Doesn’t Name

Parts 1 and 2 of this series have been building toward the same conclusion from different angles. Part 1 established that metadata platforms address the data asset context layer — schemas, lineage, semantic definitions — but have no mechanism for capturing decision context from the channels where human decisions actually happen. Part 2 introduced the empty graph problem: a context graph that ingests from an enterprise where decisions are not produced in structured form accumulates organizational noise, not organizational intelligence.

The a16z framework is, among other things, a description of how to build the capture layer. What it does not address is the upstream condition the capture layer depends on. That condition has a name in the framework Luminity has been developing with clients over the past several years: Decision Intelligence.

Decision Intelligence — Working Definition

Decision Intelligence is an organizational state, not a product category. It describes the condition in which decisions are architecturally designed to reach the right people with the right intelligence, systematically recorded as they happen, and connected to outcomes in a feedback loop that makes the organization structurally smarter with every cycle.

Without it, AI generates outputs. With it, AI improves outcomes — and the context graph has structured artifacts to persist, link, and query. The distinction between these two trajectories is not the quality of the capture infrastructure. It is whether the decisions being captured were made in a way that produces anything worth capturing.

Three upstream conditions have to hold before the a16z five-step framework delivers what it promises. First, the decisions that actually drive the business have to be named — inventoried by layer, frequency, and consequence — so it becomes possible to determine which decisions warrant architecture and which can remain informal. You cannot improve decisions you have not named, and you cannot capture context from decisions that were never acknowledged as decisions in the first place.

Second, the path that intelligence takes to each significant decision has to be designed — not just the data that feeds it, but the format, the timing, the accountability structure, and the record the decision produces. The a16z framework focuses on what the agent knows at inference time. The upstream discipline focuses on what the decision produces as an artifact — and that artifact is what the context graph ingests.

Third, every significant decision has to leave a mark: a structured record of what was decided, by whom, under what constraints, linked to the entities it affected, and updated as outcomes become observable. Not post-hoc documentation written for audit purposes. A record that exists because the decision process was designed to produce it.
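As a concrete illustration of what such an artifact might look like, here is one possible shape for a structured decision record. The field names are my assumptions for the sketch, not a Luminity specification; the point is that the record is produced at decision time, names an accountable owner, links to affected entities, and leaves room for outcomes to be attached later:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionRecord:
    # produced at decision time, not reconstructed for audit afterwards
    decision_id: str
    what_was_decided: str            # e.g. "approve 18% discount on Acme renewal"
    decided_by: str                  # the accountable owner, not just attendees
    decided_at: datetime
    constraints: list[str]           # e.g. ["one-time", "expires FY26 Q2"]
    affected_entities: list[str]     # e.g. ["customer:acme", "policy:discount-cap"]
    layer: str                       # strategic / operational / frontline, per the inventory
    outcome: str | None = None       # attached as results become observable
    supersedes: str | None = None    # linked when a later decision replaces this one
```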

Four maturity stages separate a reporting-oriented organization from one capable of compounding decision intelligence: Reporting → Analytics → Augmented → Intelligent. Most enterprises entering the context graph conversation are at stage one or two. The context graph is a stage-four capability. The upstream discipline is what moves an organization from stage two to stage four — and no vendor has an incentive to surface the gap, because it implicates work that cannot be sold as a software license.

One Addition to the Framework: Step 0

The five steps Cui and Li describe are the right foundation for context infrastructure. I want to propose a single addition — not a replacement, and not a critique of the framework’s logic. A prerequisite step that the framework’s assumed starting conditions depend on.

Step 0 · Establish the Decision Intelligence discipline. Inventory the decisions that matter. Design the path intelligence takes to each one. Ensure decisions produce structured artifacts at the moment they are made — linked to the entities they affect, connected to the outcomes they produce. Without this step, Steps 1 through 5 build a sophisticated capture infrastructure for organizational noise. With it, they build a context graph that compounds.
Step 1 · Make your data accessible. The data infrastructure foundation. Connects to the structured context the upstream discipline has already generated.
Step 2 · Build context automatically via LLM. Surfaces the expressed half of organizational context — what was documented. Step 0 ensures there is also a structured decision record to ingest.
Step 3 · Refine with human expertise. Human experts verify and extend what extraction surfaced. Their refinements are themselves decisions — and Step 0’s discipline captures them as structured artifacts rather than discarding them as session context.
Step 4 · Expose via API or MCP. A context graph built on Step 0 artifacts is queryable at the decision layer, not just the data definition layer. The agent can ask not just what “revenue” means but what exceptions have been approved and under which conditions — see the sketch after this list.
Step 5 · Maintain continuously. Step 0’s ongoing discipline keeps new decisions flowing into the graph as structured artifacts. Maintenance refreshes what exists. The upstream discipline ensures what needs to exist gets created.
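Here is the kind of decision-layer query Step 4 could then expose, whether as an MCP tool or a plain API endpoint. The store and field names are hypothetical; the point is that the question being answered is one a data-definition layer alone cannot answer:

```python
# `DECISIONS` stands in for the artifact store that Step 0 populates.
DECISIONS = [
    {"what_was_decided": "approved 18% discount, one-time, Acme renewal",
     "decided_by": "vp-sales",
     "constraints": ["expires FY26 Q2"],
     "affected_entities": ["metric:revenue", "policy:discount-cap"]},
]

def approved_exceptions(entity: str) -> list[dict]:
    """Answer: what exceptions touch this entity, and under what conditions?"""
    return [d for d in DECISIONS if entity in d["affected_entities"]]

# An agent reasoning about "revenue" can now see the exceptions that bear
# on it, not just the metric's definition.
print(approved_exceptions("metric:revenue"))
```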

Step 0 is organizational work, not software work. That is precisely why no vendor surfaces it. It implicates the design of how decisions are made — a domain that resists productization and requires the kind of practitioner engagement that takes longer and produces slower revenue than a platform license. It is also the work that determines whether every downstream investment delivers.

Where the Rippletide Critique Fits

Rippletide’s response to the same a16z piece — published this week — identified a different gap: agents that act rather than read require an enforcement layer that validates proposed actions against operative authority rules before execution. That critique is correct and addresses a real problem the a16z framework does not reach. The enforcement gap and the Decision Intelligence gap are not competing arguments. They address different failure modes at different layers of the same stack.

The Luminity argument is upstream of both. Before enforcement, before capture, there is the question of whether decisions are being made in a way that produces anything worth capturing or enforcing. An enforcement layer that validates agent actions against authority rules still depends on those rules being current, structured, and connected to the decisions that established them. A context layer that ingests structured decision artifacts still depends on an organizational discipline that produces those artifacts in the first place.
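One way to see how the two gaps connect, sketched under heavy assumptions (the rule shape and names are illustrative, not Rippletide’s or anyone’s actual design): an enforcement check can only validate against the authority rules it can see, and each rule is only as current as the decision record that established it.

```python
# Each rule carries the decision that established it. If that decision was
# never recorded upstream, the rule is invisible to this check.
AUTHORITY_RULES = {
    "discount": {"max_pct": 15, "established_by": "decision:2025-081"},
}

def validate(action: dict) -> tuple[bool, str]:
    """Check a proposed agent action against operative authority rules."""
    rule = AUTHORITY_RULES.get(action["type"])
    if rule is None:
        return False, "no operative authority rule on record; escalate to a human"
    if action["pct"] > rule["max_pct"]:
        return False, f"exceeds the cap set by {rule['established_by']}"
    return True, "within authority"

print(validate({"type": "discount", "pct": 18}))
```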

The Series Conclusion

Context layers describe what the organization knows. Decision Intelligence captures what the organization has decided. The a16z framework is an excellent guide to building the former. The upstream prerequisite for both is an organizational discipline that most enterprises have not yet built — and that no platform can substitute for. The value of context graph infrastructure accrues to organizations that do the upstream work first.

The Context Graph Reality Check — Complete

Three posts. The metadata platform architectural argument, the empty graph problem and its upstream conditions, and the investor framing extended to its logical conclusion. The next conversation is about what building the upstream discipline actually looks like in practice.

The Context Graph Reality Check  ·  Three-Part Series
Part 1 · Published · Metadata Platforms Are Not Context Graph Platforms
Part 2 · Published · What Context Graphs Actually Require Upstream
Part 3 · Now Reading · What the a16z Framework Gets Right — And What It Assumes You Already Have
