
When “What” Meets “Why”: A Question Metamorphic Metadata Left Me Sitting With

Ramana Gorantla’s recent article on Metamorphic Metadata makes a compelling case for what AI needs to trust enterprise data. It surfaces a question worth sitting with — one that points toward the next layer of the problem.

March 2026
Tom M. Gomez
Luminity Digital
5 Min Read

Ramana Gorantla’s recent piece on Metamorphic Metadata, published on LinkedIn, stopped me mid-scroll. If you haven’t read it, the core argument is this: most enterprise AI fails not because the models are weak, but because the data foundation underneath them is. Static metadata — the dusty data dictionary that nobody updates — gives an LLM no reliable basis for knowing what question it’s actually answering.

When you point a Large Language Model at raw enterprise data without context, you do not get speed and intelligence. You get confident garbage. The fix Gorantla proposes is what he calls Metamorphic Metadata — a third generation of enterprise metadata that doesn’t just describe data, but dynamically transforms it at query time based on user intent, security policy, and organizational context.

Gorantla’s Core Argument

Metamorphic Metadata goes beyond what Gartner named Active Metadata in 2021. Where active metadata continuously generates and propagates context across the data stack, Metamorphic Metadata transforms data at query time — generating optimized execution plans, creating ephemeral role-appropriate views, enforcing row-level security masking on the fly. The practical result is the elimination of what he calls “metric drift”: the silent divergence where Sales reports revenue as $1M and Finance reports $1.2M for the same period, not because either team is wrong, but because no shared executable definition exists.
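To make the “shared executable definition” idea concrete, here is a minimal sketch of a metric contract registry in Python. The names (`MetricContract`, `resolve`, the SQL text) are my own illustrations, not Gorantla’s schema; the point is only that when every team compiles its query from one governed definition, metric drift has nowhere to hide.

```python
# Hedged sketch: a single governed, executable metric definition.
# All names here are illustrative, not taken from Gorantla's article.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricContract:
    name: str
    definition_sql: str  # the one executable definition shared by all teams
    owner: str

REGISTRY = {
    "revenue": MetricContract(
        name="revenue",
        definition_sql=(
            "SELECT SUM(amount) FROM payments "
            "WHERE status = 'recognized' AND period = :period"
        ),
        owner="Finance",
    ),
}

def resolve(metric: str) -> str:
    """Sales and Finance both compile from the same contract,
    so their numbers cannot silently diverge."""
    return REGISTRY[metric].definition_sql
```

Whether the contract lives in a semantic layer, a metrics store, or the Metamorphic layer itself, the design point is the same: the definition is executable and shared, not a paragraph in a wiki.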

Ramana Gorantla — “Why Your AI Needs Better Foundations: Metamorphic Metadata, Semantic Layers, and MCP” (Feb 2026)

It’s a compelling architecture. And it surfaced a question I haven’t been able to shake.

Where the Question Opens Up

Gorantla frames the evolution cleanly across three generations of metadata management and maps out the full semantic stack required to get AI to a point of trustworthy retrieval rather than probabilistic inference.

What Metamorphic Metadata Solves (the Structured Layer)

What a number means: knowing that revenue is defined as recognized cash under the Finance definition, enforced at query time, with the right user permissions applied, so every department receives a consistent, governed answer.

What Remains Open (the Context Layer)

Why a decision was made: knowing why the organization chose that definition for a particular contract, why an exception was approved for a specific client, or why the threshold was moved in Q3. That reasoning doesn’t live in the data layer.
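The “enforced at query time, with the right user permissions applied” part of the structured story can be sketched in a few lines. This is a hedged toy, not Gorantla’s implementation: the role names and the masking rule are assumptions, and a real system would rewrite the query plan rather than filter rows in Python. But it shows the shape of an ephemeral, role-appropriate view.

```python
# Hedged sketch of query-time, role-aware masking. Roles, fields, and
# the masking rule are illustrative assumptions for this post only.
def apply_row_policy(rows, user_role):
    """Return an ephemeral, role-appropriate view of the result set.
    Nothing changes at rest; the mask exists only for this query."""
    if user_role == "finance":
        return rows  # the owning team sees full fidelity
    # everyone else sees masked account identifiers
    return [{**row, "account_id": "***"} for row in rows]

rows = [{"account_id": "A-1001", "amount": 1_200_000}]
finance_view = apply_row_policy(rows, "finance")
sales_view = apply_row_policy(rows, "sales")
```

Both views answer the same governed question; only the fidelity differs by policy.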

Those two things are related but distinct. Gorantla’s architecture handles the structured world cleanly. The metrics layer, the ontology layer, the knowledge graph connecting structured data to documents and emails — these are the right building blocks. But the knowledge graph component raises the question rather than answers it: what actually gets captured there, and how?

Most organizational decisions still happen through human communication. The governed metric is the output. The conversation that produced it — the tradeoffs, the context, the reasoning — is rarely structured, rarely captured, and almost never queryable. The Metamorphic Metadata layer can enforce what “revenue” means at query time. It can’t reconstruct the meeting where that definition was contested, revised, and ultimately agreed.

The “what” problem is hard enough. Solving it cleanly will make the “why” problem more visible, not less.

— Tom M. Gomez, Luminity Digital — March 2026

What I’m Left Wondering

Gorantla builds toward a “5-Part Metadata Contract” that organizations need to establish — metric contracts, entity graphs, behavioral signals, quality and lineage, and policy. It’s a rigorous framework for making AI trustworthy at the data layer. The piece ends with a clear call to action: start with the foundation, not the feature.
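To make the five parts easier to hold in mind, here is a config-style skeleton. The field names and example values are my guesses at what each part might contain, not Gorantla’s schema; treat it as a mnemonic, not a spec.

```python
# Hedged sketch: the 5-Part Metadata Contract as a config skeleton.
# Keys follow Gorantla's five parts; the values are invented examples.
metadata_contract = {
    "metric_contracts": {"revenue": "recognized cash, Finance definition"},
    "entity_graph": {"customer": ["orders", "invoices", "support_tickets"]},
    "behavioral_signals": {"revenue": "most-queried metric across 3 teams"},
    "quality_and_lineage": {"revenue": "source: billing_db, refreshed daily"},
    "policy": {"revenue": "row-level security by region; PII masked"},
}
```

Five keys, five obligations: what the metric means, what it connects to, how it is actually used, where it came from, and who may see it.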

That framing feels right. And it leaves an open question about sequencing. Gorantla’s foundation — governed metrics, semantic layers, a secure MCP handshake — is a necessary precondition for AI that retrieves rather than guesses. But once that foundation is in place, the next question becomes what the AI is retrieving toward. Governed metrics tell an agent what the numbers mean. They don’t tell it why a particular decision was reached using those numbers, or what precedent applies when the situation is new.

The Sequencing Question

Are Metamorphic Metadata and whatever we end up calling the decision-context layer two parts of the same problem, or two separate problems that need to be sequenced? Gorantla’s framework feels like a necessary foundation. The question that keeps returning is what sits on top of it, and whether the same vendors building the “what” layer are positioned to build the “why” layer, or whether that is a different problem entirely. If the briefing I published in March is any guide, the answer may be sequential: the Metamorphic layer solves what the data means, and whatever comes next has to solve how decisions were made using it, drawing on two very different sources — the clean traces that AI agents generate natively, and the messy human context buried in email threads and Slack channels that no system has reliably captured yet.

I don’t have a neat answer. What I’m genuinely curious about is whether practitioners working through Gorantla’s framework in practice are finding that the knowledge graph component naturally pulls toward capturing decision context — or whether it stays cleanly in the structured data domain and the “why” remains as elusive as it was before.

Practitioner Question

If you’re further along in thinking about this than I am, I’d be interested in how you’re framing it. The Metamorphic Metadata architecture — and specifically the concept of Minimum Viable Context as a floor for AI trustworthiness — is a useful lens for evaluating whether your organization’s data foundation is actually ready for the AI capabilities being built on top of it.


This post responds to Ramana Gorantla’s “Why Your AI Needs Better Foundations: Metamorphic Metadata, Semantic Layers, and MCP” (Feb 2026). Related context: Gartner Active Metadata Management guidance, the Model Context Protocol specification, and Foundation Capital’s “Context Graphs: AI’s Trillion Dollar Opportunity” (Dec 2025).
