The Identity Layer: Necessary, Not Sufficient — Luminity Digital
Standalone Dispatch · Vendor Announcement Read
Agentic AI Security  ·  Identity & Architecture

The Identity Layer: Necessary, Not Sufficient

Okta has shipped first-class identity for AI agents. For the past year, the IdP-side identity primitive for agents was a slide deck. As of yesterday, it is a product. That is real progress, and it sharpens — rather than resolves — the structural questions enterprise architects still need to answer about the layers above and below it.

1 May 2026 Tom M. Gomez Luminity Digital 11 Min Read
Yesterday, 30 April 2026, Okta announced general availability of Okta for AI Agents — first-class identity for AI agents in Universal Directory, scoped tokens with short lifetimes, governed connections to applications and MCP servers, lifecycle controls including a single-action kill switch, and audit telemetry to SIEM. The announcement is framed around three questions every architect is being asked: where are my agents, what can they connect to, what can they do? The framing is accurate, the work is real, and the result is what the IdP-side identity primitive for agents looks like when it stops being a slide and starts being infrastructure. This dispatch is for the enterprise architects who are about to deploy it. It explains what the identity layer structurally provides, what it correctly leaves to a different layer, and what work remains on the architect’s plate after Okta is in production. The frame is straightforward: necessary, not sufficient — but the necessary part is now real.

For the past year, anyone advising enterprises on agentic AI security has had a frustrating conversation to repeat. Architects asked which identity provider should hold their AI agents. The honest answer was that no IdP held them as first-class objects yet — agents inherited human credentials through OAuth grants or carried machine credentials with provisioning-time scopes that bore no relationship to the runtime authority any individual call required. Discovery was a spreadsheet. Revocation was a multi-system manual process. Audit was best-effort log aggregation. The category was unbuilt.

That category is now built. Okta for AI Agents places agents in Universal Directory as first-class identities with prebuilt imports for Salesforce Agentforce, Amazon Bedrock AgentCore, and ServiceNow AI Platform. Connections to five resource types — authorization servers, vaulted secrets, service accounts, applications, and MCP servers — are governed through scoped, short-lived tokens issued on demand rather than embedded in code. Agent deactivation provides a single-action containment primitive that propagates across every system the agent touches. Every authorization decision streams to SIEM. The roadmap names the unsolved work honestly: an Agent Gateway control plane, agent-to-agent delegation, threat detection, human-in-the-loop on high-stakes actions.

This is good work, and architects should deploy it. The category had a slide-shaped hole in it for twelve months and Okta filled the hole. The rest of this dispatch is about what the architect deploying it still owns — not because Okta failed to ship those capabilities, but because they live at architectural layers identity infrastructure structurally cannot reach.

What Okta Is Naming

The dispatch opens with a now-familiar attack pattern. An employee’s corporate account holds an OAuth grant to a third-party AI tool. The tool is compromised. The grant — pre-existing, long-lived, broadly scoped — becomes the attack path: a straight line into the company’s internal systems, API keys, tokens, and environment variables. There is no CVE, no infrastructure flaw, no zero-day. The exploit is the trust the organization extended through a routine integration and never revisited. The framing matches what we have written about as the dynamic blast radius of agent deployments — the surface that grows at runtime as agents acquire authority through inherited grants rather than per-call authorization.

Vendor-Cited Statistics — Treat Accordingly

Okta cites Gravitee’s State of AI Agent Security 2026 for two figures: a large majority of organizations report confirmed or suspected agent security incidents (88%), while a small fraction have identities tied to their agents (22%). We have not independently verified the Gravitee survey methodology. Whatever the precise numbers, the directional asymmetry is consistent with what we observe in advisory work and with broader 2026 reporting: agents deploy faster than they are governed, and most of the attack surface is unmanaged identity. Treat the specific percentages as vendor-cited; treat the asymmetry as real.

The asymmetry is the operative point. The OAuth-as-attack-path pattern is not exotic — it is what happens when an organization adopts AI tools through the same procurement and integration workflows it uses for any SaaS product, without recognizing that an AI tool with persistent credentials is a different category of trust commitment than an AI tool that returns text completions. Okta’s framing names this correctly. The remediation Okta ships — discover the agents you have, govern what they can connect to, govern what they can do — is the right starting point. The structural questions that remain after this starting point is reached are the subject of the rest of this dispatch.

What Identity-Layer Enforcement Structurally Provides

To be precise about what Okta has built, it helps to be precise about which architectural layer it operates at. Identity infrastructure for agents enforces at the identity and authorization layer: the issuance, scope, and revocation of tokens that an agent presents to downstream resources. It governs the agent’s outbound surface — the set of resources the agent is permitted to call, with what credentials, for how long. In our taxonomy, this is provisioning-time and runtime enforcement on the trust relationship between the agent and the systems it interacts with.

This is the right place for identity to live. With first-class agent identity, three things become tractable that previously were not. First, blast radius becomes bounded by token scope rather than by the inherited authority of whichever human credential the agent borrowed. Second, revocation is a single architectural action with auditable propagation, rather than a multi-system manual cleanup that takes hours and leaves residual access. Third, every authorization decision is recorded with sufficient detail to support replay, audit, and post-incident reconstruction. None of these were structurally available before. Now they are.

Before First-Class Agent Identity

Inherited Authority, Manual Containment

Agents operate with credentials borrowed from human users via OAuth grants or with machine credentials whose scope was set at provisioning time and bears no relationship to the authority any individual call requires. Blast radius is whatever the underlying human or machine identity could reach. Containment is multi-system, manual, and incomplete.

Inherited · Provisioning-Time · Manual
With Identity-Layer Enforcement

Scoped Tokens, Single-Action Containment

Agents are first-class identities with their own scoped, short-lived tokens issued per task. Blast radius is bounded by token scope rather than inherited authority. Revocation is a single architectural action propagating across every governed system. Audit captures every authorization decision with sufficient fidelity for replay and forensics.

First-Class · Runtime · Architectural

The identity layer has been differentiating for some time. Okta’s GA names the IdP-side primitive — what an agent’s identity is, what it may reach, with which credentials, for how long. Other vendors named adjacent cuts earlier: Oasis Security named Agentic Access Management as a deployment-layer category in late 2025 with Sequoia Capital and a working group of enterprise CISOs, framing the chain-of-custody from prompt to action and in-session fine-grained authorization as a coherent product surface. Non-human identity management, the lifecycle governance for the service accounts, API keys, and IAM roles that agents inherit and exercise outside MCP-mediated paths, has been a shipping product category for longer still. These are not competing claims. They are specialized cuts at the same architectural region. The architect should expect the identity layer to compose from multiple specialized components rather than collapse to a single product.

On the Luminity defense axis, this lands cleanly on the Structural · Enforceable side. Token scope, revocation, and audit are deterministic — they either fire or they do not, and their behavior does not depend on a model’s interpretation of context. They are the same category of mechanism as sandboxing, capability tokens, and HSM-backed signing. They do not degrade as the attack surface grows, because they are not adversarial-response controls; they are structural boundaries. Across Series 1 through 5, we have argued that the half of the defense space that scales with deployment complexity is precisely this category. Okta for AI Agents is what it looks like when the identity portion of that argument gets product-ized.

What Identity Cannot Adjudicate: The Layer Below

Identity-layer enforcement decides whether an agent may call a resource. It does not decide whether the call the agent chose to make reflects user intent or an injected instruction inside content the agent ingested while processing a legitimate task. This distinction is not academic. It is the structural reason identity infrastructure, however well built, cannot close the indirect prompt injection problem that Greshake et al. (2023) first formalized as the foundational threat model for LLM-integrated applications.

The mechanism is straightforward. An agent operating with valid, narrowly scoped, short-lived tokens can still be steered by an instruction embedded in untrusted content — a retrieved document, a tool output, an email body, a webpage — into making a tool call that is fully authorized but does not reflect the user’s intent. The token scope says the call is allowed. The model decides which call to make. If the model’s decision was steered by adversarial content inside its context window, the architectural authorization layer has nothing to push back against. The call is authorized; therefore it goes through.

This is the command-data boundary problem we have written about across Series 1, Series 3, and the MCP companion. Tokens and natural-language content flow through the same attention mechanism. The model has no architectural way to distinguish trusted instructions from untrusted content that contains instructions. Identity-layer enforcement runs above this boundary; it cannot reach below it.

The corpus has a clear answer for what runs at the layer Okta does not reach. Cheng and Tsao’s Agent Privilege Separation in OpenClaw (arXiv:2603.13424, March 2026) replicates Microsoft’s LLMail-Inject benchmark against current frontier models and demonstrates that a privilege-separated two-agent pipeline — one agent processing untrusted content with restricted privileges, a second agent invoking high-privilege tools and receiving only structured outputs from the first — closes the attack surface architecturally. Against 649 attacks that succeeded against the single-agent baseline, the full pipeline achieves 0% ASR. Agent isolation alone achieves 0.31% ASR — a 323× improvement. Ablation confirms that isolation, not output formatting, is the dominant mechanism.

323×

Improvement in attack success rate from privilege-separated agent isolation alone, vs. single-agent baseline. The full pipeline (isolation plus structured output formatting) achieves 0% ASR against the 649 attacks that succeeded against the baseline. The structural boundary between content-handling and action-taking components is the dominant mechanism.

The advisory implication is direct. Identity infrastructure is necessary for the architecture above the agent — what it can reach, what credentials it presents, how to contain it when something goes wrong. Privilege separation is necessary for the architecture below the agent — what content it ingests, which component is allowed to act on the basis of that content, where the structural boundary between processing and action lives. These are different layers; they require different mechanisms; neither subsumes the other. An enterprise deploying Okta for AI Agents and stopping there has installed the identity ceiling. The processing-and-action floor is still on the architect’s plate.

What Identity Asserts but Does Not Verify: The Layer Adjacent

The second layer the architect still owns sits adjacent to identity rather than below it. It is the layer where MCP servers receive calls that Okta has authorized.

Recent independent work documents the state of the MCP transport. Prakash’s Agent Identity Protocol (arXiv:2603.24775, 2026) reports a scan of approximately 2,000 internet-exposed MCP servers; the survey found that all of them lacked authentication. This is consistent with multiple independent audits across early 2026: MCP servers are widely deployed without protocol-level verification of the caller. An Okta-issued token, presented to an MCP server that does not verify it, is not enforcement — it is the appearance of enforcement. The token chain terminates at a trust boundary the protocol does not yet defend.

The corpus also names what the protocol-layer fix looks like. Prakash proposes Invocation-Bound Capability Tokens — cryptographically bound delegation chains carrying public-key verifiable identity, attenuated authorization, and provenance records at each hop. The implementation reports sub-millisecond verification overhead for compact-mode tokens and 100% rejection across 600 adversarial attempts including delegation-depth violations and audit evasion. Whether the specific protocol Prakash proposes becomes the consolidating standard or whether MCP itself is extended (the SMCP and MCP Guardian directions both target this layer) is not the point. The point is that the protocol-layer work is parallel to identity-layer work, not downstream of it. Both layers need to be built. Okta is building one; the standards bodies and MCP server implementers are building the other.

Roadmap Read — The Unsolved Work Is Named

Okta’s roadmap names the layers the GA does not yet reach: an Agent Gateway control plane (moves enforcement closer to the protocol layer), agent-to-agent delegation (the next round of trust assumptions, structurally similar to MCP’s authentication gap), threat detection, and human-in-the-loop on high-stakes actions. The roadmap is honest. The architect’s job is to recognize that “on the roadmap” is not the same as “shipped” — and to plan the next twelve months accordingly.

What This Means for Architects Tomorrow Morning

For enterprises evaluating Okta for AI Agents, the advisory position is clear. This is the right identity foundation for the agentic enterprise. Deploy it. The asymmetry between agent deployment velocity and agent governance maturity is real and growing, and the GA closes the most-cited operational gap in the category. Architects who have been hand-rolling token scoping, building one-off discovery tooling, and documenting agent inventories in spreadsheets can stop doing those things and start arguing about the layers above and below this one.

Then own the two layers it does not cover. Below: privilege separation between content-handling and action-taking agents, with structured outputs crossing the boundary rather than raw content. The Cheng and Tsao result — 0% ASR with a two-agent pipeline against 649 previously-successful attacks — is what closure of this layer looks like empirically. The architect’s question is not whether to adopt this pattern but how to retrofit it onto existing single-agent deployments. Adjacent: verifiable delegation at the MCP transport. The 2,000-server scan reported by Prakash is the present-state baseline. The architect’s question here is which MCP servers in their environment require which authentication posture, and what the migration path looks like as the protocol layer matures.

Neither of these layers is solved by identity infrastructure, however good. Both are structural and architectural. Both have research-grounded answers. Neither is on Okta’s plate to deliver, and neither should be. The category is consolidating around a sensible separation of concerns: identity providers govern who the agent is and what it may reach; MCP servers and protocol-layer extensions govern how that authorization is verified at the call site; agent runtime architectures govern which component within the agent is permitted to act on which content. Each layer needs its own mechanisms; the layers compose.

A prior dispatch — What Agentic Access Management Names — examined the deployment-layer category Oasis Security named with Sequoia Capital and a working group of enterprise CISOs: the chain-of-custody from prompt to action that AAM operationalizes, paired with non-human identity governance for the credentials agents exercise outside MCP-mediated paths. The Okta GA covered here is a major IdP-side event within the architectural region that prior dispatch describes. Read together, the two pieces map the identity layer as it is actually consolidating — composed of multiple specialized components shipped by different vendors, with Okta now naming the IdP-side primitive cut alongside the deployment-layer category Oasis named earlier.

The Net Position

The identity layer for AI agents has moved from slide to product. That is not incremental progress; it is a category shift. For the past year, the honest advisory answer to “which identity provider holds my agents?” was that none did. As of yesterday, that answer is different. Okta has shipped the identity ceiling for the agentic enterprise. The processing floor and the protocol-layer transport are still on the architect’s plate — but the architect’s plate is now clearer, not heavier, because the identity layer no longer needs to be improvised. Necessary, not sufficient. But the necessary part is now real.

For Architects Evaluating This Category

Luminity Digital advises enterprise architects and security leaders on agentic AI architecture. If you are evaluating Okta for AI Agents, designing the privilege-separation and protocol-layer work that runs alongside it, or thinking through the broader trust architecture for your agent deployments, we are happy to discuss. Schedule a conversation.

References & Sources

Share this:

Like this:

Like Loading...