The Compliance Substrate Is the Market — Luminity Digital
The Captured Vertical  ·  Post 2 of 2  ·  April 2026

The Compliance Substrate Is the Market

The control layer governing AI deployment in financial services was established in 2011 — before anyone called it AI. SR 11-7 functions as the market’s de facto permission gate.

April 2026  ·  Tom M. Gomez  ·  Luminity Digital  ·  10 Min Read
The first post in this series argued that healthcare AI is captured not by model performance but by compliance infrastructure — and that Epic’s Agent Factory is the move that converts three decades of ONC certification, HIPAA BAAs, and FDA SaMD validation into a certified agentic runtime moat. This post argues that financial services is further along, the moat is older, and — unlike healthcare — there was no announcement.

The fintech disruption frame is so deeply embedded in financial services AI discourse that most practitioners accept it as premise rather than examine it as argument. Stripe disrupted payment rails. Plaid disrupted data portability. Robinhood disrupted brokerage access. The inference: the next wave of AI disruption in financial services will follow the same pattern — a fast-moving company building on open infrastructure, compressing margins, capturing share from incumbents who move too slowly. That inference is wrong. Not because disruption is absent from financial services — it is not — but because it misidentifies the object being disrupted.

The Wrong Analogy

The fintech disruption pattern succeeded by building at the customer interface layer. Stripe didn’t need validated model governance to process a payment. Plaid didn’t need a prudential examiner to accept its governance framework before it could aggregate account data. Robinhood didn’t need SR 11-7 compliant model documentation before it could execute a trade. Those companies succeeded because the valuable capability they were automating — distribution, data portability, trade execution — did not require operating inside the compliance substrate of a regulated financial institution. They built above the layer, not through it.

Agentic AI that makes or informs a material financial decision is different in kind. An autonomous agent that recommends credit terms, flags AML exceptions, generates trading signals, or adjusts risk parameters is not operating at the distribution layer. It is operating inside the compliance substrate — the layer that prudential examiners inspect, that model risk managers validate, and that decades of supervisory guidance have made a regulatory requirement. That layer has a name. It has been in effect since 2011.

Core Structural Argument

The compliance substrate isn’t a barrier to the FSI AI market. It is the market. And it was built before the agents arrived.

SR 11-7: The Permission Layer That Predates the Problem

On April 4, 2011, the Board of Governors of the Federal Reserve System and the Office of the Comptroller of the Currency issued SR 11-7: Supervisory Guidance on Model Risk Management. The guidance established a comprehensive framework for managing the risk that financial models — the quantitative systems banks use to make or inform material decisions — could produce incorrect outputs with adverse consequences. It required independent validation, three lines of defense, conceptual soundness review, ongoing performance monitoring, and outcomes analysis. Every US bank over $1 billion in assets has been required to maintain a model risk management program under SR 11-7 ever since.
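
In practice, that program is anchored by a model inventory: every model that makes or informs a material decision gets a record, an owner, an independent validator, and a validation lifecycle. The sketch below is illustrative only; the field names, tiers, and statuses are hypothetical, not drawn from the guidance text. What it captures is the gate semantics SR 11-7 creates: a model's permission to run is a function of its validation state.

    # Illustrative sketch of a model inventory record under an SR 11-7-style
    # program. All field names, tiers, and statuses are hypothetical.
    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class ValidationStatus(Enum):
        PENDING = "pending"                    # not yet independently validated
        APPROVED = "approved"                  # validated and approved for use
        APPROVED_WITH_FINDINGS = "approved_with_findings"
        SUSPENDED = "suspended"                # use halted pending remediation

    @dataclass
    class ModelInventoryRecord:
        model_id: str
        owner: str                             # first line of defense: the model owner
        validator: str                         # second line: independent validation
        risk_tier: int                         # e.g. 1 = material decisioning, 3 = low impact
        status: ValidationStatus = ValidationStatus.PENDING
        last_validated: date | None = None
        next_validation_due: date | None = None
        findings: list[str] = field(default_factory=list)

        def is_deployable(self, today: date) -> bool:
            """A model may run only if validated and not past its review date."""
            if self.status not in (ValidationStatus.APPROVED,
                                   ValidationStatus.APPROVED_WITH_FINDINGS):
                return False
            return self.next_validation_due is None or today <= self.next_validation_due

Note what the check does and does not do: is_deployable runs before the model executes, not while it executes. That distinction returns at the end of this piece.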

The average US bank now operates 175 validated models. Large institutions operate significantly more. Model risk management has grown from a compliance footnote into a full organizational capability: dedicated MRM teams, annual validation cycles, examiner-reviewed documentation, and ongoing supervisory relationships with the OCC and Federal Reserve. Only 44% of banks currently validate their AI tools properly under SR 11-7. The institutions that do have built something that cannot be replicated quickly: accumulated examiner trust, documented governance culture, and validation infrastructure accepted as sound by the regulators who can halt a model’s deployment.

In 2024, the Federal Reserve confirmed that AI and machine learning systems — including large language models — fall squarely within SR 11-7 scope. The guidance applies the same rigor to AI systems as to traditional quantitative models, regardless of complexity or vendor source. That confirmation was not a new requirement. It was a clarification of a thirteen-year-old one. SR 11-7 didn’t know it was building a permission layer for agentic AI. It was building a permission layer for every model that makes or informs a material financial decision — and agentic AI is the next class of such models.

The Core Banking Fabric

Beneath SR 11-7 sits the transaction ledger — the authoritative source of truth for every financial account, balance, and movement. In healthcare, that substrate is concentrated in a single dominant platform. In financial services, it is distributed across three.

FIS, Fiserv, and Jack Henry — the “Big Three” core banking providers — collectively power more than 70% of US banks, according to research published by the Federal Reserve Bank of Kansas City. Fiserv alone holds 42% of US banks and 31% of credit unions. Jack Henry serves 21% of banks, concentrated in community institutions. FIS holds 9%, focused on large and midsized banks. Unlike Epic, a single dominant platform, the Big Three serve different market segments with different lock-in mechanisms — but all of them sit beneath SR 11-7, and all have been embedding compliance tooling into their processing infrastructure for fifteen years.

The question is not which core system holds the data. It is which entities have built the SR 11-7 validated model governance infrastructure that makes autonomous action on that data permissible. The Big Three hold the transaction substrate. They are now building AI layers on top of infrastructure they already own. That is not a product roadmap announcement. It is gravity.

No Announcement Needed

In healthcare, the structural capture had a moment. Epic took the stage at HIMSS26 in March 2026 and announced Agent Factory — the move that converts the compliance permission layer into a certified agentic runtime. You could point to a date, a conference, a product name. The capture was visible because it was named.

In financial services, there was no announcement. There was no need for one. The capture was structural and silent, accumulated across fifteen years of regulatory requirements, Basel III stress testing model inventories, AML model validation cycles, fair lending model documentation, and CCAR capital planning submissions. By the time agentic AI arrived as a deployment question, the institutions with the deepest SR 11-7 infrastructure had already built something no startup could replicate in a compliance cycle — or three.

The signal in FSI is not a product launch. It is a pricing decision. JPMorgan Chase has indicated to data aggregators that it will begin charging fees for fintech access to its customer data — with the highest fees applied to payments-focused companies. The largest banks are building internal SR 11-7 validated AI systems that community and regional banks cannot replicate at scale. Those systems will become the foundational AI capabilities for the broader market — either licensed down or embedded in the core banking platforms that already serve 70% of institutions. The moat in FSI doesn’t need a press release. It was structurally complete before anyone asked the agentic question. The absence of an announcement IS the tell.

The Government Sees the Surface, Not the Structure

SR 11-7’s core design assumptions are being structurally broken by agentic AI. Writing in GARP Risk Intelligence in February 2026, Krishan Sharma — Senior Vice President and Model Risk Management Leader at Citigroup — identified the precise point at which the framework strains: “The challenge for institutions is no longer whether these systems fall within the scope of SR 11-7, but whether the framework’s supervisory tools remain effective for models whose behavior may evolve materially between validation cycles.”

The guidance was conceived for systems with bounded scope, stable parameters, and decision paths reconstructable after the fact. Agentic systems are dynamic rather than static, probabilistic rather than deterministic, and capable of pursuing objectives with limited human intervention. They may recalibrate autonomously between validation cycles, accumulating behavioral changes without any formal redevelopment event to trigger a review. The tools SR 11-7 provides — periodic conceptual soundness review, outcomes analysis, benchmarking — were designed for a different object than the one now being deployed into regulated institutions.
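
The monitoring gap is easy to make concrete. Banks already track distributional drift on traditional models with measures like the population stability index (PSI); the sketch below applies the same measure to an agent's decision mix. The data and scenario are invented for illustration, but the PSI formula and the conventional 0.25 "significant shift" flag are standard monitoring practice. The point: the drift can cross that tolerance months before the next scheduled validation, and nothing in a calendar-driven framework forces an earlier look.

    # Hypothetical illustration: behavioral drift between validation cycles,
    # measured with a population stability index (PSI) over decision categories.
    import math
    from collections import Counter

    def psi(baseline: list[str], current: list[str]) -> float:
        """PSI between two categorical decision distributions; a standard
        model-monitoring drift measure in bank MRM practice."""
        b_counts, c_counts = Counter(baseline), Counter(current)
        total_b, total_c = len(baseline), len(current)
        score = 0.0
        for cat in set(baseline) | set(current):
            # Tiny floor avoids log(0) when a category is absent from one sample.
            p_b = max(b_counts[cat] / total_b, 1e-6)
            p_c = max(c_counts[cat] / total_c, 1e-6)
            score += (p_c - p_b) * math.log(p_c / p_b)
        return score

    # Decision mix frozen at the last validation (hypothetical data).
    validated_snapshot = ["approve"] * 70 + ["refer"] * 20 + ["decline"] * 10
    # Months later, inside the same annual cycle, with no redevelopment event:
    observed_now = ["approve"] * 45 + ["refer"] * 35 + ["decline"] * 20

    drift = psi(validated_snapshot, observed_now)
    print(f"PSI = {drift:.3f}")   # ~0.26; > 0.25 conventionally means significant shift
    if drift > 0.25:
        print("Drift exceeds tolerance well before the next scheduled review.")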

Regulators are aware the surface is shifting. The OCC has issued AI risk guidance. DORA — the EU’s Digital Operational Resilience Act — went into effect in January 2025. The regulatory surface for agentic FSI AI is being constructed in real time. No actor currently holds a validated agentic runtime that meets it at scale. The institutions best positioned to shape what the new supervisory tools look like are the ones with the deepest existing examiner relationships — built across fifteen years of SR 11-7 supervision.

The Section 1033 Counter-Force — And Its Structural Limit

The CFPB’s open banking rule — designed as the FSI regulatory counter-force to data concentration — was finalized October 22, 2024, with an April 1, 2026 compliance deadline for the largest institutions. That deadline arrived with the rule in legal limbo: challenged by banking groups, stayed on October 29, 2025 by a federal court in Kentucky (Forcht Bank, N.A. v. CFPB, E.D. Ky.), and under active rewrite by the CFPB’s new leadership. The most consequential data rights rule in US financial services history came and went without a single institution required to comply.

Industry participants who submitted comments during the rulemaking explicitly named agentic AI as a technology the rule was not designed to address. JPMorgan Chase argued in its comment letter that Section 1033’s data rights should not extend to third-party technology companies — only to consumers and those acting in a fiduciary-like capacity. The Bank Policy Institute stated that Congress did not intend to give fintechs and data aggregators statutory rights to consumer data.

The rule addresses data access. SR 11-7 validated agentic action is a different structural problem, and the regulations were designed for only one of them. The Section 1033 counter-force addresses whether you can read the data. The compliance substrate governs whether your model is validated to act on it.

Sharma’s Diagnosis and the Architectural Provocation

Sharma identified three specific dimensions where SR 11-7 strains under agentic AI. Dynamic validation: the framework was designed for models with stable behavior between review cycles; agentic systems recalibrate continuously. Explainability standards: SR 11-7 requires transparency sufficient for effective challenge, but provides limited guidance on what “sufficient” means for systems whose decision pathways cannot be readily interrogated. And — most structurally significant — third-party concentration: “concentration in foundational AI capabilities can create correlated risks across institutions, reducing the effectiveness of firm-specific controls and increasing the potential for systemic impact.”

That third point is the moat restated in risk management language. Whoever holds the foundational validated agentic runtime creates correlated concentration risk across every institution that adopts it. Sharma named the risk without naming its consequence. He is the FSI parallel to Halamka in Post 1 — a practitioner at a dominant institution, describing the structural problem from inside it, calling for new supervisory tools without seeing the full architectural consequence of who builds them first.

Sharma’s closing argument: “Institutions that succeed in this next phase will be those that treat AI neither as a panacea nor as an exception, but as a risk-bearing capability subject to disciplined governance. That discipline is unlikely to emerge from waiting for perfect regulatory clarity. Instead, it will require firms to recognize where SR 11-7’s assumptions end — and where new supervisory tools must begin.”

That gap is the architectural opening. SR 11-7 governs approval — not execution. It validates a model before deployment; it does not trace decisions at runtime, enforce constraints at the agent boundary, or maintain a continuous record of autonomous action. What the compliance substrate built is a gate. What agentic AI requires is a harness. That is the work that has not been done at scale, in financial services or anywhere else.
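
The gate/harness distinction is architectural, and a sketch makes it concrete. Everything below is invented for illustration: the names, the single constraint, the log format. The structural point is where each control runs. The gate runs once, before deployment; the harness runs on every proposed action and writes an append-only trace an examiner could replay after the fact.

    # Minimal sketch of gate vs. harness. All names and limits are hypothetical.
    import json
    from datetime import datetime, timezone

    # --- The gate: an SR 11-7-style control. Runs once, before deployment. ---
    def validation_gate(model_id: str, validated_inventory: set[str]) -> bool:
        """Approval check: is this model on the validated inventory?"""
        return model_id in validated_inventory

    # --- The harness: a runtime control. Runs on every proposed action. ---
    class AgentHarness:
        """Enforces constraints at the agent boundary and keeps a continuous,
        append-only record of every decision, allowed or blocked."""

        def __init__(self, credit_limit_ceiling: float, trace_path: str):
            self.credit_limit_ceiling = credit_limit_ceiling
            self.trace_path = trace_path

        def _trace(self, record: dict) -> None:
            record["ts"] = datetime.now(timezone.utc).isoformat()
            with open(self.trace_path, "a") as f:   # append-only decision log
                f.write(json.dumps(record) + "\n")

        def execute(self, action: dict) -> bool:
            """Check each action against constraints before it reaches the ledger."""
            allowed = (action["type"] != "set_credit_limit"
                       or action["amount"] <= self.credit_limit_ceiling)
            self._trace({"action": action, "allowed": allowed})
            return allowed   # blocked actions are logged, never silently dropped

    # Gate: consulted once at deployment time (hypothetical inventory).
    assert validation_gate("credit-agent-v2", {"credit-agent-v2"})

    # Harness: consulted on every action, forever.
    harness = AgentHarness(credit_limit_ceiling=25_000.0, trace_path="agent_trace.jsonl")
    harness.execute({"type": "set_credit_limit", "account": "A-1", "amount": 18_000.0})
    harness.execute({"type": "set_credit_limit", "account": "A-2", "amount": 90_000.0})

The gate answers one question: was this model approved? The harness answers two more: was this specific action permitted, and can it be reconstructed afterward? SR 11-7 built institutions that are very good at the first question. Nobody has built the other two at scale.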

The architectural work is the same as in healthcare: separating the compliance substrate from the platform that currently contains it, and building governance infrastructure that any SR 11-7 validated institution can adopt, not just the ones that built it. But in financial services, the compliance relationship is not with a certification body that issues documents. It is with a prudential examiner who visits annually, reviews model validation documentation, and forms a judgment about institutional risk management culture over a multi-year supervisory relationship. That relationship cannot be acquired. It is accumulated. Portable governance infrastructure in FSI must therefore be examiner-acceptable — not just technically sound, not just certified on paper. The gap is wider than healthcare. The work is harder. It has not been named yet either.

The Captured Vertical
Post 1  ·  Healthcare  ·  Good Enough Is Not the Problem
Post 2  ·  Now Reading  ·  The Compliance Substrate Is the Market