The healthcare AI debate has been running for three years on the wrong track. Every serious conversation circles back to the same set of questions: are Epic’s models good enough? Can startups outperform? What do the AUC scores say? These are legitimate clinical questions. They are not the structural question. The structural question is who controls the right to act in a clinical workflow. The answer is not determined by a model evaluation. It is determined by nearly three decades of accumulated compliance infrastructure that one vendor holds — and that no startup can replicate in a product cycle.
The Wrong Debate
The JAMA analysis frames Epic’s dominance as a market competition problem — good-enough tools winning over better ones because of integration convenience and vendor trust. A 2025 study in Health Affairs quantified the pull: among hospitals implementing AI, 79% chose tools from their EHR vendor. A systematic review of Epic’s clinical AI tools found no model achieving an AUC above 0.79. Together, these two data points are not a story about customers making suboptimal choices. They are a story about a market where the compliance substrate has been captured by one vendor, and the procurement decision reflects that architectural reality.
When one AI administrator tells Wachter “if a third-party tool is an A- and an Epic tool is a B, we’ll still go with Epic,” the operative word isn’t the grade differential. It’s the unstated logic behind it: Epic’s B carries institutional permission that the A- does not. The grade comparison assumes the tools are competing on the same terms. They are not.
Performance is a distraction when permission is the actual constraint.
How the Permission Layer Was Built
Epic was founded in 1979 by Judith Faulkner in a Madison, Wisconsin basement with a $70,000 investment. Its Chronicles database — the proprietary architecture that no agentic AI can touch without Epic’s permission — predates HIPAA by nearly two decades. The compliance substrate built on top of that database has been accumulating since HIPAA’s enactment in 1996. The HITECH Act of 2009 poured $27–30 billion in federal subsidies into EHR adoption through Meaningful Use requirements; hospitals that failed to adopt certified EHRs faced Medicare reimbursement penalties. Epic was best positioned to win that subsidized arms race.
The reason is structural, not incidental. The actual procurement driver was never clinical outcome improvement. It was revenue cycle optimization. Epic’s Resolute billing engine, Cadence scheduling system, and Prelude registration module are the primary value proposition — not support functions for the clinical record. Clinical documentation in Epic’s architecture is designed to automatically generate billing codes and reduce claim rejections. As a peer-reviewed PLOS Digital Health analysis stated directly in March 2026: as long as reimbursement models reward exhaustive documentation, vendors will continue to optimize for billing rather than usability. That architecture is now the substrate on which agentic AI must operate.
The Compliance Substrate
The compliance substrate is the accumulated institutional trust infrastructure — ONC certification, HIPAA BAA coverage, FDA SaMD validation pathways, Joint Commission alignment, and change management processes — that makes clinical action permissible in a regulated environment. Epic has been building it for nearly three decades. Agentic AI that needs to act, not merely advise, must either inherit that substrate or rebuild it from zero.
This is the distinction the healthcare AI discourse keeps missing. The question for cognitive-era AI — will this tool produce a useful recommendation for a clinician to consider? — has a relatively tractable compliance answer. The question for agentic AI — will this system initiate, modify, or complete a clinical action without direct human supervision? — runs directly into every layer of that infrastructure. Epic owns the substrate. That is the permission layer.
The Architectural Conflict
Agentic AI is probabilistic, multi-step, and adaptive. The regulatory frameworks governing clinical AI were designed for the opposite: deterministic, auditable, human-supervised systems with defined inputs and predictable outputs. FDA’s January 2025 draft guidance for AI-enabled device software functions introduced a Total Product Lifecycle approach requiring validation, monitoring, and documentation processes that are, at minimum, time-consuming and, at maximum, structurally incompatible with the continuous improvement loops that make agentic systems useful.
The implication is asymmetric. For Epic, any new agentic capability is an extension of already-certified infrastructure with established validation pipelines, existing FDA relationships, and accumulated Predetermined Change Control Plans. For a startup, every agentic capability is a blank-page regulatory problem — new SaMD classification analysis, new BAA negotiations with each health system, new Total Product Lifecycle documentation for a system that, by design, continuously changes. The regulatory surface area grows with every degree of autonomy the agent claims.
This is not an argument that regulation is wrong. The frameworks exist for good reasons in a domain where errors kill people. It is an argument that the regulatory architecture creates a structural ceiling on startup autonomy that does not apply equally to Epic — because Epic is already inside the perimeter, and startups are trying to enter it one integration at a time. Applying cognitive-era AI frameworks to agentic AI doesn’t just slow the market. It structurally advantages whoever was already certified when the agentic era began.
Agent Factory Is the Tell
Epic’s Agent Factory announcement at HIMSS 2026 has been reported as an openness move — a platform that lets customers build, customize, and monitor AI agents. Read architecturally, it is something more specific: the construction of a certified agentic runtime. The only orchestration layer that already holds institutional permission to initiate clinical actions, route clinical data, and execute multi-step workflows inside a certified EHR boundary.
Every third-party agent running inside Agent Factory inherits Epic’s compliance posture. Certification, HIPAA coverage, SaMD validation pathways — these flow to agents operating within the runtime, not to the agents themselves. Every agent operating outside Agent Factory starts from zero, regardless of model quality. The moat is not the model. The moat is the runtime permission. Agent Factory converts that runtime permission into a platform.
The Abridge arc previewed this move precisely. Epic made a rare investment in Abridge — an AI scribe startup — used the partnership to validate clinical AI deployment at scale inside Epic environments, then announced its own AI scribe tool two years later. The Abridge CEO moved to reframe the company as a broader clinical AI platform. The ladder had been pulled up. Agent Factory systematizes that arc: use third-party tools to demonstrate demand, build the certified runtime, convert ecosystem partners into tenants.
Epic isn’t absorbing model capabilities. It is absorbing the right to deploy them. That is the moat Agent Factory completes.
This is precisely the Great Compression thesis applied to a regulated vertical. The compression is not only of middleware and integration layers. It is of the compliance substrate itself — the institutional trust infrastructure that makes clinical action permissible.
The Government Knows and Proceeds Anyway
ARPA-H’s ADVOCATE program, announced in January 2026, is the most instructive exhibit in the argument. The agency states explicitly that it is funding agentic AI to democratize healthcare access — developing the first FDA-authorized agentic AI system capable of providing 24/7 specialty cardiovascular care. It also explicitly requires applicants to recruit health system partners for deployment in clinical settings.
Epic holds 54.9% of acute care beds in the US. Most of those health system partners will be Epic shops. ADVOCATE needs FDA-authorized results in three years. You do not topple Epic in three years with federal grant money. So ARPA-H proceeds with the infrastructure that exists — the infrastructure that advantages the incumbent — and does not say so out loud.
This is not an accusation of bad faith. Cardiovascular disease kills hundreds of thousands of Americans annually who could be saved with inexpensive, widely available medications. The program is legitimately urgent. But the structural consequence of funding agentic AI development inside the existing health system fabric is that taxpayer money flows into a structure where the primary architectural beneficiary was determined before the first proposal was submitted. The elephant is in the room. Federal agencies are professional at working around it.
Meanwhile, the FDA — with a 15% staffing reduction since 2023 — is building the Predetermined Change Control Plan and Total Product Lifecycle validation frameworks that incumbent infrastructure handles at scale and startups navigate from zero. The regulatory path for autonomous clinical AI is being cleared in real time. Agent Factory is already parked at the entrance.
The Information Blocking Counter-Force — And Its Structural Limit
The 21st Century Cures Act prohibited information blocking in 2016. Rules went into effect April 2021. Civil penalties for health IT developers became enforceable September 2023. Provider disincentives landed July 2024. As of late 2025 — four years in — OIG had received 1,300+ complaints and issued zero fines. In September 2025, HHS Secretary Kennedy declared enforcement a priority. Four years of rules. Zero penalties. And 90% of those complaints target providers for individual access disputes — not the structural runtime control question Agent Factory creates.
Nine regulatory exceptions give sophisticated actors extensive cover. When Particle Health disputed Epic’s data cutoffs in 2024, Epic’s consistent framing was HIPAA and security concerns — not market control. The framework does not distinguish between a good-faith security decision and strategic foreclosure dressed as one.
The litigation is catching up — but to the wrong question. In September 2025, a federal judge allowed Particle Health’s Sherman Act monopolization claims against Epic to proceed — the first antitrust case against Epic to survive a motion to dismiss. In December 2025, Texas Attorney General Ken Paxton filed a state antitrust suit accusing Epic of transforming patient records into a “private gatekeeping tool that blocks competition and locks hospitals in.” Both cases are about data access. The agentic runtime control question is not yet in any courtroom.
The information blocking regulations address access. Agent Factory is about action. Those are different structural problems, and the regulations were designed for only one of them.
Halamka’s Wrapping Strategy and the Architectural Provocation
No one has understood this structural problem longer or worked harder to solve it from the inside than John Halamka, MD. As CIO of Beth Israel Deaconess Medical Center from 1998, and simultaneously CIO of Harvard Medical School, Halamka spent two decades building web-based, internet-first health data infrastructure at a time when the industry orthodoxy was proprietary client-server architectures and HL7 v2 message parsing. His counter-argument was architectural: design for REST, make data liquid, build for the web.
That orientation became the foundation for FHIR. In December 2014, as Co-Chair of the federal HIT Standards Committee, Halamka co-founded the Argonaut Project — the private-sector coalition that produced the first FHIR implementation profiles. His framing at the time: an unprecedented opportunity to apply the same RESTful API paradigm that Facebook, Google, and Amazon had already implemented at scale to healthcare data exchange. The argument the industry spent years resisting became the standard the industry now depends on.
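The RESTful paradigm Halamka championed is easiest to see in miniature. A FHIR read is nothing more than an HTTP GET against a typed URL. The sketch below uses a hypothetical server base URL and a minimal Patient resource for illustration; only the URL pattern and the `application/fhir+json` media type come from the FHIR specification.

```python
import json

# A FHIR read is a plain HTTP GET against a resource-typed URL:
#   GET {base}/Patient/{id}   with   Accept: application/fhir+json
# The base URL is hypothetical; every conformant FHIR server exposes
# the same shape, which is the point of the standard.
BASE = "https://ehr.example.org/fhir"  # hypothetical endpoint

def patient_read_request(patient_id: str) -> dict:
    """Describe the HTTP request for a FHIR Patient read."""
    return {
        "method": "GET",
        "url": f"{BASE}/Patient/{patient_id}",
        "headers": {"Accept": "application/fhir+json"},
    }

# A minimal Patient resource, in the shape any conformant server returns.
sample = json.loads("""
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter"]}],
  "birthDate": "1974-12-25"
}
""")

def display_name(patient: dict) -> str:
    """Pull a human-readable name out of a Patient resource."""
    name = patient["name"][0]
    return f'{" ".join(name.get("given", []))} {name.get("family", "")}'.strip()
```

The same request shape works against any vendor’s FHIR endpoint — which is exactly why the standard made data *accessible* without making any one vendor *permeable*.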
FHIR opened a window. It made EHR data more accessible. It did not make Epic permeable. Epic remains technically compliant with FHIR while ensuring data flows most seamlessly within its own Care Everywhere network — reinforcing, not dissolving, the lock-in dynamics already at play. Most people have forgotten the pain before FHIR. The window is narrower than the argument that created it.
Now president of the Mayo Clinic Platform — which runs on a $1.5 billion Epic implementation completed in 2018 across 90 hospitals and 52,000 employees — Halamka stated the strategic reality most precisely at HIMSS26 in March 2026: “It really does require rethinking your architecture. Sure, you’ll have your systems of record, your EHR, your ERP, but how do you wrap those in a mechanism that’s standard-based, scalable, and future-proof?”
That is the wrapping strategy. Build a data abstraction layer above the EHR that makes it more permeable without replacing it. The Mayo Clinic Platform is the most sophisticated institutional execution of that strategy currently in existence. It addresses the data access layer. The compliance layer still belongs to Epic. For agentic AI to escape the captured vertical, the wrapper needs to become the compliance layer — portable, independently certified, substrate-agnostic. That is what alignment-grade governance actually demands.
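A minimal sketch of what that wrapping strategy could look like in code. All names here (`EHRSubstrate`, `GovernanceWrapper`, the stub) are hypothetical illustrations of the pattern, not the Mayo Clinic Platform’s actual implementation: a vendor-specific adapter underneath, with permission checks and audit logging held in a portable layer above it.

```python
from abc import ABC, abstractmethod

class EHRSubstrate(ABC):
    """Vendor-specific adapter: the only layer that knows a given EHR's API."""
    @abstractmethod
    def write(self, resource_type: str, payload: dict) -> str: ...

class GovernanceWrapper:
    """Substrate-agnostic layer: permission checks and audit trail live here,
    outside any one vendor's certified boundary."""
    def __init__(self, substrate: EHRSubstrate, allowed_actions: set):
        self.substrate = substrate
        self.allowed = allowed_actions
        self.audit_log = []  # (decision, action) pairs

    def act(self, action: str, resource_type: str, payload: dict) -> str:
        # Every agent action passes the portable policy check first.
        if action not in self.allowed:
            self.audit_log.append(("denied", action))
            raise PermissionError(f"agent not certified for: {action}")
        self.audit_log.append(("allowed", action))
        return self.substrate.write(resource_type, payload)

# A stub substrate standing in for any certified EHR.
class StubEHR(EHRSubstrate):
    def __init__(self):
        self.store = {}
    def write(self, resource_type, payload):
        rid = str(len(self.store))
        self.store[(resource_type, rid)] = payload
        return rid

wrapper = GovernanceWrapper(StubEHR(), allowed_actions={"draft-order"})
rid = wrapper.act("draft-order", "MedicationRequest", {"status": "draft"})
```

The design point is that `GovernanceWrapper` carries the audit trail and the action policy with it, so swapping `StubEHR` for a different substrate changes nothing above the adapter — which is what “portable, independently certified, substrate-agnostic” would mean in practice.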
The path out of the captured vertical is not waiting for Epic to open its gates, for ARPA-H to fund an alternative, or for the FDA to redesign thirty years of regulatory frameworks.
It is architectural work: separating the compliance substrate from the platform that currently contains it. Building governance infrastructure that any certified system can read, not just the one that built it.
That work has not been done at scale in healthcare. Its absence is why Agent Factory is a moat and not just a product — and it is precisely what the second post in this series examines, in a regulated vertical where the structural capture is further along, the regulatory moat older, and the incumbent’s hold on institutional trust even harder to see from the inside.
