The Consent Gap  ·  Part 1 of 3
Healthcare AI  ·  Policy & Governance

What U.S. Law Actually Requires on AI Disclosure in Healthcare

The patchwork of state laws, voluntary guidance, and industry standards shaping AI disclosure today — and why their absence at the federal level is a governance problem hiding in plain sight.

March 2026 · Tom M. Gomez · Luminity Digital · 10 Min Read

The AMA's 2026 Physician Survey asked physicians to rank seven regulatory priorities in order of importance. Clear liability frameworks came first — chosen as the top priority by 31% of respondents. Disclosure of AI use to patients ranked sixth. Patient consent requirements ranked last. That sequencing deserves scrutiny. Disclosure and consent aren't downstream niceties. They are the patient's primary interface with everything the healthcare system does in the name of AI governance.

There is no single federal law that requires a healthcare provider to tell a patient when AI was involved in their care. Not the diagnosis, not the imaging read, not the discharge instructions drafted by a generative AI system, not the risk stratification model that influenced which ward they were admitted to. No law. No mandate. Nothing that compels a physician or a health system to say, in plain language, at the point of care: an algorithm contributed to this decision about you.

That absence is not an oversight. It reflects the structure of U.S. healthcare regulation — a system built for a world of discrete treatments, known risk profiles, and identifiable human agents. AI introduces complexity that structure wasn't designed to accommodate. And into the gap that federal policy has left open, states have moved with unusual and sometimes contradictory speed.

The result is a compliance landscape that is legally meaningful in some jurisdictions, operationally complex for any organization operating across state lines, and almost entirely invisible to the patients it is nominally designed to protect. This piece maps that landscape and examines what "compliance" actually looks like on the ground today.

21 state bills introduced in 2025 focused specifically on AI chatbots in healthcare. Seven passed into law — five with a specific mental health focus. By early 2026, at least four states have enacted enforceable AI disclosure requirements for clinical settings, with several more in active legislative consideration.

No Federal Floor, A Growing State Ceiling

The federal framework that most health systems look to — HIPAA, the ONC's Health Data, Technology, and Interoperability rules, and the FDA's software-as-a-medical-device guidance — was not built to require AI disclosure. HIPAA governs data privacy and security. FDA regulates certain AI-enabled devices through the 510(k) and De Novo pathways. ONC establishes baseline expectations for certified health IT. None of them require a clinician to say, before or during a clinical encounter, that an AI system was involved in the recommendation they just made.

States have filled that gap, but not uniformly. The laws that have passed vary in their scope, their mechanism, their enforcement approach, and — critically — their definition of what counts as AI use requiring disclosure. A health system operating across California, Texas, and Illinois is already subject to three distinct regimes, with Florida's pre-filed legislation poised to add a fourth. Here is what each enacted law actually requires.

California · Generative AI

AB 3030

Effective January 1, 2025

Requires healthcare providers — including clinics, hospitals, and physician offices — that use generative AI to produce written or verbal communications containing clinical information to include a clear disclaimer stating the content was produced by generative AI, alongside instructions for reaching a licensed human provider.

Enacted · In Force · Fenwick & West — Regulatory Analysis
What It Requires In Practice

Output-Level Labeling

The law targets the patient-facing output of generative AI — the portal message, the discharge summary, the care plan draft — rather than requiring disclosure of every AI tool in the workflow. A patient who receives an AI-generated discharge instruction must see a disclaimer. A patient whose risk was stratified by an AI tool that a physician then reviewed may receive no disclosure under this law.
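To make output-level labeling concrete, here is a minimal sketch of a workflow hook that appends a disclaimer and a human-contact instruction to a generative-AI-drafted patient communication before release. The class, function, and field names are hypothetical and the disclaimer wording is placeholder text — AB 3030 specifies the substance of the disclosure, not its implementation, and this is not any EHR vendor's API.

```python
from dataclasses import dataclass

# Hypothetical message structure; field names are illustrative, not an EHR vendor schema.
@dataclass
class PatientMessage:
    body: str
    generated_by_ai: bool
    channel: str  # e.g. "portal message", "discharge summary", "care plan draft"

# Placeholder wording covering the two elements described above: a clear statement
# that the content was produced by generative AI, and instructions for reaching a
# licensed human provider. This is not the statutory language.
DISCLAIMER_TEMPLATE = (
    "This {channel} was generated by artificial intelligence. "
    "To reach a licensed health care provider, contact {contact}."
)

def label_if_ai_generated(msg: PatientMessage, provider_contact: str) -> PatientMessage:
    """Append an AB 3030-style disclaimer to AI-generated clinical communications."""
    if msg.generated_by_ai:
        msg.body += "\n\n" + DISCLAIMER_TEMPLATE.format(
            channel=msg.channel, contact=provider_contact
        )
    return msg
```

A hook of this kind labels the output itself, which is what AB 3030 targets. It would do nothing for the risk-stratification scenario above, where the law requires no disclosure at all.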

California's AB 489, effective January 1, 2026, extends this further: it prohibits AI systems from using language, design, or branding that implies the system holds a healthcare license or that licensed oversight exists where it doesn't. Each prohibited term constitutes a separate enforceable offense.

Texas · Broad Clinical AI

TRAIGA (HB 149)

Effective January 1, 2026

The Texas Responsible Artificial Intelligence Governance Act requires licensed healthcare practitioners to provide patients — or their personal representatives — with conspicuous written disclosure when AI systems are used in the diagnosis or treatment of the patient. Disclosure must occur before or at the time of interaction. In emergencies, as soon as reasonably practicable. A 60-day cure period follows written notice of violation from the state attorney general.

Enacted · In Force · Akerman — Healthcare AI Laws Now in Effect
What It Requires In Practice

Point-of-Care Disclosure + Physician Oversight

TRAIGA is paired with Texas SB 1188 (effective September 1, 2025), which goes beyond disclosure to mandate oversight: AI may support diagnosis and treatment only when the practitioner personally reviews all AI-generated content or recommendations before any clinical decision is made. The practitioner — not the health system — bears the disclosure obligation and the review responsibility.

In practice this means updated consent forms, point-of-care signage, and documented review workflows — all of which must be auditable against the attorney general's enforcement mechanism.

Illinois · Mental Health AI

WOPRA (HB 1806)

Effective August 4, 2025

The Wellness and Oversight for Psychological Resources Act prohibits AI systems from making independent therapeutic decisions or directly interacting with clients in any form of therapeutic communication. Mental health AI tools are limited to administrative functions. AI transcription of therapy sessions requires written informed consent from the patient, obtained in advance.

Enacted · Mental Health Scope · Manatt Health AI Policy Tracker
What It Requires In Practice

Prohibition + Advance Consent

WOPRA does not merely require disclosure — it restricts what AI can do. The advance consent requirement for transcription is particularly notable: it implies that consent must be unpressured and pre-encounter, not obtained at the moment a patient arrives in a state of distress. Florida has pre-filed analogous legislation for 2026, including a 24-hour advance notice window for AI therapy transcription consent — one of the clearest acknowledgments in any state legislation that meaningful consent requires time.

Multi-State Compliance Reality

A health system operating across California, Texas, and Illinois is now subject to three different disclosure mechanisms with different triggering conditions, different delivery requirements, and different enforcement authorities. AB 3030 targets output. TRAIGA targets encounter. WOPRA targets therapeutic interaction type. There is no standardized definition of "AI use requiring disclosure" that bridges all three. The compliance burden falls disproportionately on multi-state systems that lack the centralized governance infrastructure to track which tools are deployed where and what each state requires of each deployment.
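To make the divergence concrete, here is a minimal sketch of the kind of internal mapping a multi-state governance team ends up maintaining: each regime keyed to its trigger, delivery mechanism, and enforcement authority as summarized above. The structure and field names are hypothetical and the summaries are simplified — this is an illustration of the tracking problem, not a compliance artifact.

```python
# Illustrative compliance matrix for the three enacted regimes discussed above.
# Keys, field names, and summaries are simplified sketches, not legal text.
DISCLOSURE_REGIMES = {
    "CA_AB_3030": {
        "trigger": "generative AI produces a patient-facing communication with clinical information",
        "mechanism": "disclaimer on the output plus instructions for reaching a human provider",
        "enforcement": "per statute (not detailed in this piece)",
    },
    "TX_TRAIGA_HB_149": {
        "trigger": "AI is used in the diagnosis or treatment of the patient",
        "mechanism": "conspicuous written disclosure before or at the time of interaction",
        "enforcement": "state attorney general, with a 60-day cure period",
    },
    "IL_WOPRA_HB_1806": {
        "trigger": "therapeutic interaction or AI transcription of a therapy session",
        "mechanism": "prohibition on independent therapeutic AI; advance written consent for transcription",
        "enforcement": "per statute (not detailed in this piece)",
    },
}

def regimes_for(state: str) -> list[str]:
    """Toy lookup: regimes a deployment in a given state should be screened against."""
    return [key for key in DISCLOSURE_REGIMES if key.startswith(state + "_")]
```

Even this toy version makes the core difficulty visible: the rows do not share a triggering condition, so a single enterprise AI tool can require three different disclosure workflows depending on where the patient happens to be.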

The Voluntary Layer: Where Most of Healthcare Actually Operates

State statutes, even where they exist, cover a fraction of the AI tools deployed across clinical settings. For the vast majority of health systems, the operative governance framework is not statutory — it is voluntary guidance issued by accreditation bodies, industry coalitions, and federal agencies without enforcement authority.

The most consequential development in this space is the September 2025 Guidance on the Responsible Use of AI in Healthcare, released jointly by the Joint Commission and the Coalition for Health AI (CHAI). The Joint Commission accredits over 23,000 healthcare organizations in the United States. CHAI counts nearly 3,000 member organizations, including health systems, patient advocacy groups, technology companies, and startups. Guidance from the Joint Commission has a particular quality: it is non-binding until it isn't. Organizations that fail to meet Joint Commission standards lose accreditation. When accreditation-adjacent guidance begins referencing patient transparency around AI, health system compliance officers take note.

Organizations should inform patients about AI's role in their care, including how their data may be used and how AI may benefit their care.

— Joint Commission / CHAI, Guidance on the Responsible Use of AI in Healthcare (RUAIH), September 2025

The RUAIH guidance frames patient transparency as one element of a seven-part responsible AI program, alongside governance structures, data security and data-use protections, ongoing quality monitoring and local validation, risk and bias assessment, voluntary adverse event reporting, and education and training. Disclosure isn't framed as a standalone compliance item — it's framed as an expression of an institution's broader governance posture. That framing matters for how health systems implement it.

CHAI is also building practical infrastructure alongside the guidance. Its Applied Model Card gives vendors a standardized format for disclosing known risks and biases to the health systems that are evaluating their tools for procurement. Its forthcoming Health AI Registry will enable voluntary, confidential reporting of AI safety incidents to independent organizations — creating an adverse event signal outside the litigation system, in the model of Patient Safety Organizations. Both mechanisms are vendor- and institution-facing rather than patient-facing, but they are part of the governance architecture that makes patient-facing disclosure meaningful rather than performative.
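For readers unfamiliar with the model-card pattern, the sketch below shows the kind of information such a vendor disclosure typically carries: intended use, training data provenance, known limitations and biases, and validation context. The fields are a generic illustration of model cards as a documentation practice, not CHAI's Applied Model Card template or any vendor's actual disclosure.

```python
from dataclasses import dataclass, field

# Generic model-card fields -- an illustration of the documentation pattern,
# not CHAI's Applied Model Card schema.
@dataclass
class ModelCardSketch:
    model_name: str
    intended_use: str                                        # clinical task and care setting the tool is meant for
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""                          # provenance and population of the training data
    known_biases: list[str] = field(default_factory=list)    # subgroups with documented performance gaps
    validation_context: str = ""                             # where and on whom performance was evaluated
    monitoring_plan: str = ""                                # how drift and errors are tracked post-deployment
```

The point of the standardized format is procurement-side comparability: a health system evaluating two tools can read the same fields side by side rather than reverse-engineering risk disclosures from marketing material.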

The Prevailing Practice

Categorical Institutional Disclosure

A patient signs a form at admission acknowledging that the health system uses AI tools in care delivery. The disclosure is broad, one-time, and institutional. The patient may interact with eight AI systems over a two-day stay — from risk stratification to ambient documentation to radiology AI — without knowing that any specific tool was involved in any specific decision.

Legally defensible in many jurisdictions. Meaningless as patient communication. Protects the institution. Doesn't inform the patient.

Notification Without Transparency
The Emerging Standard

Encounter-Level or Tool-Specific Disclosure

Disclosure is triggered by specific AI use in specific clinical interactions — the imaging read, the care plan, the discharge communication. The patient knows what tool was used, what it produced, and who reviewed it. The record reflects that disclosure occurred.

Required under TRAIGA for diagnosis and treatment. Implicit in AB 3030 for generative AI outputs. Recommended under RUAIH for all clinical AI. Still rare in practice.

Transparency With Accountability
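A rough sketch of what an encounter-level disclosure record could capture, using only the elements named above — which tool was used, what it produced, who reviewed it, and when and how the patient was told. The field names are illustrative, not drawn from TRAIGA, AB 3030, or any EHR schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative only: fields mirror the elements described in the emerging standard
# (tool, output, reviewing clinician, disclosure event), not any statutory or
# vendor-defined record format.
@dataclass
class EncounterAIDisclosure:
    encounter_id: str
    tool_name: str             # e.g. ambient scribe, imaging model, risk-stratification score
    tool_output_ref: str       # pointer to the note, read, or recommendation produced
    reviewed_by: str           # clinician who reviewed the output before it informed a decision
    disclosed_to_patient: bool
    disclosure_method: str     # e.g. "verbal at point of care", "portal disclaimer"
    disclosure_time: datetime
```

A record like this is what makes encounter-level disclosure auditable: it ties a specific tool to a specific output, a specific reviewer, and a documented disclosure event, rather than to a blanket admission form.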

The Federal Headwind

Against the state-led momentum toward disclosure requirements, federal policy has moved in the opposite direction — at least in terms of regulatory prescription.

In December 2025, President Trump signed an executive order establishing a national AI policy framework. The order signals preference for minimal federal regulation and potential resistance to state AI mandates. The order does not preempt state statutes — executive orders cannot override duly enacted state law — and the existing requirements in California, Texas, and Illinois remain in force. But the order signals that a federal disclosure floor is not forthcoming from the current administration, and it creates uncertainty about whether federal agencies will act to harmonize state requirements or challenge them.

More consequentially for the technical infrastructure of disclosure, the HHS/ONC proposed rule known as HTI-5 — released in December 2025 — proposes eliminating 34 of 60 existing health IT certification criteria and revising seven more. The agency frames this as a deregulatory move designed to accelerate AI-driven data exchange and reduce compliance costs for developers, estimated at 4,000 hours per developer in year one. Several of the criteria being relaxed were the closest approximation federal health IT rules had to transparency and algorithmic disclosure requirements for certified EHR systems. If finalized, HTI-5 would reduce the federal infrastructure available to operationalize the state-level disclosure mandates that are simultaneously becoming more demanding.

The Structural Tension in Plain Terms

States are passing laws requiring AI disclosure at the point of care. Federal agencies are deregulating the health IT certification requirements that would make those disclosures systematically auditable at the EHR level. The entities caught in the middle are health systems building compliance workflows manually, state by state, for tools that are deployed at enterprise scale through the same EHR platforms the federal government is now deregulating.

The Joint Commission / CHAI voluntary accreditation pathway — currently under development — is the most plausible route to national standardization in this environment. It is not a federal mandate. But for the 23,000 organizations that depend on Joint Commission accreditation, it may become the operative standard faster than Congress moves.

What Compliance Actually Looks Like Today

In practice, the mechanisms health systems are using to meet disclosure obligations — where they are meeting them at all — fall into several categories. Each has a role. None of them, individually or together, consistently produces what researchers or regulators would describe as meaningful informed consent.

Pre-visit consent form updates are the most common mechanism: language added to admission paperwork or patient intake forms acknowledging broadly that the health system uses AI tools in care delivery. These are quick to implement, low in operational cost, and low in information value to the patient.

EHR-embedded disclaimer banners are required under California AB 3030 for generative AI clinical communications. They are the most targeted mechanism currently deployed — a patient who receives an AI-generated discharge instruction sees a disclaimer identifying the content as AI-generated alongside contact information for a licensed provider. They cover generative AI outputs. They do not cover diagnostic AI, risk stratification, or imaging analysis.

Point-of-care signage in clinical settings — the conspicuous written disclosure required under TRAIGA for diagnosis and treatment — is the most operationally complex requirement currently in force. It requires that disclosure happen before interaction, that it be documented, and that the documentation be defensible against attorney general review.

Ambient documentation consent scripts — verbal disclosures at the start of clinical encounters where ambient AI scribes are recording — are increasingly common in health systems that have deployed tools like Abridge, Ambience, or Microsoft Nuance DAX Copilot. A July 2025 JAMA Network Open study found that consent processes for ambient documentation varied widely across institutions, even within the same health system using the same tool. No standardized approach exists.

Vendor attestations — contractual representations in procurement agreements asserting that a given AI tool complies with applicable state disclosure requirements — shift legal responsibility toward vendors but do not themselves constitute patient disclosure.

The Gap That Matters

Current disclosure practices tend to be categorical and institutional rather than encounter-specific or tool-specific. A patient may sign a form acknowledging AI use without knowing that the radiology read, the discharge summary, the medication risk flag, and the appointment scheduling system they interacted with each involved different AI models with different risk profiles, different training data, and different known failure modes. That gap — between disclosure as notification and disclosure as meaningful transparency — is exactly what the research literature is now pressing on. Closing it turns out to be considerably harder than passing a law.

Up Next: Part 2 — What the Research Is Actually Telling Us

The science of consent, the limits of the leading frameworks, what patients say they want, and why the fastest-growing AI category in healthcare has no standardized consent process.

The Consent Gap  ·  Three-Part Series
Part 1 · Now Reading · What U.S. Law Actually Requires on AI Disclosure in Healthcare
Part 2 · Coming · What the Research Is Actually Telling Us
Part 3 · Coming · The Hard Questions Nobody Has Answered Yet
