The AMA's 2026 Physician Survey found that 88% of physicians say validation of safety and efficacy is a top prerequisite for AI adoption. But when physicians were asked what would build patient trust specifically, disclosure and consent ranked last. Part 1 mapped the legal patchwork. Part 2 examines why the gap between a law on the books and a patient who is genuinely informed is far wider than the legislation implies.
Regulation can establish a floor. It can require that a disclosure happen, that it be written, that it be conspicuous. What it cannot do is guarantee that the disclosure means anything — that a patient reads it, understands it, or is in any position to act on it. That gap between legally required notification and genuinely informed consent is where the academic and clinical research community has been working intensively over the past two years. The output has been illuminating, contested, and at times unsettling for anyone who assumed the problem was primarily a legislative one.
The more carefully researchers examine what meaningful AI disclosure looks like in practice, the more apparent it becomes that the mechanisms currently being deployed — consent forms, disclaimer banners, point-of-care signage — were designed for a different problem. They were built for a world of discrete treatments and identifiable human agents. They were not built for AI systems that are probabilistic, continuously updated, embedded invisibly in clinical workflows, and deployed at a scale where individual consent encounters are a rounding error on the governance surface.
The Mello Framework: The Most Actionable Proposal in Circulation
The most cited intellectual contribution to this debate in the past twelve months is a JAMA Perspective published in September 2025 by Michelle Mello, Danton Char, and Sonnet Xu of Stanford: "Ethical Obligations to Inform Patients About Use of AI Tools." The paper proposes a two-factor framework for determining when disclosure is ethically required — and it has become the reference point against which every subsequent contribution positions itself.
The Mello Two-Factor Test
Disclosure obligations are determined by two factors assessed in combination: the risk that an AI error causes physical harm — accounting for the likelihood that a human clinician would catch it — and the patient's capacity to exercise meaningful agency in response to a disclosure, for example by opting out or seeking a second opinion.
Tools that score high on both factors warrant full consent. High on one warrants notification. Low on both warrants neither.
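The test reduces to a small decision table. A minimal sketch in Python, assuming a simple high/low rating on each factor (the Level enum and tier labels are illustrative shorthand, not the paper's terminology):

```python
from enum import Enum

class Level(Enum):
    LOW = "low"
    HIGH = "high"

def disclosure_tier(harm_risk: Level, patient_agency: Level) -> str:
    """Map the two Mello factors onto a disclosure obligation.

    harm_risk: risk that an AI error causes physical harm, discounted
        by the likelihood a human clinician would catch it.
    patient_agency: the patient's capacity to act on a disclosure,
        e.g. by opting out or seeking a second opinion.
    """
    if harm_risk is Level.HIGH and patient_agency is Level.HIGH:
        return "informed consent"
    if harm_risk is Level.HIGH or patient_agency is Level.HIGH:
        return "notification"
    return "neither"
```

Everything difficult lives in the inputs: rating each factor for each tool is precisely the risk assessment that the CodeX critique, discussed below, argues most health systems cannot yet perform.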
Organizational Responsibility, Not Clinician Discretion
The framework's most consequential structural recommendation: disclosure decisions should be made at the organizational level, not left to individual clinicians. This shifts the burden of the consent architecture to health system governance committees and leadership — away from the physician who has 15 minutes for an appointment.
It also explicitly flags the over-disclosure risk: indiscriminate notification about all AI tools, including well-validated low-risk administrative ones, can undermine patient trust in tools that are genuinely beneficial. Not every tool warrants disclosure. The framework's value is in creating principled criteria for the distinction.
The framework is genuinely useful precisely because it resists the impulse to treat all AI tools as equivalent. An AI that flags a potential drug interaction in a pharmacist's queue is categorically different from an AI that drafts a patient's care plan or interprets a cardiac imaging study. Collapsing all of them into a single disclosure obligation produces either over-disclosure — burying patients in notifications for low-stakes tools — or under-disclosure, where the path of least resistance is to call everything low-risk and do nothing.
The CodeX Critique: Why the Framework's Foundation Is Shaky
In February 2026, Stanford Law School's CodeX center published a pointed challenge: "Consent All the Way Down: Why the Mello Framework for AI Disclosure in Healthcare Fails on Its Own Terms." It is worth engaging with seriously, not because it dismisses the Mello framework — it doesn't — but because the critique reveals structural problems that any disclosure framework will face.
Three Foundational Weaknesses
First: The framework treats human oversight as a reliable error-interception mechanism while simultaneously acknowledging it isn't. Automation bias — clinicians deferring to AI output even when their independent judgment would differ — is one of the most documented phenomena in human-computer interaction research. The "human in the loop" is not a safety net; it is a sociotechnical system with its own compounding failure modes.
Second: Patient agency is collapsed into a binary when it exists on a gradient. The framework assigns patients the role of quality-control agents while elsewhere acknowledging that patients are already too overloaded to absorb more information.
Organizations Cannot Perform the Risk Assessments the Framework Demands
Third — and most damaging: the framework presumes health systems have the internal infrastructure to perform the risk assessments that generate the three tiers. The evidence says they don't.
A 2025 CHIME Foundation survey found that only 8% of healthcare organizations described themselves as "very confident" in their ability to identify emerging AI risks. Fewer than half had a formal approval process before AI implementation.
An organization that cannot assess algorithmic risk defaults to the path of least resistance — in the Mello framework, that path is the "neither" category, because it requires no action. The framework's own structure incentivizes underassessment.
The notice-and-consent paradigm fails at four sequential stages: reading is impossible given volume, comprehension is unattainable given complexity, evaluation is foreclosed by lack of technical expertise, and action is impossible because users face take-it-or-leave-it terms.
— Stanford Law CodeX, "Consent All the Way Down," February 2026

What Patients Actually Want
While the academic debate continues at the framework level, researchers have moved in parallel to simply ask patients what they expect. The answers are unambiguous — and they complicate both the "just notify them" and the "don't overwhelm them" camps simultaneously.
A large majority of U.S. adults expect to be notified or to provide consent before AI tools are used in their healthcare, compared with only 14–16% who said neither was required. The University of Michigan TIERRA study (n=3,000, November 2025) concluded that opt-out approaches are likely insufficient to meet patient expectations. The AMA's 2026 Physician Survey found that 40% of physicians simultaneously worry that AI disclosure will stoke patient distrust, even of beneficial tools.
TIERRA Program — Patient Preferences Study
A nationally representative survey of U.S. adults found that the overwhelming majority expect to be informed and involved in decisions about AI use in their care. Preference for notification about AI use in healthcare is higher than reported preferences for notification about biospecimen or health information use in earlier comparable studies.
Key finding: opt-out approaches — where patients are presumed to consent unless they affirmatively object — are likely insufficient. Most patients want active involvement, not passive assumption of consent.
Patient Expectations vs. the Over-Disclosure Risk
Mello and colleagues note directly in the JAMA paper that too much disclosure about AI use can harm patients by "stoking distrust of clinicians' recommendations and communications even where evidence suggests that AI improves them." Earlier survey data cited in the paper found that roughly 60% of U.S. adults said they would be uncomfortable with their physician relying on AI, and 70–80% had low expectations that AI would improve their care.
The tension is real: patients want disclosure, but disclosure designed without care can undermine the therapeutic relationship and trust in tools that are genuinely beneficial. The design of disclosure — how it is framed, when, by whom — may matter more than whether it technically occurs.
The Ambient Documentation Problem
If you want to understand why consent infrastructure isn't keeping pace with AI deployment, ambient documentation is the most instructive case study available right now. It is the fastest-growing AI category in healthcare, it is the most intimate in terms of what it captures, and it has the least standardized consent process of any clinical AI application currently in widespread use.
Ambient AI documentation tools — Abridge, Ambience Healthcare, Microsoft Nuance DAX Copilot — passively record and transcribe physician-patient conversations, then generate structured clinical notes. The segment generated $600 million in revenue in 2025, a 2.4× year-over-year increase. Ambient scribes are being deployed into thousands of clinical settings simultaneously, often through EHR vendor bundles or departmental piloting, moving faster than institutional governance structures can evaluate them.
The JAMA Network Open Finding: No Standardized Consent Process Exists
A July 2025 quality improvement study published in JAMA Network Open examined clinician and patient experiences with informed consent for ambient clinical documentation across a health system deployment. The central finding: consent processes varied widely across institutions and within health systems using the same tool. There was no standardized approach — not to the language used, the timing of consent, the delivery format, or the documentation that consent occurred at all.
This is worth sitting with. Ambient tools are recording physician-patient conversations — among the most intimate interactions in healthcare. Several states now require explicit written consent for AI transcription of clinical sessions. And yet the fastest-growing AI category in healthcare has no consistent consent architecture, no standardized disclosure language, and no shared definition of what adequate patient understanding looks like before a recording begins.
The ambient documentation gap is also illustrative of a broader pattern: AI tools enter clinical workflows through operational channels — IT procurement, EHR vendor bundles, departmental pilots — faster than clinical governance can evaluate them. Disclosure obligations become an afterthought to deployment rather than a prerequisite for it. The governance conversation happens after the tool is already in 40 exam rooms.
The Consent Literacy Problem
The American Journal of Bioethics dedicated its March 2025 issue to this domain. One through-line across the papers is a problem the authors collectively identify as consent literacy — the gap between what disclosure frameworks assume patients can absorb and process, and what patients can actually do with complex information about AI systems during a healthcare encounter.
The Informed Rational Patient
Disclosure frameworks are built on the premise that patients can read a disclosure, evaluate the risk profile of the AI tool described, compare it against alternatives, and make a meaningful choice — for example, opting out, seeking a second opinion, or adjusting their level of engagement with subsequent recommendations.
This is the model of patient autonomy embedded in U.S. informed consent law since the 1970s. It was designed for a world of discrete treatments and identifiable human agents making bounded decisions. It assumes time, literacy, technical capacity, and the existence of genuine alternatives.
A Different Reality
Most patients meet AI disclosure in the middle of a healthcare encounter: a moment of stress, time pressure, health anxiety, and cognitive load. They receive dense, legally drafted language about tools they cannot evaluate, produced by systems they cannot audit, with no realistic alternative but to accept the terms of care as offered.
Barbara Evans and Azra Bihorac, writing in NEJM AI, argue that informed consent as conceived in the 1970s is structurally incompatible with large-scale AI environments. The volume and complexity of AI tools in clinical care exceeds what any patient — or clinician — can meaningfully evaluate tool by tool. Community-based, ongoing oversight structures may be the only mechanism that can give patients genuine voice in aggregate policy.
Hurley and colleagues at Baylor College of Medicine propose in the American Journal of Bioethics that the right to notice and explanation of AI serves three distinct functions, each requiring a different approach: notifying patients about their care, educating patients and building trust, and meeting standards for informed consent. Simple notification is achievable with a banner or a form. Education that builds genuine understanding requires interaction, time, and content that most health systems cannot currently provide. Meeting informed consent standards — where patients have the information they would need to make a meaningful choice — may require changes to clinical workflow the healthcare system isn't structurally prepared to make.
The Entrepreneur Opportunity
Against this backdrop, the market for disclosure and consent infrastructure in healthcare is real, growing, and largely unbuilt. CB Insights' 2025 Digital Health 50 identified patient consent management as an emerging product category alongside AI governance platforms — but the category is still early-stage. The specific product gaps that represent genuine entrepreneurial opportunity are as follows.
Consent Management as a Product Layer
A HIPAA-compliant, EHR-integrated consent management layer that tracks which AI tools have been disclosed to each patient, stores versioned consent records, and triggers re-consent when models are updated, built specifically for multi-state compliance requirements.
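A minimal sketch of what such a record layer might store, assuming consent is bound to a specific model version so that an update can trigger re-consent (all names here are hypothetical, not drawn from any shipping product):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ConsentRecord:
    patient_id: str
    tool_id: str              # the AI tool the patient was told about
    model_version: str        # the version the patient actually consented to
    disclosure_text_id: str   # versioned copy of the language shown
    jurisdiction: str         # state whose rules governed this disclosure
    method: str               # e.g. "written", "verbal", "portal"
    consented_at: datetime

def needs_reconsent(record: ConsentRecord, deployed_version: str) -> bool:
    """Consent is version-bound: deploying an updated model invalidates it."""
    return record.model_version != deployed_version
```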
Disclosure UX Beyond Boilerplate
Plain-language explainability interfaces aligned to the Mello framework's risk tiers — translating model behavior, known failure modes, and liability responsibility into formats patients can actually engage with. Less terms-of-service, more patient communication.
Patient-Facing AI Nutrition Labels
CHAI's Applied Model Card is written for procurement teams. No patient-facing equivalent exists. A standardized plain-language summary — what the AI does, what data it uses, its known failure modes, who is responsible for its output — embedded at point of care.
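The four fields named above translate directly into a schema. A sketch with hypothetical field names (a real label standard would also need reading-level and translation requirements this example skips):

```python
from dataclasses import dataclass

@dataclass
class PatientAILabel:
    """Plain-language 'nutrition label' for one AI tool."""
    what_it_does: str               # one sentence, plain language
    data_it_uses: str               # what patient information the tool reads
    known_failure_modes: list[str]  # e.g. settings where it performs worse
    responsible_party: str          # who answers for the tool's output

    def render(self) -> str:
        """Render for point of care, e.g. an intake tablet or portal page."""
        return "\n".join([
            f"What this tool does: {self.what_it_does}",
            f"What information it uses: {self.data_it_uses}",
            "Known limitations: " + "; ".join(self.known_failure_modes),
            f"Who is responsible for its output: {self.responsible_party}",
        ])
```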
Multi-State Disclosure SaaS
A compliance layer that maps AI tool deployments to state-specific disclosure requirements, auto-generates compliant disclosure language by jurisdiction, and maintains auditable records defensible against attorney general enforcement — including TRAIGA's 60-day cure period mechanics.
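The core operation is a mapping from deployment footprint to the strictest applicable rules. A sketch with placeholder rule values; only the TRAIGA 60-day cure period is taken from the description above, and the rest is illustrative, not actual law:

```python
# Placeholder rule table -- values are illustrative, not actual statutes.
STATE_RULES: dict[str, dict] = {
    "TX": {"written_consent": True, "cure_period_days": 60},   # TRAIGA-style
    "CA": {"written_consent": True, "cure_period_days": None},
    "UT": {"written_consent": False, "cure_period_days": None},
}

def strictest_requirements(deployment_states: set[str]) -> dict:
    """Collapse a multi-state deployment to the strictest applicable rules,
    so one disclosure workflow satisfies every jurisdiction at once."""
    rules = [STATE_RULES[s] for s in deployment_states if s in STATE_RULES]
    cure = [r["cure_period_days"] for r in rules
            if r["cure_period_days"] is not None]
    return {
        "written_consent": any(r["written_consent"] for r in rules),
        "shortest_cure_period_days": min(cure) if cure else None,
    }
```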
Standardized Ambient Consent
A consent workflow product designed specifically for ambient documentation: patient-facing, multilingual, accessible across literacy levels, auditable per encounter, and deployable as an EHR-embedded pre-session module.
Patient-Side Adverse Event Reporting
The patient-facing complement to CHAI's Health AI Registry — mechanisms for patients to report AI-related errors, unexpected recommendations, or trust violations, feeding back to independent oversight bodies before adverse events escalate through the litigation system.
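A sketch of the minimal event schema such a mechanism might accept, with the report categories drawn from the description above and everything else hypothetical:

```python
from dataclasses import dataclass
from datetime import date

# Categories mirror the failure types named above.
CATEGORIES = {"error", "unexpected recommendation", "trust violation"}

@dataclass
class PatientAIReport:
    """One patient-submitted report about an AI-involved incident."""
    tool_id: str
    encounter_date: date
    category: str       # one of CATEGORIES
    description: str    # the patient's own account, free text
    wants_followup: bool

    def __post_init__(self) -> None:
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
```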
The compliance mandate exists or is arriving. The state-law patchwork is tightening. The Joint Commission and CHAI accreditation pathway is advancing. The tooling to meet these requirements at enterprise scale — in a way that is genuinely patient-centered rather than institutionally defensive — largely does not yet exist. That is a product gap before it is a policy gap.
What the Research Collectively Establishes
First: patients want disclosure, and opt-out models are insufficient. Second: the leading academic framework for deciding when disclosure is required rests on infrastructure assumptions that most health systems cannot currently satisfy. Third: the fastest-growing clinical AI category — ambient documentation — has no standardized consent process anywhere in the country. Fourth: the 1970s model of informed consent is structurally incompatible with the scale and complexity of AI deployment in modern healthcare.
None of this means disclosure is impossible or counterproductive. It means that the path from a legal requirement to a patient who is genuinely informed runs through product design, workflow integration, and governance infrastructure that is still being built. Part 3 of this series examines the hardest questions about what that infrastructure should look like — and why reasonable, well-informed people disagree on the answers.
