Patient AI Use — The Framework Gap: What Needs to Be Built — Luminity Digital
Patient AI Use in Healthcare  ·  Part 2 of 2
Healthcare AI  ·  Policy & Innovation

Patient AI Use — The Framework Gap: What Needs to Be Built

Patient AI use is happening at massive scale. Liability defaults to physicians without giving them tools. No professional body has issued standalone guidance. No product helps clinicians manage what patients are doing with AI between appointments. Here is what the gap actually looks like — and what needs to change.

March 2026  ·  Tom M. Gomez  ·  13 Min Read

Part 1 of this series established the clinical case: the physician comfort hierarchy revealed in the AMA's 2026 Physician Survey is evidence-based, not reflexive. AI demonstrably helps patients in the informational and preparation tiers. It demonstrably fails in the clinical interpretation tier. The boundary is right. The problem is that medicine has built almost nothing to enforce it, guide patients toward the safe side of it, or protect clinicians when patients cross it on their own.

There is a particular kind of governance failure that happens in plain sight. It is not the failure to recognize a problem — the AMA survey, the ECRI hazard rankings, the Nature Medicine emergency undertriage data, and the NEJM AI overtrust study all make the problem visible. It is the failure to act on recognition. Three years after ChatGPT made consumer health AI a mass phenomenon, no major professional body has issued standalone guidance on counseling patients about their own independent AI use. No EHR system tracks what AI tools patients are consulting between appointments. No clinical framework exists for the conversation a physician should have with a patient who arrives having already diagnosed themselves with ChatGPT. The guidelines that govern this space were written for a world where the patient asked Google, not a world where 230 million people ask a confident, empathetic AI that undertriages half of emergencies.

The Liability Landscape: Physicians Bear Risk Without Tools

The legal framework for patient harm from independent AI health use is unsettled, actively contested, and defaulting to physician liability in ways that give physicians no practical recourse. Understanding the liability picture is not an academic exercise — it is the reason clinicians have a professional stake in building the frameworks that currently don't exist.

Current Law · Physician Exposure

Physicians Bear Primary Liability Under Existing Doctrine

Federation of State Medical Boards · April 2024 recommendation

The Federation of State Medical Boards recommended in April 2024 that medical professionals be held responsible for ensuring the accuracy and veracity of evidence-based conclusions drawn from AI — framing AI as equivalent to any other diagnostic tool and placing liability on clinicians rather than on AI makers. The Doctors Company, the nation's largest physician-owned malpractice insurer, currently has no exclusion for AI and would defend physicians if AI played a role in a claim — but explicitly acknowledged that physicians "bear the biggest risk."

The emerging double bind: physicians face exposure for over-relying on AI recommendations and potential future liability for failing to use AI if it becomes standard of care. Johns Hopkins researchers described it as "an immense, almost superhuman burden."

Unresolved Liability · Healthcare Brew — Doctors Liable When AI Makes a Mistake
The Patient AI Variant

When a Patient Acts on AI Advice Without Physician Involvement

The most legally ambiguous scenario — a patient uses ChatGPT, misinterprets a result, delays or refuses care, and experiences harm — has no established legal framework assigning clear responsibility. Traditional assumption-of-risk defenses may apply, but AI companies' terms-of-service disclaimers face significant enforceability challenges, particularly regarding minors and foreseeable harms.

The first enforcement signal: Texas Attorney General Ken Paxton reached a September 2024 settlement with Pieces Technologies, the first state attorney general enforcement action against a healthcare AI company, over misleading claims about AI accuracy in clinical documentation. The theory was deceptive trade practice, not medical malpractice. This is a different and potentially faster enforcement pathway than negligence litigation.

Section 230 protection for AI-generated health content is eroding. Character.AI did not claim Section 230 as a defense in the Garcia wrongful death lawsuit — settled in January 2026 — a recognition that the defense likely fails for generative AI outputs. Legal consensus is moving toward treating AI chatbot responses as authored speech, not hosted third-party content.

Texas AG — First Healthcare AI Enforcement Settlement

The Structural Trap for Clinicians

Patients are using consumer AI for health decisions outside of clinical relationships. When those decisions cause harm, liability frameworks default to physicians even when physicians had no involvement in or awareness of the AI consultation. A physician who never knew a patient used ChatGPT to interpret a pathology report may still face liability for the resulting treatment delay — on the theory that the physician should have counseled the patient about AI use before or after the report was issued.

The only protection against this exposure is the very thing that doesn't yet exist: a clinical standard for discussing patient AI use, a workflow for documenting that conversation, and professional guidelines that establish what "reasonable care" looks like when patients are known to be active health AI users.

The Professional Guidelines Gap: A Conspicuous Silence

The most striking finding from surveying the professional landscape is not the absence of good answers — it is the near-complete absence of the question. Major professional bodies have produced extensive guidance on clinician AI use and on AI systems deployed by institutions. Almost none have issued standalone guidance on how physicians should counsel patients about their own independent consumer AI use.

What Exists · Clinician AI Guidance

Robust Guidance on Clinician Use, Near-Silence on Patient Use

ACP, AMA, ASCO, Joint Commission/CHAI — 2024–2025

The American College of Physicians published a comprehensive policy position in Annals of Internal Medicine (June 2024) with 10 recommendations on AI in clinical practice. Its patient-facing guidance is a single sentence: a recommendation that "patients, physicians, and other clinicians be made aware, when possible, that AI tools are likely being used." The Joint Commission/CHAI RUAIH guidance (September 2025) recommends informing patients when institutional AI impacts their care; it does not address what patients do independently.

ASCO established six guiding principles for AI in oncology in 2024. The APA issued a Health Advisory in November 2025 urging patients not to use chatbots as substitutes for mental health professionals. The WHO's January 2024 guidance identified patient-guided AI use as a primary application category — and flagged risks — but provided no clinical counseling framework.

Partially Addressed · NLM — ACP AI Recommendations
What Doesn't Exist

The Missing Standalone Guidance

No major medical professional body has issued a standalone clinical guideline specifically on how physicians should counsel patients about their own use of consumer AI tools for health. There is no equivalent of "talk to your doctor before taking a new supplement" for AI health use. There is no validated screening question for clinical intake forms. There is no EHR field for documenting whether a patient has consulted AI about the condition being treated.

The National Academy of Medicine's December 2025 paper on "Critical AI Health Literacy" comes closest — articulating a framework for patients — but it is a NAM Perspectives paper, not a clinical practice guideline, and it has no implementation pathway into clinical workflows.

The NEJM AI editorial by Blumenthal and Goldberg coined "Patient AI" as a distinct category requiring clinical management, but identified no organization actively building the counseling framework that term implies.

No Guidance Exists · NAM — Critical AI Health Literacy

How Patients Actually Use AI: The Clinical Encounter Reality

The framework gap is not theoretical. Patients are already bringing AI into the clinical relationship — before appointments, after appointments, and increasingly during the intervals when symptoms develop and the office is closed. Understanding this usage pattern is a prerequisite for designing any response to it.

Before the Appointment

Patients Are Arriving Pre-Diagnosed

31% of Americans use AI chatbots to prepare questions for doctor visits. 55% use AI to check or explore symptoms before seeking care. A growing subset — particularly among younger patients — arrive having already formed a diagnostic hypothesis via AI and seeking confirmation rather than independent assessment.

Physicians at Harvard's Beth Israel Deaconess and Kettering Health report patients bringing AI-generated summaries to appointments. This can be productive — better-prepared patients, more focused appointments. It becomes problematic when patients have formed incorrect conclusions that are resistant to revision because they arrive with a confident AI narrative already in place.

Only 8% of patients disclose AI use to their physician — despite 30% of physicians believing most patients are probably using it. The disclosure gap means clinicians are flying blind about the AI context shaping patient expectations before an encounter begins.

Happening Without Clinical Awareness
After the Appointment & Between Visits

The After-Hours AI System

70% of health AI use occurs outside clinic hours. Patients use AI to decode discharge instructions, understand test results, interpret medical bills, compare medications, and — in the most dangerous cases — decide whether a worsening symptom requires emergency care.

The 52% emergency undertriage rate from the Mount Sinai/Nature Medicine study is not an abstract safety concern. It describes a patient with diabetic ketoacidosis being told by AI to wait 24–48 hours. That patient is not waiting with clinical supervision. They are at home, alone, after hours, acting on AI advice that is confidently wrong.

Rural communities send approximately 580,000 health AI queries per week to ChatGPT — in areas where the alternative is not a doctor's office but a two-hour drive to the nearest emergency department. For these patients, the after-hours AI use is not convenience; it is the healthcare system they actually have access to.

Requires Clinical Framework
250+

Health AI bills were introduced across 34 U.S. states in 2025 alone. Nearly all of them address AI deployed by healthcare providers or institutions; essentially none address AI used by patients independently. The regulatory energy is concentrated entirely on the institutional deployment side of the problem, leaving the 230 million weekly consumer health AI users without a single enforceable protection designed specifically for them. Source: Manatt Health AI Policy Tracker

The Startup Landscape: $14 Billion Into the Wrong Problem

Healthcare AI investment reached $14.2 billion in 2025, with AI capturing 54% of all digital health funding. The money has flowed predominantly into two streams: clinician workflow tools (ambient documentation, clinical decision support, coding) and direct-to-consumer patient AI (ChatGPT Health, symptom checkers, general chatbots). The bridge between these — tools helping clinicians manage, respond to, and guide patient AI use within clinical workflows — is largely unbuilt.

Current Leaders · Patient-Facing AI

What Is Being Built

Hippocratic AI, RecovryAI, Kaiser, OpenAI Health — 2025–2026

Hippocratic AI ($404M raised, $3.5B valuation) deploys voice-enabled AI agents for non-diagnostic tasks: chronic care management, post-discharge follow-ups, care gap closure, medication adherence. Their critical design constraint — agents explicitly avoid diagnosis and escalate clinical questions to human nurses — is the architectural expression of the physician comfort hierarchy. More than 115 million patient interactions without a reported safety incident are evidence that well-bounded patient AI is both commercially viable and safe.

RecovryAI received FDA Breakthrough Device Designation in March 2026 for physician-prescribed Virtual Care Assistants — the first patient-facing clinical AI to enter this regulatory pathway. The prescription model matters: it establishes institutional responsibility, clinical accountability, and compliance with professional oversight that consumer chatbots entirely lack.

Viable Bounded Models · Business Wire — RecovryAI FDA Breakthrough Designation
What Nobody Is Building

The Clinician-Mediated Patient AI Layer

No commercial product exists to help physicians track, understand, or guide patients' independent use of consumer AI for health between appointments. This is the single largest product gap in the healthcare AI landscape — and it sits at the intersection of the liability exposure, the guidelines gap, and the patient usage patterns described above.

What this product category would do: give physicians a structured way to ask about patient AI use during intake, document it, flag high-risk AI-informed decisions for clinical discussion, and provide patients with curated, clinician-endorsed guidance on which AI use cases are safe. The healthcare equivalent of "here are the AI tools I recommend for managing your condition, and here is what not to use them for."

Kaiser Permanente's Intelligent Navigator (launched October 2024) is the closest institutional example — a curated AI portal with clinical guardrails. But it requires Kaiser's scale, integration infrastructure, and clinical governance apparatus to operate. The equivalent product for a 26-physician group practice or a solo practitioner does not exist.

Entirely Unbuilt

Five Product Gaps That Need to Be Filled

The framework gap is a product gap before it is a policy gap. The clinical counseling standard, the liability protection, and the patient safety guardrail all depend on infrastructure that doesn't yet exist. Here is what needs to be built.

01  ·  Workflow

Patient AI Use Intake Module

A structured clinical intake component — EHR-embedded or standalone — that asks patients which AI tools they have used for the condition being treated, what the AI recommended, and flags responses requiring clinical follow-up. Turns an undocumented behavior into auditable clinical data.
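
A minimal sketch of what such an intake record and its follow-up flag could look like, in TypeScript. Everything here is an illustrative assumption: the field names, the use-case taxonomy, and the flagging rule are hypothetical, not an existing EHR schema or a validated screening instrument.

```typescript
// Hypothetical intake record for patient-reported AI use. Field names and
// the use-case taxonomy are illustrative assumptions, not an EHR standard.

type AIUseCase =
  | "medication_question"
  | "appointment_prep"
  | "discharge_clarification"
  | "symptom_triage"
  | "result_interpretation"
  | "self_diagnosis";

interface PatientAIUseRecord {
  patientId: string;
  toolName: string;         // as reported by the patient, e.g. "ChatGPT"
  useCases: AIUseCase[];
  aiRecommendation: string; // free-text summary of what the AI advised
  actedOnAdvice: boolean;   // did the patient change behavior based on it?
  recordedAt: Date;
}

// Use cases in the clinical interpretation tier (per Part 1) are high risk.
const HIGH_RISK = new Set<AIUseCase>([
  "symptom_triage",
  "result_interpretation",
  "self_diagnosis",
]);

// Surface a record for clinician discussion when the patient acted on the
// advice or the reported use falls in a high-risk tier.
function needsClinicalFollowUp(record: PatientAIUseRecord): boolean {
  return record.actedOnAdvice || record.useCases.some((u) => HIGH_RISK.has(u));
}
```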

02  ·  Prescription

Physician-Prescribed Patient AI

A framework for clinicians to recommend specific, vetted AI tools to patients as part of care plans — with explicit scope boundaries (what the tool is for, what it isn't for), clinical accountability, and patient-reported outcomes feeding back into clinical records. The RecovryAI FDA pathway is the first proof point that this model is viable.
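
One way the care-plan entry behind such a prescription could be modeled, again in TypeScript with hypothetical field names. This is a sketch of the scope-boundary idea, not the RecovryAI product or its FDA submission.

```typescript
// Hypothetical care-plan entry for a physician-prescribed AI tool.
// The explicit in-scope / out-of-scope lists encode the boundary the
// clinician is accountable for; all names are illustrative only.

interface PrescribedAITool {
  toolId: string;               // a vetted tool from the practice's formulary
  patientId: string;
  prescribedByNpi: string;      // the accountable clinician
  indicatedFor: string[];       // what the tool IS for
  contraindicatedFor: string[]; // what it is NOT for
  escalationContact: string;    // where clinical questions get routed
  reviewDate: Date;             // time-bounded, like any prescription
}

const examplePrescription: PrescribedAITool = {
  toolId: "post-discharge-coach-v2",
  patientId: "pt-4821",
  prescribedByNpi: "1234567890",
  indicatedFor: ["discharge instruction clarification", "medication reminders"],
  contraindicatedFor: ["symptom triage", "dose changes", "new diagnoses"],
  escalationContact: "practice-triage-line",
  reviewDate: new Date("2026-09-01"),
};
```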

03  ·  Literacy

Patient AI Literacy at Point of Care

Clinician-endorsed patient education on safe AI use — what AI is good for (medication questions, appointment preparation, discharge clarification) and what it isn't (pathology interpretation, emergency triage, diagnostic second opinions). Delivered through patient portals, discharge packets, or condition-specific care plans. The "talk to your doctor" equivalent for health AI.

04  ·  Feedback

Patient-Side AI Adverse Event Reporting

A mechanism for patients to report AI health advice that was wrong, confusing, or harmful — feeding anonymized signal to clinical governance bodies, CHAI's Health AI Registry, and ECRI's patient safety monitoring. Closes the feedback loop between consumer AI use patterns and clinical safety surveillance without requiring litigation as the primary signal.
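
A sketch of what the report payload and its submission could look like. The schema and the registry endpoint are assumptions; neither CHAI's Health AI Registry nor ECRI publishes a patient-side intake format today.

```typescript
// Hypothetical anonymized adverse event report. No patient identifiers;
// the narrative is assumed to be scrubbed of PHI before submission.

interface AIAdverseEventReport {
  reportId: string;             // random ID, never a patient identifier
  toolName: string;
  useCase: string;              // e.g. "emergency triage"
  harmType: "wrong_advice" | "delayed_care" | "confusion" | "other";
  clinicallyConfirmed: boolean; // did a clinician verify the AI was wrong?
  narrative: string;
  reportedMonth: string;        // "2026-03": coarse date limits re-identification
}

// Placeholder registry URL; substitute a real intake endpoint.
const REGISTRY_URL = "https://registry.example.org/ai-adverse-events";

async function submitReport(report: AIAdverseEventReport): Promise<void> {
  const response = await fetch(REGISTRY_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
  if (!response.ok) {
    throw new Error(`Report submission failed: ${response.status}`);
  }
}
```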

05  ·  Access

Curated Health AI Portals at Practice Scale

What Kaiser's Intelligent Navigator does at enterprise scale, delivered as a SaaS product for group practices and solo practitioners. A clinician-curated portal where patients access AI tools that stay within clinical guardrails — bounded by diagnosis restrictions, connected to escalation pathways, and accountable under the practice's clinical governance framework.
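
At a technical level, the practice-administered piece is mostly a policy layer over an existing model. A sketch of what that guardrail configuration might contain, with hypothetical names; this is not Kaiser's actual Intelligent Navigator configuration.

```typescript
// Hypothetical practice-level guardrail config for a curated AI portal.
// Allowed use cases map to the informational/preparation tiers; blocked
// ones map to the clinical interpretation tier.

interface PortalGuardrails {
  allowedUseCases: string[];
  blockedUseCases: string[];
  escalationKeywords: string[];  // phrases that trigger a human handoff
  escalationTarget: string;      // nurse line, on-call clinician, or 911 script
  auditLogRetentionDays: number; // sessions stay auditable under practice governance
}

const defaultGuardrails: PortalGuardrails = {
  allowedUseCases: [
    "medication questions",
    "visit preparation",
    "discharge clarification",
  ],
  blockedUseCases: [
    "emergency triage",
    "pathology interpretation",
    "diagnostic second opinions",
  ],
  escalationKeywords: ["chest pain", "trouble breathing", "suicidal"],
  escalationTarget: "after-hours-nurse-line",
  auditLogRetentionDays: 2555, // roughly seven years, a common record horizon
};
```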

The Path Forward: Clinician-Mediated Guardrails

The argument is not that patients should be prohibited from using AI for health — 230 million weekly users make that argument moot before it starts. The argument is that medicine should proactively shape how patients use AI rather than passively inheriting whatever the consumer AI companies build.

The physician comfort hierarchy in the AMA survey is the starting point. It represents the clinical community's considered judgment about where AI helps and where it harms. Translating that hierarchy into clinical practice requires three things: a counseling standard that tells physicians how to have the conversation, documentation infrastructure that makes that conversation auditable, and patient-facing tools that give the conversation somewhere to land.

The question is not whether patients will use AI for health. It is whether medicine will build the frameworks to channel that use toward the evidence-supported applications — or wait for the liability cases and the ECRI hazard rankings to accumulate until the pressure to act becomes unavoidable.

— Luminity Digital, Patient AI Use in Healthcare Series, March 2026

The analogy to managing "Dr. Google" is instructive but insufficient. Internet health searching was largely passive information retrieval with conflicting results that prompted verification. AI health use is interactive, personalized, authoritative-sounding, and — as the NEJM AI overtrust study confirmed — indistinguishable from physician advice to most patients. The informal "be careful what you read online" approach that eventually evolved around web-based health information is not adequate for a tool that undertriages half of emergencies with the same confident empathy it uses to explain how ibuprofen works.

The Joint Commission and CHAI voluntary certification pathway is advancing. Professional societies are beginning to act. The FDA Breakthrough Device Designation for RecovryAI signals a regulatory pathway for physician-prescribed patient AI. The building blocks are visible. What the field now needs is the clinical counseling framework, the EHR workflow integration, and the product infrastructure to connect them. The institutions, clinicians, and builders who work through this now will define the standard for the next decade of the doctor-patient relationship. The ones who wait will inherit whatever standard those builders produce.

Luminity Perspective

The healthcare AI market is spending $14 billion on clinical workflow tools and consumer chatbots. The gap is in the middle: tools that help clinicians manage what patients are doing with AI between appointments, document those conversations, and guide patients toward the beneficial tier of AI use. This is a product category, a liability protection mechanism, and a patient safety intervention simultaneously. It does not require building new AI — it requires building the clinical infrastructure around the AI that already exists.

The AMA's 2026 survey data is the business case. The liability landscape is the urgency. The 230 million weekly users are the market.

What This Series Established

Part 1 — The Clinical Case. The physician comfort hierarchy from the AMA 2026 survey is evidence-based. AI demonstrably helps in the informational and preparation tiers. It demonstrably fails in the clinical interpretation tier. The 52% emergency undertriage rate, the 83% pediatric diagnostic error rate, the documented toxic ingestion cases, and the NEJM AI overtrust data validate the boundaries physicians have drawn.

Part 2 — The Framework Gap. Liability defaults to physicians without giving them tools. No major professional body has issued standalone guidance on counseling patients about consumer AI. No EHR tracks patient AI use. The $14 billion flowing into healthcare AI has largely bypassed the clinician-mediated patient AI layer. The five product gaps identified here represent the infrastructure that needs to exist before the boundary can mean anything in practice.

The path forward is clinician-mediated guardrails — not prohibition, not passivity, but proactive clinical management of how 230 million weekly users interact with a technology that is simultaneously the most accessible health resource ever built and one of the most dangerous when used without appropriate boundaries.

Patient AI Use in Healthcare — Series Complete

This two-part series was produced by Luminity Digital as part of our ongoing healthcare AI governance research. Full references and source material listed below.

Patient AI Use in Healthcare  ·  Two-Part Series
Part 1 · Published · The Clinical Case: Where Physicians Draw the Line and Why
Part 2 · Now Reading · The Framework Gap: What Needs to Be Built
References & Sources — Full Series
