Adoption vs. Trust: Clinicians Lean In; Patients Stay Cautious
Responsible AI in healthcare strategy. Adoption vs. trust: are clinicians and patients aligned in their perceptions of AI and clinical outcomes?

As a Responsible AI in Healthcare Strategist, I continually have to weigh the rapid adoption of healthcare AI against the real concerns that many patients, myself included, feel about it. I dig deep into what Epic is doing with predictive analytics on EHR data, and I ask myself what those analytics would have said about me and my healthcare journey. Would they have helped, or would they have pointed my clinical team in another direction?

In 2025, we find ourselves in a paradoxical moment in healthcare AI: physicians are adopting AI in droves, yet many patients remain uneasy—or simply unaware—of its use in their care. Adoption is outpacing trust. That mismatch is both a technical challenge and a moral one. If AI is to enhance healthcare responsibly, bridging that gap is mission critical.

1. Surge in Clinician Adoption — But Confidence Isn’t Absolute

A recent AMA “Augmented Intelligence” survey reports that 66% of U.S. physicians now use some form of health AI in their practice (up from 38% in 2023). The growth is “unusually fast” for healthcare technology, reflecting strong momentum.

Physicians report deploying AI mainly in administrative or augmentation roles: chart documentation, generating discharge instructions, summarizing research, assisting with translation, care-plan reminders, and, in some cases, triage support.

Yet the survey reveals that adoption does not imply wholehearted trust. Only 35% of physicians say their enthusiasm for AI now outweighs their concerns (up from 30%). Another 25% report that their concerns outweigh their enthusiasm (down somewhat), and the rest report equal measures of excitement and concern.

Key physician concerns include:

  • Oversight/regulation — 47% rank increased regulatory supervision as the most important factor to bolster trust.
  • Data privacy assurances — 87% of physicians list data privacy as essential.
  • EHR / workflow integration — seamless embedding into clinical workflows is often cited as a gating factor.
  • Liability & error handling — physicians worry AI errors or recommendations could expose them to risk, especially when model logic is opaque.

So we have widespread use and cautious optimism. In other words: clinicians are testing the pedals but haven’t floored the accelerator.

2. Patient Perspectives: Hesitant, Demanding Transparency

Even as AI tools creep deeper into clinical routines, patient sentiment is more ambivalent. A large global survey of 13,806 patients across 74 hospitals found:

  • 57.6% expressed a generally positive view of AI in health care.
  • But fewer than half (41.8%) trusted AI to accurately predict treatment responses.
  • Most patients said they prefer explainable AI and physician oversight, and only ~4.4% supported fully autonomous AI.
  • Trust levels varied by demographics: older patients, those in poorer health, and participants with lower tech literacy exhibited more skepticism.

Other surveys echo this trend: patients generally welcome AI as a tool, but they want a meaningful degree of control, clarity about how it is being used, and a human in the loop.

An especially provocative insight: in the domain of medical advice generation, one experimental study showed that laypeople struggle to distinguish AI-generated from physician-written responses; moreover, participants rated the AI responses (even those with errors) as more “complete” and “trustworthy” than the physician responses. That suggests a risk of patients over-trusting AI, especially when the system is opaque.

In short: patients don’t mind AI in principle—but they demand transparency, human accountability, and trust infrastructure.


3. The Mismatch: Why Trust Lags Adoption

Why is this gap so persistent? A few key forces:

  • Asymmetric knowledge & power: Clinicians know the tech; patients don't. Most patients will never see the algorithm's code or model updates, so transparency becomes a promise, not an experience.
  • Explainability / interpretability barrier: Many AI models are black boxes. Even when explanations are built, they may be too simplified or not clinically salient.
  • Liability / accountability fog: Who is responsible when AI errs? The physician, the vendor, the hospital, the coder? That unresolved tension undermines confidence.
  • Data provenance & fairness fears: Patients worry about bias, misuse of data, and opaque training datasets.
  • Trust “deposits” vs “withdrawals”: Even small missteps (misdiagnosis, errors, overpromising) erode trust more than many successes reinforce it.
  • Lack of visible oversight and governance signals: Patients don’t see the audit trails, oversight frameworks, or safety guardrails; they just see a “black box” being applied to their care.

Also, physicians themselves often act in “negotiated” ways with AI suggestions: they may accept, reject, or partially incorporate AI advice depending on context. A study in sepsis decision-making characterized this as “ignore, trust, or negotiate,” depending on clinician confidence and explained uncertainty. That suggests deploying AI isn’t binary—it’s relational.


4. Toward a Bridge: Strategies to Align Adoption & Trust

Here’s where the rubber meets the road. To reconcile adoption momentum with patient trust, systems should consider:

1. Transparent consent / disclosure mechanisms

  • Always tell patients when AI tools are involved (e.g. “Your clinician used an AI assistant to draft this plan”)
  • Offer “opt-out” or human-only fallback pathways
  • Provide short, understandable language about the AI's role, limitations, and oversight (a sketch of such a disclosure record follows below)
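
To make the disclosure idea concrete, here is a minimal sketch of a machine-readable disclosure record that could travel with any AI-assisted artifact. The `AIDisclosure` dataclass and its fields are hypothetical illustrations, not a real EHR or vendor schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: a disclosure record attached to any AI-assisted
# artifact (note, plan, portal message) so the patient-facing view can
# show plain-language context and an opt-out path.
@dataclass
class AIDisclosure:
    tool_name: str           # e.g. "discharge-summary assistant"
    role: str                # what the AI actually did
    limitations: str         # plain-language caveats
    reviewed_by: str         # the accountable clinician
    opt_out_available: bool  # whether a human-only pathway exists
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def patient_summary(self) -> str:
        """Short, understandable language for the patient portal."""
        return (
            f"An AI tool ({self.tool_name}) was used to {self.role}. "
            f"Limitations: {self.limitations} "
            f"Dr. {self.reviewed_by} reviewed and approved the result."
        )

disclosure = AIDisclosure(
    tool_name="discharge-summary assistant",
    role="draft your discharge instructions",
    limitations="it can miss context; your care team checked everything.",
    reviewed_by="Rivera",
    opt_out_available=True,
)
print(disclosure.patient_summary())
```

Generating the patient-facing text from the same record the audit system stores means what the patient reads cannot drift from what was logged.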

2. Explainability + Audit Trails

  • Use interpretable model explanations aligned to clinical reasoning
  • Maintain audit logs and version histories (see the sketch after this list)
  • Periodically show patients (or their proxies) summaries of AI decisions and how those recommendations were used
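
As one way to picture this, here is a minimal sketch of an append-only audit log, assuming events are recorded per recommendation; `log_ai_event` and its fields are invented for the illustration, not a standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical append-only audit log: each entry pins the exact model
# version and chains a hash of the previous entry, so any later
# tampering with the history is detectable.
def log_ai_event(log: list[dict], model: str, version: str,
                 recommendation: str, action_taken: str) -> dict:
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "model_version": version,       # version history, pinned per event
        "recommendation": recommendation,
        "action_taken": action_taken,   # accepted / rejected / modified
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

audit_log: list[dict] = []
log_ai_event(audit_log, "sepsis-risk-model", "2.3.1",
             "flag: elevated sepsis risk in next 6 hours",
             "accepted with modified threshold")
```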

3. Hybrid human-in-loop workflows

  • AI as an assistant, not an arbiter. Final decisions rest with clinicians
  • Incorporate clinician overrides, with rationale logging (sketched below)
  • Encourage “negotiation” interfaces (e.g. clinicians can adjust weights or flags in AI suggestions)
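
Here is a minimal sketch of what "assistant, not arbiter" could look like in code, under the assumption that every decision passes through a single checkpoint; the `Suggestion` and `Decision` types are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: the model proposes, the clinician disposes, and
# every override must carry a logged rationale, so the clinician-model
# "negotiation" is auditable.
@dataclass
class Suggestion:
    text: str
    confidence: float  # model-reported confidence, 0..1

@dataclass
class Decision:
    suggestion: Suggestion
    outcome: str       # "accepted" | "rejected" | "modified"
    final_plan: str
    rationale: Optional[str] = None

def decide(suggestion: Suggestion, outcome: str, final_plan: str,
           rationale: Optional[str] = None) -> Decision:
    # Rejecting or modifying the AI's suggestion requires a rationale.
    if outcome != "accepted" and not rationale:
        raise ValueError("Overriding the AI requires a logged rationale.")
    return Decision(suggestion, outcome, final_plan, rationale)

s = Suggestion("Increase statin to 40 mg daily", confidence=0.71)
d = decide(s, "modified", "Increase statin to 20 mg; recheck renal function",
           rationale="Reduced dose given stage 3 CKD on latest labs")
```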

4. Independent oversight & certification

  • External auditing (bias, drift, safety); a minimal audit check is sketched after this list
  • Publicly visible certification or seals (like “AI verified by X”)
  • Regulatory frameworks that mandate reporting, transparency, and redress
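
One concrete shape an external audit check could take, assuming the auditor can score the model's outputs by patient subgroup; `subgroup_gap_audit`, the group labels, and the 0.05 threshold are illustrative only.

```python
# Illustrative audit check: compare a model quality score (e.g. accuracy
# of its recommendations) across patient subgroups and flag gaps beyond
# a threshold. Group labels and the threshold are assumptions.
def subgroup_gap_audit(scores_by_group: dict[str, list[float]],
                       max_gap: float = 0.05) -> dict:
    means = {g: sum(v) / len(v) for g, v in scores_by_group.items() if v}
    gap = max(means.values()) - min(means.values())
    return {"group_means": means, "gap": round(gap, 4),
            "flagged": gap > max_gap}

report = subgroup_gap_audit({
    "age_under_65": [0.91, 0.88, 0.93],
    "age_65_plus":  [0.82, 0.79, 0.85],
})
print(report)  # gap is about 0.087 here, so "flagged" is True
```

A real audit would use far richer statistics, but even a check this simple makes "oversight" a visible, repeatable artifact rather than a promise.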

5. Feedback loops, patient voices, and iterative learning

  • Mechanisms for patient complaints, feedback, corrections
  • User experience (UX) studies of how patients understand AI in care
  • Monitoring trust metrics over time (e.g. patient satisfaction, opt-out rates, complaints), as sketched below
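
A minimal sketch of monitoring trust metrics over time, assuming encounters can be tagged with opt-outs and complaints; the `TrustMonitor` class, window size, and alert threshold are assumptions for illustration.

```python
from collections import deque

# Illustrative trust-signal monitor: track opt-outs and complaints over
# a rolling window of encounters and flag when the opt-out rate rises
# above a threshold.
class TrustMonitor:
    def __init__(self, window: int = 30, opt_out_alert: float = 0.10):
        self.encounters = deque(maxlen=window)  # (opted_out, complaint)
        self.opt_out_alert = opt_out_alert

    def record(self, opted_out: bool, complaint: bool) -> None:
        self.encounters.append((opted_out, complaint))

    def opt_out_rate(self) -> float:
        if not self.encounters:
            return 0.0
        return sum(o for o, _ in self.encounters) / len(self.encounters)

    def needs_review(self) -> bool:
        return self.opt_out_rate() > self.opt_out_alert

monitor = TrustMonitor()
for opted_out in (False, False, True, False, True):
    monitor.record(opted_out=opted_out, complaint=False)
print(monitor.opt_out_rate(), monitor.needs_review())  # 0.4 True
```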

6. Education & shared narratives

  • Clinician-facing and patient-facing educational materials
  • Use real case studies (with anonymization) to show AI’s performance and limitations
  • Narratives that emphasize augmentation, not replacement

5. Narrative Fragment: A Patient’s Glimpse

Maria, 68, comes in for a follow-up in cardiology. Her clinician says: “I used an AI tool to flag that your LDL trajectory is risky, and it suggested possible dosage adjustments. I reviewed and adjusted it based on your kidney labs.” Maria frowns: “So, the machine is telling you what to do?” The doctor replies: “No — the AI’s recommendation helps me think faster. You and I decide together.”

That small exchange illustrates the tension: AI as behind-the-scenes co-pilot, but with human accountability front and center.