"Only Problem Patients Read These Forms"

How do we handle informed consent when it is viewed as a nuisance? There are strategies we can all adopt to make informed consent truly informed.

"Only Problem Patients Read These Forms"


That's what the front desk staff told me this morning when I asked if anyone actually reads that stack of informed consent forms they handed to me.

I had to ask, "What happens if someone questions them or refuses to sign these forms simply because they don't understand what they say?"

"We fire the patient. We don't have time for trouble." I felt their pain. They are overworked, and of course, they have patients like me asking questions.

As someone who navigates healthcare both as a patient with a chronic condition and as a Responsible AI Healthcare Strategist helping organizations implement AI governance frameworks, this moment revealed a crisis hiding in plain sight.

If healthcare systems view basic informed consent as a nuisance, something only "problem patients" care about, how will they possibly handle AI consent requirements?

The Evidence Is Stark
Research shows that fewer than 75% of patients correctly understand what they have consented to, even in clinical trials. For complex concepts, comprehension drops to 50%.

Yet these same systems must now disclose when AI influences a diagnosis, explain algorithmic decision-making, clarify who is accountable when systems fail, and provide opt-out options.

Survey data reveal the gap: 75% of patients don't trust AI in healthcare, and 80% don't know whether their doctor is using it. When told about AI use, 80% say disclosure would improve their comfort.

The FDA Has Spoken
The January 2025 FDA guidance isn't optional; it establishes clear expectations for transparency, bias mitigation, and lifecycle management of AI-enabled medical devices. The WHO's AI ethics guidance likewise treats informed consent as foundational.

But regulations alone won't fix a culture where asking questions makes you a "problem patient."

What Responsible AI Consent Actually Requires:

✓ Proactively disclose AI use in plain language
✓ Explain the AI's specific role in each patient's care
✓ Clarify who remains accountable for decisions
✓ Provide meaningful opt-out options
✓ Welcome patient questions as essential to governance, not trouble
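
For organizations trying to make these checkmarks auditable rather than aspirational, it can help to see them as a data record. Below is a minimal sketch in Python; the `AIConsentRecord` class, its field names, and the `is_complete()` check are all hypothetical illustrations, not drawn from the FDA guidance or any EHR standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIConsentRecord:
    """One patient's AI consent disclosure, captured for a governance audit trail.

    Hypothetical schema for illustration only; not based on any regulation
    or EHR standard.
    """
    patient_id: str
    ai_system: str                   # which AI tool influenced care
    plain_language_disclosure: str   # what the patient was actually told, in plain language
    ai_role_in_care: str             # the AI's specific role in this patient's care
    accountable_clinician: str       # who remains accountable for the decision
    opt_out_offered: bool            # was a meaningful opt-out presented?
    opted_out: bool = False
    patient_questions: list[str] = field(default_factory=list)  # questions logged, not punished
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_complete(self) -> bool:
        """Consent is documented only if every disclosure element is present."""
        return all([
            self.plain_language_disclosure.strip(),
            self.ai_role_in_care.strip(),
            self.accountable_clinician.strip(),
            self.opt_out_offered,
        ])
```

The design choice worth noticing: patient questions are a field of the record, not an exception path. A schema like this treats asking questions as part of consent, not as "trouble."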

This isn't about perfect systems. It's about fundamental respect for patient autonomy.

The Questions That Matter

  • Does your organization treat patients who ask about AI as informed participants or as problems?
  • What happens when someone questions AI use in their care? Genuine dialogue, or "we don't have time for trouble"?

The future of responsible AI in healthcare doesn't depend solely on better algorithms. It depends on whether organizations can transform consent culture from legal protection theater into genuine patient empowerment.

About Dan Noyes

Dan Noyes operates at the intersection of healthcare AI strategy and governance. After 25 years leading digital marketing strategy, he is transitioning his expertise to healthcare AI, driven by his experience as a chronic care patient and his commitment to ensuring AI serves all patients equitably. Dan holds AI certifications from Stanford, Wharton, and Google Cloud, grounding his strategic insights in comprehensive knowledge of AI governance frameworks, bias detection methodologies, and responsible AI principles. His work focuses on helping healthcare organizations implement AI systems that meet both regulatory requirements and ethical obligations, building governance structures that enable innovation while protecting patient safety and advancing health equity.

Want help implementing responsible AI in your organization? Learn more about strategic advisory services at Viable Health AI.