Law Four in Clinical AI Healthcare Excellence

Solve Problems That Matter

Are we solving the right problems with artificial intelligence in healthcare, or are we simply automating our existing blind spots?

As a patient navigating a chronic medical condition through the Mayo Clinic system, I've experienced firsthand how healthcare systematically overlooks those who fall outside algorithmic norms. My journey from marketing executive to AI healthcare advocate began with a personal realization: the same data-driven approaches that revolutionized digital marketing could transform patient care—if we solved the right problems.

Recent academic research validates this lived experience, revealing a troubling paradox at the heart of healthcare AI development: while we celebrate technical breakthroughs in diagnostic accuracy, we systematically overlook the clinical needs that cause the most patient suffering. A comprehensive analysis of peer-reviewed literature from 2023-2025 reveals a fundamental misalignment between AI research priorities and actual healthcare needs, with profound implications for health equity and patient outcomes.

The evidence is stark. Despite over a decade of healthcare AI development, fewer than 1% of 31,587 AI diagnostic papers could inform whether AI matches human performance in real-world clinical tasks. Meanwhile, chronic pain affects more than 30% of the world's population, yet receives minimal AI attention for treatment and rehabilitation. The 300 million people living with rare diseases, 95% of which lack FDA-approved treatments, remain largely invisible to AI developers focused on commercially viable common conditions.

This systematic neglect isn't accidental. It reflects deeper problems in how we define and select meaningful problems for AI to solve. Current approaches favor technically impressive applications over clinically impactful ones, creating what researchers call the "square pegs into round holes" phenomenon: AI solutions forced onto problems without considering local context, clinical workflows, or patient needs.

My experience illustrates this disconnect perfectly. Despite having access to world-class medical care, I've encountered the frustrating reality that existing healthcare systems—and by extension, the AI tools being built to support them—are optimized for "typical" acute patients following predictable pathways. When your condition or response to treatment falls outside these norms, you often find yourself in clinical limbo, where neither human expertise nor artificial intelligence has adequate frameworks to guide care decisions.

The evidence for systematic misalignment

The academic literature reveals several concerning patterns in healthcare AI problem selection. Research published in npj Digital Medicine demonstrates that conventional benchmarking fails to capture real-world patient outcomes, while studies in JMIR show that healthcare AI faces "flawed performance metrics that inadequately capture real-world complexities and biases."

More troubling, a 2024 systematic review found that 85% of healthcare leaders are exploring AI capabilities, but only 38% believe current AI tools meet real-world clinical needs. This translational gap persists despite massive investment, suggesting our problem selection methodologies are fundamentally flawed.

The cost of misaligned priorities is measurable and deeply personal.

Chronic pain management, which could benefit millions globally, receives negligible AI attention despite representing an enormous personal and economic burden. As someone living with a chronic condition, I've witnessed how care coordination failures create cascading problems: delayed diagnoses, repeated tests, fragmented specialist communications, and the exhausting burden of becoming your own case manager.

These aren't abstract healthcare inefficiencies—they're daily realities for millions of patients who fall outside the "standard" diagnostic and treatment algorithms that current AI systems are designed to optimize. Care coordination failures—identified as critical healthcare challenges—receive minimal AI development compared to diagnostic imaging applications that serve narrower populations. The administrative burden on patients, which costs the healthcare system billions of dollars annually, remains largely unaddressed by AI developers focused on provider-facing tools.

The most damning evidence comes from health equity research. Ziad Obermeyer's groundbreaking work at UC Berkeley revealed how seemingly neutral problem definitions perpetuate racial bias: a widely used algorithm, because it predicted healthcare cost as a proxy for health need, systematically underidentified Black patients for care management programs. This demonstrates how poor problem selection doesn't just waste resources—it actively harms vulnerable populations.
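The mechanism Obermeyer and colleagues documented is easy to reproduce in miniature. The following sketch uses synthetic data, not their dataset or model: it shows how an algorithm that predicts cost perfectly still under-enrolls a group that, because of unequal access, generates lower spending at the same level of illness. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: two groups with identical distributions of true
# illness. Group labels and all parameters are illustrative assumptions.
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
illness = rng.normal(50.0, 10.0, n)      # true health need, same for both

# Key mechanism from Obermeyer et al. (2019): at equal illness, one group
# generates lower healthcare cost (less access, fewer billed services).
cost = illness * np.where(group == 1, 0.7, 1.0) + rng.normal(0.0, 5.0, n)

# "Neutral" problem definition: enroll the top 20% by cost in care
# management. We rank by cost itself, i.e. a perfect cost model --
# the bias comes from the label choice, not from model error.
enrolled = cost >= np.quantile(cost, 0.80)

# Among the sickest 20% of patients, how many from each group get in?
sickest = illness >= np.quantile(illness, 0.80)
for g, name in [(0, "group A"), (1, "group B")]:
    recall = enrolled[(group == g) & sickest].mean()
    print(f"{name}: {recall:.1%} of its sickest patients enrolled")
```

Group B patients must be far sicker than group A patients to clear the same cost threshold, so they are under-identified at equal need. The flaw is in the problem definition, and no amount of model accuracy can fix it.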

Regulatory consensus on systematic frameworks

Global health organizations have responded to these challenges with unprecedented coordination. The WHO's 2023 "Regulatory Considerations on AI for Health" establishes six fundamental areas for AI regulation, emphasizing transparency, risk management, external validation, and data quality. Crucially, WHO Director-General Tedros Adhanom Ghebreyesus explicitly frames this as an equity issue: "regulations can be used to ensure that datasets are intentionally made representative."

The FDA's 2024 final guidance on Predetermined Change Control Plans extends beyond machine learning to all AI-enabled devices, requiring systematic consideration of diversity and bias throughout the development lifecycle. Similarly, the EMA's 2024 reflection paper on AI in medicinal product lifecycle—which received over 1,300 stakeholder responses—establishes a risk-based framework prioritizing human-centered approaches and bias prevention.

This regulatory convergence is significant.

Despite jurisdictional differences, major health organizations are adopting systematic, equity-focused approaches to healthcare AI problem prioritization. The frameworks emphasize risk-based evaluation, mandatory health equity impact assessments, patient-centered outcome prioritization, and systematic bias identification and mitigation.

The Economic Architecture of Neglect

This systematic neglect is not accidental; it is a direct consequence of the economic architecture governing healthcare innovation. The current system is hardwired to favor commercially attractive applications over clinically critical ones, creating a powerful current that pulls development away from the patients who need it most. Recent research illuminates how venture capital incentives, entrenched reimbursement models, and the path of least resistance for data and regulation construct this reality.

The engine of much of today's health AI innovation is venture capital (VC), a sector that experienced a surge of investment in AI-related companies, with nearly 30% of healthcare startup funding in 2024 going to companies leveraging AI, according to a report from Silicon Valley Bank. This capital, however, is not deployed impartially. It operates on a mandate for rapid, high-multiple returns, a model that inherently favors products with a clear, quick, and scalable path to profitability. A 2025 analysis of VC trends by Risetku highlights that investor focus is on "operational efficiency" and "clinician digital workflow" tools—areas with a clear, demonstrable ROI for hospital systems. This economic pressure creates a "flight to quality," where late-stage funding is funneled toward entrenched companies with predictable revenue streams, often in high-volume specialties like radiology and pathology, while more novel, complex, or long-term solutions for chronic conditions are deemed too risky.

This dynamic is powerfully reinforced by the prevailing fee-for-service (FFS) reimbursement system. AI tools that augment or accelerate existing billable procedures, such as analyzing a medical image for which a CPT code already exists, present an easily justifiable business case for both developers and purchasers. Conversely, AI applications designed for preventive care, managing chronic conditions, or coordinating between specialists often lack dedicated reimbursement codes. A recent theoretical analysis by researchers at Johns Hopkins University (building on work published by Tinglong Dai) explores this very dilemma, showing that without direct reimbursement, providers will only use novel AI for the most complex cases, leading to "suboptimal quality and limited uptake" because developers cannot achieve scale. This creates a chilling effect on innovation in the areas where AI's greatest value may lie: preventing costly downstream events rather than optimizing billable moments.
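To make the asymmetry concrete, here is a back-of-the-envelope sketch with purely hypothetical numbers (the volumes, prices, and license fees are my assumptions, not figures from the studies above). Under FFS, an imaging tool attached to an existing billable code generates provider revenue on every use, while a preventive tool generates savings that accrue to the payer and the system, not to the provider who must buy it.

```python
# Hypothetical numbers for illustration only -- not from the cited studies.

# Tool A: reads images billed under an existing CPT code.
reads_per_year = 20_000
reimbursement_per_read = 40.0        # provider revenue per billed read
tool_a_license = 150_000.0
tool_a_provider_gain = reads_per_year * reimbursement_per_read - tool_a_license

# Tool B: flags high-risk chronic patients for preventive outreach.
admissions_avoided = 120             # downstream events prevented
payer_savings_per_admission = 12_000.0
tool_b_license = 150_000.0
# Under FFS there is no code to bill for the outreach, and the avoided
# admissions are lost revenue for the provider, not captured savings.
tool_b_provider_gain = 0.0 - tool_b_license
tool_b_system_savings = admissions_avoided * payer_savings_per_admission

print(f"Tool A, provider P&L:        {tool_a_provider_gain:+,.0f}")  # +650,000
print(f"Tool B, provider P&L:        {tool_b_provider_gain:+,.0f}")  # -150,000
print(f"Tool B, system-wide savings: {tool_b_system_savings:+,.0f}") # +1,440,000
```

The sign of the provider's profit and loss, not the size of the system-wide benefit, drives the purchasing decision, which is exactly the market failure the Johns Hopkins analysis describes.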

Furthermore, academic reviews of the healthcare AI landscape consistently point to the challenge of misaligned incentives. Research emerging from a collaboration between Harvard Medical School and industry analysts in early 2025 notes that while the shift to value-based care (VBC) should theoretically create a market for AI that improves long-term outcomes and reduces the total cost of care, the transition has been slow. A comparative study posted to ResearchGate (March 2025) confirms that FFS models still incentivize "volume over value," leading to fragmented care and a focus on treatment over prevention. As long as the primary business model of healthcare rewards intervention, AI development will disproportionately serve that model.

Finally, the path of least resistance in data and regulation solidifies these priorities. High-quality, structured datasets like retinal scans and CT images are more readily available and less complex to work with than the "messy," multimodal data streams characteristic of chronic illness (e.g., patient-reported symptoms, wearable data, social determinants of health). As a 2024 report from the World Economic Forum on scaling AI solutions notes, many promising pilots fail to scale because of this data fragmentation. This logistical reality, combined with more predictable FDA regulatory pathways for diagnostic tools compared to novel, holistic disease management platforms, means that commercially driven entities will naturally gravitate toward the problems that are easiest and most profitable to solve.

The result is an innovation ecosystem that, despite its immense potential, systematically overlooks the areas of greatest human suffering in favor of problems that fit neatly into an existing business model. To redirect the course of healthcare AI, we must do more than build better algorithms; we must fundamentally reshape the economic and policy frameworks that determine which problems are deemed worthy of being solved.

Thought leaders challenging conventional wisdom

A new generation of thought leaders is fundamentally reframing healthcare AI problem selection. Trishan Panch at Harvard's T.H. Chan School of Public Health argues that algorithmic bias is "as much an issue of society as it is about algorithms," advocating for multidisciplinary teams that include social scientists, not just data scientists.

Ruha Benjamin at Princeton's Ida B. Wells Just Data Lab goes further, challenging the tech industry's "gospel of tech solutionism" and arguing we should not consider developments innovative if they continue to exacerbate social problems. Her concept of "ancestral intelligence" over artificial intelligence reframes innovation to center equity: "Who are you consulting? Who are you involving?"

These methodological insights are practical.

Michael Crawford's work at Howard University demonstrates community-engaged approaches to AI problem selection, focusing on increasing access, improving patient experiences, increasing affordability, and improving outcomes for medically underserved communities. Eric Topol's "Deep Medicine" approach advocates for problem selection based on individualized medicine and "keyboard liberation"—freeing clinicians from administrative tasks to focus on patient care.

The overlooked opportunities

Current research reveals specific clinical needs crying out for AI attention. Rare diseases represent perhaps the greatest untapped opportunity. Harvard's TxGNN model, released in 2024, demonstrates this potential by identifying drug candidates for 17,080 diseases from 8,000 existing medicines—the first AI tool explicitly developed for rare diseases at scale.
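At its core, this kind of repurposing work treats "drug treats disease" as a link-prediction problem on a biomedical knowledge graph. The sketch below is not TxGNN's architecture; it is a toy illustration of the scoring step, with made-up names and random stand-in embeddings, showing why a disease with no approved treatments can still be ranked against every existing drug.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a biomedical knowledge graph. TxGNN learns embeddings
# with a graph neural network; here we fake "pre-trained" embeddings to
# show only the zero-shot scoring step. All names are hypothetical.
drugs = ["drug_a", "drug_b", "drug_c"]
diseases = ["common_disease_x", "rare_disease_y"]
dim = 16
drug_emb = {d: rng.normal(size=dim) for d in drugs}
disease_emb = {d: rng.normal(size=dim) for d in diseases}

def score(drug: str, disease: str) -> float:
    """Link score: cosine similarity of drug and disease embeddings.
    A disease with no known treatments still has an embedding (inferred
    from its neighbors in the graph), so it can be scored against every
    existing drug -- the property that matters for rare diseases."""
    a, b = drug_emb[drug], disease_emb[disease]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank all existing drugs against a disease with no approved treatment.
ranked = sorted(drugs, key=lambda d: score(d, "rare_disease_y"), reverse=True)
print(ranked)
```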

Care coordination failures cost healthcare systems billions annually through fragmented care, medical errors, and inefficiencies, yet receive minimal AI development compared to diagnostic applications. Patient-facing administrative challenges—including insurance navigation, prior authorization processes, and accessing financial assistance programs—impose enormous burdens that AI could address but rarely does.

The patient experience represents another systematic blind spot. Research shows that patients' perspectives on AI design and implementation are "scarcely addressed" in current research, despite only 29% of people trusting AI to provide basic health advice. The psychological impacts of AI on patients and healthcare workers are largely ignored, despite their critical importance for successful implementation.

This research validates what I've experienced as both a patient and now an AI healthcare advocate: the very people who could most benefit from AI-powered healthcare solutions—those with complex, chronic, or atypical conditions—are precisely the ones being excluded from AI development conversations. When you're managing a chronic condition that doesn't fit neatly into clinical guidelines, you quickly learn that the healthcare system's assumptions about "normal" patient journeys don't apply to your reality.

Mental health disparities in rural, low-income, and marginalized populations remain underserved by AI development, with traditional mental health AI trained on non-representative datasets creating digital divides. Social determinants of health, complex socioeconomic factors affecting health outcomes, are historically difficult to quantify and integrate into care, yet represent enormous opportunities for AI-assisted interventions.

Two original applications for meaningful impact

Based on this research analysis, I propose two innovative healthcare AI applications that address systematically neglected problems:

1. Comprehensive Chronic Pain Management AI Platform

Develop an integrated AI system combining pain mechanism analysis, treatment optimization, and patient self-management support. Unlike current diagnostic-focused approaches, this platform would use multimodal data (physiological, behavioral, social determinants) to create personalized treatment plans addressing the full spectrum of chronic pain management.

This concept emerges directly from patient need.

Having navigated complex treatment protocols and witnessed the limitations of one-size-fits-all approaches, I envision an AI system that recognizes the multifactorial nature of chronic conditions. The system would include patient-facing tools for pain tracking, healthcare navigation, and treatment adherence, addressing the 30% of the global population affected by chronic pain who currently receive minimal AI attention. Most importantly, it would be designed to handle the "edge cases"—patients whose symptoms, responses, or circumstances don't match typical patterns.
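To make "multimodal" concrete, here is a minimal data-model sketch of what such a platform might store per patient. Every field name and the atypical-responder threshold are illustrative assumptions of mine, not a published schema or clinical standard.

```python
from dataclasses import dataclass, field

@dataclass
class PainObservation:
    """One self-reported pain entry; fields are illustrative assumptions."""
    timestamp: str
    intensity: int            # 0-10 numeric rating scale
    interference: int         # 0-10 impact on daily function
    context: str              # free text, e.g. "after physical therapy"

@dataclass
class ChronicPainRecord:
    """Multimodal record uniting the signals the platform would fuse."""
    patient_id: str
    pain_diary: list[PainObservation] = field(default_factory=list)
    wearable_metrics: dict[str, float] = field(default_factory=dict)   # sleep, steps
    social_determinants: dict[str, str] = field(default_factory=dict)  # housing, transport
    current_treatments: list[str] = field(default_factory=list)

    def is_atypical_responder(self, expected_drop: float = 2.0) -> bool:
        """Flag the 'edge cases' described above: patients whose pain has
        not fallen by the expected amount since treatment began. The
        2-point threshold is a placeholder, not a clinical standard."""
        if len(self.pain_diary) < 2:
            return False
        return (self.pain_diary[0].intensity
                - self.pain_diary[-1].intensity) < expected_drop
```

The design point is the flag itself: a system built for edge cases must represent "doesn't fit the expected pattern" as a first-class state, not an error.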

2. Rare Disease Community Intelligence Network

Create a federated AI system connecting rare disease patients, families, researchers, and clinicians globally. This platform would use natural language processing to analyze patient-reported experiences, identify treatment patterns, accelerate drug repurposing research, and connect patients with similar conditions for support and clinical trial opportunities. Unlike traditional approaches that ignore rare diseases due to small patient populations, this system would leverage the collective intelligence of rare disease communities to generate insights impossible through conventional research methods.
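A federated design here means each clinic or patient community analyzes its own narratives locally and shares only aggregate signals. The sketch below makes its own simplifying assumptions (string-pair extraction standing in for a real NLP pipeline, a crude disclosure threshold) to show the pattern: raw patient text never leaves a site.

```python
from collections import Counter

# Hypothetical per-site extraction: in a real system an NLP pipeline would
# pull (treatment, outcome) mentions from patient-reported narratives.
# Raw text stays local; only aggregate counts leave each site.
def local_summary(observations: list[tuple[str, str]]) -> Counter:
    """Count (treatment, outcome) pairs observed at one site."""
    return Counter(observations)

site_a = local_summary([("treatment_1", "improved"), ("treatment_1", "improved"),
                        ("treatment_2", "no_change")])
site_b = local_summary([("treatment_1", "improved"), ("treatment_2", "worse")])

# Federation step: merge aggregates across sites. Small counts are
# suppressed (k-anonymity style) so no single patient is identifiable.
K_MIN = 2  # illustrative disclosure threshold
merged = site_a + site_b
signals = {pair: n for pair, n in merged.items() if n >= K_MIN}
print(signals)  # {('treatment_1', 'improved'): 3}
```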

As someone who has experienced the isolation of falling outside standard care pathways, I recognize the transformative potential of connecting patients with similar experiences globally. This network would create value precisely where traditional healthcare AI fails—in the long tail of complex, atypical, or poorly understood conditions where patient expertise often exceeds clinical knowledge.

The path forward requires uncomfortable questions

The evidence demands fundamental changes in how we approach healthcare AI problem selection. We must move from technology-driven development to problem-driven, patient-centered approaches that prioritize clinical utility and real-world effectiveness over technical sophistication.

This transformation requires us to confront uncomfortable questions about our current priorities. Are we developing AI that serves the patients who need it most, or those who are most profitable to serve? Are we solving problems that matter to patients, or problems that matter to AI developers? Are we using AI to advance health equity, or are we automating existing disparities?

The regulatory frameworks exist. The methodological insights are available. The evidence of need is overwhelming. What remains is the will to prioritize human-centered problem selection over technically impressive but clinically marginal applications.

Consider these questions for your organization:

What systematic process do you use to identify which healthcare problems deserve AI attention? How do you ensure patient and community voices shape your AI development priorities? What evidence would convince you to redirect AI resources from commercially attractive applications to clinically critical ones?

From my perspective as both a patient and emerging AI healthcare advocate, I challenge you to ask harder questions:

Are you developing AI tools that would have helped me navigate my diagnosis journey more effectively? Would your AI systems recognize and support patients whose conditions don't fit standard algorithms? Have you included voices like mine—patients with chronic conditions who fall outside the norm—in your design and testing processes?

The future of healthcare AI depends not on the sophistication of our algorithms, but on the wisdom of our problem selection. The time for comfortable assumptions about AI priorities has passed. The time for systematic, equity-focused, patient-centered approaches to healthcare AI problem definition has arrived.

How will you ensure your organization's AI development serves those who need it most, not just those who are easiest to serve?

About Dan

Dan Noyes operates at the critical intersection of healthcare AI strategy and patient advocacy. His perspective is uniquely shaped by over 25 years as a strategy executive and his personal journey as a chronic care patient.

As a Healthcare AI Strategy Consultant, he helps organizations navigate the complex challenges of AI adoption, ensuring technology serves clinical needs and enhances patient-centered care. Dan holds extensive AI certifications from Stanford, Wharton, and Google Cloud, grounding his strategic insights in deep technical knowledge.

References

  1. Bzdok, D., et al. (2024). The unmet promise of trustworthy AI in healthcare: why we fail at clinical translation. Frontiers in Digital Health, 6. https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2024.1279629/full
  2. Liu, X., et al. (2024). Responsible and evidence-based AI: 5 years on. The Lancet Digital Health, 6(7), e456-e464. https://www.thelancet.com/journals/landig/article/PIIS2589-7500(24)00071-2/fulltext
  3. Zeraati, M., et al. (2024). Cracking the Chronic Pain code: A scoping review of Artificial Intelligence in Chronic Pain research. Artificial Intelligence in Medicine, 154, 102915. https://www.sciencedirect.com/science/article/pii/S0933365724000915
  4. Huang, K., et al. (2024). TxGNN: Zero-shot prediction of therapeutic use with geometric deep learning and clinician centered design. Nature Medicine. https://hms.harvard.edu/news/researchers-harness-ai-repurpose-existing-drugs-treatment-rare-diseases
  5. Hauser, K., et al. (2024). The landscape for rare diseases in 2024. The Lancet Global Health, 12(4), e494-e495. https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(24)00056-1/fulltext
  6. Jiang, F., et al. (2021). Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthcare Journal, 8(2), e188-e194. https://pmc.ncbi.nlm.nih.gov/articles/PMC8285156/
  7. Rajkomar, A., et al. (2024). Ethical debates amidst flawed healthcare artificial intelligence metrics. npj Digital Medicine, 7, 242. https://www.nature.com/articles/s41746-024-01242-1
  8. McKinsey & Company. (2024). Generative AI in healthcare: Current trends and future outlook. https://www.mckinsey.com/industries/healthcare/our-insights/generative-ai-in-healthcare-current-trends-and-future-outlook
  9. World Health Organization. (2023). WHO outlines considerations for regulation of artificial intelligence for health. https://www.who.int/news/item/19-10-2023-who-outlines-considerations-for-regulation-of-artificial-intelligence-for-health
  10. U.S. Food and Drug Administration. (2024). FDA Finalizes Guidance on Predetermined Change Control Plans for AI-Enabled Medical Device Software. https://www.ropesgray.com/en/insights/alerts/2024/12/fda-finalizes-guidance-on-predetermined-change-control-plans-for-ai-enabled-device
  11. Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.
  12. Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://www.science.org/doi/10.1126/science.aax2342
  13. Panch, T., et al. (2024). Health Equity and Ethical Considerations in Using Artificial Intelligence in Public Health and Medicine. Preventing Chronic Disease, 21, E66. https://www.cdc.gov/pcd/issues/2024/24_0245.htm
  14. Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.
  15. Crawford, M., et al. (2024). Accelerating health disparities research with artificial intelligence. Frontiers in Digital Health, 6. https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2024.1330160/full
  16. Pew Research Center. (2023). 60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care. https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/
  17. European Medicines Agency. (2024). Reflection paper on the use of artificial intelligence in the lifecycle of medicines. https://www.ema.europa.eu/en/news/reflection-paper-use-artificial-intelligence-lifecycle-medicines
  18. Brookings Institution. (2024). Health and AI: Advancing responsible and ethical AI for all communities. https://www.brookings.edu/articles/health-and-ai-advancing-responsible-and-ethical-ai-for-all-communities/