How to use medical AI to boost healthcare

Patients and doctors are increasingly using AI; however, there are serious risks that we all need to be aware of. Key areas include bias, accuracy, 'realism' and the problems of medical consensus. These factors are likely to have a major bearing on the apparent medical evidence base, especially around informed choice and weighted decision-making. It is therefore essential to pause and reflect on the limitations and enduring complexities of clinical care.

This article follows the previous Total Health article, in which we summarised the new post-AI doctor-patient relationship.

Bias and distortion in AI information outcomes

There are a number of key factors that can bias or distort AI information outcomes, including the following:

Source data, bias and commercial narrative posing as science

AI systems are only as good as the data that trains them, and the medical literature is far from neutral: it carries bias, commercial influence, under-representation, sales narrative masquerading as 'science', and historical inequities. A well-crafted AI report may appear authoritative yet be built on flawed assumptions or skewed inputs. Without careful scrutiny, the risk is of creating a new gold standard that is neither truly inclusive nor consistently safe.

Legitimate disagreement exists within medicine

There’s also the growing concern of false precision. AI tools often present percentage-based prognostics or risk assessments that seem mathematical, yet their applicability to an individual patient is rarely that clear-cut. Even among experts, clinical consensus is fragile. Studies have shown wide variation in responses when the same case is presented to different specialists. Factors like training background, regional norms, and even subconscious biases influence recommendations. AI doesn’t resolve this variability; the risk is that it amplifies it. There is a need to highlight how much legitimate disagreement exists within medicine itself.

Moral imperative and navigating complexity

This is where genuine informed consent becomes more than a legal formality; it becomes a moral imperative. Patients must understand not only their choices, but also the landscape from which those choices emerge: the uncertainties, the conflicting schools of thought, and the possibility that doing nothing might actually be the best option. Independent patient engagement, supported by trusted clinicians, will be vital for navigating this complexity.

We must therefore temper the allure of algorithmic clarity with clinical humility. The goal is not to replace one form of authority with another, but to build systems, technological and human, that support reflective, contextualised care.

The scientist and the worker

Doctors, in this vision, become mediators between evidence and experience, between population-derived 'normal' ranges and the individual, and between what is known and what matters most.

Navigating the future: systems, longevity, and the patient’s journey

Healthcare systems must adapt in tandem. The future of medicine is not just about technological capability; it is about access, structure, and patient experience.

Across the UK and globally, patients are already straddling multiple systems including:

  • public and private,
  • physical and digital,
  • traditional and alternative.

Increasingly, the most empowered patients are proactively managing their health with biometrics, home testing kits, and AI summaries. Patients are seeking a new kind of healthcare navigation, in particular that of hybrid care. For example, with limitations in the UK around what may or may not be covered by insurance, the question is not simply “NHS or private,” but how to move smoothly between both in ways that prioritise the patient, not the separate systems.

This requires serious rethinking of traditional care pathways. Multidisciplinary collaboration, shared imaging, integrated data, and continuity across providers must become the norm. At the recent International Private Patient Healthcare Summit, leaders called for exactly this: a “Mix & Match” model of care, where patients can fluidly access the best of both sectors. Technology, when properly implemented, can be a bridge here, but culture and policy must also follow.

Implementing proactive longevity medicine

Meanwhile, the science of longevity medicine is accelerating. Patients are no longer content with reactive care. They want deep biomarker insights, preventive strategies tailored to their biology, and access to emerging interventions. Yet most doctors haven’t been trained in this fast-evolving space. As The Lancet recently noted, longevity medicine requires a blend of biogerontology, AI fluency, and clinical pragmatism, not just to extend life, but to enhance its quality through targeted prevention.

Doctors must prepare for this shift, not as wellness entrepreneurs, but as evidence-based guides capable of translating emerging science into ethical, actionable plans. As health decentralises, from hospital to home, from clinician-led to patient-directed, the physician’s future value lies in curating care, not controlling it.

The patient journey is no longer linear. It is multidimensional, exploratory, and iterative. The doctor’s job is to help make that journey coherent.

Making medicine meaningful

This turning point offers a profound opportunity. Doctors can reclaim meaning by redefining relevance. The future needs interpreters of complexity, navigators of ambiguity, and companions in deeply human decisions.

So, what does that look like in practice?

Five real-world ways doctors are becoming guides and meaning makers:

  1. Curate, don’t just communicate 

Instead of reciting guidelines or recapping AI outputs, help patients weigh conflicting options. Highlight what’s known, what’s debated, and what aligns with the patient’s values. Translate medical knowledge into personalised, actionable insight.

  2. Normalise clinical disagreement

Acknowledge variation between specialist opinions, not as a flaw, but as a feature of complex medicine. Empower patients to ask questions, seek second opinions, and reflect on whether an intervention is truly right for them; this includes the valid choice of “watchful waiting.”

  3. Create space for dialogue

Advocate for models of care that allow more time for conversation. Whether through smaller patient panels, collaborative consults, or hybrid digital follow-ups, find ways to deepen relationships rather than increase throughput.

  4. Stay literate in emerging science

Invest time in understanding evidence-based longevity medicine, AI capabilities, and personalised diagnostics to expand knowledge and health literacy.

  5. Lead in ethical navigation

Patients now face an overwhelming marketplace of health products, protocols, and promises. The doctor now needs to be a trusted guide, one who does not sell a solution, but anchors options in personal situations, evidence, safety, and meaning.

Does power lie in the questions?

In this new era, the doctor’s greatest power lies not in the answers they provide, but in the questions they help patients ask.

Some people will be early adopters of AI-generated care; others will still want reassurance from a trusted face. Regardless of rate of adoption, all will need someone who can bridge the gap between data and wisdom.

To quote the Editor of Total Health, Sue Ryder:

“When information is plentiful and free, credibility and authority are priceless.”

 
