Artificial intelligence (AI) is disrupting healthcare from the ground up. Patients can now quickly and easily obtain AI-generated diagnoses, interpretations of laboratory results, and personalised treatment recommendations. They can be pointed towards the latest research and journal articles that their doctors haven’t even read. Large language models (LLMs) digest more medical knowledge than any human could ever hope to retain. These are now widely used for symptom triage and treatment planning, and, yes, AI is even assisting in surgical procedures.
Patients and their families are increasingly engaged with their own health and well-being, and are willing to do their own research.
The instant gratification of AI-generated health information
The waiting time to see a specialist might be months. An AI interpretation of laboratory tests? Less than three minutes.
All of this means that doctors will increasingly serve as 'guides' and providers of medically meaningful advice. They will need to help patients to:
- put their AI-generated data into context,
- interpret uncertainty, and
- make deeply personal choices that are rooted in their individual situations.
Of course, access to information doesn’t guarantee understanding, and that’s where the new challenge for doctors will lie. A patient may now face a complex AI-generated report, multiple expert opinions that diverge sharply, and no clear guidance on how to synthesise these perspectives.
For instance, a spinal surgery patient could receive three different suggested treatment plans from three equally well-credentialed surgeons, each shaped by that surgeon's preferred techniques. Who is right? What are the risks of doing nothing?
Doctors as compassionate guides who bring meaning to the data
As part of nurturing the doctor-patient relationship and building deeper partnerships with their patients, doctors will increasingly need to stay abreast of AI's capabilities (and pitfalls) and be actively involved in helping people navigate and interpret divergent expert opinions and often-complex treatment pathways. This will require a shift in outlook, and for the paternalistic, ‘doctor-knows-best’ attitude to finally be consigned to history.
This shift will also require accepting that clinical consensus is not a constant. When a patient presents their full electronic patient record (EPR) to multiple specialists, they may receive radically different treatment recommendations. Such variation, driven by training, geography, culture, experience, and even philosophical leanings, is a long-standing reality of medicine; AI may now expose these differences more starkly than ever before.
Making medicine meaningful
The doctor’s role must therefore evolve: no longer arbiter of fixed truths, but translator and guide through complexity. Patients will increasingly need help comparing divergent clinical opinions, understanding the implications of each option (including the do-nothing option), and making informed decisions that reflect their own values and goals. This calls for a deeper, more collaborative form of engagement, in which the doctor is not just a 'proceduralist' but a partner in meaning-making.
It is nevertheless important to remember that AI cannot understand what health means in the context of someone’s life. It cannot comfort, motivate, or weigh deeply personal trade-offs. Is the age of the discerning, relational, and deeply human doctor just beginning? In this new era, perhaps the doctor’s greatest power will lie not in the answers they provide, but in the questions they help patients to ask.