
AI in Healthcare: The Explainability Dilemma

6 minute read
Julie Cassan

Content Manager

On December 18, 2023, Eric Topol and Yann LeCun joined Alexandre Lebrun, CEO of Nabla, for a live discussion on the AI explainability dilemma in healthcare. Here is the recording and a summary of their conversation.

Unveiling AI's Potential in Medicine

An article in Time Magazine from March 2023 argued that "Pausing AI Developments Isn't Enough. We Need to Shut it All Down." When questioned about this, Yann LeCun, Meta's VP and Chief AI Scientist, and Eric Topol, Founder and Director of the Scripps Research Translational Institute, had difficulty concealing their smiles.

The widespread impact of Artificial Intelligence in reshaping our lives is undeniable. On the matter of whether we should, in fact, “shut it all down”, LeCun answered

No, we shouldn't. The potential benefits are enormous. In fact, they already exist. What many people don't realize is that AI is already widely used.

He highlighted AI's imprint across industries, emphasizing its crucial role in enhancing car safety through deep learning-based emergency systems. Healthcare is experiencing the same transformation, but its adoption of AI is progressing more gradually, slowed by cautious adherence to regulatory frameworks.

Dr. Topol echoed this optimism, stating that

The opportunities in the healthcare space are extraordinary... We need to move forward and get the evidence needed to change medical practice.

Deploring the lack of solid evidence behind the many FDA-cleared AI algorithms, he stressed the need for rigorous studies to validate their efficacy across diverse populations. Evidence, he argued, is what will change medical practice and unlock the vast opportunities AI presents for precision medicine.

Navigating the 'Explainability' Conundrum

Delving into the often-debated terrain of 'explainability' in AI models, both experts offered nuanced views, weighing empirical validation against the societal demand for understandable AI decision-making.

Drawing a parallel between the acceptance of AI and the use of medical treatments whose mechanisms remain unknown,

Humans' decisions are not really completely explainable either... There's some limits to explanations, they're mostly rationalizations.

You take a cab and you trust the cab driver's brain to do the right thing, even if it's not explainable. If it were explainable, we could build convincing autonomous cars, which is not the case at the moment. So I think there are limits to explanations.

said LeCun.

Most of the time, it’s more about acceptability than explainability—an issue rooted in sociological aspects rather than purely technical ones,

he added.

Their dialogue shed light on the intricacies of navigating AI's 'black box' nature in healthcare, acknowledging the societal need for some level of rationale behind AI decisions while advocating for a robust, evidence-based approach to validating the models themselves.

Testing and clinical trials is really where the answer is... more than explainability,

said LeCun.

Regarding the human vs. AI debate, Dr. Topol noted that

Study designs often pit machines against clinicians, but that's not the ideal setup. The optimal design is AI versus AI plus the physician.

The Dilemma: Accuracy vs. Explainability

Central to the conversation was the perennial challenge of weighing accuracy against explainability in AI adoption across industries. The demand for simpler, explainable models often wins out over more accurate but opaque ones, and that trade-off remains a pervasive dilemma.

When asked about why people seem to care so much more about explainability in healthcare, Dr. Topol noted that

Improving medical practices is a desirable goal. But the overriding goal remains the accuracy and efficacy of the model in benefiting patients. As AI progresses, we may wish to delve deeper into deconvoluting models to understand their saliency and other features better. However, that's not the primary agenda.

It’s all about experience. If everything were entirely explainable, you could become a doctor solely through books. But that's not the case; it's about the training and hands-on learning that shapes expertise,

added LeCun.

The experts weighed the implications of this tug-of-war and the delicate balance it requires. The dilemma reflects the broader challenges of AI adoption and the pressing need for AI models that are both accurate and explainable in healthcare decision-making.

The Keyboard Liberation: Restoring Humanity in Medicine through AI

A pivotal segment envisioned AI's role in rekindling the human touch in modern medicine. Both experts reminisced about an era when medicine thrived on intimate doctor-patient relationships, and Dr. Topol evoked nostalgia for a bond characterized by empathy, trust, and genuine care.

We can restore the gift of time in medicine... and bring back that intimate relationship between doctor and patient,

Dr. Topol noted.

The conversation underscored AI's potential as a liberator, freeing healthcare professionals from administrative burdens, streamlining tasks such as pre-authorizations, billing, and appointment scheduling, and allowing them to focus on the humanistic side of patient care. This vision of "keyboard liberation" emerged as the silver lining, intended to restore the essence of compassionate healthcare delivery.

Our ability to utilize multimodal data—scans, voice, electronic records, and diverse data types—enables us to learn about an individual in depth to make sure they get the right medicine, the right treatment, the right prevention, the right everything,

Dr. Topol added.

Looking Ahead: AI's Promise in Healthcare

As the dialogue culminated, the focus shifted towards AI's promising prospects in healthcare, especially in drug discovery. Dr. Topol emphasized AI's potential to expedite drug discovery processes, citing missed opportunities where AI could have unveiled breakthroughs decades earlier.

I think drug discovery is going to take off like never... with all the progress we've made in understanding the mechanism of life due to AI, we'll see many more breakthroughs.

The experts highlighted AI's ability to augment healthcare delivery while underscoring the need for humanistic values in medical practice. The synthesis of AI with medicine promises precision, efficiency, and a resurgence of the human touch in patient care, laying the groundwork for an era where AI serves as an ally in reshaping healthcare.

The discussion between Yann LeCun and Dr. Eric Topol showcased the complex interplay between AI and healthcare, highlighting the immense possibilities, hurdles, and ethical dilemmas of integrating AI into medicine. It envisioned a future where AI serves as a catalyst for precision-driven yet human-centric healthcare delivery.