AI Can See a Tumor, But Can It Tell Us Why?
Should an AI have to explain its life-saving diagnosis? And what if that explanation leads us astray? This post explores the complex and critical field of Explainable AI, questioning how we build trust in the machines that are reshaping medicine.
João Abrantes
8/30/2025 · 3 min read



Artificial Intelligence (AI) is transforming medicine, especially in radiology. These powerful algorithms can analyze medical images with incredible speed and accuracy, spotting patterns that might be invisible to the human eye.
But this power comes with a critical question: when an AI makes a diagnosis, how do we know how it reached that decision?
For a long time, many advanced AI models have been treated as "black boxes." We can see the input (the scan) and the output (the diagnosis), but the complex process in between is a mystery. As these tools become more integrated into healthcare, just being accurate isn't enough. We need to be sure they are safe, fair, and trustworthy.
In fact, the European Union has even established a "right to explanation" for significant algorithmic decisions. This is where Explainable AI, or XAI, comes in.
A New Language: Interpretability vs. Explainability
To understand XAI, it's useful to know two key terms that are often mixed up:
Interpretability: Think of a simple decision tree: you can follow its logic from start to finish. The model is understandable on its own, with no extra tooling.
Explainability: This applies to complex "black box" models like the deep learning networks used in imaging. Their internal workings can't be inspected directly, so we need external tools to explain their decisions.
It's not a strict binary but a spectrum: simpler models tend to be more interpretable but may be less accurate, while complex models tend to be more accurate but rely on post-hoc explainability techniques. The short sketch below makes the contrast concrete.
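To illustrate what "inherently interpretable" means in practice, here is a minimal sketch assuming Python with scikit-learn; the breast-cancer dataset and the depth limit are illustrative choices, not from the original publication.

```python
# Minimal sketch of an inherently interpretable model: the full decision
# logic can be printed and read end to end. Dataset and depth limit are
# illustrative choices only.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Small tabular dataset of tumour measurements (benign vs. malignant).
data = load_breast_cancer()
X, y = data.data, data.target

# Keep the tree shallow so the whole model stays human-readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# Every decision the model can ever make is visible as a few if/else rules;
# no external explanation tool is needed.
print(export_text(tree, feature_names=list(data.feature_names)))
```

A deep network trained on raw pixels offers no equivalent human-readable rule set, which is exactly why it needs external explanation tools.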


Peeking Inside the Box with "Heatmaps"
So how do we get a complex AI to explain itself? One of the most common methods in medical imaging is creating heatmaps (also called saliency or attention maps).
XAI techniques can analyze an AI's decision and generate a visual overlay on the original image, highlighting the exact pixels or regions that were most influential in its conclusion. For example, a bright red area on a heatmap might show the specific cluster of cells the AI flagged as suspicious. These visuals are powerful because they give clinicians a window into the AI's "thinking."
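To give a flavour of how such a heatmap can be produced, here is a minimal, generic sketch of a gradient-based saliency map, assuming Python with PyTorch and torchvision. The untrained ResNet-18 and the random input tensor are stand-ins for a real trained radiology model and scan, and gradient saliency is only one member of this family of methods (alongside Grad-CAM, occlusion, and attention maps), not necessarily the approach used in the cited publication.

```python
# Minimal sketch of a gradient-based saliency map ("heatmap") for an image
# classifier. The untrained torchvision ResNet-18 and the random tensor are
# placeholders for a real trained radiology model and scan.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in classifier
model.eval()

# Placeholder "scan": one RGB image, 224x224, with gradients enabled so we
# can trace the prediction back to individual pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# The gradient magnitude per pixel indicates how strongly each pixel
# influenced that score; taking the maximum over colour channels gives a
# 2-D map that can be overlaid on the original image as a heatmap.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```

Normalising this 2-D map and overlaying it on the scan is what produces the familiar colour-coded heatmap a clinician sees.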
The Surprising Twist: Do We Always Need an Explanation?
Here’s where it gets complicated. We demand full explainability from AI, but do we hold everything in healthcare to the same standard?
Consider this: a significant number of commonly prescribed medicines are deemed safe and effective based on robust clinical trials, even if their precise pharmacological mechanisms aren't fully understood. This raises a question: if randomized controlled trials are enough to approve a new drug, should AI be held to a higher standard? Is our need for explainability about safety, or is it about making AI more acceptable to doctors and patients?
A Word of Caution
While XAI is a huge leap forward, it’s not a perfect solution. Research has shown that providing explanations can sometimes lead to over-reliance on the AI. When a doctor sees a heatmap that looks convincing, they might be tempted to trust the AI's conclusion, even when the model is wrong or uncertain. This can paradoxically reduce decision-making performance.
The world of XAI is a field of intense research, and while we are getting better at opening the black box, there is still much to learn about how these explanations truly impact clinical outcomes.
Want to dive deeper into the state of Explainable AI in radiology? This post only scratches the surface. To get the full picture, check out the complete publication here: https://doi.org/10.1016/j.ejrad.2024.111389