Making AI Speak the Doctor’s Language

Smarter ECG Interpretability for Smarter Diagnoses

The electrocardiogram (ECG) is one of the most essential tools in modern medicine, used to detect heart problems ranging from arrhythmias to structural abnormalities. In the U.S. alone, millions of ECGs are performed each year, whether in emergency rooms or routine doctor visits. As artificial intelligence (AI) systems become more advanced, they are increasingly being used to analyze ECGs, sometimes even detecting conditions that doctors might miss.

The problem is that doctors need to understand why an AI system reaches a particular diagnosis. While AI-powered ECG analysis can achieve high accuracy, it often works like a “black box,” producing results without explaining its reasoning. Without clear explanations, physicians are hesitant to trust these tools. To bridge this gap, researchers at the Technion are working to make AI more interpretable, giving it the ability to explain its conclusions in a way that aligns with medical knowledge.

Making AI Speak the Doctor’s Language
For AI to be useful in clinical settings, it should highlight the same ECG features doctors rely on when diagnosing heart conditions. This is challenging because even among cardiologists there isn’t always full agreement on which ECG markers matter most. Researchers have nonetheless developed several interpretability techniques to help AI explain its decisions. But these techniques sometimes highlight broad regions of the ECG without pinpointing the exact marker, leading to potential misinterpretation. They can also highlight irrelevant parts of the image, such as the background, rather than the ECG signal itself.

The Next Step: AI for Real-World ECGs
Most current AI models rely on high-quality scanned ECG images. But in the real world, doctors don’t always have access to perfect scans. They often rely on paper printouts from ECG machines, which they might photograph with a smartphone to share with colleagues or add to a patient’s records. These photographed images can be tilted, crumpled, or shadowed, making AI analysis much more difficult.

To solve this, Dr. Vadim Gliner, a former Ph.D. student in Prof. Yael Yaniv’s Biomedical Engineering Lab at the Technion, in collaboration with the Schuster Lab in the Henry and Marilyn Taub Faculty of Computer Science, has developed a new AI interpretability tool designed specifically for photographed ECG images. The work was published in npj Digital Medicine. Using an advanced mathematical technique based on the Jacobian matrix, the method offers pixel-level precision, meaning it can highlight even the smallest details within an ECG. Unlike previous models, it doesn’t get distracted by the background, and it can even explain why certain conditions don’t appear in a given ECG.
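The paper’s exact method is not reproduced here, but the core idea behind Jacobian-based interpretability can be sketched: the Jacobian of a model’s outputs with respect to its input pixels assigns each pixel its own sensitivity value, giving pixel-level rather than region-level attribution. Below is a minimal toy illustration in Python, using a made-up linear “classifier” and a finite-difference approximation of the Jacobian; all names and the model itself are illustrative assumptions, not the published method.

```python
import numpy as np

def jacobian_saliency(model, image, class_idx, eps=1e-4):
    """Approximate d(output[class_idx]) / d(pixel) by central differences.

    Each pixel gets its own gradient magnitude, so the resulting map is
    pixel-precise rather than a broad highlighted region.
    """
    flat = image.ravel()
    grad = np.zeros_like(flat)
    for i in range(flat.size):
        up_img, dn_img = flat.copy(), flat.copy()
        up_img[i] += eps
        dn_img[i] -= eps
        up = model(up_img.reshape(image.shape))[class_idx]
        dn = model(dn_img.reshape(image.shape))[class_idx]
        grad[i] = (up - dn) / (2 * eps)  # central-difference derivative
    return np.abs(grad).reshape(image.shape)  # pixel-level saliency map

# Toy "classifier": a linear model over a 4x4 image with two output classes.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))
model = lambda img: W @ img.ravel()

image = rng.normal(size=(4, 4))
saliency = jacobian_saliency(model, image, class_idx=0)

# For a linear model the Jacobian row equals the weight row exactly,
# so the saliency map recovers |W[0]| pixel by pixel.
print(np.allclose(saliency, np.abs(W[0]).reshape(4, 4), atol=1e-6))
```

A real ECG model would be a deep network and the Jacobian would be computed by automatic differentiation rather than finite differences, but the interpretation is the same: large entries mark the individual pixels that drive the prediction.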

AI-based detection of heart disease using standard ECG markers

A More Transparent Future for AI in Medicine
As AI continues to play a bigger role in healthcare, making it explainable and trustworthy is just as important as making it accurate. By developing methods that allow AI to communicate its findings in a way that aligns with medical expertise, researchers are helping pave the way for smarter, more reliable, and more widely accepted AI tools in cardiology. With these advancements, doctors may soon have AI assistants that not only detect heart problems but also clearly explain their reasoning, leading to better, faster, and more informed patient care.

The research was sponsored by the Ministry of Science and the Israel Innovation Authority.
