Reliable XAI Explanations in Sudden Cardiac Death Prediction for Chagas Cardiomyopathy

This paper proposes a logic-based explainability method with formal correctness guarantees. It achieves 100% explanation fidelity and greater robustness than heuristic approaches, strengthening clinical trust and easing the deployment of AI-driven tools for predicting sudden cardiac death in Chagas cardiomyopathy patients.

Vinícius P. Chagas, Luiz H. T. Viana, Mac M. da S. Carlos, João P. V. Madeiro, Roberto C. Pedrosa, Thiago Alves Rocha, Carlos H. L. Cavalcante

Published 2026-02-27

Imagine you are a doctor trying to predict which patients with Chagas disease (a parasitic infection, common in Latin America, that can severely damage the heart) might suffer Sudden Cardiac Death (SCD). It's like trying to predict a lightning strike: it happens fast, it's rare, and it's terrifyingly hard to see coming.

For a long time, doctors have used "Black Box" AI models to help with this. Think of these models like a magic 8-ball. You shake it, it gives you an answer ("High Risk" or "Low Risk"), but it refuses to tell you why. If a doctor asks, "Why did you say this patient is at risk?" the magic 8-ball just shrugs. This makes doctors nervous. They can't trust a tool they don't understand, especially when lives are on the line.

The Problem with Current "Explanations"

Some researchers tried to fix this by creating "explanation tools" (like LIME or Anchors). Imagine these tools as guessing games: they poke the magic 8-ball with slightly altered questions and guess a reason from its answers. They look at the 8-ball's verdict and say, "I think it guessed 'High Risk' because the patient is old." But because they are only guessing, they can be wrong. Sometimes they give the same reason for two completely different patients, or they miss the real reason entirely. It's like a weather forecaster who says, "It's raining because the clouds are gray," when sometimes it rains under a blue sky: the explanation sounds plausible without actually matching reality.
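
To make the "guessing game" concrete, here is a minimal Python sketch of the perturbation idea behind tools like LIME. This is an illustrative toy, not the real LIME library: the function name, sampling scheme, and surrogate choice are all invented for this post.

```python
# Toy sketch of a perturbation-based explainer in the spirit of LIME.
# Everything here is illustrative; real LIME weights samples by
# distance, discretizes features, and does much more besides.
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(black_box_predict, x, n_samples=500, scale=0.1, seed=0):
    """Fit a simple linear surrogate around patient `x` and return its
    weights as a heuristic (unguaranteed) explanation."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the patient record: random points near x.
    X_local = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # 2. Ask the black box for its verdict on each perturbed record.
    y_local = black_box_predict(X_local)
    # 3. Fit an interpretable surrogate to those verdicts and read off
    #    its coefficients as "the reasons".
    return Ridge(alpha=1.0).fit(X_local, y_local).coef_
```

Because the surrogate is fit to a handful of nearby points, nothing stops it from naming the wrong "ingredient"; that unfaithfulness is exactly what the paper measures later.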

The Solution: The "Logic Detective"

This paper introduces a new approach called Logic-Based Explainability. Instead of a guessing game, think of this new method as a Logic Detective or a Mathematical Proof.

Here is how it works, using a simple analogy:

  1. The Training: The researchers built a super-smart AI (using a tool called XGBoost) that is like a master chef. This chef has tasted thousands of patient records and learned which ingredients (medical data points like heart size, rhythm, and blood tests) lead to a "bad meal" (sudden cardiac death). The chef is remarkably accurate, getting roughly 95% of cases right.
  2. The Problem: The chef can cook the perfect meal, but if you ask, "Why did you add salt?" the chef just says, "Because I felt like it."
  3. The Detective's Job: The new "Logic Detective" steps in. Instead of guessing, it looks at the chef's recipe book (the internal math of the AI) and writes down a strict rule.
    • Example: "If the heart chamber is bigger than 4.5cm AND the blood flow is slow, THEN the risk is high."
  4. The Guarantee: The best part? The Detective doesn't just guess. It uses mathematical proof (specifically, first-order logic) to show that these rules are 100% correct: whenever the conditions are met, the AI will say "High Risk." There is no guessing, no "maybe," and no black box. (A code sketch of this proof step follows this list.)
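
To see what "mathematical proof" means here in practice, below is a minimal sketch using the z3-solver Python package (pip install z3-solver). The tiny hand-written model and its thresholds are invented for illustration; the paper works with a real trained XGBoost model. The key step is proof by refutation: a rule is a guaranteed explanation exactly when no possible patient can satisfy the rule while the model answers anything other than "High Risk."

```python
# Toy sketch of the "Logic Detective" using the z3-solver package.
# The model below is a hand-written stand-in for a trained tree
# ensemble; its features and thresholds are invented for this post.
from z3 import Real, And, Not, If, Solver, unsat

chamber = Real("chamber_cm")  # heart chamber diameter (cm)
flow = Real("flow_score")     # blood-flow measure (lower = slower)

# The model, encoded as logic: True means it outputs "High Risk".
model_says_high = If(chamber > 6.0,
                     True,  # severely enlarged: always high risk
                     If(chamber > 4.5,
                        If(flow < 0.5, True, False),
                        False))

# Candidate explanation: "chamber bigger than 4.5cm AND slow flow".
explanation = And(chamber > 4.5, flow < 0.5)

# Proof by refutation: search for a patient who satisfies the rule
# while the model does NOT say high risk. If none exists (unsat),
# the rule is a guaranteed, 100%-faithful explanation.
s = Solver()
s.add(explanation, Not(model_says_high))
assert s.check() == unsat, "counterexample found: rule not guaranteed"
print("Proven: whenever the rule holds, the model says High Risk.")
```

If the solver had instead found a satisfying assignment, that assignment would be a concrete counterexample patient, which is exactly the kind of failure a heuristic explainer can never rule out.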

What They Found

The researchers tested this new "Logic Detective" against the old "Guessing Games" (LIME and Anchors) using real patient data from Brazil.

  • Fidelity: The Logic Detective was 100% faithful: every explanation it gave was perfectly true to the AI's decision. The old guessing games were only about 75-98% faithful (the sketch below shows how fidelity is measured).
  • Trust: Because the explanations are mathematically proven, a doctor can look at the rule (e.g., "Patient has a specific heart rhythm issue") and say, "Okay, I understand. The AI isn't guessing; it's following a clear rule I can verify."
  • Speed: It wasn't the fastest tool, but it was fast enough to be useful in a hospital, and the trade-off was worth it for the certainty it provided.
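
For a rough sense of what "faithful" means in that first bullet: fidelity can be read as the fraction of covered patients for whom the rule, applied on its own, reproduces the black box's decision. A toy sketch, with the feature positions, the rule, and the "high risk = 1" convention all invented for this post:

```python
# Toy sketch of explanation fidelity: among the patients a rule
# covers, how often does the black box itself agree with the rule?
import numpy as np

CHAMBER, FLOW = 0, 1  # invented feature positions

def rule(x):
    """The example rule from the analogy above."""
    return x[CHAMBER] > 4.5 and x[FLOW] < 0.5

def rule_fidelity(X, model_predict, rule):
    """Fraction of rule-covered records where the model also says
    'high risk' (label 1). 1.0 means the rule is 100% faithful."""
    covered = np.array([rule(x) for x in X])
    if not covered.any():
        return float("nan")  # rule fires on nobody; fidelity undefined
    return float(np.mean(model_predict(X[covered]) == 1))

# Tiny demo with a stand-in black box.
X = np.array([[5.0, 0.3], [6.1, 0.9], [4.0, 0.2]])
def model_predict(X):
    return (X[:, CHAMBER] > 4.5).astype(int)
print(rule_fidelity(X, model_predict, rule))  # 1.0
```

A logic-based explanation scores 1.0 by construction; heuristic rules can and do score lower, which is where the 75-98% range above comes from.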

Why This Matters

In the world of medicine, especially for neglected diseases like Chagas where data is scarce and resources are tight, trust is everything.

If a doctor uses a tool that gives a "black box" answer, they might ignore it. If they use a tool that gives a "guessing" explanation, they might make a mistake because the explanation was wrong. But if they use this Logic-Based AI, they get a tool that is as accurate as a super-computer but as transparent as a clear window.

In short: This paper teaches us how to turn a "magic 8-ball" into a transparent, rule-following partner that doctors can trust with their patients' lives. It proves that we don't have to choose between a smart AI and a clear explanation; we can have both.
