sFRC for assessing hallucinations in medical image restoration

This paper proposes sFRC, a novel method that performs Fourier Ring Correlation analysis over small patches to robustly detect and quantify hallucinations in deep learning-based medical image restoration across various undersampled imaging problems.

Prabhat Kc, Rongping Zeng, Nirmal Soni, Aldo Badano

Published 2026-03-06

Imagine you are looking at a medical scan, like an X-ray or an MRI. These images are crucial for doctors to diagnose diseases. Now, imagine a super-smart AI assistant is hired to clean up these images, making them look sharper and clearer, especially when the original scan was blurry or incomplete.

The problem? This AI is a bit of a creative writer. Sometimes, in its eagerness to make the picture look "perfect," it starts hallucinating. It might invent a blood vessel that doesn't exist, smooth over a tumor until it disappears, or add a fake plaque on an artery. To the naked eye, the image looks beautiful and smooth, but it's actually lying to the doctor. This is dangerous because a doctor might miss a real disease or treat a fake one.

This paper introduces a new tool called sFRC (scanning Fourier Ring Correlation) to catch these lies. Here is how it works, explained simply:

1. The Problem: The "Too Good to Be True" Image

Think of the AI like a photo editor who is given a low-quality, blurry photo of a forest and asked to make it high-definition.

  • The Goal: The AI adds trees, leaves, and birds to fill in the missing details.
  • The Hallucination: The AI gets carried away and adds a dragon sitting on a branch. The dragon looks realistic, but it's not real.
  • The Danger: If a doctor looks at this "enhanced" image, they might think, "Wow, that's a clear image," and miss the fact that the dragon (or a fake tumor) isn't real.

2. The Solution: The "Microscope" Approach (sFRC)

Traditional ways of checking image quality are like looking at the whole forest from a helicopter. You might see that the forest looks green and healthy, but you can't see the dragon.

sFRC is different. Instead of looking at the whole image at once, it acts like a magnifying glass that scans the image in tiny, overlapping squares (patches).

Here is the step-by-step process:

  • The Comparison: The tool takes the AI's "enhanced" image and compares it, patch by patch, against a reference: either the measured low-quality data (the blurry version) or, when one exists, a ground-truth image of what should actually be there.
  • The Frequency Check: Imagine the image is a song. It has bass (low frequencies, like the shape of the body), mids (like organs), and treble (high frequencies, like tiny details and edges).
    • The AI is good at the bass and mids.
    • But when it tries to invent the treble (the tiny details), it often gets it wrong.
  • The "Lie Detector": sFRC checks the "treble" part of the song in each tiny square. If the AI's version of the treble doesn't match the physics of how the original scan was taken, sFRC flags it.
  • The Red Box: If a patch is found to be "lying" (hallucinating), sFRC draws a red box around it, telling the doctor: "Hey, look here. This part of the image was made up by the computer. Don't trust it."
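The scanning procedure above can be sketched in a few lines of NumPy: compute a Fourier Ring Correlation curve for each small patch pair, then flag patches whose high-frequency ("treble") correlation is low. This is a minimal illustrative sketch, not the paper's implementation; the patch size, stride, frequency band, and the 0.75 threshold are all hypothetical choices made for the demo.

```python
import numpy as np

def frc_curve(a, b, eps=1e-12):
    """Fourier Ring Correlation between two equal-sized 2D patches."""
    fa = np.fft.fftshift(np.fft.fft2(a))
    fb = np.fft.fftshift(np.fft.fft2(b))
    h, w = a.shape
    yy, xx = np.indices((h, w))
    rings = np.hypot(yy - h // 2, xx - w // 2).astype(int).ravel()
    # Sum the cross-spectrum and the two power spectra over each ring.
    num = np.bincount(rings, weights=(fa * np.conj(fb)).real.ravel())
    da = np.bincount(rings, weights=(np.abs(fa) ** 2).ravel())
    db = np.bincount(rings, weights=(np.abs(fb) ** 2).ravel())
    return num / (np.sqrt(da * db) + eps)  # one correlation value per ring

def sfrc_flags(reference, recon, patch=32, stride=16, hi_thresh=0.75):
    """Scan overlapping patches; flag those whose high-frequency FRC is low."""
    flags = []
    nyquist = patch // 2
    for y in range(0, reference.shape[0] - patch + 1, stride):
        for x in range(0, reference.shape[1] - patch + 1, stride):
            curve = frc_curve(reference[y:y + patch, x:x + patch],
                              recon[y:y + patch, x:x + patch])
            # Mean correlation over the upper half of the resolvable rings:
            # the "treble" check from the analogy above.
            if curve[nyquist // 2:nyquist].mean() < hi_thresh:
                flags.append((y, x))  # top-left corner of a suspicious patch
    return flags

# Toy demo: a reconstruction that is perfect everywhere except for one
# fabricated high-frequency blob (the "dragon on the branch").
rng = np.random.default_rng(0)
truth = rng.normal(size=(128, 128))                    # stand-in for anatomy
recon = truth.copy()
recon[40:56, 40:56] += 5 * rng.normal(size=(16, 16))   # the fabricated detail
print(sfrc_flags(truth, recon))
```

Patches identical to the reference give an FRC near 1 at every ring, so only patches overlapping the fabricated blob can be flagged; the returned coordinates are where the "red boxes" would be drawn.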

3. Why This Matters: The "Smoothie" Analogy

Imagine you are making a smoothie.

  • The Real Thing: You have real fruit (the patient's actual anatomy).
  • The AI: It tries to blend the fruit into a perfect, smooth drink.
  • The Hallucination: To make it taste "perfect," the AI secretly adds a spoonful of sugar that isn't in the recipe.
  • Old Metrics: Old ways of checking quality just tasted the whole smoothie and said, "It tastes sweet and good!" (High score). They missed the extra sugar.
  • sFRC: This tool tastes the smoothie in tiny sips. It says, "Wait, this sip tastes like sugar that wasn't in the fruit. That's a fake addition."

4. What the Authors Found

The researchers tested this tool on three different medical scenarios:

  1. CT Super-Resolution: Making low-resolution CT scans look high-resolution.
  2. Undersampled MRI: Filling in skipped measurements to speed up scans.
  3. Sparse-View Reconstruction: Rebuilding images from very few projection angles.

In all cases, they found that:

  • The AI often created fake structures (like extra blood vessels or fake plaques) that looked real but were dangerous.
  • Traditional "quality scores" (like PSNR or SSIM) were fooled. They gave the AI high grades even when it was lying.
  • sFRC successfully caught these lies, highlighting exactly where the AI was making things up.
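The "fooled metrics" finding is easy to reproduce in miniature. Below, a small fabricated 8×8 structure is dropped into an otherwise perfect 256×256 reconstruction; the global PSNR barely notices, which is exactly why a patch-local check is needed. The image and numbers are illustrative, not from the paper.

```python
import numpy as np

def psnr(reference, image, data_range=1.0):
    """Peak signal-to-noise ratio in dB, computed over the whole image."""
    mse = np.mean((reference - image) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

truth = np.zeros((256, 256))   # stand-in for the true anatomy
recon = truth.copy()
recon[60:68, 60:68] = 0.5      # a fabricated 8x8 "lesion"

print(round(psnr(truth, recon), 1))  # 36.1 dB: reads as a "good" score
```

A PSNR in the mid-30s dB usually reads as high fidelity, yet the image contains a structure that does not exist; a patch-local comparison over that region would expose it immediately.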

The Bottom Line

This paper gives us a lie detector for medical AI.

As we start using more AI to help doctors see inside our bodies, we need to make sure the AI isn't just "making things up" to look pretty. sFRC is a safety net. It doesn't just tell us if an image looks good; it tells us if the image is truthful. It ensures that when a doctor sees a tumor or a broken bone, it's actually there, and not just a hallucination created by a computer trying too hard to be helpful.