Diagnosis of Multiple Sclerosis Using Multimodal Deep Learning Integrating Lesion and Normal-Appearing White Matter: A Retrospective Study with International Multicentre External Validation

This retrospective multicenter study demonstrates that DeepMS, a deep learning model integrating focal lesion and normal-appearing white matter features from routine MRI, achieves high diagnostic accuracy for multiple sclerosis that surpasses current McDonald criteria biomarkers and retains efficacy even when focal lesions are masked.

Ma, J., Stepanov, V., Rui, W., Chen, H.-C., Lis, M., Stanek, A., Puto, T., Lan, M., Chen, J., Liu, T., Patel, R., Breen, M., Lee, M., Eikermann-Haerter, K., Shepherd, T. M., Novikov, D. S., O'Neill, K. A., Fieremans, E., Shen, Y.

Published 2026-03-10

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

The Big Problem: The "Look-Alike" Confusion

Imagine you are a detective trying to catch a specific criminal (Multiple Sclerosis, or MS). Usually, you look for their "calling card": a specific type of scar or damage on the brain's wiring (white matter lesions).

However, there's a problem. Other people (like those with migraines, aging, or small vessel disease) leave behind very similar-looking scars. It's like trying to find a specific criminal in a crowd where everyone is wearing the same red jacket. Currently, doctors rely on these "red jackets" (lesions) to make a diagnosis. But because so many innocent people wear them, about 10–20% of people get misdiagnosed, leading to unnecessary stress and the wrong treatment.

The Old Way vs. The New Way

The Old Way (Current Diagnosis):
Doctors look at standard brain scans (MRI) and count the "red jackets" (lesions). They also check whether the jackets are in different places and appeared at different times — the "dissemination in space and time" rules at the heart of the McDonald criteria.

  • The flaw: If the jackets look like the criminal's, but the person is actually innocent, the doctor gets fooled.
  • The new "high-tech" clues: Recently, doctors started looking for tiny details on the jackets — like the "central vein sign" or "paramagnetic rim lesions" — to be more sure. But these are hard to see, require special scanning equipment, and often miss the criminal entirely if the jackets aren't obvious yet.

The New Way (DeepMS):
The researchers built an AI detective named DeepMS. Instead of just looking at the "red jackets" (lesions), this AI was trained to look at the entire neighborhood, including the "normal-looking" streets (Normal-Appearing White Matter, or NAWM).

The Secret Sauce: Training with a "Super-Scanner"

Here is the clever part of how they built DeepMS:

  1. The Training Camp: To teach the AI what MS really looks like, they didn't just show it standard photos. They showed it standard photos (routine MRI) plus super-detailed microscopic maps (quantitative diffusion MRI). These maps act like a high-tech microscope that reveals tiny cracks and damage in the "normal" streets that the naked eye can't see.
  2. The Lesson: The AI learned: "Ah, when I see this specific pattern of damage in the 'normal' streets on the super-map, it usually means MS is here."
  3. The Magic Trick: Once the AI learned this secret language of the "normal streets" using the super-map, they took the super-map away. They told the AI: "Now, look at a standard photo. Can you still find those same patterns?"
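The three steps above are a form of "learning using privileged information": a teacher that sees both the routine scan and the quantitative map guides a student that sees only the routine scan. The sketch below is a deliberately toy illustration of that idea — the scans are stand-in feature vectors, the networks are linear maps, and the student is fit by least squares rather than the deep network the paper actually trains. All names (`routine`, `diffusion`, `W_teacher`, `W_student`) are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each "scan" is a small feature vector, not a real image.
n = 200
routine = rng.normal(size=(n, 8))                               # routine MRI features
diffusion = 0.5 * routine[:, :4] + rng.normal(scale=0.1, size=(n, 4))  # the "super-map"

# Teacher: during training it sees BOTH modalities at once.
W_teacher = rng.normal(size=(12, 3))
teacher_feats = np.concatenate([routine, diffusion], axis=1) @ W_teacher

# Student: sees the routine scan only, and is trained to reproduce the
# teacher's representation (feature distillation, here via least squares).
W_student, *_ = np.linalg.lstsq(routine, teacher_feats, rcond=None)

# At deployment time, the "super-map" is gone: routine MRI is enough.
student_feats = routine @ W_student
err = np.mean((student_feats - teacher_feats) ** 2)
```

The point of the toy setup: because the microstructural map carries information that is partly recoverable from the routine scan, the student can learn to extract it from the routine scan alone — which is why the distillation error `err` stays small here.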

The Analogy: Imagine teaching a wine expert to taste a specific vintage. First, you give them a chemical analysis of the grapes (the super-map) so they know exactly what the flavor profile should be. Then, you take the analysis away and ask them to taste the wine (the standard photo) again. Because they learned the essence of the flavor, they can still identify the vintage just by looking at the label and the liquid, even without the chemical report.

What Happened? (The Results)

The researchers tested this AI on thousands of patients from New York, Poland, and various public databases.

  • It's a Sharp Detective: DeepMS was incredibly accurate (over 96% accuracy in many tests).
  • It Catches the "Look-Alikes": When patients had brain damage that looked like MS but wasn't (the "innocent people in red jackets"), DeepMS was much better at saying "Not MS" than the current standard rules.
  • It Works Without the "Super-Scanner": Even though the AI was trained using the fancy microscopic maps, it only needs a standard, routine MRI to make a diagnosis in the real world. This is huge because standard MRIs are available in almost every hospital, while the "super-scanner" maps are rare.
  • The "Eraser" Test: The researchers digitally "erased" all the visible lesions from the scans and asked the AI to guess.
    • Old AI (trained only on standard photos): Got confused and failed. It relied entirely on the "red jackets."
    • DeepMS: Still got it right! It realized that even without the jackets, the "normal streets" still had the subtle damage signature of MS.
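The "eraser" test can be sketched as a lesion in-painting ablation: lesion voxels are replaced with plausible normal-appearing tissue before the model sees the scan. This is a minimal toy version in NumPy — the paper's actual masking procedure is almost certainly more sophisticated, and `scan`, `lesion_mask`, and the mean-fill strategy here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D "scan": uniform background tissue plus one bright focal lesion.
scan = rng.normal(loc=1.0, scale=0.05, size=(32, 32))
lesion_mask = np.zeros_like(scan, dtype=bool)
lesion_mask[10:14, 10:14] = True
scan[lesion_mask] += 2.0  # the visible "red jacket"

# "Eraser" step: in-paint lesion voxels with the mean intensity of the
# surrounding normal-appearing tissue, hiding the focal signal entirely.
nawm_mean = scan[~lesion_mask].mean()
masked_scan = scan.copy()
masked_scan[lesion_mask] = nawm_mean
```

A lesion-dependent model classifying `masked_scan` has nothing left to latch onto; a model that also reads the normal-appearing tissue still does — which is the contrast the ablation is designed to expose.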

Why This Matters

This study suggests that the "normal-looking" parts of the brain hold a secret code for diagnosing MS. By using AI to decode this, we can:

  1. Stop Misdiagnoses: Fewer people will be told they have MS when they don't.
  2. Catch It Earlier: We might be able to spot MS before big "red jackets" (lesions) even appear.
  3. Use What We Have: We don't need expensive, rare equipment to do this; we can use the standard MRI machines already in hospitals.

In short: The researchers taught an AI to see the invisible. By learning from high-tech maps, the AI can now spot the subtle signs of MS in routine brain scans, acting like a detective who knows the criminal's footprint even when they aren't wearing their signature red jacket.
