The practical impact of numerical variability on structural MRI measures of Parkinson's disease

This study demonstrates that numerical variability in neuroimaging pipelines can significantly distort structural MRI findings in Parkinson's disease research. The authors develop a practical framework to quantify and mitigate these errors, and to flag potentially unreliable conclusions in the existing literature.

Original authors: Chatelain, Y. M. B., Sokołowski, A., Sharp, M., Poline, J.-B., Glatard, T.

Published 2026-02-19

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine you are trying to measure the height difference between two groups of people: Group A (people with Parkinson's disease) and Group B (healthy people). You want to know if Group A is, on average, slightly shorter than Group B.

To do this, you use a very high-tech, laser-precise ruler (in this case, an MRI scanner and complex computer software called FreeSurfer). You expect the ruler to give you the exact same number every time you measure the same person.

Here is the problem this paper discovered:
Even though the ruler is "precise," the computer doing the math is slightly "jittery."

The "Jittery Calculator" Analogy

Think of your computer like a calculator that has a slightly shaky hand. Ask it to add 0.1 + 0.2 and you would expect exactly 0.3, but because computers store numbers in binary with a limited number of digits, it actually gets 0.30000000000000004. Tiny, invisible rounding errors like this are built into how computers do arithmetic.

In most everyday math, this doesn't matter. But in brain imaging, scientists are looking for differences that are tiny—sometimes smaller than the width of a human hair. When you run these tiny numbers through a long chain of complex calculations (like measuring brain volume or thickness), those tiny "shakes" in the calculator can get amplified.

By the time the computer finishes its work, the "jitter" might have changed the final result enough to make a healthy brain look like a diseased one, or vice versa.
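The "shaky hand" above is real floating-point behavior, not a metaphor. This minimal Python sketch (a toy illustration, not FreeSurfer's actual computations) shows both the rounding error and the fact that the *order* of operations changes the answer — exactly the kind of discrepancy that differs between computers, compilers, and operating systems:

```python
# Floating-point addition is not exact:
result = 0.1 + 0.2
print(result)  # 0.30000000000000004, not 0.3

# And it is not associative: summing the same numbers in a
# different order gives a different answer, because the
# intermediate rounding happens at different magnitudes.
print(sum([1e16, 1.0, -1e16]))  # 0.0 (the 1.0 is "absorbed" by 1e16)
print(sum([1e16, -1e16, 1.0]))  # 1.0 (the big terms cancel first)
```

A long pipeline performs millions of such operations, so these hair-thin discrepancies can compound into a visible shift in the final measurement.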

What the Researchers Did

The team, led by Yohan Chatelain, decided to test this "jitter" on purpose.

  1. The Experiment: They took MRI scans of Parkinson's patients and healthy controls. They ran the same scans through the software 26 times.
  2. The Trick: Each time they ran it, they added a tiny, random amount of "noise" (simulating the natural differences between different computers, operating systems, or hardware).
  3. The Result: They found that the results changed every single time.
    • Sometimes, a specific brain region looked significantly smaller in Parkinson's patients.
    • The next time they ran it (with a tiny bit of digital noise), that same region looked no different between the two groups.
    • In some cases, the "jitter" was responsible for one-third of the total difference they saw between patients and healthy people.
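The perturbation idea in step 2 can be sketched in a few lines of Python. This is a toy stand-in for the fuzzy-arithmetic tooling the authors use (the noise level, step count, and pipeline here are all made up for illustration), but it shows why 26 repeated runs of the "same" computation no longer agree:

```python
import random
import statistics

def noisy(x, rel=1e-15):
    """Perturb x by a tiny relative amount, mimicking the rounding
    differences between machines. The magnitude is illustrative."""
    return x * (1.0 + random.uniform(-rel, rel))

def pipeline(measurement):
    """A stand-in for a long chain of calculations: each step
    injects a tiny perturbation, and the steps compound."""
    x = measurement
    for _ in range(1000):
        x = noisy(x * 1.001)
    return x

random.seed(0)
results = [pipeline(2.5) for _ in range(26)]  # 26 runs, like the study
spread = max(results) - min(results)
print(f"mean = {statistics.mean(results):.12f}")
print(f"spread across runs = {spread:.2e}")  # nonzero: runs disagree
```

Each individual perturbation is around one part in a quadrillion, yet the 26 "identical" runs already produce 26 slightly different answers.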

The "Foggy Window" Metaphor

Imagine you are looking at a landscape through a window.

  • The Landscape: The real biological differences between a sick brain and a healthy brain.
  • The Window: The computer software.
  • The Fog: The numerical noise.

The researchers found that the "fog" on the window is so thick in some areas that you can't tell if the landscape has actually changed. You might think you see a mountain (a significant medical finding), but it might just be a trick of the light (numerical noise).

The Big Impact: False Alarms and Missed Cures

Because of this "jitter," the paper found that many previous studies might have made two types of mistakes:

  1. False Positives (The Boy Who Cried Wolf): Scientists thought they found a real difference in the brain, but it was actually just a glitch in the computer math.
  2. False Negatives (The Missed Clue): Scientists thought there was no difference, but the noise was hiding a real, important biological change that could lead to a cure.

The paper looked at 13 previously published studies on Parkinson's and found that many of their statistically significant results sat right on the edge of this "jitter." Had the numbers been run on a slightly different computer, those results might have disappeared.

The Solution: A "Noise Detector"

The researchers didn't just point out the problem; they built a tool to fix it.

They created a "Numerical-Population Variability Ratio" (NPVR). Think of this as a Signal-to-Noise Meter.

  • The Signal: The real biological difference between patients.
  • The Noise: The computer's jitter.

They built a simple, free web tool where scientists can plug in their summary numbers (like "we found a 5% difference"). The tool then calculates: "Is this difference big enough to be real, or is it just as big as the computer's jitter?"

If the "jitter" is as big as the finding, the tool flags it as unreliable.
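In spirit, the NPVR is just this signal-to-noise comparison. Here is a minimal sketch with hypothetical summary numbers and an illustrative formula (the paper's exact definition may differ):

```python
def npvr(numerical_sd, population_sd):
    """Numerical-Population Variability Ratio, illustrative form:
    how large the computational 'jitter' is relative to the
    biological variation being studied. Values near or above 1
    mean the finding could be an artifact of the noise."""
    return numerical_sd / population_sd

# Hypothetical summary numbers for one brain region (mm):
numerical_sd = 0.04   # spread of repeated perturbed runs ("noise")
population_sd = 0.12  # spread across subjects ("signal")

ratio = npvr(numerical_sd, population_sd)
print(f"NPVR = {ratio:.2f}")
if ratio >= 1.0:
    print("Flag: finding is within numerical noise")
else:
    print("Numerical noise is smaller than population variability")
```

With these made-up numbers the ratio is about 0.33, echoing the paper's observation that jitter could account for up to one-third of an observed group difference.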

Why This Matters

This paper is a wake-up call for the entire field of brain imaging. It tells us that computational stability is just as important as biological accuracy.

  • Before: Scientists worried about whether their MRI machine was calibrated correctly.
  • Now: They must also worry about whether their computer code is "shaky."

By using this new framework, researchers can stop publishing results that are just digital artifacts and start focusing on the real, biological truths about Parkinson's disease. It's about cleaning up the lens so we can finally see the disease clearly.
