Covering Unknown Correlations in Bayesian Priors by Inflating Uncertainties

This paper proposes a method to inflate uncertainties in Bayesian priors to conservatively account for unknown correlations between nuisance parameters when combining experiments with different parametrizations, thereby preventing underestimated uncertainties in the posterior results.

Lukas Koch

Published 2026-03-13

Here is an explanation of the paper "Covering Unknown Correlations in Bayesian Priors by Inflating Uncertainties" using simple language and everyday analogies.

The Big Picture: The "Mystery Team" Problem

Imagine you are a detective trying to solve a case. You have two different witnesses (let's call them Experiment A and Experiment B). Both witnesses saw the same event, but they describe the confusing details (the "nuisance parameters") in completely different languages.

  • Witness A says: "The suspect was wearing a red hat and a blue coat."
  • Witness B says: "The suspect had a warm head covering and a heavy outer layer."

You know they are talking about the same person, but you don't know exactly how "red hat" translates to "warm head covering." Are they 100% the same thing? Are they totally different? Or are they slightly related?

In science, when we combine data from different experiments, we have to guess how these "witnesses" relate to each other. If we guess wrong, we might think we know the answer (the "parameters of interest") with too much confidence. We might say, "I'm 99% sure the suspect is John," when we should really only be 80% sure.

The Core Problem: The "Hidden Link"

The paper argues that when you combine these different experiments, there is a hidden danger: Unknown Correlations.

If the two witnesses are actually describing the exact same thing, their errors are linked. If one is wrong, the other is likely wrong in the same way. If we ignore this link and treat them as totally independent, we might accidentally cancel out their mistakes, making our final result look super precise when it's actually shaky.

The Analogy:
Imagine you are trying to guess the weight of a watermelon.

  • Scale A says it's 10 lbs, give or take 1 lb.
  • Scale B says it's 10 lbs, give or take 1 lb.

If Scale A and Scale B are totally independent, you can average them and say, "It's definitely 10 lbs, give or take about 0.7 lbs." (Averaging two independent measurements shrinks the uncertainty by a factor of √2.) You feel very confident.

But what if both scales were bought from the same factory and have the same broken spring? If Scale A is off by +1 lb, Scale B is also off by +1 lb. They are 100% correlated. In this case, averaging them doesn't help at all. The watermelon is still 10 lbs, give or take 1 lb.

If you didn't know about the broken spring (the unknown correlation), you would have underestimated your uncertainty. You thought you were more precise than you actually were.
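The watermelon example can be checked with a few lines of arithmetic. This is an illustrative sketch using the made-up numbers from the analogy (not anything from the paper itself): it compares the uncertainty of the averaged weight when the two scales are independent versus when they share the same broken spring (fully correlated errors).

```python
import math

sigma_a = 1.0  # Scale A: give or take 1 lb
sigma_b = 1.0  # Scale B: give or take 1 lb

def sigma_of_average(rho):
    """Std. dev. of the average (x_a + x_b) / 2 when the two
    measurement errors have correlation rho."""
    var = (sigma_a**2 + sigma_b**2 + 2 * rho * sigma_a * sigma_b) / 4
    return math.sqrt(var)

# Independent scales: the errors partly cancel when you average
print(sigma_of_average(0.0))  # ~0.707 lbs -- looks precise

# Same broken spring: the errors move together, nothing cancels
print(sigma_of_average(1.0))  # 1.0 lbs -- no better than one scale
```

The gap between those two numbers is exactly the "underestimated uncertainty" the paper warns about: assuming independence when the scales are secretly linked makes you about 30% more confident than you should be.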

The Solution: "The Safety Margin"

The author, Lukas Koch, asks: "How do we fix this without spending years figuring out exactly how the two scales are linked?"

His solution is surprisingly simple: Just add a "Safety Margin" to your uncertainty.

Instead of trying to guess the exact relationship between the two experiments, he suggests we assume they are completely unconnected (which usually makes us too confident) and then inflate (increase) our uncertainty to cover our bases.

The Magic Number (n_B):
The paper proves that if you have n_B different experiments (blocks of data), you can simply multiply your uncertainty by that number to be safe.

  • The Analogy: Imagine you are packing for a trip. You have 3 different suitcases (3 experiments). You don't know if they will all get lost together or just one.
    • Standard approach: You pack for the average risk.
    • Koch's approach: You pack as if all 3 suitcases might get lost at the exact same time. You bring extra clothes.

By "inflating" the uncertainty (making the "give or take" range bigger), you ensure that even if the hidden links between the experiments are the worst-case scenario, your final conclusion is still conservative. You won't be fooled into thinking you know more than you do.

Why This Works (The "Linear" Rule)

The paper relies on a key assumption: Linearity.

Think of the relationship between the "broken spring" and the "weight" as a straight line. If the spring breaks a little, the weight goes up a little. If it breaks a lot, the weight goes up a lot.

  • If the relationship is a straight line: The "Safety Margin" (inflating the uncertainty) works perfectly. It guarantees you won't underestimate your error.
  • If the relationship is curved (Non-linear): Things get a bit trickier. Imagine the spring gets so broken that it snaps, and the scale reads zero. That's a curve. The paper admits that in these weird, curved cases, the simple math might not be perfect, but it shows that for most real-world physics problems, the "straight line" assumption holds up well enough.
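For readers who want the one-line reason the safety margin suffices in the linear case, here is a sketch in my own notation (not the paper's): the worst case is full correlation, where the n_B block uncertainties add linearly, and the Cauchy–Schwarz inequality bounds that sum by the inflated independent combination.

```latex
\sigma_{\text{worst}} \;=\; \sum_{i=1}^{n_B} \sigma_i
\;\le\; \sqrt{\,n_B \sum_{i=1}^{n_B} \sigma_i^2\,}
\;\le\; n_B \sqrt{\sum_{i=1}^{n_B} \sigma_i^2}
```

The rightmost expression is exactly what you get by multiplying each uncertainty by n_B and combining as if independent, so the inflated result can never fall below the worst case.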

The Takeaway for Everyday Life

You don't need to be a physicist to use this logic. It applies to any situation where you are combining different sources of information that might be secretly related:

  1. The Problem: You have two reports on a project. They use different metrics. You don't know how much they overlap.
  2. The Risk: If you combine them naively, you might think the project is safer than it is.
  3. The Fix: Don't try to solve the mystery of how they overlap. Instead, assume the worst-case overlap and add a buffer to your risk assessment.

In short: When in doubt about how different pieces of data are connected, be more humble about your precision. Widen your margin of error by the number of sources you are combining. It's better to be slightly less precise but 100% safe than to be very precise and wrong.

Summary of the Paper's Conclusion

  • Don't guess the correlation: It's too hard and easy to get wrong.
  • Assume zero correlation: Start by treating the experiments as independent.
  • Inflate the uncertainty: Multiply the uncertainty by the number of experiments (n_B).
  • Result: You get a "conservative" result. You might not be as precise as you could be if you knew the secret links, but you will never be dangerously overconfident.