Bayes with No Shame: Admissibility Geometries of Predictive Inference

This paper demonstrates that predictive inference is governed by four distinct, pairwise non-nested admissibility geometries: Blackwell risk dominance, anytime-valid supermartingales, marginal coverage, and Cesàro approachability. Each offers its own certificate of optimality, so admissibility is irreducibly relative to the chosen criterion rather than a universal property.

Nicholas G. Polson, Daniel Zantedeschi

Published 2026-03-06

Imagine you are a weather forecaster. Your job is to predict if it will rain tomorrow. You have a notebook, and every day you write down your prediction.

This paper asks a very deep question: What does it mean to be a "good" forecaster?

The authors, Nicholas Polson and Daniel Zantedeschi, argue that there isn't just one definition of "good." In fact, there are four completely different ways to be a champion, and being a champion in one way doesn't mean you are a champion in the others. They call this "Admissibility Geometries."

To make this easy to understand, let's use the metaphor of a Gym and Four Different Sports.

The Core Idea: "No Shame"

The paper uses a clever metaphor: Shame.

  • If you use a strategy that is clearly worse than another available strategy, you should feel "shame." You are leaving points on the table.
  • A "No-Shame" strategy is one where you cannot be beaten. No other method can do strictly better than you in every possible scenario.

The paper says: You can be "No-Shame" in four different ways, but you can't be "No-Shame" in all four at once.

Here are the four "Sports" (Geometries) and their champions:


1. The Bayesian Athlete (Blackwell Admissibility)

The Goal: Be the best on average, based on a specific "hunch" (a prior belief).
The Metaphor: Imagine you are playing a game where you have a secret map (a prior belief) about where the treasure is. The Bayesian is the player who uses that map perfectly.

  • How they win: They minimize their "regret" (loss) based on their map.
  • The Certificate: A Supporting Hyperplane. Think of this as a judge holding up a sign that says, "Given your specific map, no one could have done better than you."
  • The Catch: If your map is wrong, or if you don't have a map at all, this "win" doesn't count.
  • Real-world example: A weather forecaster who assumes "it rains 30% of the time" and updates their prediction perfectly based on that assumption.

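The textbook version of this athlete is Beta-Bernoulli updating. Here is a minimal sketch, not from the paper; the Beta(3, 7) prior (mean 0.3) is an illustrative way to encode the "rains 30% of the time" hunch:

```python
# A minimal Beta-Bernoulli sketch of the Bayesian athlete. The prior
# Beta(3, 7), which has mean 0.3, encodes the hunch "it rains about
# 30% of the time". Numbers are illustrative, not from the paper.

def beta_bernoulli_forecast(observations, prior_a=3.0, prior_b=7.0):
    """Predictive probability of rain before each new day."""
    a, b = prior_a, prior_b
    forecasts = []
    for rained in observations:        # rained is 1 (rain) or 0 (dry)
        forecasts.append(a / (a + b))  # forecast before seeing today
        a += rained                    # conjugate update: rainy day -> a
        b += 1 - rained                # dry day -> b
    return forecasts

# Starts at the prior mean 0.3; rainy days pull the forecast upward.
print(beta_bernoulli_forecast([1, 1, 0, 1]))
```

Against the assumed map (the prior), no other strategy has lower expected loss; that is the supporting-hyperplane certificate. If the map is wrong, the certificate says nothing.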
2. The Safe Gambler (Anytime-Valid Admissibility)

The Goal: Never get caught cheating, no matter when you stop the game.
The Metaphor: Imagine you are betting on coin flips. You want to prove the coin is unfair. The Safe Gambler uses a special "E-process" (a running score).

  • How they win: They ensure that, if the coin really is fair, their score cannot grow large purely by luck. So even if they stop betting at any moment (peeking at the results early), they are still safe from false alarms.
  • The Certificate: A Non-Negative Supermartingale. Think of this as a "safety net" that guarantees you won't go broke due to bad luck, even if you stop whenever you want.
  • The Catch: They aren't trying to predict the weather perfectly; they are just trying to prove a point without getting caught lying.
  • Real-world example: A clinical trial for a new drug. You want to stop the trial early if the drug works, but you can't let the data peeking trick you into thinking it works when it doesn't.

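A standard way to build such a safety net is a likelihood-ratio wealth process. The sketch below is a generic textbook construction in the spirit of the paper's E-processes, not its specific one; the alternative q = 0.7 is an illustrative assumption:

```python
# A textbook likelihood-ratio wealth process for "is this coin fair?".
# A generic construction in the spirit of the paper's E-processes;
# the alternative q = 0.7 is an illustrative choice.

def wealth_process(flips, q=0.7, null_p=0.5):
    """Wealth after each flip: a nonnegative martingale under the null
    (fair coin), so it cannot grow large purely by luck."""
    wealth, path = 1.0, []
    for heads in flips:  # heads is 1 or 0
        wealth *= (q / null_p) if heads else ((1 - q) / null_p)
        path.append(wealth)
    return path

# Nine heads in a row: wealth 1.4**9 ~ 20.7, crossing the 1/alpha = 20
# threshold for a level-0.05 anytime-valid rejection of fairness.
path = wealth_process([1] * 9)
print(any(w >= 20 for w in path))  # → True
```

By Ville's inequality, a fair coin pushes this wealth above 1/alpha with probability at most alpha under any stopping rule; that is exactly the "peek whenever you like" guarantee.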
3. The Crowd Surfer (Marginal Coverage Validity)

The Goal: Be right often enough, on average, without caring about the specific details.
The Metaphor: Imagine you are throwing a net to catch fish. You don't care about catching the exact right fish every time. You just want a guarantee that, 95% of the time, the true fish is inside your net.

  • How they win: They use Conformal Prediction. They look at the crowd (exchangeable data) and say, "Based on how the crowd behaves, I'm 95% sure the answer is in this box."
  • The Certificate: An Exchangeability Rank. It's like a popularity contest. "95% of the time, the answer falls in this range."
  • The Catch: They don't care about the probability of the event, just the coverage. They might give a huge, useless net (e.g., "It will rain or not rain") just to be safe.
  • Real-world example: A self-driving car saying, "I am 95% sure the pedestrian is within these 5 feet," even if it doesn't know exactly where they are.

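Split conformal prediction makes the net concrete. The sketch below is the generic split-conformal recipe, not the paper's construction; the residuals and the 90% level are illustrative: score a held-out calibration set, take a corrected quantile, and widen the point prediction by that amount.

```python
# A minimal split-conformal sketch for interval prediction. This is the
# generic split-conformal recipe; the residuals and the 90% level are
# illustrative assumptions, not from the paper.
import math

def conformal_interval(cal_residuals, point_pred, alpha=0.1):
    """Interval around point_pred with ~(1 - alpha) marginal coverage,
    assuming the calibration data and the new point are exchangeable."""
    n = len(cal_residuals)
    scores = sorted(abs(r) for r in cal_residuals)
    k = math.ceil((n + 1) * (1 - alpha))  # the (n+1)-corrected rank
    q = scores[min(k, n) - 1]             # clamp when k exceeds n
    return (point_pred - q, point_pred + q)

# Nine calibration residuals; the corrected 90% quantile is the largest.
print(conformal_interval([0.3, -0.1, 0.5, 0.2, -0.4, 0.6, 0.1, -0.2, 0.9], 5.0))
```

The (n + 1) correction is what buys the exchangeability-rank certificate: the new point's residual is equally likely to fall at any rank among the calibration residuals, so the interval traps the truth at least 90% of the time on average.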
4. The Marathon Runner (CAA Admissibility)

The Goal: Get it right in the long run, even if you stumble in the short run.
The Metaphor: Imagine a runner who trips and falls a lot in the first mile, but by the end of the marathon, their average speed is perfect. They use Defensive Forecasting.

  • How they win: They use a "fixed-point" trick. They don't need a map (prior) or a safety net. They just keep adjusting their strategy so that, over a long time, their mistakes average out to the best possible performance.
  • The Certificate: A Cesàro Steering Argument. It's like saying, "I might be wrong today, but if you watch me for a year, I will be perfect on average."
  • The Catch: They might be terrible at predicting the next coin flip, but they are great at the long-term average.
  • Real-world example: A stock trading bot that makes wild guesses every day but eventually learns the market trend so well that its long-term profit matches the best possible strategy.

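The paper's defensive forecasting rests on a fixed-point construction; as a much simpler stand-in with the same Cesàro flavor, here is a running-mean forecaster whose average squared loss approaches that of the best constant forecast chosen in hindsight (all numbers illustrative):

```python
# A running-mean forecaster as a stand-in for the marathon runner: any
# single prediction may be poor, but the average squared loss converges
# to that of the best constant forecast in hindsight, a Cesàro-style
# guarantee. This is not the paper's fixed-point construction.

def running_mean_regret(outcomes):
    """Average squared loss of the running-mean forecaster minus the
    average loss of the best constant in hindsight."""
    n, mean, loss = 0, 0.5, 0.0        # start from a neutral 0.5
    for y in outcomes:                 # y is 1 (rain) or 0 (dry)
        loss += (mean - y) ** 2        # predict first, then observe
        n += 1
        mean += (y - mean) / n         # incremental running mean
    best = sum(outcomes) / len(outcomes)
    best_loss = sum((best - y) ** 2 for y in outcomes)
    return (loss - best_loss) / len(outcomes)

# On alternating rain/dry days the forecaster stumbles early, but its
# average regret shrinks toward zero as the horizon grows.
print(running_mean_regret([1, 0] * 5) > running_mean_regret([1, 0] * 500))  # → True
```

The certificate here is about the time average, not any single day: watch the forecaster long enough and its performance is indistinguishable from the best fixed forecast.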
The Big Reveal: The "Criterion Separation"

The paper's most important finding is that these four champions cannot be the same person.

  • The Bayesian (who needs a map) is not the Safe Gambler (who needs a safety net).
  • The Crowd Surfer (who just wants a net) is not the Marathon Runner (who cares about long-term averages).
  • The Safe Gambler is not the Crowd Surfer.

Why? Because they are playing on different fields with different rules.

  • If you try to be the Bayesian, you might fail the "Safe Gambler" test: updating on a prior does not, by itself, protect you from false alarms when you peek at the data and stop whenever it looks good.
  • If you try to be the Crowd Surfer, you might fail the "Bayesian" test because your net is too wide and you didn't minimize your error.

The "Shame" Lesson

The authors say: Don't feel shame if your method isn't perfect in every way.

  • If you are a Bayesian, you are "No-Shame" regarding your map.
  • If you are a Safe Gambler, you are "No-Shame" regarding your safety.
  • If you are a Crowd Surfer, you are "No-Shame" regarding your coverage.

But, if you try to use a Bayesian method to solve a "Safe Gambler" problem, you will feel shame because you broke the rules of the game.

Summary in One Sentence

There is no single "best" way to predict the future; there are four different ways to be "good," and you have to pick the one that fits the specific game you are playing, because you can't win all four games at the same time.