A categorical formalization of epistemic uncertainty frameworks

This paper introduces a general category-theoretic framework for epistemic uncertainty that unifies various imprecise probability theories, models their interrelationships, and demonstrates how both Bayesian updating and possibilistic conditioning emerge as specific instances of a generalized belief updating process based on change of enrichment.

Torgeir Aambø

Published 2026-03-05

Imagine you are a detective trying to solve a mystery. You have a hunch about who the culprit is, but you aren't 100% sure. You have uncertainty.

In the world of math and logic, there are two main types of uncertainty:

  1. Randomness (Aleatoric): Like rolling a die. The outcome is inherently unpredictable, even if you know all the rules.
  2. Ignorance (Epistemic): This is the focus of the paper. It's the "I don't know" feeling. It happens because you don't have enough information, your sources are conflicting, or you just haven't looked hard enough.

This paper by Torgeir Aambø is like building a universal translator and a rulebook for all the different ways humans try to measure this "ignorance."

Here is the breakdown of the paper's big ideas, explained with everyday analogies.

1. The Problem: Too Many Different "Rulers"

Right now, scientists and philosophers use many different mathematical tools to measure uncertainty.

  • Bayesian Statistics: Uses probabilities (0% to 100%).
  • Possibility Theory: Uses "how possible" vs. "how necessary."
  • Certainty Factors: Used in old AI systems, measuring belief from -1 (disbelief) to +1 (belief).

The problem is that these tools speak different languages. It's hard to compare them or switch between them. One might say "70% likely," while another says "highly possible," and we don't know if they mean the same thing.

2. The Solution: A "Universal Grammar" for Uncertainty

The author introduces a new way to look at these tools using Category Theory. Think of Category Theory as the "grammar" of mathematics. Instead of looking at the specific numbers (the vocabulary), it looks at the structure of how we combine ideas.

The author defines an "Epistemic Calculus."

  • Analogy: Imagine a Lego set.
    • The blocks are your levels of belief (e.g., "I'm pretty sure," "I'm doubtful").
    • The studs on top are the rules for snapping them together (fusion).
    • Different calculi (Bayesian, Possibility, etc.) are just different brands of Lego. They all snap together, but some use round studs, some use square studs.

The paper creates a universal instruction manual that describes the shape of the studs and the rules for snapping, regardless of the brand. This allows us to see that, structurally, some of these "brands" are actually the same thing, just dressed differently.
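The Lego idea can be sketched in code. The paper builds its "epistemic calculus" with enriched category theory; the toy below keeps only the blocks-and-studs picture: each calculus is a set of belief values plus a fusion rule, and the three "brands" share one interface. All names here are illustrative, not the paper's.

```python
from dataclasses import dataclass
from typing import Callable

# A minimal sketch of an "epistemic calculus": belief values in [0, 1]
# (or [-1, 1] for certainty factors) plus a fusion rule for snapping
# two beliefs together. Same interface, three different "brands".

@dataclass(frozen=True)
class Calculus:
    name: str
    fuse: Callable[[float, float], float]  # combine two belief values

bayesian = Calculus("Bayesian", lambda a, b: a * b)              # product of probabilities
possibilistic = Calculus("Possibility", lambda a, b: min(a, b))  # min of possibilities
# MYCIN-style certainty-factor combination (for two positive factors):
certainty = Calculus("Certainty factors", lambda a, b: a + b * (1 - a))

for calc in (bayesian, possibilistic, certainty):
    print(f"{calc.name} fuses 0.7 and 0.5 into {calc.fuse(0.7, 0.5):.3f}")
```

Running it shows the point of the "universal instruction manual": feeding the same two beliefs into each brand gives three different answers (0.350, 0.500, 0.850), yet the *shape* of the operation is identical.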

3. The Philosophical "Personality Tests"

The paper treats these mathematical systems like they have personalities. It asks: "If this system of uncertainty were a person, what would their philosophy be?"

It tests them against specific traits:

  • Optimism vs. Skepticism: Does the system assume the best case is possible (Optimism), or does it assume we know nothing until proven otherwise (Skepticism)?
  • Conservatism: Does the system resist changing its mind? (A conservative system says "Stick to your guns unless the evidence decisively proves you wrong.")
  • Fragility: If you find a contradiction, does the whole system collapse, or can it recover? (A "fallible" system admits, "Oops, I was wrong, let's update.")
  • The "Echo Chamber" Effect: The paper proves a fascinating rule: You cannot have a system that is perfectly logical, perfectly conservative, and perfectly consistent all at the same time.
    • Analogy: It's like trying to build a house that is 100% waterproof, 100% fireproof, and 100% earthquake-proof using only one type of brick. You have to compromise. If you want to be super conservative (never change your mind), you have to give up some logical consistency.

4. The "Change of Clothes" (Switching Frameworks)

Sometimes, you need to switch from one way of measuring uncertainty to another. Maybe you start with a vague "Possibility" estimate, but later you get hard data and need "Probability."

The paper shows how to do this translation without breaking the logic.

  • Analogy: Imagine you are translating a book from English to French.
    • A Conservative Translation ensures you never add new meaning that wasn't there. You don't accidentally make a vague sentence sound super specific.
    • A Liberal Translation might add a little bit of flair, but it ensures you never lose the core meaning.
    • The paper proves that some systems are actually isomorphic (mathematically identical twins). For example, "Bipolar Possibility Theory" (which tracks both belief and disbelief separately) is mathematically the same as "Interval Probability" (which tracks a range of probabilities). They are just wearing different clothes.
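The "identical twins" claim can be illustrated with a round-trip translation. A standard correspondence maps a (belief, disbelief) pair to the probability interval [belief, 1 − disbelief]; the paper's exact isomorphism is stated categorically and may differ in details, so treat this as a hedged sketch of the idea, not the paper's construction.

```python
# "Change of clothes": translate between bipolar possibility
# (tracking belief and disbelief separately) and interval
# probability (tracking a range [low, high]).

def bipolar_to_interval(belief: float, disbelief: float) -> tuple[float, float]:
    """(belief, disbelief) -> probability interval [low, high]."""
    return (belief, 1.0 - disbelief)

def interval_to_bipolar(low: float, high: float) -> tuple[float, float]:
    """Probability interval [low, high] -> (belief, disbelief)."""
    return (low, 1.0 - high)

# Round trip: the translation is invertible, so no meaning is lost.
state = (0.5, 0.25)  # moderate belief, mild disbelief
assert interval_to_bipolar(*bipolar_to_interval(*state)) == state
print(bipolar_to_interval(*state))  # -> (0.5, 0.75): "between 50% and 75% likely"
```

Because the two maps undo each other, anything you can say in one framework has exactly one counterpart in the other, which is what "isomorphic" means here.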

5. The "Update" Button (Bayesian vs. Others)

The biggest missing piece in many uncertainty theories is updating. How do you change your mind when you get new evidence?

  • Bayesian Updating: The gold standard. You take your old belief, weight it by how well it predicts the new evidence (the likelihood), and renormalize to get your new belief.
  • The Paper's Breakthrough: The author created a general "Update Machine" that works for any of these systems.
    • If you feed Bayesian rules into this machine, it spits out the standard Bayes' Theorem.
    • If you feed Possibility rules into it, it spits out "Possibilistic Conditioning" (a different way to update).
    • It shows that these different update methods are just different settings on the same universal machine.
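The "different settings on the same machine" idea can be sketched with a toy updater: combine prior and likelihood pointwise, then normalize. Swapping the normalizer (sum vs. max) switches between Bayes' theorem and product-based possibilistic conditioning. This is an illustrative simplification, not the paper's change-of-enrichment construction, and the hypothesis names are made up for the detective analogy.

```python
# A toy "universal update machine": same pipeline, different settings.

def update(prior: dict, likelihood: dict, normalizer) -> dict:
    """Combine prior with likelihood pointwise, then normalize."""
    combined = {h: prior[h] * likelihood[h] for h in prior}
    z = normalizer(combined.values())
    return {h: v / z for h, v in combined.items()}

prior = {"butler": 0.5, "gardener": 0.25, "cook": 0.25}
likelihood = {"butler": 0.8, "gardener": 0.4, "cook": 0.2}  # P(evidence | culprit)

bayes = update(prior, likelihood, sum)   # setting 1: Bayesian conditioning
possib = update(prior, likelihood, max)  # setting 2: possibilistic conditioning

print(bayes)   # probabilities sum to 1
print(possib)  # possibilities have maximum 1
```

With the sum setting, the butler comes out roughly 73% likely; with the max setting, the butler gets possibility 1 and the others are scaled relative to him. One machine, two update rules.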

Why Does This Matter?

This isn't just abstract math for mathematicians. This framework helps us:

  1. Build Better AI: Large Language Models (LLMs) and self-driving cars deal with uncertainty constantly. This paper gives engineers a way to mix and match different uncertainty tools safely.
  2. Avoid Bad Logic: It helps us spot when a system is making impossible promises (like being both perfectly conservative and perfectly logical).
  3. Speak the Same Language: It allows a physicist using one type of math to talk to a philosopher using another, because they now share a common "grammar."

In a nutshell: The paper builds a universal adapter that lets us plug any "uncertainty calculator" into any system, checks if that calculator has a healthy philosophical personality, and gives us a standard way to update our beliefs when new facts arrive. It turns the messy world of "I think, I doubt, I guess" into a clean, structured, and logical framework.