Upholding Epistemic Agency: A Brouwerian Assertibility Constraint for Responsible AI

This paper proposes a Brouwerian assertibility constraint for responsible AI in high-stakes domains: it replaces probabilistic confidence with a three-status interface (Asserted, Denied, Undetermined) and requires systems to provide publicly inspectable certificates of entitlement before making claims, so that democratic epistemic agency is preserved.

Michael Jülich

Published 2026-03-05

Here is an explanation of the paper "Upholding Epistemic Agency: A Brouwerian Assertibility Constraint for Responsible AI" using simple language, analogies, and metaphors.

The Big Problem: The "Smooth-Talking" Robot

Imagine you are in a town square, and a new robot has been hired to give advice on serious matters like health, law, and politics. This robot is incredibly smooth. It speaks with total confidence, using perfect grammar and a calm voice.

However, there's a catch: The robot doesn't actually know the truth. It's just guessing based on patterns it saw in its training data. It might say, "The mayor is definitely corrupt," with 99% confidence, even if it has no proof. Or it might say, "This medicine is safe," when it's actually dangerous.

The problem is that because the robot sounds so confident, people stop asking questions. They stop doing their own thinking. They just accept the robot's verdict. The paper calls this a loss of "Epistemic Agency." In simple terms, it means we are losing our ability to be the judges of what is true and what is false. We are becoming passive listeners instead of active thinkers.

The Proposed Solution: The "Certificate of Proof" Rule

The author, Michael Jülich, suggests a new rule for these robots, inspired by a mathematician named L.E.J. Brouwer.

The Rule: "No Certificate, No Verdict."

Instead of the robot just saying "Yes" or "No," it must follow a strict three-step process before it is allowed to speak (a small code sketch follows the list):

  1. Can it prove it? The robot must be able to produce a "Certificate of Entitlement." This isn't just a confidence score (like "I'm 90% sure"). It's a concrete, checkable piece of evidence (like a math proof, a verified document, or a clear logical step) that shows why it is allowed to make that claim.
  2. Is the proof public? Anyone (or a human auditor) must be able to look at this certificate and say, "Yes, this proof holds up," or "No, this proof is weak."
  3. The Third Option: If the robot cannot produce a solid certificate, it is forbidden from saying "Yes" or "No." Instead, it must say: "Undetermined."
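To make the rule concrete, here is a minimal sketch in Python. It is not from the paper; the names `Status`, `Certificate`, `give_verdict`, and the `check` callback are illustrative assumptions. The gate only answers "Yes" or "No" when an auditor-checkable certificate actually passes inspection; otherwise it falls back to "Undetermined."

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional


class Status(Enum):
    ASSERTED = "asserted"
    DENIED = "denied"
    UNDETERMINED = "undetermined"


@dataclass
class Certificate:
    """A publicly checkable piece of evidence, not a confidence score."""
    claim: str
    supports_claim: bool   # does the evidence prove the claim, or its negation?
    evidence: str          # e.g. a proof sketch, document reference, or derivation


def give_verdict(claim: str,
                 certificate: Optional[Certificate],
                 check: Callable[[Certificate], bool]) -> Status:
    """No Certificate, No Verdict: speak only when a checkable certificate
    for this exact claim survives an independent inspection."""
    if certificate is None or certificate.claim != claim or not check(certificate):
        return Status.UNDETERMINED  # step 3: guessing is forbidden
    return Status.ASSERTED if certificate.supports_claim else Status.DENIED
```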

The Three-Status Interface: A Traffic Light Analogy

Think of the robot's output like a traffic light, but with a twist (a short code sketch of how a reader handles each light follows the list):

  • 🟢 Green (Asserted): The robot has a solid certificate. It can say, "Yes, this is true," and hand you the proof.
  • 🔴 Red (Denied): The robot has a solid certificate proving the opposite. It can say, "No, this is false," and hand you the proof.
  • 🟡 Yellow (Undetermined): The robot is stuck. It doesn't have enough proof to say "Yes" or "No." Crucially, it cannot guess. It must stop and say, "I don't have enough evidence to decide yet."
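Continuing the hypothetical sketch above (same assumed `Status` and `Certificate` types), a caller might handle the three lights like this. The point of the sketch: green and red hand the human a proof to check, while yellow hands the question back to the human.

```python
from typing import Optional

def handle(status: Status, certificate: Optional[Certificate]) -> str:
    """How a reader treats each light; green and red come with homework attached."""
    if status is Status.ASSERTED:
        return f"Claim stands - inspect the attached evidence: {certificate.evidence}"
    if status is Status.DENIED:
        return f"Claim rejected - inspect the attached evidence: {certificate.evidence}"
    return "Undetermined - no verdict yet; the humans keep investigating"
```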

Why is the Yellow light so important?
In current AI, when a system is unsure, it often just picks the most likely answer and says it with confidence. This paper argues that in high-stakes situations (like courts or hospitals), guessing is dangerous. The "Yellow" light forces the system to admit uncertainty, which keeps humans in the loop to do the real thinking.

The "Tooth Social" Story: A Real-World Example

The paper uses a fictional story to explain how this works. Imagine a scandal involving a government minister who might have taken bribes. (The same two phases are sketched in code after the list.)

  • Phase 1 (The Rumors): A journalist reports rumors. The AI checks the evidence. The evidence is messy; some documents support the rumor, others don't. The AI's "proof" is shaky.

    • Old AI: "It's likely the minister is corrupt." (People believe it immediately).
    • New AI (Brouwerian): "Undetermined." It explains: "The evidence is mixed. I cannot prove it yet. Please wait for an official investigation."
    • Result: The public stays calm and waits for facts. The AI didn't start a riot based on a guess.
  • Phase 2 (The Official Report): A congressional committee releases a report with sworn testimony and hard proof. The AI gets a new "certificate."

    • New AI: "Asserted." It says: "Based on the official report (Certificate Attached), the minister is corrupt."
    • Result: The verdict is now backed by a public, checkable record.
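Reusing the hypothetical `give_verdict` gate from the earlier sketch (the `auditor_check` rule and the report identifier below are made-up illustrations, not details from the paper), the two phases would play out like this:

```python
def auditor_check(cert: Certificate) -> bool:
    # Stand-in for a public audit: only accept sworn, official records.
    return cert.evidence.startswith("official-report:")

claim = "the minister took bribes"

# Phase 1: rumors only, the evidence is mixed, so no certificate can be produced.
print(give_verdict(claim, None, auditor_check))    # -> Status.UNDETERMINED

# Phase 2: the committee report supplies a checkable certificate.
report = Certificate(
    claim=claim,
    supports_claim=True,
    evidence="official-report:committee-findings",
)
print(give_verdict(claim, report, auditor_check))  # -> Status.ASSERTED
```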

Why This Matters: The "Adult Education" Metaphor

The author compares this to education for grown-ups.

  • The Old Way: The AI acts like a know-it-all teacher who just gives you the answer key. You memorize the answer, but you don't learn how to think.
  • The New Way: The AI acts like a strict debate partner. It says, "I can only say this if I can show you my work." If it can't show its work, it withholds the verdict and says "Undetermined."

This forces us (the humans) to do the work. We have to look at the certificate, check the evidence, and decide if we agree. It stops the AI from doing our thinking for us.

The Catch: It's Hard to Build

The paper admits this is difficult.

  • It's slower: Checking for certificates takes more computing power than just guessing.
  • It's "boring": The AI will say "Undetermined" a lot more often than current AIs.
  • It's not magic: The AI still doesn't "know" the truth in a human sense. It just follows strict rules about when it is allowed to speak.

The Bottom Line

This paper proposes a new "constitution" for AI. It says: In important matters, confidence is not enough. You need proof.

If an AI cannot show its homework, it must admit it doesn't know. This protects us from being misled by smooth-talking machines and ensures that we, the humans, remain the ones in charge of our own knowledge and democracy. It turns the AI from a "verdict-giver" into a "proof-provider," keeping the power of judgment in human hands.