Can machines be uncertain?

This paper explores how AI systems can realize uncertainty through functionalist and behavioral lenses, distinguishing between epistemic and subjective forms, and proposing that some uncertain states function as interrogative attitudes whose contents are questions rather than propositions.

Luis Rosa

Published 2026-03-05

The Big Question: Can a Robot "Not Know"?

Imagine you are asking a robot for advice.

  • Scenario A: The robot says, "I am 100% sure it will rain tomorrow."
  • Scenario B: The robot says, "I'm not sure. It might rain, or it might not."

We want our AI to be able to do Scenario B. We don't want it to "jump to conclusions" when it doesn't have enough information. But here is the tricky part: Does the robot actually feel uncertain, or is it just pretending?

The author, Luis Rosa, asks: Can a machine truly be in a state of uncertainty, or is it just that the data it holds is messy?

To answer this, he splits "uncertainty" into two types:

  1. Epistemic Uncertainty (The Data is Messy): The information the robot has is incomplete or conflicting. (Like a detective who has a missing clue).
  2. Subjective Uncertainty (The Robot is Hesitant): The robot's own "mind" is undecided. It hasn't made up its mind yet. (Like the detective saying, "I don't know who did it yet, so I won't arrest anyone.")

The Goal: We want AI to have Subjective Uncertainty. We want the machine itself to say, "I'm on the fence," rather than just having messy data but still forcing a decision.


The Three Types of AI Brains

Rosa looks at three different "architectures" (types of brains) to see how they handle uncertainty:

1. Symbolic AI (The Rule-Follower)

Think of this as a super-strict librarian who follows a giant rulebook.

  • How it works: It uses clear sentences like "IF the patient has a fever AND a cough, THEN they have the flu."
  • How it gets uncertain:
    • Probabilistic: It can attach a number to a rule. "There is a 90% chance the patient has the flu." It writes this down in its rulebook.
    • Categorical (Questioning): It can simply write down a question mark. Instead of answering "Yes" or "No," it stores the question: "Does the patient have the flu?" and admits it doesn't have the answer yet.
  • The Catch: Sometimes the librarian writes "90% chance" in its notes, but its "mouth" (the output) is programmed to only speak in absolutes. So, it writes "90%" internally, but when you ask it, it just says, "The patient has the flu."
    • The Dilemma: Is it uncertain? Internally, yes. Behaviorally, no.
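The two symbolic routes above, and the internal/behavioral mismatch in "The Catch," can be sketched in a few lines. This is a minimal illustration with hypothetical names, not the paper's implementation: a knowledge base that stores probabilities and open questions internally, wired to a "mouth" that only speaks in absolutes.

```python
class SymbolicKB:
    """Toy symbolic knowledge base (hypothetical, for illustration)."""

    def __init__(self):
        self.beliefs = {}            # proposition -> stored probability
        self.open_questions = set()  # questions stored without answers

    def assert_with_confidence(self, proposition, prob):
        """Probabilistic route: write the number down in the rulebook."""
        self.beliefs[proposition] = prob

    def pose_question(self, question):
        """Categorical route: store the question itself, not an answer."""
        self.open_questions.add(question)

    def answer(self, claim):
        """The 'mouth': programmed to speak in absolutes, so the stored
        probability never reaches the output -- the mismatch in The Catch."""
        if claim in self.open_questions:
            return "I don't know yet."
        prob = self.beliefs.get(claim)
        if prob is None:
            return "I don't know yet."
        return "Yes" if prob >= 0.5 else "No"


kb = SymbolicKB()
kb.assert_with_confidence("patient has flu", 0.9)
kb.pose_question("patient has pneumonia")
print(kb.answer("patient has flu"))        # "Yes" -- the 90% is dropped
print(kb.answer("patient has pneumonia"))  # "I don't know yet."
```

Internally the system is uncertain (it stores 0.9, not 1.0, and holds an open question); behaviorally, its answer about the flu is absolute.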

2. Connectionist AI (The Neural Network)

Think of this as a giant web of neurons (like a human brain) that learns by adjusting the strength of connections between dots. It doesn't use rulebooks; it uses patterns.

  • How it works: You show it a picture of a bear. The dots light up, and the signal travels through the web to the output.
  • How it gets uncertain:
    • Distributed Uncertainty: Imagine the web is "confused" about whether bears are mammals. The connections are set up in a way that sometimes it says "Yes," sometimes "No," and sometimes it just... doesn't light up the "Mammal" or "Non-Mammal" button at all. The whole web is in a state of "I don't know."
    • Point-wise Uncertainty: The network outputs a number like "0.6" (60% confidence). It's saying, "I'm leaning this way, but I'm not sure."
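Point-wise uncertainty is the easiest to sketch. In a typical classifier, raw output scores are pushed through a softmax to become a probability distribution, and the top probability serves as a confidence score (the class names and scores below are made up for illustration):

```python
import math

def softmax(logits):
    """Turn raw network scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for the classes ("mammal", "not mammal")
logits = [0.4, 0.0]
probs = softmax(logits)
confidence = max(probs)

print(round(confidence, 2))  # 0.6 -- "I'm leaning this way, but I'm not sure."
```

A confidence of 0.6 rather than 0.99 is the network's point-wise way of saying it is leaning toward "mammal" without committing.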

3. Hybrid Systems (The Best of Both Worlds)

These are modern systems (like Large Language Models) that mix the rule-following of Symbolic AI with the pattern-matching of Neural Networks. They can be uncertain in all the ways mentioned above.


The "Level Split" Problem: The Confused Manager

This is the most important part of the paper. Rosa points out a weird glitch that happens when you put an "uncertain" AI inside a bigger system.

The Analogy: The Nervous Intern and the Boss
Imagine a company where:

  • The Intern (The AI Sub-system): Is very smart but cautious. When asked about a project, the Intern says, "I'm only 85% sure this will work." (This is Subjective Uncertainty).
  • The Boss (The Larger System): The Boss has a rule: "If the Intern is over 80% sure, we treat it as a fact and move forward."

The Result:
The Intern is genuinely uncertain (85% is not 100%). But the Boss ignores that hesitation and acts as if the project is 100% guaranteed. The whole company acts confident, even though the "brain" doing the thinking is hesitant.
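The Intern/Boss setup is just a threshold rule wrapped around a graded estimate. Here is a minimal sketch (the function names and the fixed 0.85 are hypothetical stand-ins, not anything from the paper):

```python
def intern_estimate(project):
    """Sub-system: a graded, subjectively uncertain verdict.
    Fixed at 0.85 here, standing in for a real model's output."""
    return 0.85

def boss_decision(project, threshold=0.8):
    """Larger system: anything above the threshold is collapsed into a
    flat 'go' -- the intern's 15% of doubt never shows up in behavior."""
    if intern_estimate(project) >= threshold:
        return "proceed as fact"
    return "hold"

print(boss_decision("project X"))  # "proceed as fact", despite the 85%
```

The thresholding step is where the level split happens: the sub-system's graded state exists, but the whole system's behavior is binary and confident.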

Rosa's Verdict:
He argues that in this case, the whole system is NOT uncertain.

  • Why? Because "uncertainty" isn't just about having a shaky internal feeling. It's about what you do with that feeling.
  • If the system acts confident, makes bold decisions, and doesn't hesitate, it isn't truly uncertain. The Intern's "shakiness" was just a glitch that got overridden.
  • The Solution: We shouldn't say the AI is uncertain just because its internal math is fuzzy. We should only say it's uncertain if the whole system behaves like an uncertain agent (hesitates, asks for more info, or says "I don't know").

The Takeaway

  1. AI can be uncertain: They can realize this by storing questions, assigning probabilities, or having "fuzzy" connections in their neural networks.
  2. But behavior matters: It doesn't matter if the AI's internal math is shaky. If the AI acts like it's 100% sure, then it is 100% sure.
  3. The "Uncertainty" is in the action: For a machine to be truly uncertain, it must behave like a human who is on the fence—hesitating, hedging their bets, or admitting they don't know. If it jumps to a conclusion, it's not uncertain, even if its internal data was messy.

In short: A machine isn't "uncertain" just because it has a doubt in its code. It's only uncertain if that doubt changes how it acts. If it ignores its own doubts and acts confidently, it's not uncertain at all.
