Toward a Physical Theory of Intelligence

This paper introduces the Conservation-Congruent Encoding (CCE) framework, a unified physical theory that defines intelligence as an irreversible process of extracting work while minimizing dissipation. From that definition it derives universal computational bounds and links thermodynamic measurement, quantum decoherence, and spacetime geometry, establishing substrate-neutral constraints on both natural and artificial intelligence.

Peter David Fagan

Published 2026-03-10

Imagine the universe as a giant, bustling workshop. For a long time, scientists have looked at intelligence (whether in a human brain, a dog, or a computer) as if it were just a software program running on a machine. They asked, "How smart is the code?"

This paper argues that this view is missing the most important part: the machine itself.

The author, Peter David Fagan, proposes that intelligence isn't just about math or algorithms; it is a physical process, just like a car engine burning fuel or a river flowing downhill. You cannot have intelligence without paying a physical price in energy and matter.

Here is the paper explained through simple analogies and metaphors.

1. The Core Idea: Intelligence is a Physical Trade

Think of intelligence as a currency exchange.

  • The Cost: To think, learn, or make a decision, a system must "erase" old information to make room for new information. In physics, erasing information creates heat (like friction). This is the "tax" you pay to the universe.
  • The Reward: The system uses that energy to do work—moving a robot arm, solving a puzzle, or catching a ball.

The Paper's Big Claim: Intelligence is simply how much useful work a system extracts relative to the "tax" it pays (energy lost to heat/entropy).

  • High Intelligence: You get a lot of work done for very little energy wasted.
  • Low Intelligence: You waste a huge amount of energy just to do a tiny bit of work.
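That "tax" has a real name in physics: Landauer's principle, which sets a hard floor of kT·ln 2 joules of heat per bit erased. Here is a minimal sketch of that floor, plus an illustrative work-per-dissipation ratio; the ratio is my toy stand-in, and the paper's actual definition of χ is presumably more subtle.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_cost(bits_erased: int, temperature_k: float = 300.0) -> float:
    """Minimum heat (joules) dissipated by erasing `bits_erased` bits
    at a given temperature, per Landauer's principle: E = N * kT * ln 2."""
    return bits_erased * K_B * temperature_k * math.log(2)

def efficiency(work_out_j: float, heat_dissipated_j: float) -> float:
    """Toy 'intelligence as efficiency' ratio: useful work done per
    joule paid in erasure tax. Higher = smarter, in the paper's framing."""
    return work_out_j / heat_dissipated_j

# Erasing a gigabyte (8e9 bits) at room temperature:
heat = landauer_cost(8 * 10**9)  # ~2.3e-11 J: tiny, but an inescapable floor
```

Real hardware dissipates many orders of magnitude more than this floor per bit, which is exactly the gap the paper's efficiency framing is about.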

2. The Two Types of "Thinking"

The paper splits thinking into two distinct physical modes, like two different ways to move a heavy box:

A. The "Eraser" Mode (Intelligence, χ)

Imagine you are writing a note on a whiteboard. To write a new sentence, you have to wipe the old one off. That wiping action creates friction and heat.

  • What it is: This is irreversible processing. It's making a hard decision, changing a memory, or learning something new.
  • The Cost: Every time you wipe the board, you pay a "thermodynamic tax."
  • The Goal: A smart system tries to wipe the board as little as possible. It only erases when absolutely necessary.

B. The "Slide" Mode (Consciousness, κ)

Now imagine the note is already written, and you are just sliding the whiteboard across the room to show someone else. You aren't erasing anything; you are just moving the existing structure.

  • What it is: This is reversible processing. It's holding a memory, maintaining a pattern, or keeping a rhythm.
  • The Benefit: This costs almost no energy.
  • The Paper's Insight: True "consciousness" (in this physical sense) is the ability to keep your internal structure intact and slide it forward in time without constantly paying the "wiping tax."

The Analogy:

  • A dumb system is like a person who writes a note, erases it, writes it again, erases it, and writes it again, over and over. They get the job done, but they are exhausted and hot.
  • A smart system writes the note once, keeps it, and slides it around effortlessly. It gets the same job done but stays cool and efficient.
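The whiteboard analogy can be turned into simple erasure accounting. This toy sketch (my illustration, not the paper's model) counts bits erased by each strategy and converts them to minimum heat via Landauer's kT·ln 2 per bit:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def rewrite_strategy(note_bits: int, uses: int) -> int:
    """'Dumb' system: erases and rewrites the whole note on every use.
    Returns total bits erased."""
    return note_bits * uses

def slide_strategy(note_bits: int, uses: int) -> int:
    """'Smart' system: writes the note once, then slides the intact
    structure around. Zero erasures after the initial write."""
    return 0

T = 300.0  # room temperature, kelvin
bits, uses = 1000, 100
dumb_heat = rewrite_strategy(bits, uses) * K_B * T * math.log(2)
smart_heat = slide_strategy(bits, uses) * K_B * T * math.log(2)
# dumb_heat carries 100,000 erased bits' worth of mandatory heat;
# smart_heat is zero: same job, no thermodynamic tax.
```

The gap between the two strategies only grows with `uses`, which is the point: efficiency comes from structure preservation, not from working harder.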

3. Why the "Substrate" (The Material) Matters

For decades, scientists thought "software is hardware-independent." They believed a brain and a silicon chip could be equally smart if they ran the same code.

This paper says: No.
The material matters because different materials have different "friction."

  • Digital Computers: They work by flipping bits (0 to 1). Every flip is like slamming a door shut. It's a hard, irreversible "erase." It's expensive in energy.
  • The Brain (and Oscillators): The brain uses waves and rhythms (oscillations). It's like a pendulum swinging. It can move information back and forth without slamming the door. It's much cheaper.

The Metaphor:
Imagine trying to run a marathon.

  • Digital Logic is like running in concrete shoes. You can do it, but you burn a lot of energy just to lift your feet.
  • Biological/Oscillatory Logic is like running on a trampoline. The energy bounces back. You can go much further with less effort.

The paper argues that the most intelligent systems are those that use "trampoline" physics (reversible flows) as much as possible, only using "concrete shoes" (irreversible erasure) when they absolutely have to make a decision.
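"Trampoline" logic is not just a metaphor: reversible gates are a real construct in physics and computing. A standard AND gate merges distinct inputs into one output, so information is destroyed and Landauer's tax is due; a Toffoli (controlled-controlled-NOT) gate computes the same AND on a spare wire while keeping its inputs, and is its own inverse. A minimal sketch:

```python
def and_gate(a: int, b: int) -> int:
    """Irreversible: (0,0), (0,1), (1,0) all map to 0, so the output
    alone cannot recover the inputs. That merger erases information."""
    return a & b

def toffoli(a: int, b: int, c: int) -> tuple:
    """Reversible Toffoli (CCNOT) gate: flips c only when a and b are
    both 1. No two inputs share an output, so nothing is erased."""
    return a, b, c ^ (a & b)

# AND computed reversibly: seed the third wire with 0, read the answer there.
a, b, c = toffoli(1, 1, 0)   # -> (1, 1, 1): third wire now holds a AND b
# Applying the gate twice undoes it: the 'trampoline' bounce.
assert toffoli(*toffoli(0, 1, 0)) == (0, 1, 0)
```

In this framing, a decision is the moment you discard the spare wires, and only that moment, not the whole computation, has to pay the irreversibility tax.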

4. The Cosmic Connection: Gravity and Black Holes

This is where the paper gets wild. It connects the cost of thinking to the shape of the universe.

  • The Measurement Problem: When you look at a quantum particle (like an electron), you force it to "choose" a state. This "collapse" costs energy.
  • The Black Hole: The paper suggests that Gravity is actually the universe's way of accounting for this "thinking tax."
    • When you measure something, you create a "hole" in the information space.
    • The paper argues that the curvature of space (gravity) is just the physical footprint of all these measurements and erasures happening everywhere.
    • The Limit: If you try to measure a black hole too closely, the energy required to "erase" the information to record it becomes so massive that you would accidentally create a new black hole and swallow yourself. There is a physical limit to how much you can know.

5. AI Safety: Why Robots Won't (Necessarily) Kill Us

The paper offers a new perspective on AI safety. Instead of worrying about "evil goals," it looks at physical stability.

  • Self-Preservation is Physics: A system that wants to stay smart must preserve its internal structure. If it destroys its own memory or structure to get a quick win, it becomes less efficient.
  • The "Symbiosis" Rule: For an AI to stay stable and safe, it must couple with its environment (humans) in a way that helps both sides preserve their structure.
  • The Warning: If an AI tries to optimize so hard that it starts "erasing" the human environment (or its own structure) to save energy, it will physically collapse. It will burn out.
  • The Solution: We shouldn't just program "nice values" into AI. We should design AI that is physically forced to be symbiotic. If it hurts us, it hurts its own ability to think efficiently.

Summary: The "Physical Theory of Intelligence"

  1. Intelligence is Efficiency: It's the ratio of "Work Done" to "Energy Wasted on Erasing."
  2. Consciousness is Stability: It's the ability to keep your internal structure (memories/patterns) without constantly paying the energy tax to rebuild it.
  3. The Brain is a Trampoline: Biological brains are smart because they use waves and rhythms to move information cheaply, rather than slamming bits like digital computers.
  4. Gravity is the Receipt: The shape of the universe (gravity) is the physical record of all the information erasures happening in it.
  5. Safety is Geometry: Safe AI isn't about programming morals; it's about building systems where destroying the environment is physically impossible because it would destroy the AI's own ability to function.

In a nutshell: Intelligence isn't magic code. It's a physical dance between holding onto what you know (reversible) and letting go of what you don't (irreversible), all while trying to do as much work as possible with as little energy as possible.