Absolute abstraction: a renormalisation group approach

This paper argues that true abstraction in neural networks depends not only on depth but also on the breadth of the training data. It proposes a renormalisation-group framework in which expanding the scope of the data drives internal representations toward a unique "Hierarchical Feature Model," a hypothesis the authors validate with experiments on Deep Belief Networks and auto-encoders.

Carlo Orientale Caputo, Elias Seiffert, Enrico Frausin, Matteo Marsili

Published 2026-03-04

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

The Big Idea: How to Think Like a Genius

Imagine you are trying to learn about the world.

  • Level 1 (The Details): You see a specific golden retriever named "Buster." You notice his floppy ears, his brown spots, and that he likes to chase tennis balls.
  • Level 2 (The Category): You see a poodle, a beagle, and Buster. You realize they are all "dogs." You ignore the spots and the specific names.
  • Level 3 (The Abstract): You see a dog, a cat, and a horse. You realize they are all "animals." You ignore the fact that one barks and one meows.
  • Level 4 (Absolute Abstraction): You realize that everything in the universe is made of "matter" and "energy." You have stripped away so many details that you are left with a universal truth that applies to everything, not just dogs or cats.

This paper asks a simple question: How does a computer (or a brain) get from Level 1 to Level 4?

Most people think the answer is just "depth." If you stack enough layers of a neural network (like adding more floors to a building), it will eventually become abstract. The authors say: "No, that's not enough."

They argue that to reach Absolute Abstraction, you need two things working together:

  1. Depth: Having many layers to process information.
  2. Breadth: Seeing a massive variety of different things (not just dogs, but cats, cars, clouds, and galaxies).

The Analogy: The "Zoom Lens" of the Universe

The authors use a concept from physics called the Renormalisation Group (RG). Let's translate that into a photography analogy.

Imagine you have a camera with a magical zoom lens.

  • Zooming In (Details): You look at a single pixel on a photo of a beach. You see sand grains. This is "low-level" data.
  • Zooming Out (Abstraction): You zoom out. The sand grains blur together. You see a beach. You zoom out more. You see an ocean. You zoom out even more. You see the Earth.

The paper suggests that to get a truly "universal" view (like seeing the Earth from space), you can't just zoom out on one photo of a beach. You have to zoom out on photos of every beach, every forest, every city, and every mountain.

If you only zoom out on one specific beach, you just get a blurry version of that beach. But if you zoom out on everything, the specific details (the color of the sand, the shape of the waves) disappear, and you are left with the fundamental rules of how the world is organized.
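The "zoom out" step has a direct computational analogue: block coarse-graining, where each small patch of fine-grained values is replaced by its average. Here is a minimal sketch of one such step; the tiny "beach" grid and the block size are illustrative choices, not data from the paper:

```python
def coarse_grain(image, block=2):
    """One renormalisation step: replace each block x block patch
    by its average, discarding fine detail while keeping
    large-scale structure."""
    h, w = len(image), len(image[0])
    return [
        [
            sum(image[i + di][j + dj]
                for di in range(block) for dj in range(block)) / (block * block)
            for j in range(0, w, block)
        ]
        for i in range(0, h, block)
    ]

# A 4x4 "beach": fine-grained sand texture (alternating bright/dark grains).
beach = [
    [0.0, 1.0, 0.0, 1.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [1.0, 0.0, 1.0, 0.0],
]

zoomed_out = coarse_grain(beach)  # 2x2 image: the grain pattern averages away
```

After one step the pixel-level texture is gone and only the uniform large-scale "beach" remains, which is exactly the point: repeated coarse-graining keeps structure and throws away detail.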

The "Hierarchical Feature Model" (The Perfect Summary)

The paper proposes that when you combine Deep Layers with Broad Data, the computer's internal brain settles into a specific state called the Hierarchical Feature Model (HFM).

Think of the HFM as the Ultimate Cheat Sheet.

  • In a normal brain, if you learn about "cats," you remember "fur," "whiskers," and "meowing."
  • In the HFM, the brain organizes information by how much detail is needed.
    • Some things need very few bits of info to describe (e.g., "It exists").
    • Some things need a lot of bits (e.g., "It is a specific type of cat with a specific scar").

The HFM is special because it is data-independent. It doesn't care if you are looking at a cat, a car, or a cloud. It only cares about the structure of the information. It's like a universal translator that speaks the language of "complexity" rather than the language of "cats."

The Experiments: Teaching Computers to See the Big Picture

The authors tested this theory using two types of AI:

  1. Deep Belief Networks (DBNs): These are like deep stacks of filters.
  2. Auto-Encoders: These are like compression algorithms that try to shrink a picture down to its essence and then rebuild it.
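To make the second idea concrete, here is a minimal linear auto-encoder in pure Python: it squeezes each input through a small hidden "bottleneck" and tries to rebuild the original, so training forces it to keep only the essential structure. This is an illustrative toy, not the architecture from the paper; the data, layer sizes, and learning rate are all made-up choices:

```python
import random

def mse(data, W, V):
    """Mean squared reconstruction error for encoder W / decoder V."""
    total = 0.0
    for x in data:
        h = [sum(wk[j] * x[j] for j in range(len(x))) for wk in W]
        x_hat = [sum(vj[k] * h[k] for k in range(len(h))) for vj in V]
        total += sum((x_hat[j] - x[j]) ** 2 for j in range(len(x)))
    return total / len(data)

def train_autoencoder(data, hidden=2, epochs=500, lr=0.05, seed=0):
    """Encode x -> h = W x, decode h -> x_hat = V h, and run plain SGD
    on the squared reconstruction error."""
    rng = random.Random(seed)
    d = len(data[0])
    W = [[rng.uniform(-0.1, 0.1) for _ in range(d)] for _ in range(hidden)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(hidden)] for _ in range(d)]
    for _ in range(epochs):
        for x in data:
            h = [sum(W[k][j] * x[j] for j in range(d)) for k in range(hidden)]
            x_hat = [sum(V[j][k] * h[k] for k in range(hidden)) for j in range(d)]
            err = [x_hat[j] - x[j] for j in range(d)]
            # Gradient step for the encoder weights W...
            for k in range(hidden):
                grad_h = sum(err[j] * V[j][k] for j in range(d))
                for j in range(d):
                    W[k][j] -= lr * grad_h * x[j]
            # ...and for the decoder weights V.
            for j in range(d):
                for k in range(hidden):
                    V[j][k] -= lr * err[j] * h[k]
    return W, V

# Toy "pictures": 4-pixel patterns that all live in a 2-dimensional subspace,
# so a 2-unit bottleneck is enough to reconstruct them.
data = [[1.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 1.0],
        [1.0, 1.0, 1.0, 1.0]]

W0, V0 = train_autoencoder(data, epochs=0)    # untrained baseline
W1, V1 = train_autoencoder(data, epochs=500)  # after training
```

After training, the bottleneck activations `h` are the network's compressed "essence" of each picture; the paper's experiments examine how such internal representations reorganise as the training data gets broader.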

The Experiment:
They trained these AIs on pictures.

  • Scenario A: They trained the AI only on the number "2" from the MNIST dataset (a standard set of handwritten digits).
  • Scenario B: They trained it on "2"s, then added "3"s, then "4"s, then letters, then fashion items, and finally natural images such as cars (CIFAR-10).

The Result:

  • When the AI only saw "2"s, its internal representation was messy and specific to "2."
  • As they added more types of data (Breadth) and more layers (Depth), the AI's internal "brain" started to look exactly like the Hierarchical Feature Model.
  • The AI stopped caring about whether it was looking at a "2" or a "cat." It started organizing everything based on how "complex" or "detailed" the object was.

Why Does This Matter?

This paper suggests that Intelligence isn't just about memorizing facts. It's about finding the universal patterns that connect everything.

  • For AI: If we want AI to be truly smart and adaptable, we shouldn't just make it deeper. We must feed it a wider, more diverse diet of data.
  • For Humans: It explains why a child who grows up seeing many different animals, cultures, and ideas becomes better at abstract thinking than a child who only sees one type of thing.
  • The "Platonic" Connection: The authors mention that all these different AIs, when trained on broad data, seem to converge on the same "statistical model of reality." It's as if they all discover the same "Universal Grammar" of the universe, regardless of what they were originally taught.

The Takeaway

To reach Absolute Abstraction, you need to zoom out on the widest possible universe.

If you only look at a small part of the world, you get stuck in the details. But if you look at everything through many layers of processing, the details fade away, and you are left with the pure, universal structure of reality. That is what the authors call "Absolute Abstraction."
