From sequences to schemas: low-rank recurrent dynamics underlie abstract relational representations

This study demonstrates that recurrent neural networks trained to classify sequences by their latent algebraic patterns spontaneously develop low-rank recurrent connectivity. That low-rank structure organizes the population state space and enables abstract, identity-independent relational representations that support rapid generalization.

Boboeva, V., Pezzotta, A., Dimitriadis, G., Akrami, A.

Published 2026-04-10

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine you are listening to a series of beeps. Some sound like "beep-beep-boop," others like "beep-boop-beep," and others like "boop-beep-beep."

Even though the actual sounds change, your brain is incredibly smart. It doesn't just hear the noise; it hears the pattern. It realizes that "beep-beep-boop" follows the same rule as "click-click-whistle" or "tap-tap-snap." It sees the structure: Same, Same, Different.

This paper asks a big question: How does a brain (or a computer brain) learn to see these invisible patterns instead of just memorizing the specific sounds?

Here is the story of what the researchers found, explained simply.

1. The Problem: Memorizing vs. Understanding

Imagine you are teaching a robot to recognize these patterns.

  • The "Memorizer" Robot: If you show it "beep-beep-boop," it memorizes that exact sound. If you then show it "click-click-whistle," it gets confused because it has never heard those specific sounds before. It's like a student who memorizes the answers to a math test but can't solve a new problem with different numbers.
  • The "Understander" Robot: This robot learns the rule. It realizes that the first two items are the same, and the third is different. It can instantly recognize "click-click-whistle" as the same pattern, even though the sounds are totally new.

The researchers wanted to know: What happens inside the robot's "brain" (a Recurrent Neural Network) to make it switch from a memorizer to an understander?
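The difference between the two robots can be sketched in code. Everything here (the pattern names, the `make_sequence` and `classify` helpers) is our illustration, not the paper's actual task code — an "understander" only needs the same/different structure, never the specific sounds:

```python
# Illustrative sketch -- pattern names and helpers are hypothetical,
# not the paper's task code.
PATTERNS = {"AAB": (0, 0, 1), "ABA": (0, 1, 0), "BAA": (1, 0, 0)}

def make_sequence(pattern, tokens):
    """Instantiate an abstract pattern with concrete tokens."""
    return tuple(tokens[i] for i in PATTERNS[pattern])

def classify(seq):
    """An 'understander': recover the pattern from same/different structure alone."""
    a, b, c = seq
    if a == b != c:
        return "AAB"
    if a == c != b:
        return "ABA"
    if b == c != a:
        return "BAA"
    return "other"

# Same rule, completely new sounds -- the understander doesn't care:
assert classify(("beep", "beep", "boop")) == "AAB"
assert classify(("click", "click", "whistle")) == "AAB"
```

A memorizer, by contrast, would be a lookup table keyed on exact sequences, so `("click", "click", "whistle")` would be a miss.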

2. The Discovery: The "Low-Rank" Skeleton

The researchers built a computer brain and trained it to classify these patterns. They found that when the brain successfully learned the rules, its internal wiring changed in a very specific way.

Think of a standard computer brain as a giant, messy web of wires where every neuron is connected to every other neuron. It's chaotic and huge.

But when the brain learned the abstract pattern, it didn't just get smarter; it got simpler. It built a tiny, organized skeleton inside that messy web.

  • The Metaphor: Imagine a chaotic city with millions of random roads. Suddenly, the city planners build three main highways that connect all the important districts. Once those highways are built, traffic flows smoothly, and the city can navigate efficiently without needing every single side street.
  • The Science: They call this a "low-rank" structure. The brain realized it didn't need millions of connections to understand the pattern; it only needed a few specific, strong pathways (like those highways) to carry the "Same/Different" logic.
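Here is a minimal numerical sketch of what "low-rank" means. The sizes and the rank of 3 are ours, chosen to echo the highway metaphor, and are not taken from the paper:

```python
import numpy as np

# Illustrative sketch: a low-rank recurrent weight matrix is a sum of a
# few outer products -- a handful of "highways" instead of a dense random
# web. Sizes and rank are hypothetical, not the paper's.
rng = np.random.default_rng(0)
n_neurons, rank = 500, 3

# Full-rank chaos: every neuron connected to every other with random weights.
W_random = rng.normal(size=(n_neurons, n_neurons)) / np.sqrt(n_neurons)

# Low-rank skeleton: only `rank` structured pathways.
U = rng.normal(size=(n_neurons, rank))
V = rng.normal(size=(n_neurons, rank))
W_lowrank = U @ V.T / n_neurons

print(np.linalg.matrix_rank(W_random))   # typically 500: no simple structure
print(np.linalg.matrix_rank(W_lowrank))  # 3: three "highways" carry everything
```

Both matrices have the same number of entries; the low-rank one simply concentrates all of its structure into three directions.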

3. The "Tree" in the Mind

The researchers found that this internal skeleton organizes information like a family tree.

  • When the brain hears the first "Same" sound, it branches one way.
  • When it hears a "Different" sound, it branches the other way.
  • As the sequence continues, the brain builds a mental map that looks like a tree.

If you look at how the neurons fire, they aren't just buzzing randomly. They are tracing the branches of this tree. This "tree-like geometry" is the physical proof that the brain has built a Schema (a mental framework) for the pattern.
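One way to sketch that tree, under our own simplified encoding (not the paper's analysis): label every step by whether the current sound repeats an earlier one. Sequences that share a relational structure then trace the same branch even when the sounds differ:

```python
# Illustrative sketch: map a sequence to its path through a "same/different"
# tree. The encoding is ours, a simplification of the paper's geometry.
def relational_path(seq):
    path = []
    for i, tok in enumerate(seq):
        path.append("same" if tok in seq[:i] else "new")
    return tuple(path)

# Different tokens, same branch of the tree:
assert relational_path(("beep", "beep", "boop")) == ("new", "same", "new")
assert relational_path(("click", "click", "whistle")) == ("new", "same", "new")

# A different relational structure takes a different branch:
assert relational_path(("beep", "boop", "beep")) == ("new", "new", "same")
```

In the trained network, the analogous claim is geometric: hidden states for sequences with the same relational path cluster on the same branch of a tree-shaped manifold.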

4. The "Time-Traveler" Connection

The most fascinating part is how this skeleton works. The researchers found one specific "highway" (a dominant connection) that acts like a time-traveling messenger.

  • Without this messenger: The brain only remembers the very last thing it heard. It's like having a short attention span. It knows "boop" was the last sound, but it forgot that the two sounds before it were the same.
  • With this messenger: This specific connection carries the history forward. It says, "Hey, remember? Two steps ago we had a 'Same' pair." It integrates the past with the present.

When the researchers surgically removed this specific connection in their computer model, the brain instantly forgot the pattern. It could still hear the sounds, but it couldn't remember the relationship between them anymore. It lost its ability to generalize.
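The mechanics of such a "lesion" can be sketched with a singular value decomposition, which splits any weight matrix into ordered rank-1 components. The toy matrix and ablation recipe below are our illustration, not the authors' code:

```python
import numpy as np

# Illustrative sketch of the lesioning idea: find the dominant rank-1
# component of a weight matrix via SVD and subtract it out.
rng = np.random.default_rng(1)
n = 200

# A matrix with one strong "highway" (a rank-1 spike) plus weak noise.
u = rng.normal(size=(n, 1))
v = rng.normal(size=(n, 1))
W = u @ v.T + 0.01 * rng.normal(size=(n, n))

# SVD orders the rank-1 components by strength; the first is the highway.
U, s, Vt = np.linalg.svd(W)
W_ablated = W - s[0] * np.outer(U[:, 0], Vt[0])

# After ablation only the weak background connectivity remains.
print(s[0], np.linalg.svd(W_ablated, compute_uv=False)[0])
```

Removing one component out of hundreds leaves almost every individual weight nearly unchanged, yet it deletes the single pathway that carried the history signal.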

5. The Twist: It Depends on the Goal

Here is the surprising part: The brain only builds this fancy "tree skeleton" if you ask it the right question.

  • Task A (Classification): "Listen to the whole sequence, then tell me what pattern it is."
    • Result: The brain builds the low-rank skeleton and the tree. It learns the rule.
  • Task B (Prediction): "Listen to the first sound, guess the next one. Then listen to the second, guess the third."
    • Result: The brain stays messy. It doesn't build the tree. It just memorizes the immediate next step because it doesn't need to see the whole picture to win.

The Lesson: The brain only builds abstract understanding if the task forces it to look at the big picture. If the task is just "what comes next," the brain stays lazy and local.
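The contrast between the two objectives can be sketched on a toy, untrained RNN (all sizes and weights below are illustrative, not the paper's). The key difference is which hidden states the loss is allowed to read:

```python
import numpy as np

# Hypothetical toy setup -- sizes and weights are illustrative and untrained.
rng = np.random.default_rng(2)
n, d = 50, 8                          # hidden units, token dimension
W = 0.1 * rng.normal(size=(n, n))     # recurrent weights
W_in = rng.normal(size=(n, d))        # input weights
W_cls = rng.normal(size=(3, n))       # readout for AAB / ABA / BAA
W_pred = rng.normal(size=(d, n))      # readout for next-token guesses

def rnn_states(inputs):
    """Unroll a simple tanh RNN; return the hidden state after each input."""
    h, states = np.zeros(n), []
    for x in inputs:
        h = np.tanh(W @ h + W_in @ x)
        states.append(h)
    return states

beep, boop = rng.normal(size=d), rng.normal(size=d)
states = rnn_states([beep, beep, boop])   # a "beep-beep-boop"

# Task A (classification): the loss reads ONLY the final state, so solving
# it forces that state to summarize the whole sequence.
pattern_scores = W_cls @ states[-1]

# Task B (prediction): the loss reads EVERY state but only asks for the
# next token -- a local objective that never rewards remembering two steps back.
next_guesses = [W_pred @ h for h in states[:-1]]
```

Nothing about the architecture differs between the two tasks; only the objective does, and that is what decides whether the low-rank skeleton emerges.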

6. The Superpower: Transfer Learning

Finally, they tested if this "skeleton" could be reused.

  • They took a brain that had learned the patterns (the "Understander") and gave its internal wiring to a new brain that was trying to learn a different task (predicting the next sound).
  • Result: The new brain learned much faster and got better at generalizing.
  • Why? Because the new brain didn't have to build the highways from scratch; it was given the blueprint.

Crucially, this only worked if the first brain had learned the abstract rules. If they instead transferred wiring from a brain that had merely memorized the sounds (without understanding the rules), the new brain got no benefit. This shows that the "skeleton" is a reusable tool for understanding, not just a memory bank.
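A minimal sketch of the transfer setup, with hypothetical names and sizes: the new network inherits the recurrent matrix and only its readout remains trainable:

```python
import numpy as np

# Illustrative sketch of weight transfer -- names and sizes are ours.
rng = np.random.default_rng(3)
n = 100

# Pretend this low-rank matrix came from a network that learned the rules.
u, v = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
W_understander = u @ v.T / n

class TransferNet:
    def __init__(self, W_recurrent):
        self.W = W_recurrent.copy()      # inherited skeleton, kept frozen
        self.readout = np.zeros(n)       # the only part trained on the new task

    def trainable_parameters(self):
        return [self.readout]            # W is excluded from training

net = TransferNet(W_understander)
assert np.allclose(net.W, W_understander)     # the blueprint is inherited
assert len(net.trainable_parameters()) == 1   # only the readout will learn
```

The experiment's punchline is in which `W_understander` you plug in: a low-rank matrix from a rule-learner helps, whereas wiring from a memorizer does not.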

Summary: What Does This Mean for Us?

This paper gives us a blueprint for how intelligence works:

  1. Abstraction is physical: When we learn a rule, our brains physically rewire to create a simpler, organized structure (a low-rank skeleton).
  2. Context matters: We only build these structures if the situation forces us to look at the "big picture" rather than just the immediate next step.
  3. Memory is a scaffold: Once we build this mental scaffold, we can use it to learn new things incredibly fast. This is how we go from being a baby who needs to relearn everything, to an adult who can instantly understand a new situation because it fits an old pattern.

In short: Intelligence isn't about having a bigger brain; it's about building better, simpler highways inside the brain to carry the truth of the world.
