Evidence for predictive computations in a brain hierarchy during a visual search task

By comparing LFP data from a visual search task against three computational models, this study provides evidence for a hybrid account of brain hierarchy function where deep-layer activity aligns with Predictive Coding's input-specific optimization, while superficial-layer dynamics are better explained by predictive routing mechanisms.

Pinotsis, D., Bastos, A., Miller, E. K.

Published 2026-04-09

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

The Big Question: How Does the Brain "Think"?

Imagine your brain is a massive, high-tech newsroom. Every second, millions of sensors (your eyes, ears, skin) are screaming at the newsroom with raw data: "Red light!" "Dog barking!" "Hot coffee!"

If the brain had to process every single piece of that raw data, it would crash. So, the brain has a secret strategy: It predicts the future. Instead of reacting to everything, it builds a mental model of the world and only pays attention to things that surprise it.

For years, scientists have argued about how exactly this prediction machine works. There are three main theories (algorithms) on the table:

  1. Predictive Coding: The brain is a strict editor. It constantly compares what it expects with what it sees. If there's a mismatch (an "error"), it screams, "Fix this!" and sends that error message up the chain.
  2. Predictive Routing: The brain is a smart traffic cop. It doesn't calculate errors. Instead, it uses a "mute button." If it predicts a sound, it mutes the volume of that sound so only the unexpected noises get through.
  3. Autoencoders: The brain is a compression algorithm (like a ZIP file). It just tries to shrink the data as it moves up the chain, keeping the most important bits and throwing away the rest, without much back-and-forth conversation.
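As a rough illustration, the core computation of each theory can be contrasted on a single toy input. This is a hypothetical sketch for intuition only; the values, gate threshold, and weights are made up and are not the paper's actual models:

```python
# Toy contrast of the three theories on a scalar "sensory input" x,
# given a top-down prediction p. All numbers are illustrative.

x, p = 5.0, 3.0  # input and prediction (hypothetical values)

# 1. Predictive Coding: send the mismatch (prediction error) up the chain.
error = x - p                              # only the surprise travels upward

# 2. Predictive Routing: mute the input when it matches the prediction.
gate = 0.0 if abs(x - p) < 1.0 else 1.0    # "mute button": 0 = suppressed
routed = gate * x                          # expected inputs silenced, not subtracted

# 3. Autoencoder: compress the input, keep the gist, discard the rest.
w_enc, w_dec = 0.5, 2.0                    # toy encode/decode weights
reconstructed = w_dec * (w_enc * x)        # shrink, then expand; no error feedback
```

Note the key difference: predictive coding does arithmetic on the mismatch, routing only decides what to silence, and the autoencoder never compares against a prediction at all.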

The Experiment: The Visual Search Task

The researchers wanted to find out which of these three theories best matches real brain activity. They didn't just guess; they looked inside the brains of monkeys while the animals played a game.

The Game: The monkeys had to stare at a screen and find a specific object (like a blue car) hidden among distractors (like a green block).
The Tool: They used special "laminar probes" (think of them as vertical microphones) inserted into three specific brain areas:

  • V4: The "Junior Reporter" (sees the raw picture).
  • 7A: The "Editor" (in the middle).
  • PFC: The "Chief Editor" (the boss, holding the plan).

These probes could listen to different layers of the brain tissue: the Deep Layers (the bottom floor of the building) and the Superficial Layers (the top floor).

The Detective Work: Listening to the Layers

The researchers built three computer models, one for each theory, and tried to fit them to the electrical signals (LFPs) recorded from the monkeys' brains. They asked: Which model's "voice" sounds most like the actual brain activity?
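The comparison logic can be sketched in miniature. The study used formal model fitting on real LFP recordings; the toy version below (with made-up traces and a simple squared-error score) just shows the shape of the question, "which model's voice sounds most like the data?":

```python
# Hypothetical sketch of model comparison: score each candidate model's
# simulated signal against a recorded trace; the lowest score wins.

def fit_score(simulated, recorded):
    """Lower is better: summed squared difference between the two traces."""
    return sum((s - r) ** 2 for s, r in zip(simulated, recorded))

recorded_lfp = [0.1, 0.5, 0.9, 0.4]        # made-up "recorded" trace
candidates = {                             # made-up "simulated" traces
    "predictive_coding":  [0.1, 0.4, 0.8, 0.4],
    "predictive_routing": [0.0, 0.2, 0.9, 0.1],
    "autoencoder":        [0.5, 0.5, 0.5, 0.5],
}

best = min(candidates, key=lambda name: fit_score(candidates[name], recorded_lfp))
```

In the real study this contest was run separately for each brain area and layer, which is what allowed different winners on different "floors."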

Here is what they found, broken down by floor:

1. The Deep Layers (The Bottom Floor): The "Predictive Coding" Winner

In the deep layers, the brain was acting like a strict editor.

  • The Analogy: Imagine the Chief Editor (PFC) sends a memo down to the Junior Reporter (V4) saying, "Expect a blue car." The Junior Reporter looks at the screen. If it's a blue car, great. If it's a green block, the Junior Reporter sends a loud "ERROR!" signal back up.
  • The Result: The data showed that the deep layers were constantly exchanging messages up and down, calculating the difference between what was expected and what was seen. This suggested that Predictive Coding best describes how the brain builds its internal models of the world.
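The "strict editor" loop can be sketched as a tiny learning rule. This is a hypothetical illustration, not the paper's model: the prediction is repeatedly nudged toward the input by the error signal, which is why the errors shrink over time:

```python
# Toy predictive-coding loop (illustrative values and learning rate).

def predictive_coding_update(inputs, prediction=0.0, lr=0.5):
    """Return the final prediction and the error sent up at each step."""
    errors = []
    for x in inputs:
        error = x - prediction       # mismatch: the "ERROR!" sent up the chain
        prediction += lr * error     # top-down model corrected by the error
        errors.append(error)
    return prediction, errors

# Feeding the same input repeatedly: errors halve each step as the model learns.
final, errors = predictive_coding_update([4.0, 4.0, 4.0, 4.0])
```

The shrinking error signal is the hallmark the researchers were listening for: a layer doing predictive coding keeps broadcasting mismatches until its internal model catches up.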

2. The Superficial Layers (The Top Floor): The "Predictive Routing" Winner

In the top layers, the brain was acting like a smart traffic cop.

  • The Analogy: Imagine the Chief Editor sends a signal down saying, "I expect a blue car." When the Junior Reporter sees the blue car, the Chief Editor hits a "Mute Button" on the signal. The signal is suppressed because it was expected. But if the Junior Reporter sees a green block, the "Mute Button" isn't pressed, and that signal flies through loud and clear.
  • The Result: The data showed that the top layers didn't need to do complex math to calculate "errors." They just needed to know what to suppress. If the prediction was right, the signal was quiet. If the prediction was wrong, the signal was loud. This suggested that Predictive Routing best describes how the brain filters information.
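The "mute button" needs no error arithmetic at all, which is the crucial contrast with predictive coding. A hypothetical sketch (the suppression factor and stimulus labels are made up for illustration):

```python
# Toy predictive-routing gate: the prediction only decides what to silence.

def route(stimulus, predicted_stimulus, signal_strength=1.0):
    """Return the signal that gets through the gate."""
    if stimulus == predicted_stimulus:
        return 0.1 * signal_strength   # mute button pressed: mostly suppressed
    return signal_strength             # surprise: passes through loud and clear

expected = route("blue car", predicted_stimulus="blue car")     # quiet
surprise = route("green block", predicted_stimulus="blue car")  # loud
```

Nothing is subtracted and no error is computed; the gate simply attenuates whatever was already forecast, so only unexpected input arrives at full volume.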

The Grand Conclusion: A Hybrid System

The paper's big discovery is that the brain isn't just one thing; it's a hybrid system that uses different tools for different jobs.

  • Deep Down (The Foundation): The brain uses Predictive Coding. It's doing the heavy lifting, constantly updating its internal map of the world by checking for errors and refining its predictions. It's the "learning" part.
  • Up High (The Filter): The brain uses Predictive Routing. It's the "filtering" part. It takes the predictions made deep down and uses them to silence the boring, expected stuff so the brain can focus on the surprising, important stuff.

Why This Matters

Think of it like a smart home security system:

  • The Deep Layers are the engineers constantly updating the software to recognize what a "normal" day looks like (Predictive Coding).
  • The Superficial Layers are the cameras and motion sensors. They don't re-calculate the software; they just ignore the cat walking by (because the software said "cat is normal") and only trigger the alarm when a stranger jumps the fence (Predictive Routing).

This study suggests that the brain is remarkably efficient. It doesn't just "compute" everything; it uses a sophisticated mix of learning from mistakes (deep down) and smart filtering (up high) to navigate the world without getting overwhelmed.
