Differences in orthographic processing across species identified by a transparent computational model

Using a transparent predictive coding model, this study reveals that while humans, baboons, and pigeons can all learn to recognize letter strings without semantic knowledge, their underlying orthographic processing strategies track phylogenetic distance: humans and baboons rely more on letter-sequence representations, while pigeons depend primarily on pixel- and letter-level features.

Gagl, B., Weyers, I., Eisenhauer, S., Fiebach, C. J., Pauli, J. N. J., Colombo, M., Scarf, D., Ziegler, J. C., Grainger, J., Güntürkün, O., Mueller, J. L.

Published 2026-03-10

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine you are teaching three very different students how to tell the difference between a "real" word they've seen before and a "fake" word they've never seen. You aren't teaching them the sounds of the letters (phonics) or what the words mean (semantics). You are just showing them shapes made of letters and asking, "Have you seen this shape before?"

The three students are:

  1. Humans (who can already read).
  2. Baboons (our primate cousins).
  3. Pigeons (birds).

Surprisingly, all three groups got pretty good at the task. But the big question this paper asks is: How are they actually doing it? Are they using the same mental "tools," or are they solving the puzzle in completely different ways?

To find out, the researchers built a digital detective called the "Speechless Reader" (SLR). This isn't a real brain, but a computer program designed to mimic how a brain might process these letter shapes.

The Three Mental Tools (The Detective's Toolkit)

The researchers gave their digital detective three different "lenses" or tools to look at the letters. They wanted to see which lens each species relied on most:

  1. The "Pixel" Lens (The Photocopier): This looks at the raw image, like a photocopier. It sees the black and white dots that make up the letter. It doesn't know it's an "A"; it just sees a specific pattern of pixels.
  2. The "Letter" Lens (The Single Block): This looks at individual letters. It knows, "Oh, that's an 'A' in the first spot and a 'B' in the second." It treats letters like individual building blocks.
  3. The "Sequence" Lens (The Sentence Builder): This looks at the whole flow. It understands that "TH" is a common start to a word, or that "ING" is a common ending. It sees the order and the combination of letters as a single unit.
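The three lenses can be sketched in a few lines of Python. Everything here is an illustrative assumption, not the authors' actual SLR code: the tiny three-letter "font," the one-hot letter coding, and the bigram-based sequence coding simply stand in for the three representation levels described above, and the "prediction error" is just the mismatch between a new string and the closest string seen in training.

```python
import numpy as np

# Hypothetical 5x3 binary bitmaps for a tiny three-letter alphabet.
# These stand in for whatever letter images the real experiments used.
FONT = {
    "A": np.array([[0,1,0],[1,0,1],[1,1,1],[1,0,1],[1,0,1]]),
    "B": np.array([[1,1,0],[1,0,1],[1,1,0],[1,0,1],[1,1,0]]),
    "T": np.array([[1,1,1],[0,1,0],[0,1,0],[0,1,0],[0,1,0]]),
}
ALPHABET = sorted(FONT)  # ["A", "B", "T"]

def pixel_code(word):
    """Pixel lens: the raw bitmap of the string (letter bitmaps concatenated)."""
    return np.concatenate([FONT[c].ravel() for c in word])

def letter_code(word):
    """Letter lens: one-hot identity of each letter at each position."""
    vec = np.zeros(len(word) * len(ALPHABET))
    for pos, c in enumerate(word):
        vec[pos * len(ALPHABET) + ALPHABET.index(c)] = 1.0
    return vec

def sequence_code(word):
    """Sequence lens: which ordered letter pairs (bigrams) the string contains."""
    pairs = [(a, b) for a in ALPHABET for b in ALPHABET]
    vec = np.zeros(len(pairs))
    for bigram in zip(word, word[1:]):
        vec[pairs.index(bigram)] = 1.0
    return vec

def prediction_error(word, trained_words, code):
    """Mismatch between `word` and the closest previously seen word, at one
    level. A small error means 'this looks familiar' through that lens."""
    return min(np.abs(code(word) - code(t)).sum() for t in trained_words)
```

For example, with trained words "TAB" and "BAT", the string "BAT" itself gives zero error at every level, while the rearranged "ABT" reuses only familiar letters but contains a bigram ("BT") never seen in training, so its sequence-level error is nonzero. A model weighting the sequence lens would flag "ABT" as novel for a different reason than a model weighting the pixel or letter lens.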

The Results: How Each Species Solved the Puzzle

The researchers ran thousands of simulations, mixing and matching these lenses to see which combination best predicted how the real animals and humans behaved. Here is what they found:
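The "mixing and matching" step can be pictured as fitting a weight for each lens so that a simple familiarity rule (combined error below a threshold means "I've seen this") best reproduces the observed yes/no choices. The toy sketch below runs on made-up numbers; the grid values, threshold, and trial data are invented for illustration and are not the paper's actual fitting procedure.

```python
import itertools

def combined_error(errors, weights):
    """Weighted sum of the per-level errors (pixel, letter, sequence)."""
    return sum(w * e for w, e in zip(weights, errors))

def fit_lens_weights(trials, threshold=1.0):
    """Grid-search lens weights so that 'combined error below threshold ->
    respond familiar' best matches the observed choices.
    Each trial is ((pixel_err, letter_err, seq_err), said_familiar)."""
    grid = [w for w in itertools.product((0, 0.25, 0.5, 0.75, 1.0), repeat=3)
            if sum(w) > 0]
    def accuracy(weights):
        hits = sum((combined_error(errs, weights) < threshold) == said
                   for errs, said in trials)
        return hits / len(trials)
    return max(grid, key=accuracy)

# Hypothetical trials: only the sequence-level error separates strings the
# subject called familiar (True) from ones it called novel (False).
TRIALS = [((0.5, 0.5, 0.0), True), ((0.6, 0.4, 0.0), True),
          ((0.5, 0.5, 2.0), False), ((0.4, 0.6, 2.0), False)]
best = fit_lens_weights(TRIALS)  # a "human-like" fit: nonzero sequence weight
```

In this toy data, no weighting of the pixel and letter lenses alone can reproduce every choice, so the best-fitting weights must lean on the sequence lens. The paper's analogous comparison, run per species across many simulated model variants, is what yields each species' characteristic weight pattern.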

🧑 Humans: The "Big Picture" Readers

Humans were the best at the task. The digital detective found that humans almost exclusively used the Sequence Lens.

  • The Analogy: Imagine you are reading a street sign. You don't look at the individual pixels of the paint, and you don't just look at the letter 'S' in isolation. You instantly recognize the shape of the word "STOP" as a whole unit. Humans have trained their brains to see the "flow" of letters. They see the forest, not the trees.

🐒 Baboons: The "Hybrid" Learners

Baboons did well, but they used a mix of tools. They used the Sequence Lens (like humans), but they also relied heavily on the Letter Lens.

  • The Analogy: A baboon is like a student who is learning to read. They recognize the word "STOP" as a group, but they are still double-checking, "Is that an 'S'? Yes. Is that a 'T'? Yes." They are building the word from the middle ground—seeing the blocks and the flow.

🐦 Pigeons: The "Detail-Oriented" Observers

Pigeons also did better than random chance, but their strategy was totally different. They barely used the Sequence Lens. Instead, they relied mostly on the Pixel Lens and the Letter Lens.

  • The Analogy: A pigeon is like a master puzzle solver who looks at the tiny details. They don't see the word "STOP" as a single concept. They see, "That's a specific curve here, a straight line there, and a specific letter 'S' in this spot." They are looking at the individual grains of sand rather than the whole beach.

Why Does This Matter? (The Evolutionary Story)

The paper suggests that these differences aren't random; they are tied to evolutionary history.

  • Humans and Baboons are closely related (primates). Our brains are wired to see global patterns (the whole picture). This helps us recognize faces, tools, and complex scenes quickly.
  • Pigeons are birds. Their brains are wired to look at local details. In nature, pigeons need to spot tiny grains of food hidden in grass or dirt. They are experts at finding small differences in a cluttered background.

The researchers call this "neuro-cognitive phenotyping." It's like taking a fingerprint of how a brain thinks, not just what it knows.

The Takeaway

Even though humans, baboons, and pigeons all learned to do the same task (telling real words from fake ones), they took very different mental paths to get there.

  • Humans zoomed out to see the sequence.
  • Baboons looked at both the sequence and the individual letters.
  • Pigeons zoomed in to see the pixels and individual letters.

This study shows that "intelligence" isn't just about getting the right answer; it's about the unique, evolutionary toolkit each species uses to solve the problem. The pigeon isn't "dumb" for not seeing the word as a whole; it's just using the super-powerful detail-oriented brain that helped its ancestors survive in the wild.
