Cognition does not automatically influence perception: Evidence from neural encoding of colours belonging to different categories

This study challenges the view that cognition automatically influences early perception. It shows that neural responses to color changes, previously attributed to categorical processing in speakers whose languages have multiple "blue" terms, are instead driven by low-level contrast adaptation rather than by linguistic categories.

Original authors: Martinovic, J., Delov, A. A., Tomastikova, J., Martin, J., Paramei, G. V., Griber, Y. A.

Published 2026-04-17

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

The Big Question: Does Language Rewire Your Brain?

Imagine your brain is a high-tech security camera system. For decades, scientists have debated a fascinating question: Does the language you speak change how that camera sees the world?

This idea is called the Whorfian hypothesis. It suggests that if your language has two different words for "light blue" and "dark blue" (like Russian does), your brain might automatically treat those two colors as totally different categories, even before you consciously think about them. It's like having a security filter that instantly flags "light blue" as a "different intruder" than "dark blue," whereas someone speaking English (who just calls them both "blue") might not notice the difference as quickly.

The Previous "Proof" (The Ghost in the Machine)

A few years ago, a famous study claimed to find the "smoking gun" for this theory. They used EEG (a cap with electrodes that reads brain waves) to watch people's brains react to colors. They found that Greek speakers (who also have two words for blue) showed a specific, quick brain spike when switching between light and dark blue. This spike happened so fast (in a fraction of a second) that it seemed to prove language was hacking the brain's early visual system.

It was like seeing the security camera flash a red warning light before the guard even realized a person had walked in.

The New Study: "Wait, Let's Check the Wiring"

The authors of this new paper decided to play detective. They thought, "Maybe that red warning light wasn't about language at all. Maybe it was just about the brightness or contrast of the colors."

They ran three new experiments to test this. Think of it as a series of stress tests for the brain's security system.

Experiment 1: The Russian Replication

They took the exact same test but used Russian speakers (who also have two words for blue: goluboj and sinij).

  • The Setup: They flashed light blue and dark blue, then light green and dark green.
  • The Result: The "language effect" vanished. The brain didn't show that special spike for blue just because they had two words for it. The brain waves for blue and green looked almost identical.
  • The Analogy: It's like checking a smoke detector. The old study said, "It beeped because it smelled smoke!" The new study says, "Actually, we just opened a window, and the draft stirred up dust near the sensor. It wasn't smoke at all."

Experiment 2: The "Warm vs. Cool" Test

They tested English speakers. In English, the "cool" colors (blue/green) are just one category, but the "warm" colors (red/pink, yellow/brown) have many distinct names.

  • The Prediction: If language drives the brain, English speakers should have a huge brain spike for the "warm" colors because they have more words for them.
  • The Result: Nope. The brain didn't care about the words. It only cared about contrast. When the colors were bright and high-contrast, the brain reacted. When they were dull or low-contrast, the brain stayed quiet.
  • The Analogy: Imagine a bouncer at a club. The old theory said the bouncer checks your ID (your language) to let you in. The new study shows the bouncer is actually just checking if you are wearing a bright neon jacket (high contrast). If you are wearing neon, you get in. If you are wearing a dull grey suit, you don't, regardless of what your ID says.
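For readers who want to see what "high contrast" means in numbers: vision scientists often quantify contrast with the Michelson formula, which compares the brightest and darkest luminances involved. Here is a minimal sketch; the luminance values are made-up illustrations, not the study's actual stimuli:

```python
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin), ranging 0..1."""
    return (l_max - l_min) / (l_max + l_min)

# Hypothetical luminance values (cd/m^2), chosen for illustration only:
neon_jacket = michelson_contrast(80.0, 20.0)  # bright patch, dark surround
grey_suit = michelson_contrast(55.0, 45.0)    # patch barely differs from surround

print(neon_jacket)  # a large value: the "bouncer" lets this one in
print(grey_suit)    # a small value: this one gets ignored
```

The point of the formula is that contrast is a purely physical property of the light, computable without any reference to color names.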

Experiment 3: The "Control Freak" Test

Finally, they isolated the variables. They changed the colors, the brightness, and the saturation (how "rich" the color is) one by one.

  • The Finding: The brain's "mismatch" signal (the visual mismatch negativity, or vMMN) only fired when there was a difference in contrast or brightness. It didn't fire just because the color changed from "red" to "green" if the brightness stayed the same.
  • The Conclusion: The brain isn't reacting to the name of the color; it's reacting to the physical energy of the light hitting the eye.
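In EEG oddball studies, the vMMN is typically computed as a difference wave: the averaged brain response to rare "deviant" stimuli minus the averaged response to frequent "standards". The toy sketch below uses synthetic random data with an injected negative bump standing in for the mismatch response; none of these numbers come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 100          # 100 time points per epoch

# Synthetic single-trial EEG epochs: pure noise for standards, and the
# same noise plus a negative deflection (samples 40-60) for deviants,
# mimicking a mismatch response to a change the brain "noticed".
standards = rng.normal(0, 1, (n_trials, n_samples))
deviants = rng.normal(0, 1, (n_trials, n_samples))
deviants[:, 40:60] -= 1.5               # injected "mismatch" deflection

# Average across trials, then subtract: deviant ERP minus standard ERP.
difference_wave = deviants.mean(axis=0) - standards.mean(axis=0)

# The vMMN appears as a negative dip in the difference wave.
print(f"most negative point: {difference_wave.min():.2f}")
```

The study's argument, in these terms, is about *what* makes that dip appear: a change in physical contrast produces it, while a change in color category at matched contrast does not.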

The Real Culprit: "Visual Adaptation"

So, what was actually happening in the brain? The authors suggest it's a phenomenon called Contrast Adaptation.

The Analogy of the Noisy Room:
Imagine you are sitting in a room with a fan humming loudly (the "standard" stimulus). Your brain gets used to the hum and ignores it. Then, the fan suddenly stops or changes pitch (the "deviant"). Your brain jumps: "Wait, something changed!"

The old study thought the brain jumped because the color changed.
The new study shows the brain jumped because the loudness (contrast) of the fan changed. If the fan was always loud and then got quiet, the brain reacted. If it was always quiet and then got loud, the brain reacted. But if the volume stayed the same and only the pitch (the hue) changed, the brain barely noticed.
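The adaptation account can be caricatured in a few lines of code: the "surprise" signal tracks the change in physical contrast between the adapted-to standard and the new stimulus, and ignores category labels entirely. This is an illustrative toy, not the authors' model, and the contrast values are invented:

```python
# Toy adaptation model (illustrative only): the response to a deviant
# is proportional to how much its contrast differs from the standard
# the visual system has adapted to -- color labels play no role.

def surprise(standard_contrast: float, deviant_contrast: float) -> float:
    return abs(deviant_contrast - standard_contrast)

# Same hue category, big contrast step -> large response
# (the loud fan suddenly goes quiet):
print(surprise(0.6, 0.1))

# Different category label ("red" vs "green") at matched contrast
# -> no response (same volume, different pitch):
print(surprise(0.3, 0.3))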

Why This Matters

This paper is a major "plot twist" for cognitive science.

  1. Perception is Bottom-Up: It suggests that our early vision is driven by physics (light, contrast, brightness) rather than by our thoughts or language. The brain is a machine that processes light first, and language second.
  2. The "Ghost" was a Glitch: The famous evidence that language shapes perception might have been a statistical fluke or a misunderstanding of how contrast works.
  3. Predictive Coding: It challenges how we think the brain predicts the future. It suggests the brain's "prediction error" signals are actually just the brain getting used to the background noise (adaptation), not necessarily a complex linguistic calculation.

The Bottom Line

The paper concludes that cognition (thinking/language) does not automatically penetrate perception (seeing).

Your brain doesn't see the world through the lens of your vocabulary. It sees the world through the lens of light and contrast. If you speak Russian and see a light blue sky, your brain doesn't instantly scream "Different Category!" just because you have a word for it. It just sees a patch of light. The "magic" of language happens later, when you consciously think about what you are seeing, not in the first split-second of raw perception.
