Continuous flash suppression of neural responses and population orientation coding in macaque V1

Using two-photon calcium imaging in awake macaques, this study demonstrates that continuous flash suppression substantially diminishes V1 neuronal responses and population orientation coding, suggesting that while coarse orientation discrimination remains possible, the severely degraded neural signals are insufficient to support high-level visual and cognitive processing.

Original authors: Chen, C.-X., Wang, X., Jiang, D.-Q., Tang, S., Yu, C.

Published 2026-03-03

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

The Big Question: Can Your Brain "See" What You Can't?

Imagine watching a movie with one eye while someone constantly flashes bright, chaotic, colorful patterns in front of the other eye. The flashing is so vivid and distracting that you completely lose track of the movie: you can't see the actors or follow the plot. This technique is called Continuous Flash Suppression (CFS).

Scientists have used this trick for years to study "subconscious vision." The big debate has been: If you can't consciously see the movie, is your brain still watching it? Does your brain understand the plot (high-level thinking), or is it just seeing blurry shapes (low-level features)?

To answer this, the researchers in this paper went straight to the source: the brain's "first camera," a part called V1.

The Experiment: Peeking Behind the Curtain

The researchers didn't just ask monkeys what they saw (monkeys can't talk!). Instead, they used a high-tech "night-vision camera" (two-photon calcium imaging) to watch thousands of individual neurons in the monkeys' brains light up while they looked at these flashing patterns.

Think of the V1 neurons as a massive crowd of security guards at the entrance of a stadium. Their job is to check the tickets (visual information) before letting the guests (the image) into the VIP section (the rest of the brain for complex thinking).

The Setup:

  1. The Target: A simple striped pattern (a grating) shown to one eye.
  2. The Masker: A chaotic, flashing noise shown to the other eye.
  3. The Result: The monkey's brain "sees" the noise, but the stripes disappear from conscious awareness.

The Findings: The Security Guards Are Overwhelmed

The researchers found that the "flashing noise" didn't just hide the stripes; it crippled the security guards.

  1. The "Noise-Loving" Guards: Some guards only listen to the eye seeing the flashing noise. When the noise flashed, these guards stopped talking about the stripes entirely. They were completely silenced.
  2. The "Both-Eyes" Guards: Most guards listen to both eyes. These guys were almost completely silenced too. The noise was so loud it drowned out the signal.
  3. The "Stripe-Loving" Guards: A few guards only listened to the eye seeing the stripes. They kept talking, but their voices were much quieter and fuzzier than usual.
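One way to put numbers on "how silenced" each group is would be a suppression index that compares a neuron's response with and without the masker. The sketch below is purely illustrative, with made-up response values, and is not the paper's actual metric:

```python
import numpy as np

def suppression_index(r_alone, r_cfs):
    """1 = fully silenced, 0 = unaffected (responses assumed non-negative)."""
    return (r_alone - r_cfs) / (r_alone + r_cfs + 1e-9)

# Hypothetical mean responses (target alone, target under CFS) for the
# three groups described above -- numbers are invented for illustration
groups = {
    "noise-eye neurons":  (8.0, 0.2),   # almost completely silenced
    "binocular neurons":  (9.0, 0.5),   # also near-silenced
    "stripe-eye neurons": (7.0, 2.5),   # still responding, but weakly
}
for name, (alone, cfs) in groups.items():
    print(f"{name}: suppression index = {suppression_index(alone, cfs):.2f}")
```

An index near 1 corresponds to the "completely silenced" guards, while the stripe-eye group lands well below, matching the pattern the article describes.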

The Analogy: Imagine trying to hear a whisper (the stripes) while someone is screaming in your ear (the flashing noise). Even if you are straining to listen, the whisper is barely audible. The brain's "signal-to-noise ratio" has crashed.
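The signal-to-noise idea can be made concrete with a d-prime-style discriminability measure: how far apart two response distributions sit relative to their trial-to-trial variability. This is a toy simulation with invented numbers, not data or analysis from the study:

```python
import numpy as np

def d_prime(resp_a, resp_b):
    """Discriminability: difference of means scaled by pooled standard deviation."""
    pooled_sd = np.sqrt(0.5 * (np.var(resp_a) + np.var(resp_b)))
    return abs(np.mean(resp_a) - np.mean(resp_b)) / pooled_sd

rng = np.random.default_rng(0)
# Hypothetical trial-by-trial responses of one neuron (arbitrary units)
normal = rng.normal(10.0, 2.0, 50)   # preferred orientation, no masker
other  = rng.normal(4.0, 2.0, 50)    # orthogonal orientation, no masker
# Under suppression the responses shrink toward baseline, but the noise stays
sup_a  = rng.normal(3.0, 2.0, 50)
sup_b  = rng.normal(2.0, 2.0, 50)

print(f"d' without masker: {d_prime(normal, other):.2f}")
print(f"d' under masker:   {d_prime(sup_a, sup_b):.2f}")
```

Shrinking the signal while the noise stays fixed is exactly the "whisper next to a scream" situation: the two orientations become far harder to tell apart.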

The Test: Can the Brain Reconstruct the Image?

To see what this meant for the rest of the brain, the researchers used two types of "AI detectives" (machine learning models) to analyze the data from the security guards.

Detective 1: The "Rough Sketch" Artist (Classification)

  • The Task: Can the AI tell if the stripes are vertical or horizontal?
  • The Result: Yes. Even with the noise, the AI could still guess the general direction of the stripes about 80-90% of the time.
  • Meaning: The brain can still do simple, low-level tasks like "Is it vertical or horizontal?"
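A coarse decoder of this kind can be sketched in a few lines. The following is a toy simulation on synthetic data, not the authors' model: a simple pooled-activity classifier guesses vertical versus horizontal from noisy population responses, with the signal gain cut down to mimic suppression:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 200, 200

# Hypothetical population: each neuron prefers vertical (0) or horizontal (1)
pref = rng.choice([0, 1], n_neurons)

def population_response(orientation, gain):
    """Noisy responses: preferred-orientation neurons fire more, scaled by gain."""
    tuned = np.where(pref == orientation, 1.0, 0.3) * gain
    return tuned + rng.normal(0.0, 0.5, n_neurons)

def classify(resp):
    """Guess orientation by comparing mean activity of the two subpopulations."""
    return int(resp[pref == 1].mean() > resp[pref == 0].mean())

labels = rng.choice([0, 1], n_trials)
# Simulated suppression: signal gain cut to 20%, noise unchanged
responses = [population_response(o, gain=0.2) for o in labels]
accuracy = np.mean([classify(r) == o for r, o in zip(responses, labels)])
print(f"orientation decoding accuracy under simulated suppression: {accuracy:.2f}")
```

Even a heavily attenuated signal can support a binary vertical-or-horizontal guess well above chance, because the decision only needs a crude pooled difference, not a detailed image.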

Detective 2: The "Photographer" (Reconstruction)

  • The Task: Can the AI rebuild the actual picture of the stripes from the brain signals?
  • The Result: No. When the AI tried to draw the image based on the suppressed brain signals, the result was a blurry, unrecognizable mess. It was like trying to paint a masterpiece using only a few drops of watered-down paint.
  • Meaning: The brain has lost the detailed information needed to recognize what the object is.
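Image reconstruction from population activity is often framed as a linear inverse problem. The sketch below is purely illustrative (random encoding filters, synthetic noise, nothing taken from the paper): it shows how reconstruction fidelity collapses when the signal gain drops while the noise stays fixed:

```python
import numpy as np

rng = np.random.default_rng(2)
px, n_neurons = 16, 400

# Hypothetical linear encoders: each neuron sees the image through a random filter
filters = rng.normal(0.0, 1.0, (n_neurons, px * px))

# A vertical grating as the "target" image (same pattern in every row)
x = np.arange(px)
image = np.sin(2 * np.pi * x / 8)[None, :].repeat(px, axis=0).ravel()

def reconstruct(gain):
    """Encode with scaled signal plus fixed noise, decode by least squares,
    and report the correlation between reconstruction and the true image."""
    responses = gain * (filters @ image) + rng.normal(0.0, 5.0, n_neurons)
    recon, *_ = np.linalg.lstsq(gain * filters, responses, rcond=None)
    return np.corrcoef(recon, image)[0, 1]

print(f"reconstruction fidelity, normal:     {reconstruct(1.0):.2f}")
print(f"reconstruction fidelity, suppressed: {reconstruct(0.05):.2f}")
```

Rebuilding all 256 pixels demands far more information than a single binary guess, so the same drop in gain that barely dents classification leaves the reconstruction an unrecognizable mess.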

The Conclusion: What Does This Mean for Us?

This study weighs in on a long-standing debate in psychology.

  • The Old Theory: Maybe the brain is secretly processing complex thoughts (like recognizing a face or a tool) even when we can't see it.
  • The New Reality: The brain is not getting enough information to do that. The "first camera" (V1) is so suppressed that the image is too broken to be useful for complex tasks.

The Final Metaphor:
Imagine you are trying to read a book, but someone is shining a strobe light in your face.

  • You might still be able to tell that the lines are horizontal (low-level processing).
  • But you cannot read the words or understand the story (high-level processing) because the letters are too blurry and broken.

Why does this matter?
It suggests that when we feel like we are "subconsciously" reacting to something we can't see, we might actually just be reacting to very basic, blurry fragments (like "it's long" or "it's vertical"), not the full, complex meaning of the object. The brain needs a clearer picture to do the heavy lifting of thinking and understanding.
