Disentangling objects' contextual associations from perceptual and conceptual attributes using time-resolved neural decoding

Using time-resolved EEG and representational similarity analysis, this study reveals that while perceptual and conceptual object features are distinctly encoded over time, contextual associations largely overlap with conceptual representations and show limited unique neural encoding under passive viewing conditions.

Original authors: Kim, A. H., Quek, G. L., Moerel, D., Gorton, O. K., Carlson, T. A.

Published 2026-02-26

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine your brain is a super-fast librarian trying to sort a massive pile of incoming books (objects) the moment they land on the desk. To do this, the librarian uses three different filing systems:

  1. The "Look" File (Perceptual): How the object looks (color, shape, size).
  2. The "Use" File (Conceptual): What the object is for and what it means (a hammer is for hitting, a dog is a pet).
  3. The "Where" File (Contextual): Where you usually find the object (a toothbrush in a bathroom, a fork in a kitchen).

For a long time, scientists knew the "Look" and "Use" files were being used, but they weren't sure if the "Where" file had its own special slot in the brain, or if it just got mixed up with the "Use" file.

This study asked: When you see an object, does your brain file it by how it looks, what it is, or where it belongs? And does it happen all at once, or in a specific order?

The Experiment: A Speed-Reading Test for the Brain

The researchers set up a massive experiment using two tools:

  • The "Human Opinion" Database: They asked hundreds of people to look at 190 different objects (like a toaster, a giraffe, or a wrench) and sort them into groups based only on how they looked, only on what they were used for, or only on where they were found. They did this twice: once looking at pictures of the objects, and once just reading the names of the objects.
  • The "Brain Camera" (EEG): They put electrodes on the heads of other people and showed them rapid-fire images of those same objects. This "camera" recorded the brain's electrical activity every millisecond, creating a high-speed movie of the brain thinking.
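For the curious, the step of matching the "Human Opinion" files against the "Brain Camera" movie is, in technical terms, representational similarity analysis (RSA): build a dissimilarity matrix from human judgments, build one from the EEG at each time point, and correlate them. Here is a minimal sketch with random stand-in data; the matrix sizes, time steps, and the use of Spearman correlation are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects = 190      # objects shown in the study
n_timepoints = 120   # illustrative number of EEG time samples

# Model RDM: pairwise dissimilarities between objects, derived from how
# people grouped them (here: random stand-in values, made symmetric).
model_rdm = rng.random((n_objects, n_objects))
model_rdm = (model_rdm + model_rdm.T) / 2
np.fill_diagonal(model_rdm, 0)

# Neural RDMs: one object-by-object dissimilarity matrix per time point,
# typically built from how well a classifier tells each pair apart.
neural_rdms = rng.random((n_timepoints, n_objects, n_objects))

# Correlate model with brain at every time point, using only the lower
# triangle (each object pair counted once, diagonal excluded).
tri = np.tril_indices(n_objects, k=-1)
model_vec = model_rdm[tri]

timecourse = np.empty(n_timepoints)
for t in range(n_timepoints):
    rho, _ = spearmanr(model_vec, neural_rdms[t][tri])
    timecourse[t] = rho

print(timecourse.shape)  # one model-brain correlation per time point
```

Plotting such a timecourse for each "filing system" is what lets the researchers say *when* each kind of information shows up in the brain's signal.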

The Discovery: A Relay Race, Not a Traffic Jam

By comparing the "Human Opinion" files with the "Brain Camera" movie, the researchers found a clear timeline of how the brain processes objects. Think of it like a relay race:

1. The First Sprinter: The "Look" (0–100ms)
The moment an object hits your eyes, the brain's first reaction is purely visual. It's like a security guard checking a face. "Is it round? Is it red? Is it big?" This happens incredibly fast, within the first 100 milliseconds. The brain is obsessed with the physical appearance first.

2. The Second Sprinter: The "Use" (160ms+)
Just a split second later, the brain switches gears. It stops asking "What does it look like?" and starts asking "What is it?" and "What does it do?" This is the conceptual phase. Interestingly, this happens whether you are looking at a picture of a dog or just reading the word "Dog." The brain is still fast, but it takes a tiny bit longer to access the meaning than the shape.

3. The Mystery Runner: The "Where" (The Twist)
This is where the study got surprising. The researchers expected the "Where" file (context) to have its own special time in the race. They thought the brain would eventually say, "Oh, this is a toothbrush, so it must be in a bathroom."

But that didn't happen.

The "Where" file didn't get its own lane. Instead, it turned out that the "Where" information was so tightly glued to the "Use" information that the brain couldn't tell them apart.

  • The Metaphor: Imagine trying to separate the smell of coffee from the taste of coffee. They are so linked that when you taste coffee, you automatically smell it. You can't really have one without the other in that moment.
  • The Result: When the brain figured out what an object was (Conceptual), it automatically knew where it belonged (Contextual). The "Where" file didn't need a separate processing time; it rode along for free on the "Use" file.
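How can researchers tell that "Where" rode along for free rather than having its own lane? The usual trick is a partial correlation: check whether the "Where" model still matches the brain after the "Use" model's contribution is removed. Below is a toy simulation of that logic; the numbers, the simple linear-residual approach, and the variable names are all illustrative assumptions, not the study's actual analysis.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_pairs = 500  # stand-in for the vectorised object-pair dissimilarities

# A "use" (conceptual) model, and a "where" (contextual) model that is
# mostly the same signal plus a little noise -- mimicking the finding
# that the two models were very strongly correlated with each other.
use_model = rng.random(n_pairs)
where_model = 0.9 * use_model + 0.1 * rng.random(n_pairs)

# Simulated brain signal driven by the conceptual model only.
brain = 0.8 * use_model + 0.2 * rng.random(n_pairs)

def partial_spearman(x, y, z):
    """Spearman correlation of x and y after regressing z out of both."""
    def residual(a, b):
        beta = np.polyfit(b, a, 1)       # fit a = slope*b + intercept
        return a - np.polyval(beta, b)   # keep what b cannot explain
    rho, _ = spearmanr(residual(x, z), residual(y, z))
    return rho

full_rho, _ = spearmanr(where_model, brain)
unique_rho = partial_spearman(where_model, brain, use_model)
print(f"where vs brain, raw:     {full_rho:.2f}")   # high
print(f"where vs brain, partial: {unique_rho:.2f}") # near zero
```

On its own, the "Where" model correlates strongly with the simulated brain signal; once the "Use" model is factored out, almost nothing unique remains. That is the signature pattern behind the study's twist.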

Why This Matters

This study tells us that our brains are incredibly efficient. We don't waste time processing "where" an object belongs as a separate step. Instead, as soon as we understand what something is, our brain instantly knows its place in the world.

  • Perception (Look) comes first.
  • Concept (Use) comes second.
  • Context (Where) is just a bonus feature that comes bundled with the Concept.

So, the next time you see a fire hydrant on the street, your brain doesn't stop to think, "Hmm, where does this go?" It instantly knows it's a fire hydrant, and because it knows that, it automatically knows it belongs on a sidewalk. The brain is a master of shortcuts!
