Semantic Information Orthogonal to Visual Features Peaks in Lateral Occipitotemporal Cortex

Using 7T fMRI data and a method to isolate semantic content from visual features, this study reveals that the lateral occipitotemporal cortex, particularly body-selective regions, encodes visually independent semantic information more robustly than ventral stream or early visual areas.

Original authors: Ponnambalam, A. R., Pottore Venkiteswaran, K.

Published 2026-03-15

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

The Big Question: Is Your Brain "Reading" or "Seeing"?

Imagine you are looking at a photo of a nurse chasing a man.

  • Your eyes see pixels: skin tones, the shape of a uniform, the blur of motion.
  • Your brain understands the story: "A nurse is chasing a man."

For a long time, scientists thought the part of your brain that processes images (the visual cortex) was mostly just a supercomputer for analyzing shapes and colors. They thought the "story" part (the meaning) was handled by a different part of the brain entirely.

But this new study asks a tricky question: Does the image-processing part of your brain actually understand the story, or is it just really good at guessing the story based on how the picture looks?

The Experiment: The "Magic Eraser"

To find the answer, the researchers used a clever trick. They treated the brain like a radio and the images like songs.

  1. The Setup: They showed 8 people thousands of photos while scanning their brains with a super-powerful MRI (7T fMRI). They also had computers describe these photos using advanced AI language models (the same kind of technology behind chatbots).
  2. The Problem: The AI descriptions and the photos are naturally linked. If the AI says "a dog," the photo has a dog. So, if the brain reacts to the word "dog," is it reacting to the meaning of the word, or just the shape of the dog in the photo?
  3. The Solution (The Magic Eraser): The researchers used a mathematical "eraser." They taught a computer to look at the photo and predict what the AI would say about it.
    • Step 1: The computer looks at the photo's pixels (visual features).
    • Step 2: It predicts the AI's description.
    • Step 3: It erases that prediction from the AI's description.
    • Result: What's left is the "pure meaning" that has nothing to do with what the picture looks like. It's the semantic "ghost" that remains after you remove the visual "body." A code sketch of this step follows below.
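In code, this "eraser" is plain residualization: fit a regression that predicts the semantic embedding from the visual features, then subtract the prediction. Below is a minimal sketch of that idea on synthetic data. It is not the authors' pipeline; the names and sizes (`visual`, `semantic`, `n_vis`, `n_sem`) are placeholders, and in the real study the features would come from a vision model and a language-model embedding of each photo's description.

```python
# A minimal sketch of residualization (the "magic eraser"), on fake data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_images, n_vis, n_sem = 1000, 512, 768

# Placeholder features: in the study, `visual` would come from a vision
# model and `semantic` from a language model describing the same photos.
visual = rng.standard_normal((n_images, n_vis))
mixing = rng.standard_normal((n_vis, n_sem))
semantic = visual @ mixing * 0.05 + rng.standard_normal((n_images, n_sem))

# Steps 1-2: learn to predict the semantic embedding from visual features.
eraser = LinearRegression().fit(visual, semantic)
predicted = eraser.predict(visual)

# Step 3: subtract the prediction. The residual is the "pure meaning":
# whatever the visual features cannot account for.
residual = semantic - predicted

# Sanity check: in-sample least-squares residuals are orthogonal to the
# regressors, so the residual is uncorrelated with the visual features.
print(abs(np.corrcoef(visual[:, 0], residual[:, 0])[0, 1]))  # ~0
```

This is the simplest possible version; a careful analysis would fit the eraser with cross-validation so the subtraction is not tuned to the same images it is later tested on.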

The Discovery: The "Body" Detective

They then asked: Which part of the brain lights up when we show it this "pure meaning" (the ghost) that has no visual shape attached?

They expected the answer to be the "Ventral Stream"—the classic "what is it?" pathway in the brain that handles faces, places, and objects.

The Surprise:
The strongest reaction didn't come from the face area or the place area. It came from the Lateral Occipitotemporal Cortex, specifically a region called EBA (Extrastriate Body Area).

  • The EBA is usually known as the "Body Detector." It lights up when you see a human body.
  • The Finding: Even after removing all the visual shapes of the body, the EBA still reacted strongly to the meaning of the body.
    • Analogy: Imagine a radio that normally only plays music when you show it a picture of a guitar. This study found the equivalent of a radio that plays when you merely convey the concept of "guitar," even if the picture in front of it shows a piano. The EBA isn't just seeing the body; it's understanding the role and story of the body.

The "Negative" Proof: The Early Visual Cortex

To make sure their "Magic Eraser" actually worked, they looked at the very first stop in the brain's visual system (V1), which works like a camera sensor: it registers raw visual input, not meaning.

  • The Result: When they tested this "pure meaning" (the ghost) against the camera sensor, the response went negative: V1 actually reacted less than if they had shown it nothing at all.
  • Why this matters: This is a quality-control check. If the eraser had failed and left some visual "dirt" behind, the camera sensor would have lit up. The fact that it went dark is strong evidence that the eraser worked: the signal they found in the EBA is genuinely about meaning, not about pixels. A toy version of this region-by-region check appears below.
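To see how a region-by-region test of this kind can be scored, here is a toy sketch: fit an encoding model from the residual "meaning" features to each region's responses and measure held-out prediction accuracy. Everything below is simulated and illustrative (a "V1" that ignores meaning, an "EBA" that carries it); it is not the study's analysis code.

```python
# Toy region-by-region encoding test on simulated data.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_images, n_sem, n_vox = 1000, 768, 50
residual = rng.standard_normal((n_images, n_sem))  # from the eraser step

# Simulated regions: "V1" responses ignore the residual meaning entirely,
# while "EBA" voxels carry a linear trace of it plus noise.
rois = {
    "V1": rng.standard_normal((n_images, n_vox)),
    "EBA": residual @ rng.standard_normal((n_sem, n_vox)) * 0.05
           + rng.standard_normal((n_images, n_vox)),
}

for name, bold in rois.items():
    model = RidgeCV(alphas=np.logspace(-2, 4, 7))
    pred = cross_val_predict(model, residual, bold, cv=5)
    # Mean voxel-wise correlation between held-out predictions and data;
    # on pure noise this hovers around zero and can dip slightly negative.
    r = np.mean([np.corrcoef(pred[:, v], bold[:, v])[0, 1]
                 for v in range(n_vox)])
    print(f"{name}: mean held-out r = {r:.2f}")
```

In this toy version, the meaning-blind region scores near zero while the meaning-carrying region scores well above it, the same qualitative pattern described here for V1 versus EBA.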

The "Lateral vs. Ventral" Showdown

The study compared two main highways in the brain:

  1. The Ventral Stream (The "Object" Highway): Good at faces (FFA) and places (PPA).
  2. The Lateral Stream (The "Body/Action" Highway): Good at bodies (EBA).

The Verdict:
The "Object" highway (Ventral) was mostly just reacting to the visual shapes. Once you erased the shapes, the meaning didn't matter much.
The "Body" highway (Lateral/EBA) was different. Even without the shapes, it was still screaming, "I understand the story!"

  • Analogy: Think of the Ventral stream as a photographer who cares about lighting and composition. Think of the Lateral stream (EBA) as a detective who cares about the motive and the relationship between people. Even if you hide the suspect's face (remove visual features), the detective still knows who they are based on the context. A toy version of this showdown is sketched in code below.
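One way to make the verdict concrete is to ask, for each highway, how much prediction accuracy improves when the residual "meaning" is added on top of the visual features. The sketch below simulates that comparison with a shape-driven "ventral" signal and a meaning-driven "lateral" signal; all names and numbers are placeholders, not the study's data.

```python
# Toy "showdown": does adding residual meaning improve prediction?
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, n_vis, n_sem = 800, 256, 256
visual = rng.standard_normal((n, n_vis))
residual = rng.standard_normal((n, n_sem))  # output of the eraser

# A ventral-like signal driven by visual shape, and a lateral-like signal
# that also tracks the visually independent meaning.
ventral = visual @ rng.standard_normal(n_vis) * 0.1 + rng.standard_normal(n)
lateral = (visual @ rng.standard_normal(n_vis) * 0.05
           + residual @ rng.standard_normal(n_sem) * 0.1
           + rng.standard_normal(n))

for name, y in [("ventral (shape-driven)", ventral),
                ("lateral (meaning-driven)", lateral)]:
    base = cross_val_score(Ridge(alpha=1.0), visual, y, cv=5).mean()
    both = np.hstack([visual, residual])
    full = cross_val_score(Ridge(alpha=1.0), both, y, cv=5).mean()
    print(f"{name}: visual-only R2 = {base:.2f}, visual+meaning R2 = {full:.2f}")
```

For the shape-driven signal, the extra meaning features barely move the score; for the meaning-driven signal, they raise it substantially, mirroring the ventral-versus-lateral contrast described above.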

Why Does This Matter?

This study changes how we think about the brain. It suggests that the part of your brain responsible for seeing bodies isn't just a camera; it's a social processor.

When you see a person, your brain isn't just calculating the curve of an arm; it's instantly processing the social meaning of that arm. Is it waving? Is it fighting? Is it comforting?

The study makes a strong case that meaning is baked into the visual system, specifically in the areas that handle bodies and social interaction. Your brain doesn't just see the world; it understands the story of the world, even when the visual details are stripped away.

Summary in One Sentence

By using a mathematical "eraser" to remove visual details from images, scientists discovered that the part of your brain that sees bodies is actually a master of understanding social stories, far more so than the parts of your brain that recognize faces or places.
