Encoding models uncover fine-grained feature selectivity for bodies, hands and tools

By combining densely sampled fMRI data with artificial neural network-based encoding models, this study reveals that category-selective areas in the occipitotemporal cortex exhibit fine-grained, distinct feature sensitivities for bodies, hands, and tools that go beyond broad categorical tuning.

Original authors: Cortinovis, D., Hebart, M., Bracci, S.

Published 2026-04-13

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine your brain's visual system as a massive, bustling library. For a long time, scientists thought this library had a few big, general sections: a "Faces" section, a "Body" section, and a "Tools" section. If you walked into the "Body" section, you'd expect to see books about bodies, and that was it.

But this new study suggests the library is actually much more complex. It's not just one big room for "Bodies"; it's a collection of tiny, specialized study rooms, each with a very specific job. Some rooms are for whole bodies, some are for just hands, and some are for tools. Even more surprisingly, two rooms that both seem to be about "hands" might actually be reading different chapters of the same book.

Here is how the researchers figured this out, using a mix of brain scans and computer magic.

The Experiment: A Brain Scan "Pop Quiz"

The researchers put three volunteers in an fMRI scanner (a machine that takes pictures of brain activity by tracking blood flow). They showed them hundreds of pictures:

  • Whole people
  • Just hands
  • Tools (like hammers or scissors)
  • Other objects (like cups or buildings)

They asked the volunteers to pay attention to specific things, like spotting a bug, while their brains lit up in response to the images.

The "Virtual Brain" Trick

Here is the clever part. Instead of just looking at the brain scans and guessing what the brain was thinking, the researchers built AI "Virtual Twins" of these brain areas.

Think of it like this: They took the brain's reaction to the 200 pictures they showed, and they taught a computer program (a neural network) to mimic that reaction. Once the computer learned how the brain reacted, they didn't stop there. They let the computer "look" at millions of other images from the internet (like a massive digital photo album) that the humans never saw.

This allowed them to ask: "If this specific part of the brain saw a picture of a hammer, would it light up? What about a broom? Or a cat?"

The Big Discoveries

1. The "Hand" vs. "Tool" Confusion
The study found that the brain has distinct areas for hands and tools, but they are neighbors.

  • The Left-Hemisphere Specialist: One area (in the left side of the brain) loves hands, but it gets really excited when those hands are holding tools. It's like a mechanic who loves hands, but only when they are fixing a car.
  • The Right-Hemisphere Specialist: Another area (on the right side) also loves hands, but it prefers seeing hands as part of a whole body or in social contexts. It's more like an artist who loves drawing hands in portraits.

2. The "Tool" vs. "Object" Split
They found two different areas that respond to tools, but they see them differently:

  • The "Action" Zone (Lateral): This area is like a workshop. It lights up for things you can grab and use—hammers, screwdrivers, scissors. It cares about how you interact with the object.
  • The "Surface" Zone (Ventral): This area is more like a museum display case. It responds to tools, but it also lights up for big, non-graspable objects like streetlights or buildings. It seems to care more about the shape and material (like metal or wood) rather than the action.

3. The "Body" Nuance
Even for whole bodies, the brain isn't uniform. The left side of the brain seems to notice the shape and posture of bodies (and sometimes objects that look like bodies, like a tennis racket), while the right side is more sensitive to textures like fabric or wood.

Why This Matters

Think of the brain's visual system not as a set of rigid folders, but as a team of specialists.

  • If you hand a hammer to the "Action Zone," it says, "I see a tool I can use!"
  • If you hand the same hammer to the "Surface Zone," it says, "I see a shiny, metal object."
  • If you show a hand holding that hammer to the "Left Hand Specialist," it says, "I see a hand doing work!"

The Takeaway:
This study suggests that our brains are incredibly precise. Even within a category we think is simple (like "hands" or "tools"), our brain has carved out tiny, specialized neighborhoods. Each neighborhood has a slightly different job, focusing on different details like action, shape, or texture.

By using AI to act as a "translator" for these brain signals, the researchers could uncover these hidden layers of detail that we couldn't see just by looking at the raw brain scans. It's like realizing that a choir isn't just singing "music," but that every single singer is hitting a specific, unique note that creates the whole symphony.
