Bottom-up and generative computations uniquely explain neural responses across the social brain

This preregistered fMRI study challenges the traditional view of a strict spatial division between social perception and mentalizing brain regions: both the posterior STS and the TPJ integrate bottom-up relational processing with top-down generative inverse-planning computations, and the two kinds of computation likely operate on distinct temporal scales.

Malik, M., Kim, M., Shu, T., Liu, S., Isik, L.

Published 2026-02-22

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine your brain is a bustling newsroom trying to figure out a chaotic scene: two animated shapes are moving around a screen. One is chasing the other. Is it a playful game of tag? Is it a bully chasing a victim? Or are they just ignoring each other?

For decades, scientists believed the brain solved this mystery using a strict assembly line. They thought:

  1. The "Perception Team" (located in the back of the brain, called pSTS) would quickly look at the shapes and say, "Hey, they are touching! One is moving faster!" (Bottom-up processing).
  2. The "Mentalizing Team" (located further forward, called TPJ) would then take that report and think, "Okay, since they are touching and moving fast, the first one intends to tag the second one." (Top-down, goal-based reasoning).

The Big Question: Is the brain really a factory with separate stations for "seeing" and "thinking"? Or do these teams work together in the same room?

The Experiment: A New Kind of Detective Work

The researchers (Manasi Malik and her team) decided to test this by building two "AI detectives" and seeing which one matched the brain's activity best.

  1. Detective "SocialGNN" (The Bottom-Up Observer): This AI is like a security camera with a super-fast eye. It doesn't know about "goals" or "feelings." It only looks at the raw data: Where are the shapes? How fast are they moving? Did they bump into each other? It builds a picture based purely on what it sees.
  2. Detective "SIMPLE" (The Inverse-Planner): This AI is like a Sherlock Holmes. It doesn't just watch; it imagines. It asks, "If the red shape wanted to catch the blue shape, what path would it take?" It simulates the world in its head, guessing the agents' hidden goals and beliefs to explain the movement.

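To make the contrast concrete, here is a minimal, hypothetical Python sketch of what each detective computes on toy 2-D trajectories. It is not the authors' code: the real SocialGNN is a trained graph neural network and SIMPLE is a full Bayesian inverse planner. The function names, the alignment-based likelihood, and the three candidate goals are illustrative assumptions.

```python
# Toy positions: (T, 2) arrays of x, y coordinates over T video frames.
import numpy as np

def bottom_up_features(pos_a, pos_b, dt=0.1, contact_radius=1.0):
    """SocialGNN-style observer: raw relational facts, no goals.

    Returns per-frame distance, per-step speeds, and a contact flag.
    """
    dist = np.linalg.norm(pos_a - pos_b, axis=1)
    speed_a = np.linalg.norm(np.diff(pos_a, axis=0), axis=1) / dt
    speed_b = np.linalg.norm(np.diff(pos_b, axis=0), axis=1) / dt
    contact = dist < contact_radius  # "did they bump?"
    return {"distance": dist, "speed_a": speed_a,
            "speed_b": speed_b, "contact": contact}

def inverse_planning_score(pos_a, pos_b):
    """SIMPLE-style reasoner: imagine each hidden goal agent A might
    hold, score how well it explains A's observed steps, and return a
    softmax posterior over goals (uniform prior assumed)."""
    steps = np.diff(pos_a, axis=0)                 # A's observed moves
    to_b = pos_b[:-1] - pos_a[:-1]                 # direction toward B
    norms = (np.linalg.norm(steps, axis=1)
             * np.linalg.norm(to_b, axis=1) + 1e-9)
    align = (steps * to_b).sum(axis=1) / norms     # cosine in [-1, 1]
    log_like = {"chase": align.sum(),              # moves toward B
                "flee": (-align).sum(),            # moves away from B
                "ignore": -np.abs(align).sum()}    # motion unrelated to B
    logits = np.array(list(log_like.values()))
    post = np.exp(logits - logits.max())
    return dict(zip(log_like, post / post.sum()))

# A chase: B drifts right while A steadily closes the gap from behind.
t = np.linspace(0.0, 1.0, 50)[:, None]
runner = np.hstack([10 * t, np.sin(6 * t)])
chaser = runner - np.array([1.5, 0.0]) * (1 - t)
print(bottom_up_features(chaser, runner)["contact"][-1])  # True: they meet
print(inverse_planning_score(chaser, runner))             # "chase" wins
```
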
They put 25 people in an fMRI scanner, showed them videos of these animated shapes, and recorded which parts of their brains lit up. Then, they compared the brain's "light patterns" to the "thought patterns" of their two AI detectives.
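
How do you compare light patterns to thought patterns? One standard approach, sketched below under the assumption of an encoding-model, variance-partitioning analysis (the preprint's exact pipeline may differ), is to regress each model's per-video features onto a region's responses and ask how much variance each model explains that the other cannot; that "unique" variance is roughly what the title's "uniquely explain" gestures at.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def explained_variance(X, y, cv=5):
    """Cross-validated R^2 of a ridge regression from model features X
    (videos x features) to one region's response y (videos,)."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    pred = cross_val_predict(model, X, y, cv=cv)
    return 1.0 - np.var(y - pred) / np.var(y)

def variance_partition(X_gnn, X_simple, y):
    """Split the response variance into what only SocialGNN explains,
    what only SIMPLE explains, and what the two models share."""
    r2_gnn = explained_variance(X_gnn, y)
    r2_simple = explained_variance(X_simple, y)
    r2_both = explained_variance(np.hstack([X_gnn, X_simple]), y)
    return {"unique_SocialGNN": r2_both - r2_simple,
            "unique_SIMPLE": r2_both - r2_gnn,
            "shared": r2_gnn + r2_simple - r2_both}
```

The joint model is the key design choice here: if stacking SIMPLE's features on top of SocialGNN's still raises the cross-validated R² in pSTS, then pSTS carries goal information that raw motion features cannot account for, and vice versa for TPJ.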

The Surprise: The Brain is a Hybrid

The researchers expected to find that the "Perception Team" (pSTS) only liked the security camera AI, and the "Mentalizing Team" (TPJ) only liked the Sherlock Holmes AI.

They were wrong.

Instead, they found that both brain regions loved both detectives.

  • The pSTS (the "perception" area) wasn't just watching the shapes; it was also figuring out the hidden goals, just like the Sherlock AI.
  • The TPJ (the "thinking" area) wasn't just guessing goals; it was also paying close attention to the raw movement and bumps, just like the security camera AI.

The Analogy: It's not a factory assembly line where the product moves from Station A to Station B. It's more like a kitchen.

  • Imagine a chef (the brain) making a complex dish.
  • You might think the "chopping station" only chops vegetables and the "sauce station" only mixes spices.
  • But this study shows that the chef at the chopping station is also tasting the sauce, and the chef at the sauce station is also chopping vegetables. They are both doing everything, just at slightly different speeds.

The Twist: It's About Timing, Not Location

If both brain areas are doing both types of work, why do we have two different areas? The answer might be time, not space.

The researchers looked at when the brain reacted during each 10-second video (a toy version of this timing analysis is sketched after the list):

  1. Early in the video: The "Security Camera" (SocialGNN) style of thinking fired up first. The brain quickly grabbed the visual facts: "They are moving! They bumped!"
  2. Later in the video: The "Sherlock Holmes" (SIMPLE) style of thinking took over. The brain started to slow down and reason: "Oh, I see, the red one is trying to trap the blue one."
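
One simple way to run such a timing analysis, assuming a sliding-window correlation (the study's actual method may differ, for instance by modeling the hemodynamic lag explicitly), is sketched below: for each window of the video, ask how well each model's features track the region's response.

```python
import numpy as np

def windowed_model_fit(brain, feat_gnn, feat_simple, win=20, step=5):
    """brain, feat_gnn, feat_simple: (T,) per-frame time courses for one
    region and one summary feature per model. For each sliding window,
    return the Pearson correlation of each model's feature with the
    brain signal."""
    fits = []
    for start in range(0, len(brain) - win + 1, step):
        sl = slice(start, start + win)
        fits.append({
            "window_start": start,
            "SocialGNN": np.corrcoef(brain[sl], feat_gnn[sl])[0, 1],
            "SIMPLE": np.corrcoef(brain[sl], feat_simple[sl])[0, 1],
        })
    return fits
```

Under the paper's result, the SocialGNN column should peak in the early windows and the SIMPLE column in the late ones.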

So, the brain isn't split into "seeing" and "thinking" rooms. Instead, it's a single, powerful engine that starts with fast visual observation and gradually shifts into deep, goal-based reasoning.

Why This Matters

This changes how we understand human social intelligence. We don't just "see" a social interaction and then "think" about it later. Our brains are constantly running two simulations at once: one that tracks the physical world and one that predicts the mental world.

It's like driving a car. You don't just look at the road (bottom-up) and then later decide to turn the wheel (top-down). You are constantly doing both: your eyes track the lane lines while your brain predicts where the other car is going to be in three seconds. The brain is a master of doing both at the same time, using different tools to solve the complex puzzle of human connection.
