Belief-Sim: Towards Belief-Driven Simulation of Demographic Misinformation Susceptibility

The paper introduces Belief-Sim, a framework that uses psychology-informed belief profiles to simulate demographic variations in misinformation susceptibility with Large Language Models, achieving up to 92% accuracy.

Angana Borah, Zohaib Khan, Rada Mihalcea, Verónica Pérez-Rosas

Published 2026-03-05

Imagine you are trying to predict who is most likely to believe a fake news story.

In the past, researchers tried to guess this by looking at demographics (who the person is). They thought, "Maybe older people believe more fake news," or "Maybe people with less education are more susceptible." It's like trying to guess someone's favorite movie just by knowing their age and zip code. Sometimes it works, but often it's a wild guess that leads to stereotypes.

This paper introduces a new, smarter way to simulate human behavior using AI. They call it Belief-Sim.

Here is the core idea, explained through a simple analogy:

The "Personality Profile" vs. The "Resume"

Think of a person's Demographics (Age, Gender, Location) as their Resume. It lists the facts about them, but it doesn't tell you how they think.

Think of a person's Beliefs (Do they trust science? Are they worried about the future? Do they value tradition?) as their Personality Profile. This tells you how they process information.

The authors argue that if you want an AI to act like a specific group of people, giving it the Resume isn't enough. You have to give it the Personality Profile.

How They Did It (The Recipe)

The researchers built a system called Belief-Sim to test this. Here is how they cooked it up:

  1. The Ingredients (The Taxonomy): They created a "menu" of 7 different types of beliefs, drawn from psychology.

    • Worldview: How you see your place in the world.
    • Trust: Who do you trust? (Scientists? The government? Your neighbor?)
    • Thinking Style: Do you think logically, or go with your gut?
    • Conspiracy: Do you think there are secret plots everywhere?
    • Values: What is right and wrong to you?
    • Emotions: Are you easily scared or angry?
    • Shortcuts: Do you believe things just because you've heard them before?
  2. The Simulation (The AI Chef): They used Large Language Models (LLMs)—the same kind of AI that writes essays and chats with you.

    • Old Way: They told the AI, "Pretend you are a 65-year-old woman living in a rural area."
    • New Way (Belief-Sim): They told the AI, "Pretend you are a 65-year-old woman living in a rural area who deeply trusts science, is very worried about the economy, and prefers to think things through logically before believing news."
  3. The Test: They fed the AI real news headlines (some true, some fake) and asked, "Is this true or false?" They then compared the AI's answer to what real humans actually said.
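The two prompting styles in Step 2 can be sketched as a small helper. This is illustrative only: the field names and the wording of the belief statements are my assumptions, not the paper's actual prompt templates.

```python
# Sketch of the two prompting styles (illustrative; not the paper's exact templates).

def demographic_prompt(age, gender, location):
    """Old way: a persona built from the 'Resume' (demographics only)."""
    return f"Pretend you are a {age}-year-old {gender} living in a {location} area."

def belief_sim_prompt(age, gender, location, beliefs):
    """New way: demographics plus a 'Personality Profile' of beliefs."""
    base = demographic_prompt(age, gender, location)
    # `beliefs` maps taxonomy categories (Trust, Emotions, Thinking Style, ...)
    # to short natural-language statements about this persona.
    profile = " ".join(beliefs.values())
    return f"{base} {profile}"

prompt = belief_sim_prompt(
    65, "woman", "rural",
    {
        "Trust": "You deeply trust science.",
        "Emotions": "You are very worried about the economy.",
        "Thinking Style": "You prefer to think things through logically before believing news.",
    },
)
print(prompt)
```

Either string would then be sent to the LLM as the persona instruction, followed by a headline and the true/false question.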

The Big Discovery

The results were surprising and very clear:

  • Demographics alone are weak. Just telling the AI the person's age or gender didn't help much. In fact, it sometimes made the AI worse at guessing, because it started relying on stereotypes (e.g., "Old people must believe this").
  • Beliefs are the secret sauce. When the AI was given the "Personality Profile" (the beliefs), its predictions became far more accurate, reaching up to 92% accuracy in some cases.
  • The "Belief" matters more than the "Person." The study found that what you believe is a much stronger predictor of whether you'll fall for fake news than who you are.
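At its core, the accuracy number is just an agreement rate: for each headline, does the simulated answer match what real respondents said? A minimal sketch, with made-up answer lists for illustration:

```python
# Minimal sketch of the evaluation: agreement between simulated and human answers.
# The answer lists below are invented for illustration.

def accuracy(simulated, human):
    """Fraction of headlines where the simulated answer matches the human one."""
    assert len(simulated) == len(human)
    matches = sum(s == h for s, h in zip(simulated, human))
    return matches / len(human)

human_answers     = ["true", "false", "false", "true", "false"]
demographics_only = ["true", "true",  "false", "true", "true"]   # stereotyped guesses
belief_sim        = ["true", "false", "false", "true", "false"]

print(accuracy(demographics_only, human_answers))  # 0.6
print(accuracy(belief_sim, human_answers))         # 1.0
```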

The "Two-Step" Trick (BAFT)

The researchers also found a clever way to train the AI so it doesn't get confused. Imagine you are teaching a student:

  • Step 1: First, teach the student how different groups of people generally think (using survey data).
  • Step 2: Then, teach the student how to apply those thoughts to specific news stories.

They call this BAFT (Belief-Adapter Fine-Tuning). It's like building a solid foundation of "how people think" before asking the AI to solve specific problems. This stopped the AI from cheating by just memorizing stereotypes.
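To make the two-step idea concrete, here is a deliberately toy sketch: it is not the actual adapter fine-tuning, just the same ordering of lessons. Stage 1 distills survey data into general belief-level tendencies; Stage 2 applies those tendencies to a specific headline. All data and thresholds here are invented.

```python
# Toy sketch of the BAFT two-step idea (not the real adapter training):
# Stage 1 learns general belief -> susceptibility tendencies from survey data;
# Stage 2 applies those learned tendencies to a specific headline.

def stage1_learn_tendencies(survey_rows):
    """Aggregate (belief, fell_for_fake) survey rows into per-belief rates."""
    totals = {}
    for belief, fell_for_fake in survey_rows:
        n, k = totals.get(belief, (0, 0))
        totals[belief] = (n + 1, k + int(fell_for_fake))
    return {belief: k / n for belief, (n, k) in totals.items()}

def stage2_predict(tendencies, persona_beliefs, headline_is_fake):
    """Predict whether this persona accepts the headline."""
    rate = sum(tendencies[b] for b in persona_beliefs) / len(persona_beliefs)
    if not headline_is_fake:
        return "believes"  # simplification: true headlines are accepted here
    return "believes" if rate > 0.5 else "rejects"

survey = [("conspiracy", True), ("conspiracy", True), ("conspiracy", False),
          ("analytic", False), ("analytic", False), ("analytic", True)]
tendencies = stage1_learn_tendencies(survey)
print(stage2_predict(tendencies, ["conspiracy"], headline_is_fake=True))  # believes
print(stage2_predict(tendencies, ["analytic"], headline_is_fake=True))    # rejects
```

The point of the ordering is the same as in the paper's framing: the general "how people think" knowledge is fixed first, so the second step cannot shortcut its way to stereotypes.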

Why This Matters

This isn't just about making AI smarter; it's about saving us from fake news.

If we want to stop misinformation, we can't just target people based on their age or location. We need to understand their beliefs.

  • If someone believes in conspiracies, they need a different kind of fact-check than someone who just doesn't trust the media.
  • If we know why a group believes a lie (e.g., because they are scared or because they distrust science), we can design better messages to stop it.

In a nutshell: To understand why people believe fake news, stop looking at their ID card (demographics) and start listening to their inner voice (beliefs). The AI learned this, and now we can too.