Normative Common Ground Replication (NormCoRe): Replication-by-Translation for Studying Norms in Multi-agent AI

This paper introduces NormCoRe, a methodological framework that systematically translates human subject experiments into multi-agent AI environments to study normative coordination. Through a replication of a distributive justice experiment, the authors show that AI agents' normative judgments differ from human baselines and are sensitive to the choice of foundation model and persona.

Luca Deck, Simeon Allmendinger, Lucas Müller, Niklas Kühl

Published 2026-03-13

The Big Idea: Don't Just Copy-Paste; Translate!

Imagine you are a chef who has a famous, award-winning recipe for a cake that humans love. Now, you want to see if a robot chef can make the same cake.

The Old Way (The Mistake):
Most researchers today try to just "copy-paste" the human recipe into the robot's brain. They assume the robot is just a "digital human." They give the robot the same instructions and expect the exact same result. If the robot makes a slightly different cake, they might say, "The robot failed," or "The robot is broken."

The New Way (NormCoRe):
The authors of this paper say: "Wait a minute! A robot isn't a human. It doesn't have a stomach, it doesn't have childhood memories, and it doesn't eat food."

Instead of trying to copy the human experiment exactly, they propose a method called NormCoRe (Normative Common Ground Replication). Think of this as translating a book from English to French. You can't just swap words one-for-one; you have to adapt the meaning, the grammar, and the cultural context so the story makes sense in the new language.

NormCoRe is a guidebook for researchers on how to "translate" human experiments into robot experiments so we can actually learn something useful.


The 4 Layers of Translation

The paper argues that to translate a human experiment into an AI experiment, you have to adjust four specific "layers," like adjusting the settings on a complex machine:

  1. The Brain (Cognitive Layer):

    • Human: Has a brain built by evolution, life experiences, and reading books.
    • AI: Has a "Foundation Model" (like a giant library of text it was trained on).
    • The Translation: You have to choose which library the robot reads from. Does it read mostly American news? Chinese history? This choice changes how the robot thinks, just like a human's background changes their opinion.
  2. The Person (Ontological Layer):

    • Human: A living person with a name, a personality, and a memory of what happened five minutes ago.
    • AI: A digital character (a "persona") created by a prompt.
    • The Translation: You have to decide: Does the robot remember the whole conversation? Does it have a "personality"? If you tell the robot to be "a student in New York" vs. "a student in Beijing," it will act differently, even if the task is the same.
  3. The Conversation (Interactional Layer):

    • Human: People talk over each other, interrupt, use body language, and have social pressure.
    • AI: Robots usually take turns speaking in a strict, orderly line.
    • The Translation: You have to design the rules of the chat. Who speaks first? How long can they talk? If you change the rules of the conversation, the final decision changes.
  4. The Task (Interventional Layer):

    • Human: A researcher hands out papers and watches them.
    • AI: A computer program runs the whole show.
    • The Translation: You have to program exactly how the task happens. Is the robot rewarded with points? Is it forced to agree?
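The four layers above can be pictured as an explicit experiment configuration, where every translation choice is a named setting rather than a hidden default. The sketch below is purely illustrative (not the paper's actual code); all class and field names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: each NormCoRe "layer" becomes an explicit,
# documented design choice instead of an unstated assumption.

@dataclass
class CognitiveLayer:          # the "Brain"
    foundation_model: str      # e.g. a US-based or China-based model

@dataclass
class OntologicalLayer:        # the "Person"
    persona_prompt: str        # who the agent is told it is
    persona_language: str      # language the persona is written in
    has_memory: bool           # does it remember the conversation?

@dataclass
class InteractionalLayer:      # the "Conversation"
    turn_order: str            # e.g. "round_robin"
    max_turns: int

@dataclass
class InterventionalLayer:     # the "Task"
    task_description: str
    must_reach_consensus: bool

@dataclass
class NormCoReTranslation:
    cognitive: CognitiveLayer
    ontological: OntologicalLayer
    interactional: InteractionalLayer
    interventional: InterventionalLayer

    def report(self) -> str:
        """Spell out every translation choice, so readers can see
        exactly which settings produced the result."""
        return (
            f"model={self.cognitive.foundation_model}, "
            f"persona_language={self.ontological.persona_language}, "
            f"turn_order={self.interactional.turn_order}, "
            f"consensus_required={self.interventional.must_reach_consensus}"
        )
```

The point of a structure like this is the `report` method: a replication is only interpretable if every layer's setting is written down and reported alongside the result.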

The Experiment: The "Veil of Ignorance" Cake

To test their method, the authors recreated a famous human experiment about fairness.

The Human Experiment:
Imagine a group of people are asked to design a society. But there's a catch: they have to wear a "Veil of Ignorance." This means they don't know if they will be born rich or poor, smart or not smart. They have to agree on a rule for sharing money before they know their own fate.

  • The Result: Humans usually agree on a rule that says, "Maximize the total money, but make sure the poorest person gets enough to survive."

The AI Experiment (Using NormCoRe):
The researchers created groups of AI agents to play the same game. But they didn't just copy-paste. They used NormCoRe to translate the layers:

  • They tried different "Brains" (US-based AI models vs. Chinese-based AI models).
  • They tried different "Languages" (English, Mandarin, Spanish) for the robot's personality.
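Varying the "Brain" and the "Language" amounts to running the same fairness task over a grid of translation choices, one condition per combination. A minimal sketch of that idea (the model names are placeholders, not the models actually used in the paper):

```python
from itertools import product

# Hypothetical placeholders -- not the paper's actual model names.
foundation_models = ["us_model", "cn_model"]
persona_languages = ["English", "Mandarin", "Spanish"]

# Each (brain, language) pair becomes its own replication condition,
# so any difference in outcomes can be traced to a translation choice.
conditions = [
    {"model": model, "persona_language": language}
    for model, language in product(foundation_models, persona_languages)
]
```

With 2 models and 3 languages this yields 6 conditions, and the same agents can reach different fairness rules in each one.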

What They Found:

  1. The robots agreed more than humans. When humans argued, they disagreed a lot; the robots, by contrast, reached consensus quickly and easily.
  2. The "Brain" mattered. If the robot was trained on Chinese data, it made different fairness choices than if it was trained on US data.
  3. The "Language" mattered. If the robot's personality was described in Spanish, it argued differently than if it was described in English.

The Lesson:
The robots didn't just "act like humans." They acted like robots with specific settings. If you change the settings (the translation), you change the result.


Why This Matters (The "So What?")

Imagine you hire a team of AI robots to decide how to distribute money in a hospital, or how to schedule traffic lights in a city.

  • If you ignore NormCoRe: You might think, "Oh, the robots decided to give the most money to the rich because that's what they chose." You might blame the robots for being greedy.
  • If you use NormCoRe: You realize, "Wait, I programmed them to speak in a specific language and used a specific AI model. That specific combination made them greedy. If I change the language or the model, they might be fair."

The Takeaway:
We cannot treat AI agents as simple substitutes for humans. They are different creatures with different "brains" and "personalities."

NormCoRe is a tool that forces researchers to be honest. It says: "Don't just say 'We ran an experiment.' Tell us exactly how you translated the human rules into robot rules, because those translation choices are what actually created the result."

It's like a recipe card that doesn't just list ingredients, but explains why you swapped butter for oil, so anyone else can understand exactly how the cake turned out.