From We to Me: Theory Informed Narrative Shift with Abductive Reasoning

This paper proposes a neurosymbolic approach that combines social science theory with abductive reasoning to automatically extract rules that guide Large Language Models in shifting narratives between individualistic and collectivistic framings while preserving the original message's core meaning.

Jaikrishna Manojkumar Patil, Divyagna Bavikadi, Kaustuv Mukherji, Ashby Steward-Nolan, Peggy-Jean Allin, Tumininu Awonuga, Joshua Garland, Paulo Shakarian

Published 2026-03-05

Imagine you are a master translator, but instead of translating words from French to English, you are translating stories from one culture's mindset to another's.

Specifically, this paper tackles the challenge of taking a story written with a Collectivist mindset (where the group, family, and community are everything) and rewriting it to sound Individualistic (where the hero, personal choice, and self-reliance are everything), without losing the original plot.

Here is the breakdown of the paper using simple analogies:

1. The Problem: The "Lazy" Translator

Think of a standard AI (like a basic version of ChatGPT) as a lazy tourist who is asked to translate a story.

  • The Request: "Rewrite this story about a village working together to build a dam, but make it sound like it's about one brave hero doing it alone."
  • The Result: The lazy tourist just changes a few words here and there. They might say, "The hero built the dam," but they forget to change the feeling of the story. The AI often misses the subtle clues that make a story feel "group-oriented" (like phrases such as "all hands on deck" or "we did this").
  • The Paper's Finding: Current AI is bad at this. It either fails to change the story's "soul" or it changes the plot so much that it's no longer the same story.

2. The Solution: The "Detective + Architect" Team

The authors propose a new method called Neurosymbolic Abductive Reasoning. Let's break that scary name down into a team of two:

  • The Detective (Social Science Theory): Before the AI writes anything, a "Detective" (based on real psychology rules about individualism vs. collectivism) reads the story. The Detective asks: "What specific parts of this story make it feel like a group effort?"
    • Example: "Ah, the phrase 'young, old, weak, strong' is a clue that this is about the community. We need to change this."
  • The Architect (Abductive Reasoning): This is the logic engine. It doesn't just guess; it works backward. It asks: "If we want the story to feel like a solo hero, what specific ingredients must we swap out?"
    • It creates a "shopping list" of changes: "Change 'all hands' to 'one determined soul'."

The Magic: The AI (the writer) only gets to work after the Detective and Architect have handed it a precise list of what to change. This prevents the AI from hallucinating or messing up the story.
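The Detective-then-Architect pipeline above can be sketched in a few lines of code. This is a minimal illustration, not the paper's actual system: the cue lexicon, the pattern-to-replacement mappings, and the final `llm_rewrite` hand-off are all hypothetical stand-ins for the theory-derived rule base the authors extract.

```python
import re

# Detective: hypothetical theory-informed cues that mark a passage as
# collectivist, each paired with an individualist replacement.
COLLECTIVIST_CUES = {
    r"\ball hands on deck\b": "one determined soul",
    r"\bwe did this\b": "I did this",
    r"\byoung, old, weak, strong\b": "one person, alone",
}

def detect_cues(story: str) -> list[tuple[str, str]]:
    """Return the (pattern, replacement) pairs whose cue appears in the story."""
    return [(pat, rep) for pat, rep in COLLECTIVIST_CUES.items()
            if re.search(pat, story, flags=re.IGNORECASE)]

def abduce_edit_list(story: str) -> list[str]:
    """Architect: work backward from the target frame to a minimal edit list."""
    return [f"Replace '{pat}' with '{rep}'" for pat, rep in detect_cues(story)]

story = "All hands on deck! Young, old, weak, strong - we did this together."
edits = abduce_edit_list(story)
for e in edits:
    print(e)

# The edit list would then be handed to the LLM as explicit instructions,
# e.g. llm_rewrite(story, edits), so the model only makes those changes.
```

The key design point is the ordering: the writer (the LLM) never sees the story without a concrete, pre-computed list of edits, which is what keeps it from improvising new plot details.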

3. The Analogy: Renovating a House

Imagine you have a house built for a large, multi-generational family (Collectivist). It has one giant dining room, shared bedrooms, and a communal kitchen.

  • The Goal: You want to renovate it to suit a single, independent person (Individualist).
  • The Bad Way (Zero-Shot AI): You just put a "For Sale" sign on the house and hope the new owner likes it. Or, you paint the walls red but leave the giant dining room. It feels weird and mismatched.
  • The Paper's Way:
    1. Inspect: You walk through and identify exactly what needs to change (e.g., "Divide the dining room into a private study," "Lock the communal kitchen").
    2. Plan: You draw a blueprint (the logic rules) that ensures the house still functions as a home, just for one person.
    3. Build: You hire a contractor (the LLM) to make only those specific changes.
    4. Result: The house is now perfect for a solo dweller, but the foundation and the view (the core story) are exactly the same.

4. The Results: Why It Matters

The researchers tested this on many different AI models (GPT-4, Grok, Llama, etc.).

  • Success Rate: Their method was 55% more successful at shifting the story's mindset than simply asking the AI to "do it" (zero-shot prompting).
  • Fidelity: Crucially, their method kept the story's meaning intact. The "Zero-Shot" AI often changed the plot so much it became a different story. The new method kept the story recognizable.
  • Efficiency: The AI didn't need to rewrite the whole story. It only changed about 32% of the words (the specific "clues" the Detective found), leaving the rest untouched.
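A figure like "32% of words changed" can be approximated by aligning the two texts and counting unchanged words. The paper uses its own metric; the sketch below is just one illustrative way to measure it, using Python's standard `difflib`.

```python
import difflib

def word_change_fraction(original: str, rewritten: str) -> float:
    """Rough fraction of the original's words that did not survive the rewrite."""
    a, b = original.split(), rewritten.split()
    matcher = difflib.SequenceMatcher(a=a, b=b)
    # get_matching_blocks() yields the aligned runs of identical words.
    unchanged = sum(block.size for block in matcher.get_matching_blocks())
    return 1 - unchanged / max(len(a), 1)

orig = "All hands on deck, the village built the dam together."
new = "One determined soul, the hero built the dam alone."
print(f"{word_change_fraction(orig, new):.0%} of words changed")
```

An identical pair scores 0, a total rewrite scores 1, and a targeted edit like the paper's lands somewhere in between.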

5. The Big Picture

This isn't just about rewriting fairy tales. This is about communication.

  • If you are a diplomat trying to explain a policy to a culture that values the group, you need to speak their language.
  • If you are a journalist reporting on a tragedy, you might need to frame it differently for different audiences to ensure they understand the cause and effect correctly.

In short: The paper teaches AI how to be a cultural chameleon. It gives the AI a "rulebook" from social scientists so it can change the vibe of a story without breaking the plot, ensuring the message lands perfectly with the intended audience.