Federation over Text: Insight Sharing for Multi-Agent Reasoning

This paper introduces "Federation over Text" (FoT), a framework in which multiple LLM agents iteratively share their reasoning traces and aggregate them into a central metacognitive insight library, significantly improving reasoning accuracy and efficiency across diverse tasks without gradient-based optimization or supervision.

Original authors: Dixi Yao, Tahseen Rabbani, Tian Li

Published 2026-04-21 · Author reviewed

This is an AI-generated explanation of the paper below. It is not written by the authors. For technical accuracy, refer to the original paper.

Imagine a world where every time a brilliant student solves a difficult math problem, they write down their solution, throw it in a trash can, and the next student has to figure out the exact same problem from scratch, never seeing the first student's clever tricks.

That is essentially how most current AI "agents" (smart computer programs) work today. They are incredibly smart, but they suffer from amnesia. Every time they face a new challenge, they start from zero, wasting time and energy reinventing the wheel. They also tend to work in isolation, never sharing their "aha!" moments with their peers.

The paper, "Federation over Text" (FoT), proposes a brilliant solution to this problem. It's like creating a global, shared "Wisdom Library" where AI agents can swap their best thinking strategies without ever revealing their private secrets.

Here is the breakdown using simple analogies:

1. The Problem: The "Reinventing the Wheel" Syndrome

Currently, if you have 100 AI agents trying to solve 100 different math problems, they all work alone.

  • Agent A solves a hard algebra problem by using a specific shortcut.
  • Agent B tries to solve a similar problem but doesn't know the shortcut, so they struggle for hours.
  • The Result: Agent A's hard-earned wisdom is lost. Agent B wastes time. The system is inefficient.

2. The Solution: The "Shared Wisdom Library" (FoT)

The authors propose a framework called Federation over Text. Think of it as a centralized "Recipe Book" that gets updated every day.

Here is how the process works, step by step (a short code sketch follows the list):

  • Step 1: The Local Chef (The Agent)
    Imagine an AI agent is a chef in a restaurant. They are given a specific dish to cook (a task). They cook it using their own skills.
  • Step 2: The "Meta-Commentary" (The Insight)
    Instead of sending the raw ingredients or the messy kitchen to the boss, the chef writes a short, clever note on a card.
    • Bad Note: "I cooked a steak." (Too vague)
    • Good Note (The Insight): "When the pan is too hot, lower the flame immediately and add butter at the end to prevent burning. This works for any searing task."
      This note is called a Reasoning Trace. It's a distilled lesson, not the raw data.
  • Step 3: The Librarian (The Server)
    All the chefs send their cards to a central Librarian. The Librarian doesn't just stack them; they read them, find patterns, and combine similar notes into Master Recipes.
    • Example: If Chef A says "Lower heat for steak" and Chef B says "Lower heat for fish," the Librarian creates a new rule: "Rule: Always lower heat for delicate proteins."
  • Step 4: The Update
    The Librarian sends this updated "Master Recipe Book" back to all the chefs. Now, when Chef B goes to cook a fish tomorrow, they don't have to guess; they just look up the rule in the book and cook perfectly.
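
To make the four steps concrete, here is a minimal Python sketch of one such round. It is an illustration only, not the authors' implementation: the class names (Agent, InsightServer), the llm() helper, and the prompts are assumptions standing in for whatever models and prompt templates a real system would use.

```python
# Minimal sketch of one "Federation over Text"-style round (illustrative, not the paper's code).
from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Stand-in for a language-model call; returns a canned string here."""
    return f"[model output for: {prompt[:40]}...]"


@dataclass
class Agent:
    """A local agent: solves its own tasks and distills short, reusable notes."""
    name: str
    library: list[str] = field(default_factory=list)  # shared insights received so far

    def solve(self, task: str) -> str:
        # Condition the attempt on the current shared insight library (the "recipe book").
        hints = "\n".join(f"- {tip}" for tip in self.library) or "- (none yet)"
        return llm(f"Known insights:\n{hints}\n\nSolve this task:\n{task}")

    def distill_insight(self, attempt: str) -> str:
        # Turn the raw working into one short, general lesson; only this distilled
        # note (not the underlying task or data) is sent to the server.
        return llm(f"From this solution attempt, state one reusable lesson:\n{attempt}")


class InsightServer:
    """The 'librarian': merges incoming notes into one deduplicated master library."""

    def __init__(self) -> None:
        self.master_library: list[str] = []

    def aggregate(self, notes: list[str]) -> list[str]:
        # Read all notes, find patterns, and combine near-duplicates into general rules.
        merged = llm(
            "Merge these lessons, combining near-duplicates into general rules:\n"
            + "\n".join(notes)
        )
        self.master_library = merged.splitlines()
        return self.master_library


# One federation round: agents work locally, send only distilled text notes,
# and receive the merged library back for the next round.
agents = [Agent("A"), Agent("B")]
tasks = {"A": "Solve 3x + 5 = 20", "B": "Balance the equation H2 + O2 -> H2O"}
server = InsightServer()

notes = [agent.distill_insight(agent.solve(tasks[agent.name])) for agent in agents]
updated = server.aggregate(notes)
for agent in agents:
    agent.library = updated  # every chef gets the updated recipe book
```

Run over many rounds, the same loop lets every agent start each new task with the whole group's accumulated lessons rather than a blank slate.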

3. Why is this special? (The "Text" vs. "Gradients" Magic)

In traditional federated learning, computers share "weights" or "gradients" (the raw numbers that define the model's brain). This is like trying to share a brain by sending someone a pile of sand: it's messy, it requires huge computing power, and you can't read it.

FoT is different. It shares Text (ideas and language); a short sketch after the list below shows what such a shared message could look like.

  • Analogy: Instead of trying to merge two people's brains by mixing their DNA (gradients), FoT is like two people sitting down to have a conversation and agreeing on a set of best practices.
  • Privacy: Because they only share the lessons (the "how-to" notes) and not the problems (the raw data), the agents don't need to reveal their private information. It's like sharing a tip on how to fix a leaky faucet without showing the plumber your entire house blueprint.
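
To make the privacy point concrete, here is a small, hypothetical example of what stays on an agent's machine versus the only thing it sends to the librarian/server. The field names and contents are invented for this illustration; the paper does not prescribe a particular message format.

```python
# Hypothetical illustration of the privacy split. Field names are invented for
# this example and are not taken from the paper.

local_record = {
    # Private material: the actual problem and the full working never leave the agent.
    "task": "Choose the price in [10, 50] that maximizes profit for an unreleased product",
    "full_working": "Step 1: write profit as a function of price ... Step 7: compare candidates",
}

shared_note = {
    # The only thing sent to the server: a short, general lesson in plain text.
    "agent": "B",
    "insight": (
        "When maximizing over a closed interval, check the endpoints as well as "
        "the interior critical points."
    ),
}

print(shared_note)  # this is all the server ever sees
```

The note carries the lesson but not the task or its data, which is the sense in which the agents keep their raw problems private while still sharing what they learned.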

4. The Results: Superpowers for Everyone

The paper tested this on three tough challenges:

  1. Math: Agents solved harder math problems faster and with fewer mistakes.
  2. Cross-Domain: An agent learned a math trick and successfully applied it to a chemistry problem (for example, using a logic-puzzle technique to work out a molecule's structure).
  3. Research: Agents read old scientific papers and created a "cheat sheet" that predicted the core ideas of future papers with over 90% accuracy.

The Bottom Line:
Federation over Text turns a group of isolated geniuses into a collective super-intelligence. It allows AI to learn from its own past mistakes and successes, share those lessons instantly, and get better at everything—math, coding, and science—without needing to be retrained from scratch.

It's the difference between a student who studies alone in a library and a student who has access to a living, breathing textbook written by thousands of other students, updated every single day.
