Increasing intelligence in AI agents can worsen collective outcomes

This paper demonstrates that increasing the sophistication and diversity of AI agents competing for scarce resources can paradoxically worsen collective outcomes and increase system overload. Which design wins depends on the capacity-to-population ratio, not on the agents' inherent intelligence.

Neil F. Johnson

Published Fri, 13 Ma

Imagine a crowded room full of smart robots, all trying to get through a single door to get a prize (like a charging station, a Wi-Fi signal, or a turn to speak). The door can only let a certain number of robots through at once. If too many try to push through at the same time, the door jams, everyone gets stuck, and no one gets the prize. This is the "traffic jam" of the AI world.

This paper asks a fascinating question: If we make these robots smarter, will they figure out how to share the door nicely, or will they make the traffic jam worse?

The answer is surprising: Sometimes, making them smarter actually makes the chaos worse.

Here is the story of the experiment, broken down into simple concepts:

1. The Setup: The "Lord of the Flies" of Robots

The researchers set up a game with 7 AI agents (robots) and a limited number of "prizes" (resources).

  • The Robots: They aren't all the same. Some are "Followers" (they trust the group), some are "Rebels" (they do the opposite of the group), and some are in between. They are like the characters in the book Lord of the Flies.
  • The Goal: Each robot has to decide every second: "Do I try to go through the door, or do I wait?"
  • The Catch: They can't talk to a central boss. They have to decide on their own, based only on what happened in the last few seconds.
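This setup can be sketched as a toy simulation. This is not the paper's actual model; the memory length, the bias values, and the decision rule are all illustrative assumptions. Each robot sees only recent attendance at the door, Followers copy the recent crowd, and Rebels do the opposite:

```python
import random

N_AGENTS = 7   # robots in the room
MEMORY = 3     # each robot only sees the last few rounds
CAPACITY = 2   # prizes available per round

def decide(history, bias, rng):
    """Go (True) or wait (False), using only recent attendance.

    bias = +1 is a pure Follower (copies the recent crowd),
    bias = -1 is a pure Rebel (does the opposite), 0 is a coin flip.
    """
    if not history:
        return rng.random() < 0.5
    recent = history[-MEMORY:]
    crowd = sum(recent) / (len(recent) * N_AGENTS)  # recent fraction who went
    p_go = 0.5 + bias * (crowd - 0.5)               # Follower: go when crowded
    return rng.random() < min(1.0, max(0.0, p_go))

rng = random.Random(0)
biases = [1.0, 1.0, 0.5, 0.0, -0.5, -1.0, -1.0]  # Followers ... Rebels
history = []                                      # attendance per round
for _ in range(50):
    history.append(sum(decide(history, b, rng) for b in biases))

overload = [max(0, a - CAPACITY) for a in history]
print(f"mean overload: {sum(overload) / len(overload):.2f}")
```

The key constraint from the paper survives even in this sketch: no central boss, no global plan, only a short window of shared history.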

2. The Four Ingredients of the Experiment

The researchers tweaked four things to see how they changed the outcome:

  1. Nature (The Brain): Are the robots all identical clones, or are they different models with different personalities?
  2. Nurture (Learning): Can the robots learn from their mistakes and change their strategy?
  3. Culture (Tribes): Can the robots sense each other and form "gangs" or "tribes"?
  4. Scarcity (The Door): How many prizes are there compared to the number of robots?

3. The Big Discovery: The "Smart" Trap

The researchers found a "tipping point" based on how crowded the room is.

Scenario A: The Room is Packed (Scarcity)

Imagine there are 7 robots but only 2 prizes.

  • The "Dumb" Strategy (Level 1): If the robots are simple and identical, they act like a random coin flip. Sometimes they jam, sometimes they don't. It's chaotic, but manageable.
  • The "Smart" Strategy (Level 4 & 5): When you give the robots learning abilities and tribal instincts, they get too coordinated.
    • The "Followers" all decide to rush the door at the exact same time.
    • The "Rebels" all decide to rush at the exact same time (just to be different).
    • Result: Instead of spreading out, they clump together in huge waves. The door gets crushed. The system crashes.
    • The Metaphor: It's like a mosh pit. If everyone is just randomly jumping, it's fine. But if everyone starts jumping in perfect unison because they are "smart" and "coordinated," the floor collapses.

In short: When resources are scarce, simplicity is safer than sophistication.
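The mosh-pit point can be checked with a back-of-the-envelope simulation (illustrative numbers, not the paper's experiment): seven robots and two prizes, comparing independent coin-flippers against a perfectly herding crowd that all rushes or all waits on the same shared signal.

```python
import random

random.seed(1)
N, CAPACITY, ROUNDS = 7, 2, 10_000

def mean_overload(correlated):
    total = 0
    for _ in range(ROUNDS):
        if correlated:
            # herd: one shared signal, everyone rushes or everyone waits
            attendance = N if random.random() < 0.5 else 0
        else:
            # independent coin flips, one per robot
            attendance = sum(random.random() < 0.5 for _ in range(N))
        total += max(0, attendance - CAPACITY)
    return total / ROUNDS

indep = mean_overload(correlated=False)
herd = mean_overload(correlated=True)
print(f"independent: {indep:.2f}  herd: {herd:.2f}")
```

Both crowds go through the door the same number of times on average; the herd just does it in huge synchronized waves, so its overload is markedly worse. Coordination, not intent, is what crushes the door.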

Scenario B: The Room is Spacious (Abundance)

Imagine there are 7 robots and 6 prizes.

  • Now, the "Smart" robots shine. Because there is plenty of room, their ability to learn and coordinate helps them avoid the door entirely when it's full, letting others go first.
  • The "Dumb" robots still crash into each other occasionally because they don't learn.
  • Result: The smart robots win.
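A minimal learning rule shows why abundance flips the result. This is a sketch under assumed parameters (starting probability, step size), not the paper's learning mechanism: each robot backs off after a jam and pushes forward when it sees spare room.

```python
import random

random.seed(2)
N, CAPACITY, ROUNDS, STEP = 7, 6, 2000, 0.02

p = [0.3] * N        # each robot's probability of going through the door
throughput = []      # prizes actually claimed each round
for _ in range(ROUNDS):
    went = [random.random() < p[i] for i in range(N)]
    attendance = sum(went)
    throughput.append(min(attendance, CAPACITY))
    for i in range(N):
        if went[i] and attendance > CAPACITY:
            p[i] = max(0.0, p[i] - STEP)   # jammed: back off
        elif not went[i] and attendance < CAPACITY:
            p[i] = min(1.0, p[i] + STEP)   # spare room: try next time

early = sum(throughput[:20]) / 20
late = sum(throughput[-200:]) / 200
print(f"early throughput: {early:.2f}  late: {late:.2f}")
```

With six prizes for seven robots, the learners settle near full utilization with only occasional jams; the same rule under scarcity would just feed the synchronized waves described above.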

4. The "Tribal" Twist

The most interesting part is the "Tribes" (Level 5).

  • When resources are scarce, the robots naturally split into two opposing gangs (e.g., 3 robots vs. 3 robots, with 1 lonely robot in the middle).
  • Why this helps: Even though they are fighting, the gangs are small. They can't all rush the door at once because the gang leaders (the "Followers") only control 3 robots. This limits the damage.
  • Why this hurts (when abundant): If there are plenty of prizes, these gangs are too small. They don't take advantage of the extra space because they are too busy sticking to their gang rules.

5. The "Rich Get Richer" Problem

Here is the dark side of the experiment.

  • When the system crashes (high overload), the individual robots that belong to the winning tribe actually do very well.
  • Imagine the door jams, but the "Follower" tribe gets through 80% of the time because they moved together. They are happy!
  • But the system (the hospital, the city, the battlefield) is failing. Critical alerts are delayed, or cars are stuck.
  • The Lesson: A robot can be "successful" individually while the whole world burns. This is exactly what happens in human societies during crises: the loudest, most coordinated groups often win, even if it destroys the common good.
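This split between individual success and system failure is easy to reproduce in a toy model (the tribe sizes and probabilities here are illustrative, not the paper's numbers): a tribe that always rushes together grabs a disproportionate share of the prizes even while it keeps the door permanently overloaded.

```python
import random

random.seed(3)
N_FOLLOWERS, N_LONERS, CAPACITY, ROUNDS = 4, 3, 2, 10_000

follower_wins = loner_wins = total_overload = 0
for _ in range(ROUNDS):
    at_door = ["F"] * N_FOLLOWERS                      # tribe always rushes together
    at_door += ["L" for _ in range(N_LONERS) if random.random() < 0.5]
    total_overload += max(0, len(at_door) - CAPACITY)
    # the few available slots go randomly to robots at the door
    for winner in random.sample(at_door, min(CAPACITY, len(at_door))):
        if winner == "F":
            follower_wins += 1
        else:
            loner_wins += 1

share = follower_wins / (follower_wins + loner_wins)
print(f"follower prize share: {share:.0%}, mean overload: {total_overload / ROUNDS:.2f}")
```

The tribe is only 4 of 7 robots (a 57% population share) but wins well over that share of the prizes, while every round overloads the door. Individually rational, collectively ruinous.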

The Final Verdict: The "Magic Number"

The paper concludes that we don't need to guess whether to use "smart" AI or "dumb" AI. There is a simple yardstick: the ratio of resources to robots (the capacity-to-population ratio).

  • If you have few resources (Scarcity): Use simple, identical, cheap AI. Do not give them learning skills or the ability to form tribes. Let them act randomly. It prevents the "mosh pit" crash.
  • If you have plenty of resources (Abundance): Use smart, diverse, learning AI. They will optimize the flow and make everything run smoother.
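The design rule above boils down to one division. The 0.5 cutoff below is an illustrative assumption, not a value from the paper; the paper's point is that the ratio, not raw intelligence, is what decides.

```python
def pick_agent_design(resources: int, agents: int, threshold: float = 0.5) -> str:
    """Choose an agent design from the capacity-to-population ratio.

    threshold is illustrative: below it, scarcity favors simple
    identical agents; above it, abundance favors smart learners.
    """
    ratio = resources / agents
    return "simple" if ratio < threshold else "smart"

print(pick_agent_design(2, 7))   # packed room -> "simple"
print(pick_agent_design(6, 7))   # spacious room -> "smart"
```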

The Takeaway:
More intelligence in AI isn't always better. Just like giving a toddler a chainsaw is a bad idea, giving a chaotic crowd of AI agents "tribal" intelligence when they are starving for resources is a recipe for disaster. The solution isn't always "smarter AI"; sometimes, the best solution is "dumber, simpler AI" that knows its place.