Universal statistical signatures of evolution in artificial intelligence architectures

This paper demonstrates that the statistical laws governing artificial intelligence architectural evolution, including the distribution of fitness effects and patterns of convergence, mirror those of biological evolution, suggesting that evolutionary dynamics are substrate-independent and driven by fitness landscape topology rather than specific selection mechanisms.

Theodor Spiro

Published 2026-04-14

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

Imagine you are a master chef trying to invent the perfect recipe for a new dish. You have a huge kitchen full of ingredients (the AI architecture), and you want to see what happens if you change things.

  • What if I take out the salt? (Maybe the dish tastes terrible.)
  • What if I swap the oven for a microwave? (Maybe it cooks faster, maybe it burns.)
  • What if I add a pinch of cinnamon? (Maybe it's slightly better, maybe it doesn't matter.)

This paper is a massive study of exactly that process, but instead of food, the "chefs" are computer scientists designing Artificial Intelligence. The author, Theodor Spiro, asked a big question: Does the way AI evolves look like the way animals and plants evolve in nature?

To answer this, he didn't just guess; he looked at 935 real experiments from 161 different scientific papers. He treated every time a scientist removed or changed a part of an AI as a "mutation" and measured how much it helped or hurt the AI's performance.

Here are the four main discoveries, explained with simple analogies:

1. The "Recipe Test" (Most changes make things worse)

In nature, if you randomly change a gene in a virus or a fruit fly, it usually hurts the organism. Very rarely does it help.

  • The AI Finding: The study found that AI behaves exactly the same way.
    • 68% of the changes made the AI worse (like removing the salt from soup).
    • 19% made no difference at all (like swapping a red spoon for a blue one).
    • Only 13% actually made the AI better.
  • The Analogy: Imagine trying to fix a car by randomly swapping parts. Most of the time, the car won't start. Sometimes, it runs the same. Very rarely, you accidentally put in a better engine. The AI follows this same "risky business" rule as nature.
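The three-way split above can be sketched in code. This is a toy illustration, not the paper's analysis: the effect sizes and the ±1% "neutral band" threshold below are assumptions chosen only to show how each "mutation" would be sorted into one of the three categories.

```python
def classify(delta, neutral_band=0.01):
    """Label a performance change as deleterious, neutral, or beneficial."""
    if delta < -neutral_band:
        return "deleterious"
    if delta > neutral_band:
        return "beneficial"
    return "neutral"

# Each number is a made-up fractional performance change after one
# architectural change (ablation or modification).
effects = [-0.12, -0.03, 0.0, 0.04, -0.20, 0.005, -0.08, 0.02, -0.01, -0.15]

counts = {"deleterious": 0, "neutral": 0, "beneficial": 0}
for d in effects:
    counts[classify(d)] += 1

for label, n in counts.items():
    print(f"{label}: {n / len(effects):.0%}")
```

Run over 935 real experiments instead of ten invented numbers, this kind of tally is what produces the 68% / 19% / 13% split.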

2. The "Directed Search" Advantage

Here is the one big difference. In nature, mutations happen by accident (blind search). In AI, scientists intentionally try to make changes they think will help (directed search).

  • The AI Finding: Because scientists are "sighted" (they know what they are looking for), they found a higher percentage of "good" changes (13%) compared to nature (usually 1–6%).
  • The Analogy: If you are looking for a needle in a haystack, a blindfolded person (nature) will rarely find it. A person with a flashlight (AI engineers) will find it much more often. But even with the flashlight, the shape of the haystack (the difficulty of the task) remains the same. The "rules of the game" haven't changed, even if the players are smarter.
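The flashlight analogy can be made concrete with a tiny Monte Carlo sketch. Everything here is assumed for illustration: the 5% beneficial fraction, the 60% "hint accuracy," and the enrichment inside the hinted region are invented numbers, used only to show how a noisy flashlight raises the hit rate without changing the haystack itself.

```python
import random

random.seed(0)

# Assumed toy landscape: 5% of possible changes are beneficial.
BENEFICIAL_FRACTION = 0.05

def blind_trial():
    """Nature: pick a change uniformly at random."""
    return random.random() < BENEFICIAL_FRACTION

def directed_trial(hint_accuracy=0.6):
    """Engineers: a noisy 'flashlight' steers some trials toward good regions.
    With probability hint_accuracy the trial lands in an enriched region;
    otherwise it falls back to a blind draw."""
    if random.random() < hint_accuracy:
        return random.random() < 0.25  # assumed enrichment in the hinted region
    return blind_trial()

N = 100_000
blind_hits = sum(blind_trial() for _ in range(N)) / N
directed_hits = sum(directed_trial() for _ in range(N)) / N

print(f"blind search hit rate:    {blind_hits:.1%}")
print(f"directed search hit rate: {directed_hits:.1%}")
```

The directed searcher finds beneficial changes several times more often, yet the landscape (the fraction of changes that are actually good) was never altered: the same point the paper makes about engineers versus natural selection.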

3. The "Boom and Bust" of New Ideas

Nature has periods where life explodes with new types of animals (like after a mass extinction), followed by long periods where not much new happens.

  • The AI Finding: AI history shows the exact same pattern.
    • The Boom: In 2017, a new idea called "Transformers" exploded, creating many new AI types. In 2021, "Diffusion models" (the tech behind image generators) did the same.
    • The Bust: Between these booms, things were quiet.
  • The Analogy: Think of AI history like a forest fire. For years, the forest is quiet. Then, a spark hits, and suddenly the whole forest is full of new, fast-growing plants (new AI models). Eventually, the forest gets full, and it's hard for new types of plants to squeeze in. The study predicts we are getting close to that "full forest" limit.
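The "full forest" idea is essentially logistic growth, and a few lines of code show its shape. The growth rate and carrying capacity below are illustrative assumptions, not values fitted by the paper.

```python
def logistic_step(n, rate=0.8, capacity=100.0):
    """One generation of logistic growth: fast when the forest is nearly
    empty, slowing to a crawl as it fills toward capacity."""
    return n + rate * n * (1 - n / capacity)

diversity = 1.0  # one new architecture family right after the spark
for year in range(12):
    diversity = logistic_step(diversity)
    print(f"year {year + 1}: {diversity:5.1f} families")
```

Early years show explosive gains (the boom); later years add almost nothing (the bust), which is the qualitative pattern the study reports across AI's history.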

4. Different Chefs, Same Dishes (Convergent Evolution)

In nature, sharks (fish) and dolphins (mammals) evolved separately, but they both developed fins and streamlined bodies because those shapes are best for swimming. This is called "convergent evolution."

  • The AI Finding: Different research groups, working on totally different problems (like recognizing faces vs. writing poems), independently invented the exact same tools.
    • They all invented "Attention" mechanisms (like a camera eye focusing on what's important).
    • They all invented "Gating" (like a door that opens only for specific information).
  • The Analogy: It's like if a chef in Tokyo and a chef in New York, who have never met, both decided that the best way to chop onions was with a specific curved knife. It proves that there are only a few "best ways" to solve these problems, no matter who is doing the cooking.
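The "door that opens only for specific information" can be sketched in a few lines. This is a minimal illustration of the gating idea in general, not code from any particular model, and the gate values here are hand-picked rather than learned.

```python
import math

def sigmoid(x):
    """Squash any number into the range (0, 1): the 'openness' of the door."""
    return 1.0 / (1.0 + math.exp(-x))

def gated(signal, gate_logit):
    """Scale a signal by the gate: near 1 the door is open, near 0 it is shut."""
    return sigmoid(gate_logit) * signal

print(gated(10.0, 6.0))   # gate is nearly 1: almost everything passes
print(gated(10.0, -6.0))  # gate is nearly 0: almost nothing passes
```

In a real network the gate's input is learned, so the door opens for useful information and closes for noise; the sketch only shows the mechanism both research communities converged on.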

The Big Conclusion: The Map is the Same, the Traveler is Different

The most important takeaway is this: Evolution follows the same statistical laws whether the evolving system is made of flesh and blood or of silicon and code.

The author argues that the "landscape" of possibilities (the map of what works and what doesn't) is shaped by the problem itself, not by how you are trying to solve it.

  • Whether you are a species evolving over millions of years or an engineer tweaking code over a few years, you are climbing the same mountain.
  • The mountain has steep cliffs (bad changes), flat plateaus (neutral changes), and a few hidden peaks (good changes).
  • The only difference is that engineers have a helicopter (directed search) to get to the peaks faster, but the mountain itself hasn't changed.

Why does this matter?
It means we can use the rules of biology to predict how AI will grow, and we can use the fast history of AI to test theories about how evolution works in nature. It suggests that the universe has a "grammar" for how complex things evolve, and we are finally starting to read it.
