An abstract model of nonrandom, non-Lamarckian mutation in evolution using a multivariate estimation-of-distribution algorithm

This paper presents a simulation model, based on estimation-of-distribution algorithms, demonstrating how nonrandom, non-Lamarckian mutations driven by internally accumulated genomic information interact with selection and recombination to advance evolution. The model offers a computational framework that aligns with interaction-based evolution theory and connects to Darwinian observations and computational learning theory.

Vasylenko, L., Livnat, A.

Published 2026-04-01

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

The Big Question: How Do We Evolve?

For over a century, scientists have largely assumed there are only two ways heritable change can arise:

  1. The "Random Dice" Theory (Standard Evolution): Mutations are like rolling dice. They happen by pure accident, with no connection to what the animal needs. If a rabbit happens to grow faster legs by accident, it survives. If not, it doesn't. Nature just picks the winners.
  2. The "Lamarckian" Theory (The Old Idea): An animal senses a problem (like a cold winter) and decides to change its genes to grow a thicker coat, passing that new coat to its babies. This idea was largely rejected because it seems too magical—like a computer rewriting its own code just because it's cold outside.

This paper proposes a third option: A "Goldilocks" theory called Interaction-Based Evolution (IBE). It suggests mutations aren't random dice rolls, but they also aren't conscious decisions. Instead, they are smart reactions to the organism's own internal history.


The Analogy: The "Smart Chef" vs. The "Blind Cook"

Imagine you are trying to invent the perfect recipe for a cake.

  • The Random Cook (Standard Evolution): You throw ingredients into a bowl completely at random. Sometimes you get flour and eggs; sometimes you get sand and ketchup. You taste them, throw away the bad ones, and keep the good ones. You have to try millions of random combinations to find a cake.
  • The Lamarckian Cook: You taste the batter, realize it's too sweet, and magically change the recipe in your head to add less sugar before you bake the next batch. This is impossible in biology.
  • The Smart Chef (This Paper's Model): You look at the cakes that worked well yesterday. You notice a pattern: "Every time I used vanilla and cinnamon together, the cake was great." You don't just guess randomly; you learn from the successful cakes of the past. You then use that knowledge to mix the ingredients for today's batch. The new ingredients aren't random, but they aren't magic either—they are based on the history of what worked.

The Computer Experiment: The "Restricted Boltzmann Machine"

The authors didn't use real animals; they used a computer simulation. They set up a puzzle (a classic optimization problem called MAX-SAT, where the goal is to satisfy as many logical constraints as possible) in which a population of "digital organisms" had to find the best solution.
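To make the puzzle concrete, a MAX-SAT objective can be sketched in a few lines of Python. The clauses below are invented for illustration (the paper's actual problem instances are not reproduced here); the idea is just that the fitness of a binary genome is the number of logical clauses it satisfies.

```python
def maxsat_fitness(genome, clauses):
    """Count satisfied clauses. Literals use the standard 1-based
    convention: +i means "bit i-1 must be 1", -i means "bit i-1 must be 0".
    A clause is satisfied if at least one of its literals is true."""
    satisfied = 0
    for clause in clauses:
        for lit in clause:
            bit = genome[abs(lit) - 1]
            if (lit > 0) == (bit == 1):
                satisfied += 1
                break  # one true literal is enough for this clause
    return satisfied

# Three illustrative clauses over 3 bits:
# (x1 OR x2), (NOT x1 OR x3), (NOT x2 OR NOT x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(maxsat_fitness([1, 0, 1], clauses))  # 3: all clauses satisfied
```

Evolution in this setting means finding a bit string that satisfies as many clauses as possible; the "fitness" of each digital organism is this count.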

They compared two groups:

  1. Group A (Random Mutation): They took the best digital organisms, copied them, and randomly flipped a few bits (like flipping a coin to change a 0 to a 1).
  2. Group B (The "Smart" Model): They took the best organisms and fed their data into a special AI brain (called a Restricted Boltzmann Machine, a simple neural network that learns which features tend to appear together in its training data). This AI brain looked at the winners and asked, "What patterns do these winners share?" It then used those patterns to generate the next generation.

The Result:
The "Smart" group (Group B) solved the puzzle much faster and better than the "Random" group.

  • Why? Because the AI brain noticed that certain genes (bits of code) worked well together. It didn't just flip bits randomly; it flipped them in a way that respected those partnerships.
  • The Metaphor: Imagine a dance troupe. The Random group keeps swapping dancers randomly. The Smart group watches who dances well together and ensures those pairs stay together in the next show.

Key Concepts Made Simple

1. The "Used-Together-Fused-Together" Rule

In the real world, if two genes are used together constantly (like genes for running and genes for breathing), they might physically stick together or mutate in a way that keeps them linked.

  • Analogy: Think of a toolbox. If you always use a hammer and a nail together, you might eventually glue them into a single "Hammer-Nail" tool. You don't need to invent a new tool from scratch; you just simplify the process by combining what you already use.
  • The Paper's Point: Mutations aren't random accidents; they are often the result of the genome "gluing together" things that have been working well together for a long time.

2. Simplicity Creates Complexity

Usually, we think "simple" means "less complex." But this paper argues that simplification is the engine of complexity.

  • Analogy: Think of learning to drive. At first, you have to consciously think about every gear shift, mirror check, and pedal press (complex). Eventually, your brain "chunks" these actions into one smooth motion: "Drive." You simplified the process, which freed up your brain to learn how to race or drive in snow (complex new skills).
  • The Paper's Point: Evolution simplifies the internal rules of the organism. Once the organism is "simplified" and efficient, it can handle much more complex challenges.

3. The Bell Curve Mystery

In nature, traits like height form a "Bell Curve" (most people are average; few are very tall or very short). Scientists usually explain this by saying that many genes each contribute a small additive effect, and those effects simply sum up (1 + 1 + 1 = 3).

  • The Paper's Twist: The simulation showed you can get a Bell Curve even when genes interact in complex, non-additive ways. It's not just a math sum; it's a complex dance where the whole is greater than the sum of its parts.
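This twist can be illustrated with a toy simulation (the genetic architecture and all numbers here are invented for illustration, not taken from the paper): even when a trait is built entirely from pairwise gene-gene interactions, with no fixed additive effect per gene, the trait distribution across a random population still comes out roughly bell-shaped, with about 95% of individuals within two standard deviations of the mean.

```python
import random
import statistics

random.seed(1)
N_GENES, N_PAIRS, N_IND = 60, 200, 5000  # illustrative sizes

# Non-additive architecture: the trait is a sum of weighted PAIRWISE
# PRODUCTS of distinct genes, so no gene has a standalone additive effect.
pairs = [(*random.sample(range(N_GENES), 2), random.gauss(0, 1))
         for _ in range(N_PAIRS)]

def trait(genome):
    # A pair contributes only when BOTH of its genes are "on".
    return sum(w * genome[i] * genome[j] for i, j, w in pairs)

# Measure the trait across a population of random genomes.
values = [trait([random.randint(0, 1) for _ in range(N_GENES)])
          for _ in range(N_IND)]

mean = statistics.mean(values)
sd = statistics.stdev(values)
within_2sd = sum(abs(v - mean) < 2 * sd for v in values) / N_IND
print(f"mean={mean:.2f} sd={sd:.2f} fraction within 2 sd={within_2sd:.3f}")
```

The fraction within two standard deviations lands near the 95% expected of a normal curve, even though nothing in the model is a simple sum of independent gene effects.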

Why Does This Matter?

This paper suggests that evolution is a form of learning.

  • The Population is the Student: The group of organisms learns from its collective history.
  • The Environment is the Teacher: It tells the group what works (survival) and what doesn't (death).
  • The Mutation is the Homework: Instead of guessing randomly, the organism uses its internal "notes" (accumulated genetic history) to figure out the next step.

The Bottom Line

This paper argues that we have been thinking about evolution wrong. We assumed mutations were either random accidents or magical wishes.

Instead, mutations are intelligent summaries of the past. They are the genome saying, "We've tried a million things. These specific combinations worked. Let's build the next generation based on those successes, not on a coin flip."

It bridges the gap between biology and computer science, showing that life evolves not just through survival of the fittest, but by learning from the winners.
