Inherited or produced? Inferring protein production kinetics when protein counts are shaped by a cell's division history

This paper tackles the problem of inferring protein production kinetics in dividing cells, where standard likelihood methods become intractable because protein counts are shaped by a cell's non-Markovian division history. Using conditional normalizing flows trained on simulations, the authors find that the yeast *glc3* gene is mostly inactive under nutrient stress, switching on only in brief, transient bursts.

Original authors: Pedro Pessoa, Juan Andres Martinez, Vincent Vandenbroucke, Frank Delvigne, Steve Pressé

Published 2026-04-10

This is an AI-generated explanation of a preprint that has not been peer-reviewed. It is not medical advice. Do not make health decisions based on this content.

The Big Problem: The "Family Heirloom" Confusion

Imagine you are a detective trying to figure out how fast a factory is producing a specific toy (a protein). You walk into the factory and take a snapshot of the workers holding these toys.

In a normal factory, if you see a worker holding 10 toys, you might guess they made 10 toys recently. But in a cell, things are weird. When a cell divides (splits into two), it doesn't throw away its toys. Instead, it passes them down to its children, like a family heirloom.

So, if you see a cell holding 100 toys, it might be because:

  1. The factory is running at full speed right now (high production).
  2. OR, the factory has been quiet for a long time, but the cell just inherited a huge pile of toys from its great-grandmother, grandmother, and mother.

The Mistake: If you ignore the "family history" (the cell's division history), you will think the factory is super busy when it's actually sleeping. This is the problem the authors faced: How do you tell the difference between a toy made today and a toy inherited from yesterday?

The Old Way vs. The New Way

The Old Way (The "Perfect Math" Approach):
Scientists usually try to write a perfect math equation (a likelihood) to predict what they should see. They assume that cells behave like simple, predictable machines where the past doesn't matter (Markovian).

  • The Problem: Cell division isn't simple. It's messy, irregular, and depends on history. Trying to write a math equation for this is like trying to predict the exact path of a leaf blowing in a hurricane using a ruler. It's too complex; the math breaks.

The New Way (The "Video Game" Approach):
The authors realized that while writing the math equation is impossible, simulating the process is easy.

  • The Analogy: Think of a video game. You can't write a single equation that predicts exactly where every pixel will be in a complex 3D world. But you can run the game. You can press "start," let the game run, and see what happens.
  • The Innovation: The authors built a "Video Game Simulator" for cells. They ran millions of simulations with different settings (e.g., "What if the factory makes toys fast?" vs. "What if it makes them slow?").
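The "Video Game Simulator" idea can be sketched in a few lines. The sketch below is a toy two-state ("telegraph") gene model with stable protein and roughly even splitting at division; all rates, times, and parameter names are illustrative placeholders chosen for this sketch, not values or code from the paper.

```python
import random

def binomial(n, p, rng):
    """Stdlib-only binomial draw: how many of n proteins a daughter keeps."""
    return sum(1 for _ in range(n) if rng.random() < p)

def simulate_lineage(k_on=0.02, k_off=0.5, k_prod=20.0,
                     t_div=90.0, n_divisions=8, dt=1.0, seed=1):
    """Follow one daughter cell through several divisions.

    The gene flips between OFF and ON; proteins are made only while ON,
    never degrade (they are stable "heirlooms"), and are split roughly
    in half at each division. Production is simplified to a fixed
    amount per time step while ON.
    """
    rng = random.Random(seed)
    on, proteins = False, 0
    counts_at_division = []
    for _ in range(n_divisions):
        t = 0.0
        while t < t_div:
            if on:
                proteins += int(k_prod * dt)     # make proteins while ON
                if rng.random() < k_off * dt:    # likely to switch OFF soon
                    on = False
            elif rng.random() < k_on * dt:       # rare chance to switch ON
                on = True
            t += dt
        proteins = binomial(proteins, 0.5, rng)  # division: keep ~half
        counts_at_division.append(proteins)
    return counts_at_division
```

Running many such simulations over different rate settings is the "press start and watch" step the authors rely on: each run is cheap, even though no closed-form equation describes its outcome.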

The Secret Weapon: The "Neural Network Translator"

Here is the tricky part: They have the simulator, but they need to go backward. They have the real data (the snapshot of the factory), and they need to find the settings that created it.

Usually, you need a math formula to do this. Since they don't have one, they used a neural network (specifically, a conditional normalizing flow, a type of AI that learns probability distributions) to act as a translator.

  1. Training the AI: They fed the AI millions of pairs of data:
    • Input: "Here are the factory settings (e.g., fast production)."
    • Output: "Here is the result of the simulation (the snapshot of toys)."
  2. The Learning: The AI learned to recognize the pattern. It learned, "Oh, when I see this specific pattern of toy counts, it usually comes from these specific settings."
  3. The Result: The AI became a "Likelihood Machine." It could look at a real snapshot and say, "I'm 99% sure this came from a factory running at 5% speed, even though it looks like it's running at 100%."
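The "Likelihood Machine" loop can be illustrated with a deliberately simplified stand-in: instead of a conditional normalizing flow, this sketch estimates "how likely are these settings, given the data?" by brute-force simulation and counting. The forward model `simulate_count`, the candidate rates, and all numbers are invented for illustration only.

```python
import random

def simulate_count(k_prod, rng):
    """Hypothetical forward model: a noisy protein count for a given
    production rate (a stand-in for a full lineage simulator)."""
    n_trials, p = 1000, k_prod / 1000.0
    return sum(1 for _ in range(n_trials) if rng.random() < p)

def approximate_likelihood(observed, candidate_rates, n_sims=2000, seed=0):
    """Estimate p(observed | rate) for each candidate rate by simulating
    many counts and checking how often they land near the observation.
    This crude counting estimate plays the role the paper assigns to a
    trained conditional normalizing flow."""
    rng = random.Random(seed)
    return {rate: sum(1 for _ in range(n_sims)
                      if abs(simulate_count(rate, rng) - observed) <= 3) / n_sims
            for rate in candidate_rates}

scores = approximate_likelihood(observed=50, candidate_rates=[10.0, 50.0, 100.0])
best = max(scores, key=scores.get)  # the settings most likely to explain the data
```

The flow in the paper replaces the counting step with a learned neural density estimate, which still works when the data are high-dimensional and exact matches are essentially never simulated.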

The Real Discovery: The "Sleeping Giant"

They applied this new method to yeast cells (a type of fungus) under stress (starvation). They were looking at a gene called *glc3*, which helps the yeast store energy as glycogen.

What they expected:
The yeast was starving (high stress). The fluorescence (glow) was super bright. They thought, "The cells must be frantically working to make energy proteins!"

What they actually found:
Using their "Video Game + AI" method, they discovered the opposite.

  • The cells were actually mostly asleep (the gene was inactive 95% of the time).
  • However, every once in a while, a cell would wake up for a split second, make a huge burst of proteins, and go back to sleep.
  • Because the proteins last a long time and get passed down to children, the "heirlooms" piled up.

The Analogy:
Imagine a house that looks like a mess of toys everywhere.

  • Naive View: "The kid must be playing with toys right now!"
  • Real View: "The kid hasn't played in weeks. But every month, they get a new box of toys from their parents, and they never throw the old ones away. The mess is just a pile of old boxes."
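The pile-up of "heirlooms" is easy to check with toy arithmetic (invented numbers, chosen only to mimic the rare-burst pattern): a burst of 800 proteins every fourth division, with the count halving at each division because the proteins are shared between daughters.

```python
# Rare bursts + stable, inherited protein keep counts high even though
# the gene is OFF three-quarters of the time.
proteins = 0
history = []
for generation in range(12):
    if generation % 4 == 0:
        proteins += 800          # brief burst while the gene is ON
    proteins //= 2               # division: this daughter keeps half
    history.append(proteins)
# history: [400, 200, 100, 50, 425, 212, 106, 53, 426, 213, 106, 53]
```

The count never falls to zero between bursts, so a snapshot taken at a random moment almost always shows a "bright" cell: exactly the messy-house effect described above.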

Why This Matters

This paper is a game-changer because it stops scientists from making the "Naive View" mistake.

  1. It fixes the math: It admits that cell division is messy and history-dependent, so it stops trying to force simple math on it.
  2. It uses simulation: Instead of solving impossible equations, it uses the power of computers to simulate reality and then uses AI to interpret the results.
  3. It reveals the truth: In the yeast example, it showed that the cells aren't constantly stressed; they are using a "bet-hedging" strategy. They stay mostly inactive to save energy, only waking up briefly to stockpile supplies, which they then pass down to their children.

In short: The authors built a time-machine simulator and an AI detective to separate "newly made" proteins from "inherited" ones, revealing that cells are often much more efficient and lazy than we thought.
