On the importance of stochasticity in closures of turbulence

This paper demonstrates that deterministic closures fail to capture the rapid growth of uncertainty in coarse-grained turbulence models, and that data-driven stochastic closures are needed to restore the correct timing and magnitude of variance growth across scales.

Original authors: André Freitas, Luca Biferale, Mathieu Desbrun, Gregory Eyink, Alexei A. Mailybaev, Kiwon Um

Published 2026-02-24

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

The Big Picture: Predicting the Unpredictable

Imagine you are trying to predict the weather. You have a supercomputer, but it's too slow to calculate every single air molecule in the atmosphere. So, you decide to only track the "big" movements (like massive storm systems) and ignore the tiny details (like a single gust of wind or a swirling eddy).

This is what scientists call Large-Eddy Simulation (LES). It's like looking at a forest from a helicopter: you see the shape of the canopy, but you can't see individual leaves.

The problem? Those tiny, invisible leaves (the small scales) still affect the big branches. If you ignore them completely, your prediction might look okay on average, but it will fail when you try to predict how fast things will go wrong.

The Core Problem: The "Butterfly Effect" vs. The "Silent Room"

In chaos theory, there's a famous idea called the Butterfly Effect: a tiny flap of a butterfly's wings in Brazil can eventually cause a tornado in Texas. In a real, fully detailed simulation of turbulence, a microscopic error (like a tiny rounding error in a number) gets amplified exponentially fast. It ripples up through the system, causing the whole prediction to diverge from reality very quickly. This is uncertainty growth.

However, when scientists use simplified models (LES) that ignore the tiny details, they usually treat the missing parts as if they are perfectly calm or just "average."

  • The Analogy: Imagine you are trying to predict how a crowd of people will move.
    • Real Life: If one person sneezes, the person next to them flinches, bumping into the next person, causing a ripple of movement. Chaos spreads fast.
    • The Old Model (Deterministic): You tell the computer, "Ignore the sneeze. Just assume everyone moves smoothly." The computer predicts the crowd will move in a straight line for a long time. It fails to realize that the sneeze would have caused a stampede.

The paper argues that ignoring the tiny, random "sneezes" makes your model too confident. It thinks it knows the future better than it actually does.

The Solution: Adding "Controlled Chaos"

The authors tested a new idea: What if we admit we don't know the tiny details, so we add random noise (stochasticity) to our model to represent them?

They used a mathematical toy model called a Shell Model (think of it as a simplified version of fluid dynamics where energy moves between different "rungs" of a ladder).
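To make the "ladder" concrete, here is a minimal sketch of a Sabra-type shell model, one standard variant of this idea (the paper's exact equations, parameters, and number of shells may differ). Each rung n holds a single complex "velocity" u_n at wavenumber k_n = 2^n, and only neighbouring rungs exchange energy:

```python
import numpy as np

def sabra_rhs(u, k, nu=0.0):
    """du_n/dt for a Sabra-type shell model with coefficients
    a=1, b=-1/2, c=-1/2; a+b+c=0 makes the nonlinear terms
    conserve total energy when viscosity and forcing are off."""
    N = len(u)
    up = np.zeros(N + 4, dtype=complex)   # zero-padded boundary shells
    up[2:N + 2] = u
    kp = np.zeros(N + 4)
    kp[2:N + 2] = k
    i = np.arange(2, N + 2)
    nl = 1j * (kp[i + 1] * np.conj(up[i + 1]) * up[i + 2]
               - 0.5 * kp[i] * np.conj(up[i - 1]) * up[i + 1]
               + 0.5 * kp[i - 1] * up[i - 1] * up[i - 2])
    return nl - nu * k**2 * u

def rk4_step(u, k, dt, nu=0.0):
    """One classical Runge-Kutta time step."""
    f = lambda v: sabra_rhs(v, k, nu)
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check: with nu=0 and no forcing, the nonlinear terms only
# shuffle energy between rungs, so total energy should barely drift.
rng = np.random.default_rng(0)
N = 12
k = 2.0 ** np.arange(N)                              # the "rungs"
u = 0.1 * k ** (-1 / 3) * np.exp(2j * np.pi * rng.random(N))
E0 = float(np.sum(np.abs(u) ** 2))
for _ in range(200):
    u = rk4_step(u, k, dt=1e-3)
print(abs(float(np.sum(np.abs(u) ** 2)) - E0) / E0)  # tiny relative drift
```

The appeal of shell models is exactly this: a handful of complex numbers reproduce the energy-cascade bookkeeping of real turbulence at a vanishing fraction of the cost.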

  1. The Reference (The Truth): They ran a super-detailed simulation that included tiny, random thermal fluctuations (like the sneezes). This showed how uncertainty spreads from the bottom of the ladder to the top very quickly.
  2. The Old Way (Deterministic): They ran a simplified model where they only added a tiny error at the very beginning (the start of the simulation) and then let it run without any new noise.
    • Result: The model was slow to react. It took a long time for the error to travel up the ladder. It was "overconfident" and wrong about when things would go chaotic.
  3. The New Way (Stochastic): They ran a simplified model where they added a little bit of random "jitter" at every single step of the calculation to represent the missing tiny details.
    • Result: This model reacted exactly like the super-detailed one. The uncertainty spread up the ladder at the correct speed.
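The three-way experiment above can be sketched in a few lines. As a stand-in for the shell model, the toy below uses the Lorenz-96 system (a substitution for brevity; the paper works with shell models): one ensemble gets a single tiny perturbation at the start and then runs deterministically, the other gets fresh random jitter at every step.

```python
import numpy as np

def l96_rhs(x, F=8.0):
    """Lorenz-96 tendencies (a standard chaotic toy model, used here
    only as a stand-in for the paper's shell model)."""
    return (np.roll(x, -1, -1) - np.roll(x, 2, -1)) * np.roll(x, 1, -1) - x + F

def run_ensemble(x0, steps, dt=0.005, noise=0.0, seed=1):
    """Euler-Maruyama integration of an ensemble (rows = members).
    noise > 0 injects fresh Gaussian jitter at every step."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * l96_rhs(x)
        if noise > 0:
            x = x + noise * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
x = 8.0 + 0.01 * rng.standard_normal(8)       # spin up onto the attractor
for _ in range(2000):
    x = x + 0.005 * l96_rhs(x)

members = np.tile(x, (64, 1))
# "Old way": one tiny perturbation at t=0, then purely deterministic.
det = run_ensemble(members + 1e-8 * rng.standard_normal(members.shape), 200)
# "New way": identical start, but fresh noise at every step.
sto = run_ensemble(members, 200, noise=0.1)

print(det.var(axis=0).mean(), sto.var(axis=0).mean())
```

The deterministic ensemble's spread stays microscopic at first, because chaos needs time to amplify the single initial seed, while the stochastic ensemble's variance grows from the very first step. That is the timing mismatch the paper diagnoses.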

The "Data-Driven" Twist

Usually, adding random noise is just a guess. But this team used machine learning.

  • They trained a neural network to learn the "average" behavior of the missing tiny details.
  • Then, they told the AI: "Don't just give us the average. Give us the average plus a random shake that mimics the missing chaos."

This created a Stochastic Closure. It's like having a weather forecaster who not only predicts the temperature but also says, "There's a 50% chance a sudden gust will knock over your umbrella in 10 minutes," and they are right about the timing.
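Stripped to its skeleton, a stochastic closure is "learned mean plus sampled residual." The sketch below swaps the paper's neural network for a one-parameter least-squares fit on made-up data (both the data and the linear model are illustrative stand-ins, not the authors' setup), just to show the structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training data": pairs of (resolved state, true subgrid term)
# as if measured from a fully resolved run. Here the true relation is
# linear plus irreducible noise -- purely illustrative.
x = rng.standard_normal(5000)
subgrid = -0.5 * x + 0.2 * rng.standard_normal(5000)

# Deterministic part: least-squares fit of the conditional mean.
coef = float(np.sum(x * subgrid) / np.sum(x * x))
# Stochastic part: size of what the mean model cannot explain.
residual_std = float(np.std(subgrid - coef * x))

def deterministic_closure(state):
    """Mean correction only -- the 'overconfident' baseline."""
    return coef * state

def stochastic_closure(state, rng):
    """Mean correction plus a random draw mimicking the missing chaos."""
    return coef * state + residual_std * rng.standard_normal(np.shape(state))

samples = stochastic_closure(np.zeros(10000), rng)
print(coef, residual_std, samples.std())   # fit recovers mean and spread
```

A neural network replaces the linear fit in the real thing, but the division of labour is the same: the learned part supplies the average effect of the missing scales, and the sampled part restores their variability.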

Why Does This Matter?

The paper concludes that for any complex system (weather, climate, galaxy formation, even traffic flow), if you simplify the math by ignoring small details, you must add randomness back in.

  • Without randomness: Your model thinks the system is stable for longer than it really is. It's like a driver who thinks the road is clear because they aren't looking at the potholes, only to hit a massive bump later.
  • With randomness: Your model admits, "I don't know exactly what's happening in the small stuff, so I'll assume it's jittery." This makes the prediction of uncertainty much more accurate.

The Takeaway Metaphor

Imagine you are trying to predict the path of a leaf floating down a river.

  • The Deterministic Model assumes the water is a smooth, glassy sheet. It predicts the leaf will glide in a perfect line.
  • The Real World is full of tiny, invisible eddies and currents. The leaf gets tossed around unpredictably.
  • The Stochastic Model admits the water is rough. It doesn't try to predict the exact path of the leaf, but it correctly predicts that the leaf will start wobbling and drifting away from the center much sooner than the smooth-water model suggests.

In short: To predict the future of chaotic systems, you can't just smooth over the rough edges. You have to embrace the chaos, or your predictions will be dangerously wrong about how fast things will go off the rails.
