Comparison of data-driven symmetry-preserving closure models for large-eddy simulation

This paper shows that both unconstrained and symmetry-preserving data-driven neural networks outperform classical large-eddy simulation closures in accuracy, but that enforcing physical symmetries is crucial for producing physically consistent velocity-gradient statistics and a higher-quality learned closure.

Syver Døving Agdestein, Benjamin Sanderse

Published 2026-03-06

Imagine you are trying to predict the weather, but you don't have a supercomputer powerful enough to track every single raindrop, gust of wind, and swirl of air. That's the problem scientists face with turbulence (chaotic fluid flow like air over a wing or smoke from a chimney).

To solve this, they use a technique called Large-Eddy Simulation (LES). Think of it like looking at a storm through a foggy window. You can see the big, swirling clouds (the "large eddies"), but the tiny, chaotic ripples are blurred out. The problem is, those tiny ripples still affect the big clouds. If you ignore them, your simulation eventually falls apart.

To fix this, scientists use a "closure model"—a mathematical guess that estimates what those tiny, invisible ripples are doing so the big picture stays accurate.

The Old Way vs. The New Way

For decades, scientists used simple, hand-crafted formulas (like the Smagorinsky or Clark models) to make these guesses. They work okay, but they are like using a blunt knife to cut a steak; they get the job done but lack precision.
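To make the "hand-crafted formula" concrete: the classical Smagorinsky closure models the effect of the unresolved ripples as an extra ("eddy") viscosity built from the resolved strain rate, ν_t = (C_s Δ)² |S̄|. A minimal sketch in NumPy; the constant `c_s` and filter width `delta` are typical illustrative values, not tuned ones:

```python
import numpy as np

def smagorinsky_stress(grad_u, delta, c_s=0.17):
    """Classical Smagorinsky closure for the (deviatoric) subgrid stress.

    grad_u : (3, 3) resolved velocity-gradient tensor at one point
    delta  : filter width (the "blur radius" of the foggy window)
    c_s    : Smagorinsky constant (typical values are around 0.1-0.2)
    """
    # Resolved strain-rate tensor S = (grad_u + grad_u^T) / 2
    s = 0.5 * (grad_u + grad_u.T)
    # Strain-rate magnitude |S| = sqrt(2 S_ij S_ij)
    s_mag = np.sqrt(2.0 * np.sum(s * s))
    # Eddy viscosity nu_t = (c_s * delta)^2 * |S|
    nu_t = (c_s * delta) ** 2 * s_mag
    # Modeled subgrid stress tau = -2 nu_t S
    return -2.0 * nu_t * s

# Example: a simple shear flow (velocity in x varying with y)
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
tau = smagorinsky_stress(grad_u, delta=0.1)
```

The whole model is one algebraic formula per grid point, which is why it is cheap but imprecise: a single constant has to stand in for all the unresolved physics.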

Recently, scientists started using Artificial Intelligence (Neural Networks) to learn these guesses from data. It's like teaching a robot to watch a million storms and learn exactly how the tiny ripples behave. This is much more accurate, but it has a big flaw: The robot doesn't know the rules of physics.

If you rotate the storm 90 degrees, the physics should look the same. But a standard AI might get confused and give a completely different answer just because the picture turned sideways. This breaks the laws of physics and makes the simulation unstable.

The Three "Robots" in This Paper

The authors of this paper wanted to see if they could teach AI to respect the symmetries of physics (the idea that the laws of nature don't change if you rotate, flip, or shift the view). They built and compared three different types of AI "robots":

  1. The "Unconstrained" Robot (Conv):

    • Analogy: A student who is told to memorize the answers but isn't told the rules of the game.
    • How it works: It's a standard neural network. It's very flexible and learns fast, but it doesn't inherently know that "up" is the same as "down" if you flip the world.
    • Result: It was accurate at predicting the numbers, but it sometimes gave physically weird results (like a storm behaving differently just because you rotated the map).
  2. The "Tensor-Basis" Robot (TBNN):

    • Analogy: A student who is given a set of Lego bricks that are pre-shaped to fit the laws of physics. They can only build things that are structurally sound.
    • How it works: Instead of guessing the whole storm, this AI guesses the coefficients (the numbers) for a pre-defined set of "physics-safe" building blocks. No matter how the AI learns, the final result is guaranteed to obey the rules of rotation and reflection.
    • Result: It was very stable and physically consistent, though it sometimes missed the "wild" details of the turbulence.
  3. The "Group-Convolution" Robot (G-conv):

    • Analogy: A student who is forced to wear a magic suit that automatically translates their thoughts into the correct physics language. If they think "left," the suit automatically knows that "left" means "right" if the world is flipped.
    • How it works: This is a complex neural network architecture where the math is built into the very structure of the network. Every single calculation is forced to respect the symmetry groups (rotations and flips).
    • Result: Like the Lego robot, it was perfectly physically consistent. However, it was computationally heavy (slow) because the "magic suit" was very bulky.
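The "Lego brick" idea from robot 2 can be sketched in a few lines. Here is a toy two-term tensor basis built from the strain-rate tensor; the coefficient functions are made-up placeholders (in a real TBNN they would be neural-network outputs fed with rotation-invariant inputs), yet the result is automatically equivariant: rotate the input and the output rotates with it, no matter what the coefficients do.

```python
import numpy as np

def tensor_basis_model(grad_u):
    """Toy tensor-basis closure: tau = g1 * S + g2 * dev(S @ S).

    g1 and g2 are arbitrary placeholder functions of a rotation
    invariant; in a TBNN they would be a neural network's outputs.
    """
    s = 0.5 * (grad_u + grad_u.T)              # strain-rate tensor
    i2 = np.trace(s @ s)                        # a rotation invariant of S
    g1, g2 = -0.1 * i2, 0.05 * np.tanh(i2)      # placeholder "network" outputs
    dev_ss = s @ s - np.trace(s @ s) / 3.0 * np.eye(3)
    return g1 * s + g2 * dev_ss

# Equivariance check: rotating the input rotates the output.
rng = np.random.default_rng(0)
grad_u = rng.standard_normal((3, 3))
theta = 0.7
q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation about z-axis
tau_then_rotate = q @ tensor_basis_model(grad_u) @ q.T
rotate_then_tau = tensor_basis_model(q @ grad_u @ q.T)
print(np.allclose(tau_then_rotate, rotate_then_tau))  # True
```

The guarantee comes from the structure, not the training: the invariant `i2` is blind to rotations, and each basis tensor transforms correctly, so any coefficients the network learns still produce a physics-respecting answer. The group-convolution approach achieves the same guarantee by building the rotations and flips into every layer of the network instead of into a small basis, which is why it is heavier to evaluate.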

What Did They Find?

The researchers ran these robots against a "gold standard" simulation (Direct Numerical Simulation) to see who performed best.

  • Accuracy: All three AI robots were better than the old, hand-crafted formulas. They predicted the stress of the tiny ripples much more accurately.
  • Stability: The "Unconstrained" robot was fast and accurate, but it was a bit "sloppy" with the laws of physics. The two "Symmetry-Preserving" robots (Lego and Magic Suit) produced results that looked much more like real physics, especially when looking at the complex shapes of the swirling air.
  • The "Teardrop" Test: The authors used a specific test based on the statistics of the velocity gradients (how the swirling air stretches and rotates at each point). Real turbulence produces a famous "teardrop"-shaped distribution, and the symmetry-preserving models reproduced it faithfully. The unconstrained model got the shape slightly wrong, even though its error numbers looked okay.
    • Metaphor: Imagine trying to draw a perfect circle. The unconstrained robot drew a circle that was mathematically close to the right size but looked a bit squashed. The symmetry-preserving robots drew a perfect circle every time.
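For the curious: the "teardrop" is conventionally the joint distribution of two rotation-invariant quantities of the velocity-gradient tensor A, usually written Q = -½ tr(A²) and R = -⅓ tr(A³). A short sketch of computing them, using synthetic random gradients purely as stand-in data (real values would come from the simulation):

```python
import numpy as np

def qr_invariants(a):
    """Second and third invariants of a traceless velocity-gradient tensor.

    For incompressible flow tr(A) = 0, so Q = -tr(A @ A) / 2 and
    R = -tr(A @ A @ A) / 3. The joint (Q, R) density of real turbulence
    has the characteristic "teardrop" shape this test probes.
    """
    q = -0.5 * np.trace(a @ a)
    r = -np.trace(a @ a @ a) / 3.0
    return q, r

# Synthetic, traceless random gradients stand in for simulation output.
rng = np.random.default_rng(1)
samples = []
for _ in range(1000):
    a = rng.standard_normal((3, 3))
    a -= np.trace(a) / 3.0 * np.eye(3)   # enforce incompressibility tr(A) = 0
    samples.append(qr_invariants(a))
q_vals, r_vals = np.array(samples).T      # scatter/density of these gives the plot
```

Because Q and R are themselves rotation invariants, this is exactly the kind of diagnostic where a symmetry-respecting model has a built-in advantage: its predictions cannot drift when the flow is merely viewed from a different angle.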

The Big Takeaway

The paper concludes that baking the laws of physics directly into the AI's brain is worth it.

Even though the "Unconstrained" robot was fast and got the numbers right, the "Symmetry-Preserving" robots produced a simulation that felt more "real" and consistent. It's the difference between a GPS that gets you to the destination quickly but takes you through a wall, versus a GPS that respects traffic laws and road geometry, ensuring a safe and logical journey.

In short: If you want an AI to simulate nature, don't just let it learn from data; teach it the rules of the game first. The "Lego" approach (Tensor-Basis) seemed to offer the best balance of speed and physical correctness, while the "Magic Suit" (Group-Conv) was the most rigorous but slowest.