Uncertainty Quantification in PINNs for Turbulent Flows: Bayesian Inference and Repulsive Ensembles

This paper develops and evaluates probabilistic extensions of Physics-Informed Neural Networks (PINNs), namely Bayesian inference, Monte Carlo dropout, and repulsive deep ensembles, to provide reliable uncertainty quantification for inverse turbulent flow problems. It demonstrates that Bayesian methods yield the most consistent estimates, while repulsive ensembles offer a computationally efficient alternative.

Original authors: Khemraj Shukla, Zongren Zou, Theo Kaeufer, Michael Triantafyllou, George Em Karniadakis

Published 2026-04-21

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

Imagine you are trying to recreate a complex, swirling storm inside a computer. You have a few scattered weather reports (data) and the laws of physics (the rules of how air moves). Your goal is to fill in the gaps and predict exactly what the wind and pressure are doing everywhere, even where you have no measurements.

This is the challenge of Turbulent Flow Modeling. It's notoriously difficult because the math is messy, and the data is often sparse (like trying to guess the shape of a whole cloud by looking at just a few raindrops).

For a long time, scientists used Physics-Informed Neural Networks (PINNs) to solve this. Think of a PINN as a super-smart student who is given a test. The student has to memorize the few weather reports they have and follow the strict rules of physics. If they get the answer right, they pass.
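
To make this concrete, here is a minimal sketch of a PINN's training loss in PyTorch. Everything here is an illustrative stand-in (a tiny network and a toy equation u'' = 0 instead of the turbulence equations); the point is just that the loss blends "memorize the weather reports" with "obey the physics rules":

```python
import torch

# A tiny network u(x) standing in for the flow field (illustrative only).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pinn_loss(x_data, u_data, x_phys):
    # "Memorize the weather reports": match the sparse measurements.
    loss_data = ((net(x_data) - u_data) ** 2).mean()

    # "Obey the physics rules": penalize the equation residual at
    # collocation points. A toy equation u'' = 0 stands in for the
    # turbulence equations used in the paper.
    x = x_phys.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    loss_phys = (d2u ** 2).mean()

    return loss_data + loss_phys
```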

The Problem:
The old PINN students were overconfident. They would give you a single, precise answer for the wind speed, but they wouldn't tell you how sure they were. If they were guessing wildly in an area with no data, they would still say, "I'm 100% certain!" This is dangerous in engineering. If you are designing a bridge or a plane, you need to know where the model is shaky so you don't build something that fails.

The Solution:
This paper introduces a new way to teach these students how to admit when they are unsure. The authors tried three different "classroom strategies" to quantify uncertainty (how much the model might be wrong):

1. The Bayesian Approach (The "Many-Worlds" Student)

  • The Analogy: Imagine that instead of one decisive student, you have a single student who is incredibly indecisive. They run the simulation thousands of times, slightly changing their internal logic each time, just to see how the answer might wiggle.
  • How it works: This is called a Bayesian PINN. It treats the network's weights (its knowledge) as a range of possibilities rather than fixed numbers. It uses a sophisticated sampling method (like a random walk through a maze) to explore all the different ways the physics could fit the data (see the sketch after this list).
  • The Result: This is the gold standard. It gives the most honest answer. It says, "Here is my best guess, and here is a 'confidence bubble' around it. If the bubble is huge, you know the model is guessing." It even reconstructed the hidden pressure field, despite never being given a single pressure measurement!
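
For a feel of what the "many-worlds" student hands back, here is a minimal sketch, assuming a sampler (the paper uses a Hamiltonian Monte Carlo variant) has already produced a list of posterior weight samples. The mean across samples is the best guess; the standard deviation is the width of the confidence bubble. The function name is illustrative; vector_to_parameters is a standard PyTorch utility.

```python
import torch
from torch.nn.utils import vector_to_parameters

def predictive_stats(net, posterior_samples, x):
    # posterior_samples: assumed list of flat weight vectors drawn by
    # an MCMC sampler over the network's weights.
    preds = []
    with torch.no_grad():
        for theta in posterior_samples:
            # Load this posterior sample into the network's parameters.
            vector_to_parameters(theta, net.parameters())
            preds.append(net(x))
    preds = torch.stack(preds)  # (num_samples, num_points, out_dim)
    # Mean = best guess; std = width of the "confidence bubble".
    return preds.mean(dim=0), preds.std(dim=0)
```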

2. The Dropout Method (The "Sleepy" Student)

  • The Analogy: Imagine a student who is forced to take a nap during the test. Every time they answer a question, they randomly forget a few facts they learned. You ask them the same question 1,000 times, and every time they forget something different.
  • How it works: This is MC Dropout. By randomly "turning off" parts of the network during the test, it forces the model to make slightly different predictions each time. The spread of these answers tells you the uncertainty (see the sketch after this list).
  • The Result: It's fast and cheap, but it's a bit clumsy. In this paper, it tended to be too conservative. It would say, "I'm not sure at all!" even when it had plenty of data. It's like a student who is so afraid of being wrong that they refuse to guess, even when they should.
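
In code, the "sleepy student" trick is strikingly simple. A minimal sketch, assuming net contains ordinary dropout layers (the function name and pass count are illustrative):

```python
import torch

def mc_dropout_predict(net, x, num_passes=1000):
    # Calling .train() keeps the Dropout layers active, so each forward
    # pass randomly "forgets" a different subset of units.
    net.train()
    with torch.no_grad():
        preds = torch.stack([net(x) for _ in range(num_passes)])
    # Mean is the prediction; the spread across passes is the uncertainty.
    return preds.mean(dim=0), preds.std(dim=0)
```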

3. The Repulsive Ensemble (The "Debate Club")

  • The Analogy: Imagine you hire 10 different students to solve the problem.
    • The Old Way (Vanilla Ensemble): You just give them all the same textbook and let them study alone. The problem? They all end up thinking the exact same thing. They all converge on the same "wrong" answer because the physics rules are so strong. They have zero diversity.
    • The New Way (Repulsive Ensemble): You add a rule: "You must disagree with your friends!" You force the students to stay apart from each other in their thinking. If two students start to think too similarly, you push them apart.
  • How it works: The authors created a "Repulsive Deep Ensemble." They trained 10 networks but added a special penalty if the networks started to produce the same output. This forces them to explore different solutions (a simplified sketch of such a penalty follows this list).
  • The Result: This was a huge success for speed. It was much faster than the Bayesian method but still gave a good idea of the uncertainty for the main wind speeds. However, for the trickiest parts (the "Reynolds stress," which is like the chaotic swirls of the storm), it wasn't quite as honest as the Bayesian method.
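
Here is a simplified sketch of what a repulsion penalty can look like: an RBF kernel measures how similar each pair of students' answers is, and the penalty grows when any two agree too closely. This is written in the spirit of repulsive deep ensembles, not the paper's exact update rule; the kernel choice and bandwidth are assumptions.

```python
import torch

def repulsion_penalty(member_preds, bandwidth=1.0):
    # member_preds: (num_members, num_points) tensor of each network's
    # output at shared evaluation points.
    m = member_preds.shape[0]
    diffs = member_preds.unsqueeze(0) - member_preds.unsqueeze(1)  # (m, m, n)
    sq_dists = (diffs ** 2).sum(dim=-1)                            # (m, m)
    kernel = torch.exp(-sq_dists / (2 * bandwidth ** 2))
    # Off-diagonal kernel values are high when two members agree too
    # closely; minimizing their sum pushes the members apart.
    off_diag = kernel.sum() - kernel.diagonal().sum()
    return off_diag / (m * (m - 1))
```

In training, each member would then minimize its usual PINN loss plus a weighted copy of this shared penalty, so agreeing too closely with a "friend" costs it points.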

The Big Takeaway

The paper tested these methods on two scenarios:

  1. A simulated cylinder: Where they knew the "true" answer (like a perfect simulation).
  2. Real experimental data: Where they used actual wind tunnel measurements (which are noisy and imperfect).

The Verdict:

  • If you need the most trustworthy answer and can wait: Use the Bayesian PINN. It's the most reliable, honest, and well-calibrated. It tells you exactly where it's guessing.
  • If you need a quick answer for the main flow: Use the Repulsive Ensemble. It's like a fast, efficient debate club that gives you a good estimate of the wind, even if it's a little less precise on the chaotic swirls.
  • Don't use the old "Vanilla" Ensembles: Without the "repulsive" rule to force them to disagree, they all collapse into a single, overconfident, and potentially wrong answer.

In short: The authors figured out how to make AI models for weather and fluid dynamics stop lying about how sure they are. They gave us tools to know when to trust the computer and when to double-check the math.
