Learning constitutive models and rheology from partial flow measurements

This paper presents an end-to-end framework that combines a differentiable non-Newtonian solver with a frame-invariant tensor basis neural network to learn form-agnostic constitutive models from partial flow measurements, enabling the discovery of interpretable, geometry-portable rheological laws through automated Bayesian model selection.

Original authors: Alp M. Sunol, James V. Roggeveen, Mohammed G. Alhashim, Henry S. Bae, Michael P. Brenner

Published 2026-02-24

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

Imagine you are trying to figure out how a specific type of honey behaves. You know that some honey is runny, some is thick, and some changes its thickness depending on how fast you stir it. In the world of science, this is called rheology—the study of how materials flow and deform.

For a long time, scientists have had a hard time figuring out the "rules" (constitutive models) that govern how complex fluids like blood, paint, or polymer solutions behave. Here is the problem:

The Old Way: The "Taste Test" in a Vacuum

Traditionally, to understand a fluid, scientists put it in a simple, boring machine (a rheometer) that shears or stretches it in one clean, uniform motion. It's like trying to understand how a car handles by only driving it in a straight, empty parking lot at 5 mph.

  • The Flaw: Real life is messy. Fluids flow through crooked pipes, around obstacles, and into tiny blood vessels. The simple "parking lot" tests often fail to predict how the fluid will act in these complex, real-world situations.
  • The Result: Scientists often guess the wrong mathematical formula for the fluid, leading to bad predictions when they try to use it in a real engine or a human body.

The New Way: The "Digital Twin" Detective

This paper introduces a clever new method called "Digital Rheometry." Instead of guessing the rules from a simple test, they let the fluid show them the rules by watching it flow through a complex environment.

Here is how they did it, using a few simple analogies:

1. The "Differentiable Solver" (The Super-Teacher)

Imagine you have a video game engine that simulates fluid flow perfectly. Usually, if you change the rules of the game (like making the fluid thicker), you have to restart the whole game to see what happens.
The authors built a special version of this engine in which every single step is differentiable: the engine can report exactly how a small change in the rules changes the final flow, without restarting the simulation.

  • The Analogy: Think of a teacher who can not only solve a math problem but also instantly see exactly which number you changed to get the wrong answer. If the simulation predicts the fluid moves too fast, the "teacher" knows exactly which rule to tweak to fix it. This allows them to learn from the data instantly, rather than guessing and checking.
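To make the "teacher" idea concrete, here is a minimal sketch (not the authors' solver): a toy "solver" that computes the analytic Newtonian pipe-flow (Poiseuille) profile, plus a hand-derived gradient standing in for what a differentiable engine provides automatically. All variable names and numbers are illustrative.

```python
import numpy as np

# Toy "solver": analytic pipe-flow (Poiseuille) profile for a Newtonian fluid,
# u(r) = G/(4*mu) * (R^2 - r^2), where G is the pressure gradient.
R, G = 1.0, 8.0

def solve_flow(mu, r):
    return G / (4.0 * mu) * (R**2 - r**2)

# "Partial measurements": velocities at a few radial positions,
# generated from an unknown true viscosity mu_true.
mu_true = 2.0
r_obs = np.array([0.0, 0.3, 0.6, 0.9])
u_obs = solve_flow(mu_true, r_obs)

# Because the solver is a smooth function of mu, dLoss/dmu is available
# (here in closed form) and says exactly which way to tweak the rule.
mu = 1.0  # deliberately wrong initial guess
for _ in range(200):
    resid = solve_flow(mu, r_obs) - u_obs
    du_dmu = -G / (4.0 * mu**2) * (R**2 - r_obs**2)  # d u / d mu
    grad = 2.0 * np.sum(resid * du_dmu)              # d loss / d mu
    mu -= 0.05 * grad                                # gradient step

print(round(mu, 3))  # converges toward mu_true = 2.0
```

In the paper's setting the solver is a full non-Newtonian flow simulation and the "rule" being tuned is a neural network, but the loop is the same: simulate, compare to the measured velocities, and follow the gradient back through the solver.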

2. The "TBNN" (The Shape-Shifting Translator)

They used a type of AI called a Tensor Basis Neural Network (TBNN).

  • The Analogy: Imagine you are trying to describe a complex dance to someone who has never seen it. Instead of memorizing every single step, you teach the AI the principles of the dance (like "if the music speeds up, the dancer spins faster").
  • The TBNN learns the relationship between how the fluid is being stretched/squeezed (the dance moves) and how much stress (resistance) it creates. Crucially, it learns the universal rules of the dance, not just the specific moves in one room. This means if you take this learned AI and put it in a different room (a new geometry), it still knows how to dance correctly.
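The "universal rules of the dance" have a precise meaning: the predicted stress is a sum of basis tensors built from the rate of strain, weighted by scalar functions of rotation-invariant quantities. Here is a minimal sketch of that structure, with simple hand-written coefficient functions standing in for the trained network; the check at the end shows that rotating the flow rotates the predicted stress the same way.

```python
import numpy as np

# Stand-ins for the TBNN's learned scalar functions; in the real model a
# neural network maps flow invariants to these coefficients.
def g1(I): return 1.0 / (1.0 + I)
def g2(I): return 0.1 * np.tanh(I)

def stress(S):
    """Tensor-basis closure: stress = g1(I)*S + g2(I)*(S @ S).

    The coefficients depend only on a rotation-invariant scalar, and the
    basis tensors transform with the flow, so the prediction is
    frame-invariant by construction.
    """
    I = np.trace(S @ S)  # a rotation-invariant scalar of the strain rate
    return g1(I) * S + g2(I) * (S @ S)

# A symmetric rate-of-strain tensor for some flow snapshot.
S = np.array([[0.0, 1.2, 0.0],
              [1.2, 0.5, 0.3],
              [0.0, 0.3, -0.5]])

# An arbitrary rotation (the "different room"): rotate the flow and the
# predicted stress rotates identically, i.e. same dance, new coordinates.
theta = 0.7
Rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])

lhs = stress(Rot @ S @ Rot.T)    # stress predicted for the rotated flow
rhs = Rot @ stress(S) @ Rot.T    # rotated stress of the original flow
print(np.allclose(lhs, rhs))     # → True
```

This equivariance is exactly what lets the learned model move between geometries: nothing in it is tied to one coordinate frame or one flow device.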

3. The "Distillation" (Turning AI into a Recipe)

AI is great at predicting, but it's often a "black box"—you don't know why it made a decision. Scientists need simple, understandable formulas (like a recipe) to use in engineering.

  • The Analogy: The AI is like a master chef who can cook a perfect dish but can't explain the recipe. The authors developed a way to watch the AI cook and then reverse-engineer the recipe.
  • They use a statistical tool (Bayesian Information Criterion) to ask: "Is this complex AI recipe actually necessary, or can we explain the flavor with a simple, classic recipe (like the Carreau-Yasuda model)?"
  • They found that their AI could perfectly mimic complex fluids, and then they successfully extracted the exact mathematical "recipe" that describes those fluids.
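A minimal sketch of the model-selection step (not the paper's pipeline, which distills the trained network and uses the Carreau-Yasuda family): fit a one-parameter Newtonian model and a four-parameter Carreau model to synthetic shear-thinning data, then let the Bayesian Information Criterion decide whether the extra complexity pays for itself. The parameters, grid-search fit, and noise level are illustrative assumptions.

```python
import numpy as np

# Synthetic shear-thinning viscosity data from a Carreau fluid.
eta0, eta_inf, lam, n_exp = 10.0, 0.1, 1.0, 0.5
gdot = np.logspace(-2, 2, 40)  # shear rates
eta_true = eta_inf + (eta0 - eta_inf) * (1 + (lam * gdot) ** 2) ** ((n_exp - 1) / 2)
rng = np.random.default_rng(0)
eta_data = eta_true * (1 + 0.02 * rng.standard_normal(gdot.size))

m = gdot.size

def bic(rss, k):
    # Gaussian-likelihood BIC up to a constant: m*ln(RSS/m) + k*ln(m).
    # Lower is better; the k*ln(m) term penalizes extra parameters.
    return m * np.log(rss / m) + k * np.log(m)

# Candidate 1: Newtonian (one parameter, a constant viscosity).
rss_newt = np.sum((eta_data - eta_data.mean()) ** 2)

# Candidate 2: Carreau (four parameters), fitted by a small grid search
# over (lambda, n) with a linear least-squares solve for (eta0, eta_inf),
# since eta = eta0*f + eta_inf*(1 - f) is linear in those two.
best_rss = np.inf
for l in np.linspace(0.5, 2.0, 16):
    for n_ in np.linspace(0.2, 0.9, 15):
        f = (1 + (l * gdot) ** 2) ** ((n_ - 1) / 2)
        A = np.column_stack([f, 1 - f])
        coef, _, _, _ = np.linalg.lstsq(A, eta_data, rcond=None)
        best_rss = min(best_rss, np.sum((A @ coef - eta_data) ** 2))

bic_newt, bic_carreau = bic(rss_newt, 1), bic(best_rss, 4)
winner = "Carreau" if bic_carreau < bic_newt else "Newtonian"
print(winner)  # → Carreau
```

With data generated from a genuinely Newtonian fluid, the same comparison would flip: the BIC penalty would favor the one-parameter model, which is the "honest" behavior the paper relies on.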

Why This Matters

This framework is a game-changer for three reasons:

  1. It works in the real world: You don't need to take the fluid out of its environment. You can learn the rules just by watching it flow through a pipe, a micro-vessel, or an oil well. It's like learning how a car handles by driving it on a mountain road, not just in a parking lot.
  2. It's robust: Even if your measurements are blurry, noisy, or low-resolution (like a shaky video), the physics of the simulation acts as a "guardrail," correcting the AI so it doesn't learn nonsense.
  3. It finds the truth: It can tell you if a fluid is simple or complex. If the data is too simple to tell the difference between two complex models, the system honestly says, "We can't tell the difference yet," rather than forcing a wrong answer.

The Bottom Line

The authors have built a universal translator for fluids. They can take messy, real-world flow data, use a super-smart, physics-aware AI to learn the hidden rules, and then translate those rules back into simple, human-readable equations. This allows engineers and scientists to predict how complex fluids will behave in any situation, from drug delivery in the body to oil extraction in the ground, without needing perfect, idealized lab tests.
