A hierarchy of thermodynamics learning frameworks for inelastic constitutive modeling

This paper presents a unified machine learning framework to systematically compare the impact of different thermodynamic structures—such as dissipation potentials, generalized standard materials, and metriplectic systems—on the learnability, stability, and generalization of data-driven constitutive models for complex inelastic materials.

Reese E. Jones, Jan N. Fuhg

Published 2026-03-04
📖 5 min read · 🧠 Deep dive

Imagine you are trying to teach a robot how to predict how a piece of metal, a rubber band, or a complex composite material will behave when you squeeze, stretch, or twist it. This is the job of constitutive modeling.

For a long time, scientists built these models by hand, writing down complex math equations based on their best guesses about physics. But materials are messy, and guessing the right equations is hard.

Now we have machine learning (AI). We can show the AI thousands of examples of how materials behave, and it can learn the patterns. But there's a catch: if you just let the AI learn freely, it might come up with answers that look right but break the fundamental laws of physics (like creating energy out of nothing, which violates the laws of thermodynamics).

This paper is a taste test of three different "rulebooks" (thermodynamic frameworks) that scientists use to force the AI to behave like a real physicist. The authors wanted to see: Which rulebook helps the AI learn the best, without breaking the laws of the universe?

Here is the breakdown using simple analogies:

The Three Rulebooks (Frameworks)

The authors tested three different ways to structure the AI's learning. Think of them as three different coaches teaching a student how to play a sport.

1. The Dissipation Potential (DP) Coach: "The Flexible Coach"

  • The Idea: This coach says, "You must follow the law of conservation of energy, and you must lose some energy to friction (heat) when you move. But beyond that, you can figure out the rest."
  • The Analogy: Imagine a car driving on a road. The coach says, "The engine must work, and the brakes must create heat. But you can steer however you want, as long as you don't crash."
  • Result: This framework is very flexible. It allowed the AI to learn complex, messy behaviors (like a metal alloy that hardens and softens unpredictably) very well. It was the most adaptable.
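To make the DP idea concrete, here is a minimal, hypothetical one-dimensional sketch (a toy stand-in, not the paper's neural networks): stress comes from a stored-energy function, an internal variable evolves according to a dissipation potential, and the dissipated energy is non-negative by construction. The quadratic potentials and the values of `E` and `eta` are illustrative choices only.

```python
import numpy as np

# Toy 1D dissipation-potential (DP) model: a standard linear viscoelastic element.
# Free energy:       psi(eps, alpha) = 0.5 * E * (eps - alpha)**2
# Stress:            sigma = d(psi)/d(eps) = E * (eps - alpha)
# Driving force:     A = -d(psi)/d(alpha) = E * (eps - alpha)
# Dissipation pot.:  phi(rate) = 0.5 * eta * rate**2  ->  alpha_dot = A / eta
E, eta = 1.0, 2.0          # toy stiffness and viscosity
eps = 1.0                  # hold a fixed strain and let the stress relax
alpha, dt = 0.0, 0.01

dissipation, stresses = [], []
for _ in range(1000):
    A = E * (eps - alpha)              # thermodynamic driving force
    alpha_dot = A / eta                # flow rule from the dissipation potential
    dissipation.append(A * alpha_dot)  # D = A * alpha_dot = A**2 / eta >= 0
    stresses.append(E * (eps - alpha))
    alpha += dt * alpha_dot            # explicit Euler update

assert min(dissipation) >= 0.0           # second law holds by construction
assert stresses[-1] < 0.1 * stresses[0]  # stress relaxes toward zero
```

In the paper's setting, a neural network would replace these hand-picked quadratic potentials, but the same structure guarantees the "friction always makes heat" rule no matter what the network learns.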

2. The Generalized Standard Material (GSM) Coach: "The Strict Coach"

  • The Idea: This coach is very rigid. "Not only must you follow the laws of physics, but you must also follow a specific mathematical rule called 'normality.' If you push this way, you must slide exactly that way. No exceptions."
  • The Analogy: Imagine a train on a track. The coach says, "The train must move forward, and the wheels must spin. But you are locked onto the tracks. You cannot turn left or right, even if the scenery changes."
  • Result: This works beautifully for simple, clean materials (like a perfect rubber band). However, when the material got messy and complex (like the metal alloy), the AI struggled because the "tracks" were too narrow. The real material wanted to do things the strict rules didn't allow.
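Why are the GSM "tracks" narrower? Normality says the internal variable's rate must be the gradient of a single convex potential of the driving force, which forces the flux map's Jacobian to be symmetric. A minimal hypothetical sketch (toy matrices, not from the paper): a non-symmetric flux map can still satisfy the second law, yet no single potential can generate it, so GSM cannot represent it.

```python
import numpy as np

# Normality (GSM): the internal-variable rate must be the gradient of ONE
# convex dual potential phi*(A), so the flux Jacobian d(rate)/dA is symmetric.
M_gsm = np.array([[2.0, 0.5],
                  [0.5, 1.0]])      # symmetric PSD -> rate = grad of 0.5*A@M@A
M_free = np.array([[2.0, 0.5],
                   [-0.5, 1.0]])    # non-symmetric: still dissipative, but no
                                    # single potential generates it (not GSM)

def dissipation(M, A):
    return A @ (M @ A)              # D = A . rate, with rate = M @ A

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.normal(size=2)
    assert dissipation(M_gsm, A) >= 0.0   # both obey the second law...
    assert dissipation(M_free, A) >= 0.0  # (symmetric part of M_free is PSD)

# ...but only the symmetric flux map can come from a GSM potential:
assert np.allclose(M_gsm, M_gsm.T)
assert not np.allclose(M_free, M_free.T)
```

This is the "messy alloy" problem in miniature: the real material's behavior can be perfectly legal thermodynamically while still falling outside what the strict coach allows.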

3. The Metriplectic (MP) Coach: "The Geometric Coach"

  • The Idea: This coach looks at the problem as a dance between two forces: Conservation (energy staying the same) and Dissipation (energy turning into heat). It uses special geometric tools to keep these two forces separate but working together.
  • The Analogy: Imagine a spinning top. The top spins (conserving energy) but eventually slows down due to friction (dissipating energy). This coach teaches the AI to separate the "spin" from the "friction" perfectly, ensuring the math never gets confused.
  • Result: This was a very strong performer. It handled the complex materials well and offered a very clean, geometric way to understand how the material evolves.
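The "spinning top" picture can be sketched in code. In a metriplectic system the state evolves as dz/dt = L·∇E + M·∇S, where the skew-symmetric L handles the reversible "spin" (conserving energy E) and the symmetric positive-semidefinite M handles the "friction" (growing entropy S), with degeneracy conditions keeping the two from mixing. Here is a minimal hypothetical example (a damped oscillator whose lost energy becomes heat; all values are toy choices, not the paper's):

```python
import numpy as np

# Toy metriplectic system: a damped oscillator whose lost energy becomes heat.
# State z = (q, p, s): position, momentum, entropy-like internal variable.
# dz/dt = L @ gradE + M(z) @ gradS, with L skew-symmetric, M symmetric PSD,
# and the degeneracy conditions L @ gradS == 0 and M @ gradE == 0.
gamma, dt = 0.1, 0.001
L = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])          # skew: reversible (Hamiltonian) part

def step(z):
    q, p, s = z
    gradE = np.array([q, p, 1.0])        # E = q**2/2 + p**2/2 + s
    gradS = np.array([0.0, 0.0, 1.0])    # S = s
    v = np.array([0.0, 1.0, -p])         # note: v . gradE == 0
    M = gamma * np.outer(v, v)           # symmetric PSD, and M @ gradE == 0
    return z + dt * (L @ gradE + M @ gradS)

z = np.array([1.0, 0.0, 0.0])
E0 = z[0]**2 / 2 + z[1]**2 / 2 + z[2]
entropies = [z[2]]
for _ in range(5000):
    z = step(z)
    entropies.append(z[2])

E_end = z[0]**2 / 2 + z[1]**2 / 2 + z[2]
assert abs(E_end - E0) < 0.01           # total energy (nearly) conserved
assert all(b >= a for a, b in zip(entropies, entropies[1:]))  # entropy grows
```

The oscillation slows down, but every bit of mechanical energy it loses shows up as entropy: the "spin" and the "friction" are kept cleanly separate by the structure itself.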

The Experiment: The "Taste Test"

The researchers didn't just talk about theory; they built three identical AI brains (neural networks) and gave them three different "rulebooks" (DP, GSM, MP). They then fed them data from three different types of materials:

  1. The Alloy (Metal): A complex mix of silicon and aluminum that acts like a metal but has a messy internal structure. (The "Hard" test).
  2. The Composite (Rubber & Glass): A rubbery material with glass beads inside. It stretches and snaps back, but only slowly (viscoelastic).
  3. The Crystal (Iron): A metal made of tiny crystals that slide past each other when stressed.

The Results: Who Won?

  • The Flexible Coach (DP) & The Geometric Coach (MP): Both did an amazing job on all three materials. They could predict how the metal, the rubber, and the crystal would behave with high accuracy. They were flexible enough to handle the messy, real-world data.
  • The Strict Coach (GSM): Did great on the rubber and the crystal. But on the messy metal alloy, it stumbled a little bit. Because the metal's behavior was too complex to fit into the "strict tracks" of the GSM rulebook, the AI couldn't learn the nuances as well as the other two.

The Big Takeaway

The paper teaches us a valuable lesson about AI and Physics:

"One size does not fit all."

If you are modeling a simple, predictable material, a strict, highly structured rulebook (like GSM) is great because it guarantees the AI won't make silly mistakes. But if you are modeling a complex, messy, real-world material (like a 3D-printed metal alloy), you need a flexible rulebook (like DP or MP) that respects the laws of physics but doesn't force the material into a box that is too small.

In short: The best AI for materials science isn't just the one that learns the fastest; it's the one that is given the right kind of rules to let it learn the truth without breaking the laws of the universe.