Out-of-distribution transfer of PDE foundation models to material dynamics under extreme loading

This paper benchmarks the out-of-distribution transfer of pretrained PDE foundation models (POSEIDON and MORPH) to extreme-loading material dynamics, evaluating their sample efficiency in predicting terminal states for shock-driven multi-material interfaces and dynamic fracture, and comparing fine-tuning against training from scratch.

Mahindra Rautela, Alexander Most, Siddharth Mansingh, Aleksandra Pachalieva, Bradley Love, Daniel O'Malley, Alexander Scheinker, Kyle Hickmann, Diane Oyen, Nathan DeBardeleben, Earl Lawrence, Ayan Biswas

Published 2026-03-05

Imagine you are trying to teach a super-smart robot how to predict the future of physical objects.

For a long time, scientists have been training these robots (called PDE Foundation Models) using "calm" data. Think of this like teaching a student only by showing them videos of smooth, flowing rivers and gentle wind. The student becomes an expert at predicting how water ripples or how air moves around a wing.

But what happens when you ask that same student to predict what happens when a meteorite hits a spaceship, or when a metal beam shatters under extreme pressure? The physics change completely. Instead of smooth flows, you get shockwaves, sudden cracks, and chaotic explosions.

This paper is a report card on how well these "river experts" can handle "meteorite impacts."

The Big Experiment: From Rivers to Explosions

The researchers from Los Alamos National Laboratory took two of the best "river expert" robots available (named POSEIDON and MORPH) and tried to teach them two new, very dangerous jobs:

  1. The "Shockwave" Job (PLI): Imagine two different liquids (like oil and water) stacked on top of each other. Now, hit them with a massive shockwave. The boundary between them gets messy, turbulent, and chaotic. The robot has to predict what the mess looks like after the shockwave has passed.
  2. The "Shattering" Job (FRAC): Imagine a block of metal. Now, hit it until it cracks. The cracks spread like lightning, branching out in unpredictable ways. The robot has to predict the final pattern of the broken pieces.

The Challenge: The "Out-of-Distribution" Problem

In machine learning, "Out-of-Distribution" (OOD) is a fancy way of saying: "You've never seen anything like this before."

It's like taking a chef who only knows how to bake perfect, fluffy cakes and asking them to cook a steak that's on fire. The chef knows about heat and ingredients, but the situation is totally different.

The researchers wanted to see:

  • Can these robots learn the new job quickly (Sample Efficiency)?
  • Do they need to be retrained from scratch, or can they just "fine-tune" their existing knowledge?
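The two questions above boil down to comparing two training regimes on the same new data. Here is a minimal sketch of that setup; the function names and dataset label are illustrative stand-ins, not the authors' actual code or API:

```python
# Two regimes compared in the paper: fine-tuning pretrained weights vs.
# training the same architecture from scratch. Everything below is a toy
# stand-in that just records the configuration of each run.

def load_pretrained(name):
    """Stand-in for loading POSEIDON/MORPH weights pretrained on 'calm' PDE data."""
    return {"name": name, "pretrained": True, "history": []}

def from_scratch(name):
    """Stand-in for a randomly initialized copy of the same architecture."""
    return {"name": name, "pretrained": False, "history": []}

def train(model, dataset, steps):
    """Placeholder training loop: records what the model was trained on."""
    model["history"].append((dataset, steps))
    return model

# Regime 1: adapt existing "river" knowledge to the new shock-driven data.
finetuned = train(load_pretrained("MORPH"), "PLI_shock_data", steps=1000)

# Regime 2: identical architecture and data, but no head start.
scratch = train(from_scratch("MORPH"), "PLI_shock_data", steps=1000)
```

The key point is that both runs see exactly the same new data; the only difference is whether the model starts from pretrained weights.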

The Results: Who Won?

The results were a mix of success and "not quite there yet."

1. The "Shockwave" Test (PLI):

  • Winner: MORPH.
  • The Analogy: MORPH was like a student who had studied a bit of everything (different types of fluids, different dimensions). When faced with the chaotic shockwave, MORPH adapted better. It could see the big picture of the messy interface.
  • POSEIDON (the specialist in smooth fluid flows) struggled a bit more, getting confused by the sudden, violent jumps in the data.

2. The "Shattering" Test (FRAC):

  • Winner: POSEIDON (by a tiny margin).
  • The Analogy: Here, the specialist actually did slightly better. It seems the way POSEIDON was trained on fluid dynamics helped it understand how cracks propagate, even though it wasn't trained on cracks specifically.
  • MORPH did okay, but it didn't have the same edge here.

The "Data Diet" Experiment

The researchers also tested how much "food" (training data) the robots needed to learn.

  • The "Starving" Phase (Little Data): When they gave the robots very few examples to learn from, the pre-trained robots (the ones who knew about rivers) were much better than robots trained from zero. They had a head start, like a student who already knows how to read, trying to learn a new language.
  • The "Feast" Phase (Lots of Data): As they gave the robots more and more examples of the new jobs, the advantage of being pre-trained disappeared. If you show a river-expert enough pictures of meteorites, they can eventually learn the meteorite rules just as well as someone who started with meteorites.
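The "data diet" is essentially a sweep over training-set sizes, running both regimes at each size and comparing their test errors. A sketch of that protocol, with illustrative sizes and function names (not the paper's actual code):

```python
# Sketch of the data-scaling sweep: for each training-set size, fine-tune a
# pretrained model AND train an identical architecture from scratch, then
# compare their errors on held-out test data.

def fit(pretrained: bool, n_samples: int) -> dict:
    """Stand-in for one full training run; returns a record of its configuration."""
    return {"pretrained": pretrained, "n_samples": n_samples}

sample_sizes = [8, 32, 128, 512]  # progressively larger "meals" of data

runs = [
    fit(pretrained, n)
    for n in sample_sizes             # the starving-to-feast sweep
    for pretrained in (True, False)   # head start vs. from zero
]
```

The paper's finding maps onto this grid: at small `n_samples` the pretrained runs win clearly, and the gap closes as `n_samples` grows.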

The Big Takeaway

The main lesson of this paper is: You can't just teach a robot about rivers and expect it to be an expert on explosions.

While the pre-trained models were helpful (especially when data was scarce), they weren't perfect. The "smooth river" knowledge didn't transfer perfectly to "chaotic explosion" knowledge.

The Future:
To make truly robust AI for engineering and safety (like designing better spacecraft or predicting earthquakes), we need to stop training these models only on smooth, calm data. We need to feed them more "extreme" data—shocks, fractures, and explosions—right from the start.

Think of it as upgrading the curriculum: We need to stop teaching our AI students just about calm lakes and start teaching them about tsunamis and earthquakes so they are ready for the real world.
