Identification of optimal history variables and corresponding hereditary laws in linear viscoelasticity

This paper presents an operator-theoretic framework for linear viscoelasticity that utilizes Kolmogorov N-widths to identify optimal finite-rank internal-variable approximations, ensuring thermodynamic consistency, stability, and rigorous convergence bounds for reduced-order modeling.

Original authors: Ignacio Romero, Michael Ortiz

Published 2026-04-20

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

Imagine you are trying to predict how a piece of memory foam or a sticky polymer will react when you squeeze it. Unlike a simple spring that snaps back instantly, these materials are "viscoelastic." They remember how you squeezed them in the past, and that history affects how they push back right now.

In physics and engineering, we call this the hereditary law. It's a fancy way of saying: "The stress you feel today depends on every little bit of strain you applied yesterday, last week, and a million years ago."
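In symbols, the hereditary law for a 1-D linear material is a convolution: the stress is the relaxation modulus G weighted over every past strain increment. A minimal sketch of the discretized integral (with a hypothetical relaxation modulus, not one from the paper) makes the "remember everything" cost concrete:

```python
import numpy as np

# Hereditary law (1-D): stress(t) = integral of G(t - s) d(strain)(s).
# Discretized naively, EVERY past strain increment contributes to the
# stress at every step -- this is the "infinite memory" burden.

def stress_from_history(G, strain, dt):
    """G: relaxation modulus sampled at t = 0, dt, 2*dt, ...
    strain: the full strain history at the same times."""
    d_strain = np.diff(strain, prepend=0.0)   # strain increments
    n = len(strain)
    sigma = np.empty(n)
    for k in range(n):                         # O(n) work PER step...
        sigma[k] = np.sum(G[k::-1] * d_strain[:k + 1])
    return sigma                               # ...so O(n^2) overall

# Example: a standard-linear-solid-like modulus, unit step strain at t = 0
dt = 0.01
t = np.arange(0, 5, dt)
G = 1.0 + 2.0 * np.exp(-t / 0.5)   # hypothetical: G(0) = 3, G(inf) = 1
strain = np.ones_like(t)           # squeeze once and hold
sigma = stress_from_history(G, strain, dt)
# the stress relaxes from G(0) toward G(inf) even though the strain is frozen
```

For a step strain the stress simply traces G(t), which is exactly the "memory" the computer has to carry: drop any part of the history and the convolution above is no longer computable.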

The Problem: The "Infinite Memory" Burden

The paper starts with a massive headache for computer simulations. To be perfectly accurate, a computer needs to remember the entire history of the material's deformation.

Imagine trying to calculate the weather. If you had to remember every single raindrop that fell since the beginning of time to predict tomorrow's storm, your computer would crash instantly. It's too much data.

Engineers usually try to fix this by using "shortcuts" (like the Prony series), which are like guessing the material's memory based on a few standard patterns. But the authors ask: "What if our shortcuts aren't the best ones? What if there's a perfect, most efficient way to summarize this memory?"
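The Prony-series shortcut works by replacing the stored history with a handful of internal variables, each decaying exponentially, so the state can be updated one step at a time with no history kept at all. A sketch with made-up moduli and relaxation times (a generic 3-term fit, not calibrated to any real material):

```python
import numpy as np

def prony_step(q, eps_new, eps_old, G_i, tau_i, dt):
    """Advance the internal variables q of a Prony series one time step.
    Each q[i] 'remembers' its share of the history through exponential
    decay -- no stored history is needed, only the current q."""
    a = np.exp(-dt / tau_i)
    # standard exact update assuming a constant strain rate over the step
    return a * q + G_i * (tau_i / dt) * (1.0 - a) * (eps_new - eps_old)

# hypothetical 3-term Prony fit
G_inf = 1.0
G_i = np.array([0.5, 0.3, 0.2])
tau_i = np.array([0.1, 1.0, 10.0])

dt, eps = 0.01, 0.0
q = np.zeros(3)
for step in range(500):                  # ramp the strain up, then hold
    eps_new = min(step * dt, 1.0)
    q = prony_step(q, eps_new, eps, G_i, tau_i, dt)
    eps = eps_new

sigma = G_inf * eps + q.sum()            # stress from just 3 state variables
```

The catch the authors point out: the number of terms and the relaxation times tau_i are guessed or fitted by convention, with no guarantee that these particular internal variables are the most informative ones.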

The Solution: The "Best Summary" (N-Widths)

The authors use a branch of mathematics called Kolmogorov N-widths. Think of this as the ultimate "compression algorithm" for memory.

Here is the analogy:
Imagine you have a 10-hour movie (the material's full history).

  • The Old Way: You try to summarize it by picking 5 random scenes. You might miss the plot entirely.
  • The Old "Smart" Way: You pick 5 scenes based on what you think is important (like action scenes). It's better, but maybe you missed the emotional climax.
  • The Authors' Way: They use math to find the 5 specific scenes that, if you watched them, would allow you to reconstruct the entire movie with the least amount of missing information possible.

In the paper, these "scenes" are called Optimal History Variables. They are the specific, mathematically perfect "notes" you need to take to remember the material's behavior without storing the whole encyclopedia.

How They Did It (The "Encoder/Decoder" Trick)

The paper treats the material's memory as a machine (an operator) that takes a history of squeezing (input) and outputs a history of resistance (output).

  1. The Encoder: They figure out the best way to "compress" the input history into a small number of numbers (variables).
  2. The Decoder: They figure out the best way to "rebuild" the output from those numbers.

They proved that if the material behaves nicely (which most real materials do), this machine is compact. In math-speak, this means it can be approximated as closely as you like by a smaller, simpler machine with only finitely many internal variables, and the N-widths tell you exactly how good each size of machine can possibly be.

The "Gibbs Phenomenon" (The Glitch)

In their tests, they found something interesting. When they simulated a sudden, sharp change (like slamming the material with a step-function load), their perfect summary developed a little "glitch" right around the sharp jump. It looked like a ringing bell that wouldn't stop vibrating.

They call this the Gibbs phenomenon.

  • Analogy: Imagine trying to draw a perfect square using only smooth, wavy sine waves. You can get close, but at the sharp corners, your waves will overshoot and wiggle.
  • The Fix: The authors explain that this wiggle is a natural trade-off. If you want a super-smooth summary, you have to accept a little wiggle at the sharp edges. They showed how to manage this so it doesn't ruin the simulation.
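The square-wave analogy above is easy to reproduce with an ordinary Fourier partial sum (standard textbook series, not the paper's optimal basis): no matter how many smooth modes you keep, the overshoot next to the jump never shrinks below roughly 9% of the jump height.

```python
import numpy as np

# Partial Fourier sum of a square wave of amplitude +/-1:
#   f(x) = (4/pi) * sum over odd k of sin(k*x)/k.
# Near the jump at x = 0 the sum overshoots the level 1 by about 0.18
# (about 9% of the total jump of 2), however many terms we keep.

def square_wave_partial_sum(x, N):
    k = np.arange(1, N + 1, 2)                    # odd harmonics up to N
    return (4 / np.pi) * np.sum(np.sin(np.outer(x, k)) / k, axis=1)

x = np.linspace(0.001, np.pi / 2, 4000)
overshoots = [square_wave_partial_sum(x, N).max() - 1.0
              for N in (11, 101, 1001)]
# the overshoot does not go away as N grows -- the Gibbs phenomenon
```

Adding more terms squeezes the wiggle into a narrower region next to the jump but never removes it, which is exactly the trade-off the authors describe: a smooth, low-rank summary must pay a small, localized price at sharp loading events.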

Why This Matters (The "Polycrystal" Test)

To prove this isn't just theory, they tested it on a Representative Volume Element (RVE) of a polycrystalline material — a small sample block that stands in for the bulk material.

  • The Analogy: Imagine a block of Swiss cheese where every hole and every chunk of cheese is made of a different material. It's a chaotic mess.
  • The Result: They took this chaotic, complex material and used their "Optimal Summary" method. They found that they could replace the incredibly complex, slow-to-calculate model with a tiny, fast model that was just as accurate.

The Takeaway

This paper gives engineers a rulebook for efficiency.

  1. Don't guess: You don't need to guess which history variables are important. The math tells you exactly which ones are the "best."
  2. Save time: You can simulate complex materials (like new metamaterials or biological tissues) much faster because you only need to track a few "optimal" variables instead of the whole history.
  3. Data-agnostic: It doesn't matter if you got your data from a lab experiment or a supercomputer; this method finds the best summary for any linear viscoelastic material.

In short: They turned the problem of "remembering everything" into "remembering the most important things," using a mathematical lens that guarantees you aren't missing anything crucial. It's the difference between carrying a library of books with you to answer a question, versus carrying a single, perfectly written cheat sheet.
