On the rate of convergence in superquadratic Hamilton–Jacobi equations with state constraints

This paper establishes convergence rates in the vanishing viscosity limit for superquadratic Hamilton–Jacobi equations with state constraints, proving an O(ε^{1/2}) rate for nonnegative Lipschitz data and an improved O(ε^{p/(2(p−1))}) rate for semiconcave data when p > 2.

Prerona Dutta, Khai T. Nguyen, Son N. T. Tu

Published 2026-03-10

Imagine you are trying to find the most efficient path for a hiker to travel across a mountainous landscape. This landscape has a specific shape (a bounded domain), and the hiker wants to minimize their total "effort" or "cost" to get from point A to point B.

In the world of mathematics, this is modeled by something called a Hamilton-Jacobi equation. It's a complex formula that tells us the "value" (or cost) of being at any specific spot on the map, assuming the hiker plays optimally.

Now, here is the twist in this paper:

1. The Two Types of Hikers (The Equations)

The authors are comparing two different ways of modeling this hiker's journey:

  • The Deterministic Hiker (The Ideal): This hiker moves perfectly according to the rules. They know exactly where to step to minimize cost. This is represented by the equation with no noise.
  • The Wobbly Hiker (The Realistic): In the real world, things aren't perfect. Maybe the ground is slippery, or there's a tiny bit of wind pushing the hiker off course. In math, we model this "wobble" or "noise" with a term called viscosity (represented by the symbol ε). The wobbly hiker follows a slightly different equation that includes a "smoothing" effect.

The paper asks a simple but deep question: As the wobble gets smaller and smaller (as ε goes to zero), how fast does the Wobbly Hiker's path converge to the Ideal Hiker's path?

2. The "Superquadratic" Mountain

The terrain in this paper is special. It's described as superquadratic.

  • Analogy: Imagine a bowl. A standard bowl is a parabola (quadratic). A superquadratic bowl is like a bowl that gets steeper and steeper as you go up the sides. It's much more "aggressive."
  • Why it matters: When the terrain is this steep (mathematically, when the power p > 2), the behavior of the hiker near the edge of the map (the boundary) becomes very tricky and unpredictable.
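To see how aggressive this steepness is, here is a tiny self-contained sketch (the exponents and sample points below are my illustrative choices, not values from the paper) comparing quadratic growth |q|² against superquadratic growth |q|⁴:

```python
# Quadratic vs. superquadratic growth of a power Hamiltonian H(q) = |q|**p.
# The exponents (p = 2 and p = 4) and the sample points are illustrative
# choices, not values taken from the paper.

def H(q: float, p: float) -> float:
    """Power-growth Hamiltonian |q|^p."""
    return abs(q) ** p

for q in (1.0, 2.0, 5.0, 10.0):
    quad = H(q, 2.0)       # standard "bowl" (p = 2)
    superquad = H(q, 4.0)  # superquadratic "bowl" (p = 4 > 2)
    print(f"q = {q:5.1f}: |q|^2 = {quad:8.1f}, |q|^4 = {superquad:10.1f}, "
          f"ratio = {superquad / quad:7.1f}")
```

The ratio column (which equals q²) shows the superquadratic wall pulling away: at q = 10 the p = 4 bowl is already 100 times steeper than the quadratic one, and this runaway steepness is exactly what makes the boundary behavior delicate.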

3. The "State Constraint" (The Fence)

The hiker is not allowed to leave the map. There is an invisible fence around the domain Ω.

  • The Problem: If the hiker hits the fence, they can't just walk through it. They have to stay inside.
  • The Difficulty: When the terrain is steep (superquadratic), we don't have a clear rulebook for what happens exactly when the hiker touches the fence. In less steep terrains, we know the hiker bounces off or stops in a predictable way. Here, it's a mystery. This makes it hard to build "barriers" (mathematical fences) to prove how close the two hikers are.

4. The Main Discovery: How Fast Do They Meet?

The authors wanted to know the speed of convergence. If we reduce the wobble by half, does the error in the path also get cut in half? Or does it get cut by a square root?

They found two different answers depending on the "terrain" (the data f):

Scenario A: The Rough Terrain (General Lipschitz Data)

Imagine the landscape has some bumps and jagged edges, but nothing too crazy.

  • The Result: The Wobbly Hiker gets close to the Ideal Hiker at a rate of O(√ε).
  • The Analogy: Think of it like walking on a foggy day. As the fog (ε) clears, your view improves, but not instantly. If you cut the fog density by 4 times, your visibility only improves by 2 times (the square root). It's a "slow and steady" convergence.
  • Why? The authors had to use a clever trick called "doubling of variables." Imagine taking a photo of the Ideal Hiker and the Wobbly Hiker at the same time and measuring the distance between them. Because the terrain is steep and the boundary is tricky, the math forces this square-root relationship.
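For readers who want one level more detail: a generic, textbook form of this trick (a sketch of the standard technique; the paper's actual auxiliary function carries extra terms to handle the boundary) studies the maximum of a penalized function of two copies of the variable:

```latex
% Doubling of variables: compare the inviscid solution u at a point x with
% the viscous solution u^\varepsilon at a second point y, while penalizing
% configurations where x and y drift apart.
\Phi(x, y) \;=\; u(x) \;-\; u^{\varepsilon}(y) \;-\; \frac{|x - y|^{2}}{2\delta}
```

Maximizing Φ forces the two points close together, and balancing the penalty parameter δ against the viscosity ε is what produces the square-root rate.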

Scenario B: The Smooth, Flat Terrain (Semiconcave Data)

Now, imagine the landscape is very smooth, and the "cost" of walking drops to zero right at the edge of the map (like a flat valley floor).

  • The Result: The convergence is much faster! The error shrinks at the improved rate O(ε^{p/(2(p−1))}), whose exponent is strictly bigger than 1/2 whenever p > 2.
  • The Analogy: If the terrain is smooth and the hiker knows exactly where the "safe zone" is, the wobble matters less. As the fog clears, the hiker snaps into the perfect path much more quickly than in the rough terrain case.
  • The Catch: This only works if the "cost" function is nice and smooth (semiconcave) and compactly supported (it vanishes outside a bounded region). If the data is messy, we fall back to the slower square-root speed.
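The gap between the two rates is easy to quantify. The short sketch below (my illustration; the exponent formula is the one stated in the abstract, and the sample values of p are my choices) computes the improved exponent p/(2(p−1)) for a few superquadratic powers, and how much the error shrinks each time the viscosity ε is halved:

```python
# Compare the two convergence rates discussed in the paper:
#   rough (Lipschitz) data:     error ~ C * eps**(1/2)
#   smooth (semiconcave) data:  error ~ C * eps**(p / (2*(p - 1))), for p > 2
# The sample values of p below are illustrative choices.

def improved_exponent(p: float) -> float:
    """Rate exponent p / (2(p - 1)) for semiconcave data (valid for p > 2)."""
    return p / (2.0 * (p - 1.0))

SQRT_RATE = 0.5  # exponent for general Lipschitz data

for p in (3.0, 4.0, 10.0):
    a = improved_exponent(p)
    # factor by which the error shrinks when eps is cut in half
    shrink_improved = 0.5 ** a
    shrink_sqrt = 0.5 ** SQRT_RATE
    print(f"p = {p:4.1f}: exponent = {a:.3f} (> {SQRT_RATE}); halving eps "
          f"multiplies the error by {shrink_improved:.3f} "
          f"instead of {shrink_sqrt:.3f}")
```

Since p/(2(p−1)) > 1/2 for every p > 2, the semiconcave rate is always an improvement, though it drifts back down toward the square-root rate as p grows very large.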

5. Why This Paper is a Big Deal

Before this paper, mathematicians knew how fast these hikers met up when the terrain was less steep (when p ≤ 2). But for the steep terrain (p > 2), the math was stuck. The boundary behavior was too mysterious to solve.

The authors, Dutta, Nguyen, and Tu, managed to:

  1. Build new tools: They created new "barriers" (mathematical walls) to handle the tricky boundary behavior of the steep terrain.
  2. Prove the speed: They showed that even in this difficult, steep world, the Wobbly Hiker does eventually catch up to the Ideal Hiker, and they calculated exactly how fast.
  3. Refine the estimate: They found that for smooth, well-behaved landscapes, the convergence is actually faster than the standard "square root" rule, which is a significant improvement for simulations and computer models.

Summary

In plain English: This paper figures out how quickly a "noisy" mathematical model of a hiker on a steep, bounded mountain converges to the "perfect" model as the noise disappears. They proved that for general rough terrain, the speed is moderate (square root), but for smooth, flat terrain, the speed is much faster.

This is crucial for computer simulations in physics, economics, and engineering, where we often use "noisy" approximations to solve complex problems. Knowing the exact speed of convergence tells engineers how much "noise" they can tolerate before their simulation becomes inaccurate.