On the Stability Connection Between Discrete-Time Algorithms and Their Resolution ODEs: Applications to Min-Max Optimisation

This paper establishes a rigorous theoretical framework proving that exponential stability in O(s^r)-resolution ODEs implies exponential stability in their corresponding discrete-time algorithms under sufficiently small step sizes, and applies this connection to analyze the convergence properties of various min-max optimization methods while relaxing traditional Hessian invariance assumptions.

Amir Ali Farzin, Yuen-Man Pun, Philipp Braun, Iman Shames

Published 2026-03-03

Imagine you are trying to find the perfect spot to set up a campfire on a hillside. You want to be high enough to see the view (maximize) but low enough to stay warm (minimize). This is a Min-Max problem: two competing goals happening at the same time.

In the real world, we don't have a magic map. Instead, we use algorithms (step-by-step rules) to take small steps toward the perfect spot. These algorithms are like a hiker taking discrete steps: step, step, step.

However, analyzing whether these steps will actually lead to the campfire or get the hiker lost in a loop is very hard. It's like trying to predict the exact path of a bouncing ball by looking at every single frame of a video.

This paper introduces a clever trick: Stop looking at the steps; look at the flow.

The Big Idea: The "Smooth River" vs. The "Rocky Path"

The authors propose that instead of analyzing the hiker's jerky, step-by-step movements (the Discrete-Time Algorithm), we should imagine a smooth river flowing down the hill that the hiker is trying to follow (the Continuous-Time ODE).

  • The Discrete Algorithm (DTA): This is the hiker taking big, clumsy steps. Sometimes they overshoot, sometimes they wobble. It's hard to predict if they will reach the goal or fall into a pit.
  • The Resolution ODE: This is the smooth river. Water flows continuously. It's much easier to study the river's current to see where it naturally goes.

The paper's main discovery is a Stability Bridge. They prove that if the river (the smooth flow) is guaranteed to flow smoothly into the campfire (a stable equilibrium), then the hiker (the algorithm) will also find the campfire, provided the hiker takes small enough steps.
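The "small enough steps" caveat is essential, and a toy example makes it concrete. Below is a minimal sketch (my own illustration, not from the paper) using the simplest stable flow, dz/dt = -z, and its forward-Euler discretization z_{k+1} = (1 - s)·z_k: the river always flows to 0, but the hiker only gets there if the stride s is small.

```python
# Hedged toy illustration of the "stability bridge": the ODE dz/dt = -z is
# exponentially stable, and its forward-Euler discretization
# z_{k+1} = (1 - s) * z_k inherits that stability only for small step sizes.
def euler_trajectory(s, z0=1.0, steps=50):
    z = z0
    for _ in range(steps):
        z = z + s * (-z)  # one Euler step of dz/dt = -z
    return z

small = euler_trajectory(s=0.1)   # |1 - 0.1| < 1: shrinks toward 0
large = euler_trajectory(s=2.5)   # |1 - 2.5| > 1: blows up
print(small, large)
```

With s = 0.1 the iterate shrinks geometrically toward the equilibrium; with s = 2.5 the same stable river produces a diverging hiker, which is exactly why the paper's guarantee is conditional on a sufficiently small step size.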

The "Resolution" Concept

The authors use a mathematical tool called an O(s^r)-Resolution ODE. Think of this as a "high-definition" map of the river.

  • O(1) Resolution: A rough sketch of the river. It tells you the general direction.
  • O(s) Resolution: A detailed, GPS-level map that accounts for the hiker's specific stride length. It's more accurate.

The paper proves that if the river on this detailed map flows safely into the goal, the hiker will too.
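One way to see what the stride-aware map buys you is a back-of-the-envelope check (my own sketch, not the paper's computation). On the textbook bilinear game f(x, y) = xy, one gradient-descent-ascent step multiplies the distance to the saddle by sqrt(1 + s^2), while a standard modified-equation argument in the spirit of the O(s)-resolution idea adds a drift term (s/2)·z to the O(1) flow, predicting a growth factor of exp(s^2/2) per step. The two agree to high order in s:

```python
import math

# Toy bilinear game f(x, y) = x * y, with GDA vector field F(z) = (-y, x).
# Forward-Euler GDA, z_{k+1} = z_k + s * F(z_k), multiplies ||z|| by
# sqrt(1 + s^2) each step. A modified-equation sketch (hedged; in the spirit
# of the O(s)-resolution ODE) adds a drift (s/2) * z to the flow, predicting
# a per-step growth factor of exp(s^2 / 2). The two match to O(s^4).
s = 0.05
discrete_growth = math.sqrt(1 + s**2)   # exact per-step growth of GDA
ode_growth = math.exp(s**2 / 2)         # prediction from the O(s)-level flow
print(discrete_growth, ode_growth)
```

The O(1) map (the pure rotation field) predicts no growth at all, so it misses the outward spiral entirely; the stride-aware correction is what makes the map match the hiker.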

Applying it to Min-Max Games

The authors tested this "River vs. Hiker" theory on several famous hiking strategies (algorithms) used in AI and economics:

  1. Gradient Descent-Ascent (GDA): The basic hiker.
    • Finding: Sometimes this hiker gets stuck in a circle (a limit cycle) and never finds the fire. The "river" analysis shows why: the river itself swirls in a circle under certain conditions.
  2. Extragradient (GEG) & Proximal Point (PPM): These are "smart hikers" who peek ahead before stepping.
    • Finding: The river analysis shows these hikers are much better. If they take small steps, the river guides them straight to the saddle point (the perfect campfire spot).
  3. Newton Methods: These hikers use a telescope and a compass (second-order information) to know the exact shape of the hill.
    • Finding: They are very robust. Even if the hill is weirdly shaped, the river analysis proves they will find the spot, as long as the map isn't broken.

Why This Matters (The "No-Hessian" Superpower)

Usually, to prove a hiker won't get lost, mathematicians have to check the "Hessian" (a complex measure of how curved the hill is). If the hill is flat or weird at the goal, the old math says, "We can't prove anything!"

The paper's breakthrough: By looking at the smooth river directly using a tool called a Lyapunov function (think of it as a "height gauge" that always goes down as you get closer to the goal), they can prove the hiker will succeed even if the hill is weird or flat at the top. They don't need the Hessian to be perfect.
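The "height gauge" idea can be made concrete with a tiny sketch (my own illustration under simplifying assumptions, not the paper's proof). On the bilinear game f(x, y) = xy the proximal point step has a closed form, and the candidate Lyapunov function V(z) = ||z||^2 strictly decreases at every iterate, certifying convergence without inspecting any Hessian curvature at the saddle:

```python
# Hedged sketch: a Lyapunov "height gauge" V(z) = x^2 + y^2 on the bilinear
# game f(x, y) = x * y. The proximal point step z+ = z + s * F(z+), with
# F(x, y) = (-y, x), can be solved exactly here, and V drops every step.
def ppm_step(x, y, s):
    return (x - s * y) / (1 + s**2), (y + s * x) / (1 + s**2)

x, y, s = 2.0, -1.0, 0.5
values = [x * x + y * y]        # the height gauge along the trajectory
for _ in range(30):
    x, y = ppm_step(x, y, s)
    values.append(x * x + y * y)

print(values[0], values[-1])    # gauge starts high, ends near zero
```

Each step scales V by 1/(1 + s^2), so the gauge reads strictly lower after every iterate regardless of how the landscape is curved at the saddle, which is the "no-Hessian" flavor of argument the paper develops in general.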

The Takeaway

This paper gives us a new, powerful lens to study optimization algorithms:

  1. Don't just watch the steps. Model the continuous flow the steps are trying to follow.
  2. If the flow is stable, the steps are stable. If the smooth river leads to the goal, the hiker will get there too, as long as they don't take giant, reckless leaps.
  3. It works for tricky problems. This method works even when the "hill" is weird, helping us design better AI training methods and economic models.

In short: If you want to know if a robot will find its way, don't just watch its feet. Watch the wind blowing it, and if the wind blows it home, the robot will get there too.
