Riemannian Langevin Dynamics: Strong Convergence of Geometric Euler-Maruyama Scheme

This paper establishes strong convergence of order $1/2$ for a geometric Euler-Maruyama scheme applied to stochastic differential equations on Riemannian manifolds, and derives a corresponding Wasserstein bound for sampling via Riemannian Langevin dynamics.

Zhiyuan Zhan, Masashi Sugiyama

Published 2026-03-05

Imagine you are trying to teach a robot to walk through a very specific, winding forest. The forest floor isn't flat; it's a complex, curved landscape (a manifold) with hills, valleys, and twists. The robot's goal is to explore this forest randomly but eventually settle down in the most beautiful, peaceful clearing (the target distribution).

This is the problem of Riemannian Langevin Dynamics (RLD): a mathematical recipe for how a robot (or a computer algorithm) should wander around a curved surface so that, in the long run, it settles into the target distribution.

However, computers are bad at walking on curves. They are built to walk on flat, grid-like floors (Euclidean space). To make the robot walk on the curved forest floor, we use a trick called the Geometric Euler-Maruyama (GEM) scheme. Think of it as a "stepping stone" method: instead of tracing the curve perfectly, the robot takes a straight step in the surrounding flat space, then snaps back onto the forest floor at the nearest valid spot (a projection back onto the manifold).
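A minimal sketch of this "stepping stone" idea, assuming the manifold is the unit sphere and using nearest-point projection (normalization) as the snap-back; the paper's GEM scheme is stated for general manifolds, and the toy drift below is hypothetical:

```python
import numpy as np

def gem_step(x, drift, h, rng):
    """One geometric Euler-Maruyama step on the unit sphere S^2 in R^3.

    Take a flat Euler-Maruyama step in the tangent plane at x, then
    "snap back" onto the sphere by projecting to the nearest point.
    """
    P = np.eye(3) - np.outer(x, x)                 # tangent-plane projector at x
    xi = P @ rng.standard_normal(3)                # tangent part of the Brownian kick
    y = x + h * (P @ drift(x)) + np.sqrt(h) * xi   # straight step in ambient R^3
    return y / np.linalg.norm(y)                   # snap back: nearest point on sphere

# Toy drift pulling toward the north pole (a stand-in for the Langevin drift).
drift = lambda x: np.array([0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 0.0])
for _ in range(2000):
    x = gem_step(x, drift, h=1e-3, rng=rng)
```

Note that each intermediate point `y` leaves the sphere, but the snap-back returns `x` to it at machine precision, so the robot never drifts off the forest floor.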

The Big Question

The authors of this paper asked a crucial question: How accurate are these stepping stones?

In the flat world, we know that if you take small steps, your path stays very close to the true path: the error shrinks predictably as you make the steps smaller. But in the curved forest, no one had proven that this "stepping stone" method tracks the true path at every single step the robot takes. It was only known to work on average (weak convergence: the robot ends up in the right neighborhood), not pathwise (strong convergence: the robot never takes a wild, wrong turn at any specific moment).
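In symbols, the two notions look like this (a schematic statement; the paper's exact norms and constants may differ). Writing $X$ for the true path, $\widehat{X}$ for the numerical path, and $d$ for the Riemannian distance:

```latex
\underbrace{\bigl|\mathbb{E}[f(X_T)] - \mathbb{E}[f(\widehat{X}_T)]\bigr| \le C_f\, h}_{\text{weak: right neighborhood on average}}
\qquad
\underbrace{\mathbb{E}\Bigl[\,\sup_{0 \le t \le T} d\bigl(X_t, \widehat{X}_t\bigr)^2\Bigr]^{1/2} \le C\, h^{1/2}}_{\text{strong: close at every moment}}
```

Weak bounds only control averages of test functions $f$ at the final time; the strong bound on the right is what this paper establishes.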

The Breakthrough

The authors proved that yes, the stepping stone method works just as well on curved surfaces, with the same strong convergence order $1/2$ as in the flat case.

They showed that if you take steps of size $h$, the robot's path stays within a distance of roughly $\sqrt{h}$ from the true, perfect path. This is the same level of accuracy we get on flat ground.

How They Did It (The Analogy)

To prove this, the authors used a clever two-step strategy:

  1. The "Shadow" Trick (Extrinsic Extension):
    Imagine the forest is a sculpture floating in a giant, empty room. The robot is supposed to stay on the sculpture. The authors realized they could pretend the robot was walking in the empty room (flat space) instead of on the sculpture.

    • They created a "shadow" version of the forest rules that works everywhere in the room, not just on the sculpture.
    • They proved that if the robot walks in the room using standard rules, it stays very close to where it should be if it were walking on the sculpture.
  2. The "Snap-Back" Comparison:
    Now they had two paths:

    • Path A: The robot walking on the sculpture (the real, hard way).
    • Path B: The robot walking in the room and being "snapped back" to the sculpture at every step (the easy, computer-friendly way).

    They used a mathematical "ruler" (Taylor expansion) to measure the tiny gap between Path A and Path B. Because the forest (the manifold) has no infinitely sharp spikes or unbounded curvature (geometric boundedness), the gap between the two paths stays tiny and predictable at every step.
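The per-step gap this "ruler" measures is, schematically, the second-order agreement between the true geometric step (the exponential map) and the snap-back (a retraction). Under geometric boundedness assumptions, for a small tangent vector $v$ at a point $x$:

```latex
d\bigl(\operatorname{Exp}_x(v),\, \mathcal{R}_x(v)\bigr) \;\le\; C\,\lVert v \rVert^2,
```

where $\operatorname{Exp}_x$ is the Riemannian exponential, $\mathcal{R}_x$ is the snap-back projection, and $C$ depends on curvature bounds but not on $x$. Summing these small per-step errors over the $T/h$ steps, with the stochastic cancellation typical of such proofs, is what produces the overall order-$1/2$ bound.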

Why This Matters

This isn't just about math theory; it's about Generative AI (like the tools that create images of cats or write stories).

  • The Problem: Real-world data (like photos of faces) doesn't live on a flat grid. It lives on a complex, curved shape.
  • The Solution: To generate new, realistic data, AI models use these "random walk" algorithms (diffusion models) on that curved shape.
  • The Impact: Before this paper, we didn't have a solid guarantee that the computer's "stepping stone" approximation was accurate enough for every single step. Now, we know it is. This gives engineers confidence that they can build faster, more reliable AI models that understand the true, curved structure of data, rather than forcing it into a flat box.

In a Nutshell

The authors took a complex, curved problem that computers struggle with, mapped it onto a flat surface where computers are experts, and proved that the "shortcut" method is just as accurate as the real thing. They closed a gap in our understanding, ensuring that the next generation of AI models can navigate the complex landscapes of real-world data with precision.