A robust and adaptive MPC formulation for Gaussian process models

This paper presents a robust and adaptive model predictive control framework that utilizes Gaussian Processes with contraction metrics to learn uncertain nonlinear dynamics online, thereby guaranteeing recursive feasibility, robust constraint satisfaction, and convergence for systems affected by bounded disturbances and unmodeled nonlinearities.

Original authors: Mathieu Dubied, Amon Lahr, Melanie N. Zeilinger, Johannes Köhler

Published 2026-04-14

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

Imagine you are trying to teach a robot drone to fly through a complex, hilly landscape. You want it to be fast, efficient, and, most importantly, safe. It must never crash into the ground or fly too close to obstacles.

The problem is, you don't have a perfect map of the wind, the terrain, or how the drone's motors react. You have a "best guess" model, but it's incomplete. If you pad that guess with huge safety margins, the drone flies very slowly (too conservative). If you trust the guess and fly fast, it might crash (too risky).

This paper presents a clever solution called Robust Adaptive Model Predictive Control (RAMPC) using a machine learning tool called Gaussian Processes (GP). Here is how it works, broken down into simple concepts:

1. The "Smart Guess" (Gaussian Processes)

Think of the drone's unknown behavior (like how wind pushes it or how the ground affects its lift) as a mystery function.

  • The Old Way: Engineers usually try to guess the shape of this mystery with a simple formula (like a straight line). If the real world is curvy, the formula fails.
  • The New Way (GP): Instead of a rigid formula, the paper uses a Gaussian Process. Imagine this as a "living, breathing map." It doesn't just guess the value; it draws a cloud of uncertainty around the guess.
    • Where the drone has flown before, the cloud is thin (high confidence).
    • Where the drone hasn't been, the cloud is thick (low confidence).
    • As the drone flies, it collects new data, and the cloud shrinks, making the map more accurate in real-time.
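The "cloud of uncertainty" idea can be made concrete with a minimal GP regression sketch. This is generic textbook GP math with an RBF kernel, not the specific kernel or hyperparameters from the paper; the training points and the "wind" function here are purely illustrative.

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, length_scale=1.0, noise=1e-3):
    """Plain GP regression with an RBF kernel: mean and std at the test points."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)

    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = k(X_test, X_train)
    K_ss = k(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha                                   # the "best guess"
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)         # the "cloud"
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# "Wind" samples the drone has collected at a few points along its path
X = np.array([0.0, 1.0, 2.0])
y = np.sin(X)

# Query one point near the data (x=1) and one far from it (x=5):
mean, std = gp_posterior(X, y, np.array([1.0, 5.0]))
# the cloud is thin where the drone has flown, thick where it hasn't
```

Adding each new flight measurement to `X_train` shrinks `std` near that point, which is exactly the "map getting more accurate in real time" described above.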

2. The "Safety Bubble" (Robust Prediction)

Even with a smart map, the drone can't be 100% sure where it will be in 5 seconds. So, how do we guarantee safety?

  • The Analogy: Imagine the drone is walking through a dark forest. You can't see the trees perfectly, but you know they are somewhere within a certain distance.
  • The Solution: Instead of planning a path for a single point (the drone), the controller plans a path for a tube (a safety bubble) around the drone.
  • The Magic Trick (Contraction Metrics): Usually, as you look further into the future, your uncertainty grows like a balloon inflating until it's huge and useless. This paper uses a mathematical tool called a Contraction Metric. Think of this as a "deflator" or a "squeeze." It mathematically proves that even if the drone wobbles, the "tube" of possible future positions won't explode in size. It stays tight and manageable.
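The "deflator" effect can be sketched with a few lines of arithmetic. If the contraction metric guarantees the tracking error shrinks by a factor `rho < 1` each step while disturbances add at most `w_bar`, the tube radius follows a simple recursion that settles instead of exploding. The numbers `rho` and `w_bar` below are illustrative, not values from the paper.

```python
# Tube-radius recursion under a contraction rate rho < 1:
#     delta[k+1] = rho * delta[k] + w_bar
# The geometric series converges to w_bar / (1 - rho), so the "safety
# bubble" stays tight no matter how far ahead the controller predicts.
rho, w_bar = 0.8, 0.1   # illustrative contraction rate and disturbance bound
delta = 0.0
radii = []
for _ in range(50):
    delta = rho * delta + w_bar
    radii.append(delta)

limit = w_bar / (1 - rho)   # steady-state tube radius: 0.5
```

Without contraction (`rho >= 1`), the same recursion grows without bound, which is the "balloon inflating until it's useless" problem the paper avoids.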

3. The "Adaptive Learner" (Online Updates)

This is the "Adaptive" part of the title.

  • The Problem: If you update your map while the drone is flying, the rules of the game change. The "safety bubble" you calculated 1 second ago might not fit the new map, causing the computer to panic and say, "I can't find a safe path!" (This is called losing recursive feasibility).
  • The Solution: The authors created a system that keeps a collection of maps (old and new).
    • When the drone learns something new, it doesn't just throw away the old map. It keeps the old one as a "safety net."
    • The controller then finds a path that works for all the maps in the collection simultaneously.
    • The Result: The drone can safely update its brain while flying, getting faster and more efficient as it learns, without ever breaking the safety rules.
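The "path that works for all maps" idea can be illustrated with a toy one-dimensional sketch. The models, dynamics, and bounds below are invented for illustration; the paper's actual formulation optimizes over tubes and GP confidence sets rather than checking point trajectories.

```python
import numpy as np

def rollout(x0, inputs, model):
    """Simulate a candidate input sequence under one model of the dynamics."""
    x, traj = x0, [x0]
    for u in inputs:
        x = model(x, u)
        traj.append(x)
    return np.array(traj)

def safe_for_all(x0, inputs, models, x_max=1.0):
    """Accept a plan only if it satisfies the constraint under EVERY model
    the controller still keeps in its collection (old safety net + new)."""
    return all(np.all(np.abs(rollout(x0, inputs, m)) <= x_max) for m in models)

old_model = lambda x, u: 0.9 * x + u        # conservative prior guess
new_model = lambda x, u: 0.9 * x + 1.2 * u  # updated estimate from fresh data

ok = safe_for_all(0.0, [0.3, 0.3, 0.3], [old_model, new_model])
```

Because every accepted plan was already safe for the old model, swapping in the new model never invalidates the previously planned fallback, which is the intuition behind keeping recursive feasibility during online updates.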

4. The Real-World Test (The Quadrotor)

The authors tested this on a planar quadrotor (a drone model restricted to moving in a 2D vertical plane).

  • The Challenge: The drone had to fly near a "hill" where the ground creates weird air currents (ground effects) that are very hard to model.
  • The Outcome:
    • Old Method: The drone was so scared of the unknown air that it flew very slowly and took a long, winding path.
    • New Method (RAMPC): The drone started cautiously but, as it learned the wind patterns, it confidently sped up and took a direct route.
    • The Win: It reached its destination 6% faster and achieved a 9% lower tracking cost (a measure of control effort and path error) than the non-adaptive version, all while staying strictly within the safety limits.

Summary

Think of this paper as giving a self-driving car a superpower:

  1. It learns on the fly: It builds a better map of the road as it drives.
  2. It plans with a safety net: It doesn't just plan for one future; it plans for a "bubble" of all possible futures.
  3. It never panics: Even when the map changes, it has a mathematical guarantee that a safe path always exists.

This allows robots to be bold (fast and efficient) without being reckless (unsafe), making them ready for the messy, unpredictable real world.
