Radio-based Multi-Robot Odometry and Relative Localization

This paper presents a robust, open-source multi-robot UGV-UAV localization system that fuses UWB ranging and radar data with standard odometry sensors in a pose-graph optimization framework. It accurately estimates the robots' relative positions in challenging environments, outperforms state-of-the-art methods, and offers extensibility to full SLAM.

Andrés Martínez-Silva, David Alejo, Luis Merino, Fernando Caballero

Published 2026-03-10

Imagine you are in a thick fog inside a massive, empty warehouse. You can't see more than a few feet in front of you, and your GPS is broken because the roof is too high. You are trying to find your way, but you are also part of a team: you are a small flying drone, and your partner is a large, heavy robot on wheels.

Your goal? To stay together and know exactly where you both are relative to each other, even though you can't see each other and the environment is confusing.

This paper describes a clever new "teamwork system" that helps robots do exactly that. Instead of relying on cameras or lasers (which get blinded by fog, dust, or darkness), the robots use radio waves, both ultra-wideband ranging signals and radar pulses, to "feel" their way around.

Here is how the system works, broken down into simple parts:

1. The Two Superpowers: UWB and Radar

The robots are equipped with two special tools:

  • The "Radio Ruler" (UWB): Think of this like a high-tech tape measure that uses radio signals. The flying robot has "tags" (like little name tags), and the ground robot has "anchors" (fixed reference points). They constantly shout out, "How far apart are we?" The system uses these distance measurements to figure out where the drone is relative to the ground robot. It's like two people in a dark room holding a rope; even if they can't see, they know exactly how far apart they are by how much rope is between them.
  • The "Echolocation Eye" (Radar): This is like a bat's sonar. The radar sends out radio pulses and listens for the echoes bouncing off walls or the ground. It can tell the robot how fast it's moving and in what direction, even in total darkness or heavy rain. It's the robot's way of saying, "I'm moving forward at 2 meters per second," even if its wheels are slipping.
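The "radio ruler" idea can be sketched with a toy example: given three anchors on the ground robot and measured ranges to the drone's tag, the tag position falls out of a small linear system (classic trilateration). This is a deliberate simplification for illustration, not the paper's actual estimator, which fuses the ranges inside a pose graph rather than solving them in isolation:

```python
import math

def trilaterate_2d(anchors, ranges):
    """Recover a tag's (x, y) from three anchor positions and measured ranges.

    Subtracting the circle equation of anchor 1 from those of anchors 2 and 3
    cancels the quadratic terms, leaving a 2x2 linear system in (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Linearized system A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Three anchors on the ground robot, drone tag actually at (2, 3)
anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
tag = (2.0, 3.0)
ranges = [math.dist(a, tag) for a in anchors]
print(trilaterate_2d(anchors, ranges))  # ~(2.0, 3.0)
```

With noisy ranges the recovered position is only approximate, which is exactly why the real system feeds the raw ranges into an optimizer instead of trusting any single fix.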

2. The Problem: Drifting Off Course

Every robot has a "memory" of where it thinks it is (called odometry). But this memory is flawed.

  • If the ground robot's wheels slip on a patch of mud, it thinks it moved 10 meters, but it only moved 8.
  • If the drone gets hit by a gust of wind, it thinks it's in one spot, but it's actually somewhere else.

Over time, this "drift" makes the robots lose their way. If they rely only on their own memory, they will eventually think they are in a completely different part of the warehouse than they actually are.
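The drift problem above is easy to reproduce: integrate slightly wrong odometry steps and watch the error accumulate. A minimal sketch (the noise level here is illustrative, not from the paper):

```python
import random

def dead_reckon(true_step=1.0, slip_std=0.05, n_steps=100, seed=42):
    """Integrate noisy wheel-odometry steps and compare with ground truth.

    Each step the encoder over- or under-reports by a few percent (wheel
    slip); the errors accumulate because odometry only ever adds increments
    and has no external reference to correct itself against.
    """
    rng = random.Random(seed)
    actual = 0.0
    estimated = 0.0
    for _ in range(n_steps):
        actual += true_step
        estimated += true_step * (1.0 + rng.gauss(0.0, slip_std))
    return actual, estimated, abs(estimated - actual)

actual, estimated, drift = dead_reckon()
print(f"actual {actual:.1f} m, odometry says {estimated:.1f} m, drift {drift:.2f} m")
```

Run it with more steps and the drift keeps growing; no amount of averaging fixes it without an external measurement like the UWB range.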

3. The Solution: The "Group Hug" (Pose-Graph Optimization)

This is the brain of the operation. The researchers built a mathematical system that acts like a group hug for the robots' data.

Instead of each robot trying to solve the puzzle alone, they share their information:

  1. The Radio Ruler tells them, "Hey, we are exactly 5 meters apart right now."
  2. The Echolocation Eye tells them, "I'm moving forward, and I'm not slipping."
  3. The Group Hug (Optimizer): This is a smart computer algorithm (built on an optimization library called Ceres) that takes all these conflicting clues and smooths them out. It says, "Okay, the ground robot's wheels claim it moved 10 meters, but the radio range to the drone only adds up if it moved 8. Let's correct the ground robot's memory."

It constantly adjusts their positions, pulling them back to the truth, ensuring that even if one robot slips or gets confused, the other one helps correct it.
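The core idea of the optimizer can be shown in one dimension: when two sensors disagree about a position, minimizing the weighted sum of squared residuals lands on the inverse-variance weighted average. This scalar toy is an assumption-laden stand-in for what a pose-graph solver like Ceres does over many poses and many measurement types at once:

```python
def fuse_1d(x_odom, sigma_odom, x_uwb, sigma_uwb):
    """Fuse two noisy 1-D position estimates by least squares.

    Minimizing  ((x - x_odom)/sigma_odom)^2 + ((x - x_uwb)/sigma_uwb)^2
    over x gives the inverse-variance weighted average: the more trusted
    (lower-sigma) measurement pulls the answer harder.
    """
    w_odom = 1.0 / sigma_odom**2
    w_uwb = 1.0 / sigma_uwb**2
    return (w_odom * x_odom + w_uwb * x_uwb) / (w_odom + w_uwb)

# Wheel odometry says 10 m, but the wheels slipped; the UWB range to the
# drone (parked at the origin) says 8 m and is trusted more here.
corrected = fuse_1d(x_odom=10.0, sigma_odom=1.0, x_uwb=8.0, sigma_uwb=0.5)
print(round(corrected, 2))  # 8.4
```

A real pose graph solves this same kind of weighted least-squares problem, but jointly over every robot pose and every UWB, radar, and odometry measurement in the trajectory.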

4. The "Magic" Simulation

To test this, the researchers didn't just build robots; they built a virtual reality video game (using a simulator called Gazebo) that closely mimics real life.

  • They created a simulated radio system that behaves like real radios, noise and errors included.
  • They tested it in this virtual world first, then tried it with real robots in a real warehouse.
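A simulated UWB sensor of this kind is often modeled, to first order, as the true distance plus a constant bias and Gaussian noise. The sketch below uses made-up parameter values for illustration; the paper's calibrated noise model may differ:

```python
import math
import random

def simulated_uwb_range(p_tag, p_anchor, noise_std=0.10, bias=0.05, seed=None):
    """Return a synthetic UWB range measurement.

    Models the reading as true distance + constant ranging bias + Gaussian
    noise, a common first-order error model for UWB radios.  The defaults
    (10 cm noise, 5 cm bias) are illustrative assumptions.
    """
    rng = random.Random(seed)
    true_range = math.dist(p_tag, p_anchor)
    return true_range + bias + rng.gauss(0.0, noise_std)

# One noisy range between a drone tag and a ground-robot anchor, in 3-D
print(simulated_uwb_range((2.0, 3.0, 1.5), (0.0, 0.0, 0.5), seed=1))
```

Feeding ranges like these to the optimizer in simulation first lets the researchers check the whole pipeline before touching real hardware.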

The Results: A Winning Team

The system worked incredibly well.

  • In the fog: While cameras would have failed, the radio and radar kept the robots on track.
  • In the dark: The system didn't need a single light source.
  • The Accuracy: Even when the robots were moving fast or far apart, the system kept their relative position accurate to within about a meter (or roughly 3 feet), which is amazing for robots moving in 3D space.

Why This Matters

Think of this as the foundation for a future where robots can work together in disaster zones (like collapsed buildings or nuclear plants) where humans can't go, and where GPS doesn't work.

  • Old Way: Robots get lost because they can't see and their wheels slip.
  • New Way: They hold hands (via radio) and feel their own motion (via radar), constantly correcting each other so they never get lost.

The researchers even made all their code and data public, like sharing a recipe, so other scientists can use it to build even better robot teams. It's a big step toward making robots that are truly reliable partners in the real world.