Integrated cooperative localization of heterogeneous measurement swarm: A unified data-driven method

This paper proposes a unified data-driven method for cooperative localization in heterogeneous robotic swarms. It uses pairwise relative localization and a distributed pose-coupling strategy to achieve robust performance under weakly connected directed measurement topologies, overcoming the restrictive geometric requirements of existing approaches.

Kunrui Ze, Wei Wang, Guibin Sun, Jiaqi Yan, Kexin Liu, Jinhu Lü

Published 2026-03-06

Imagine a group of friends trying to find their way through a giant, pitch-black maze. They can't see the exit, and they don't have GPS. The only way they can figure out where they are is by talking to each other and remembering how far they've walked.

This is the problem of Cooperative Localization (CL) for robots. But here's the twist: in this specific paper, the friends are heterogeneous.

The Problem: A Mismatched Team

In a perfect world, every robot would have a super-accurate camera, a laser scanner, and a perfect compass. But in the real world, things break or are too expensive.

  • Robot A has a camera that can see its neighbor.
  • Robot B only has a basic odometer (like a pedometer) and can't "see" Robot A, but Robot A can see B.
  • Robot C has a laser that measures distance but not direction.
  • Robot D has nothing but a pedometer.

Most old methods for solving this maze problem required a strict rule: "To know where you are, you must be able to see at least three different friends at the same time, and they must be standing in a perfect triangle."

If your team is mismatched (like the one above), you can't form those perfect triangles. The old methods fail, and the robots get lost.

The Solution: The "Data-Driven" Pairing Strategy

The authors of this paper propose a new, smarter way to play the game. Instead of demanding a perfect group, they focus on pairs.

1. The "Pair-Up" Game (Relative Localization)

Imagine Robot A and Robot B. Even if Robot B is "blind" and only has a pedometer, Robot A can still see B.

  • The Old Way: "I can't calculate our positions because I only have one friend, and he's blind."
  • The New Way: "I don't care if he's blind. I will watch how he moves relative to me. I'll use my camera to see where he is, and I'll use my memory of how we both moved to figure out our starting positions."

The paper introduces a Unified Data-Driven Estimator. Think of this as a super-smart calculator that looks at the "history" of movement (odometry) and the "snapshots" of relative position (measurements). It doesn't matter if the data is messy or if only one robot is doing the looking. The calculator finds a pattern in the data to figure out exactly where the two robots started relative to each other.

Analogy: It's like two people walking in the dark. One has a flashlight (the sensor), the other doesn't. The person with the flashlight watches the other person's steps. By combining the "steps taken" (odometry) with the "visuals seen" (sensor data), they can mathematically reconstruct their entire path together, even if the flashlight only works for one of them.
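To make the "pair-up" idea concrete, here is a deliberately simplified 2-D sketch of the principle (not the paper's actual estimator): assume both robots share a known heading, Robot A measures B's relative position at every step, and both log their own odometry. The only unknown is the initial offset of B relative to A, and a least-squares fit over the history recovers it. All variable names and the noise model here are illustrative assumptions.

```python
import numpy as np

# Simplified 2-D sketch of pairwise relative localization (illustrative,
# NOT the paper's estimator): both robots share a common heading, A
# measures B's relative position each step, both log odometry.
# Unknown: the initial offset p0 of B relative to A.

rng = np.random.default_rng(0)
T = 50
p0_true = np.array([5.0, -2.0])  # true initial offset (meters)

# Cumulative odometry displacements of each robot from its own start.
d_A = np.cumsum(rng.normal(0.10, 0.02, (T, 2)), axis=0)
d_B = np.cumsum(rng.normal(0.12, 0.02, (T, 2)), axis=0)

# A's noisy measurement of B's relative position at each step:
#   z_t = p0 + (d_B(t) - d_A(t)) + noise
z = p0_true + (d_B - d_A) + rng.normal(0.0, 0.05, (T, 2))

# Least-squares estimate of p0: average the per-step residuals.
p0_hat = np.mean(z - (d_B - d_A), axis=0)
print(p0_hat)  # close to [5, -2]
```

Note that only one robot ever takes a measurement, matching the "one flashlight" setup: the blind robot contributes nothing but its odometry log, and that is enough.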

2. The "Chain Reaction" (Distributed Localization)

Once every pair of neighbors has figured out their relative starting positions, they pass that information along.

  • Robot A tells Robot B, "We started 5 meters apart."
  • Robot B tells Robot C, "I know where A is, and I know where I am relative to A, so I can tell you where I am."

The paper proves that as long as the group is connected (like a chain where everyone can talk to at least one neighbor), the whole swarm can figure out where everyone is relative to the "Leader" (the person holding the map), even if the connections are one-way (A sees B, but B can't see A).
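The chain reaction above can be sketched as a simple breadth-first propagation from the leader. This toy version (the function name `propagate_from_leader` and the 2-D offsets are my own illustrative assumptions, not the paper's algorithm) assumes every robot is reachable from the leader along one-way edges, which is exactly the weak, chain-like connectivity the paper allows.

```python
from collections import deque

def propagate_from_leader(leader, edges):
    """Toy 'chain reaction' sketch (not the paper's algorithm).

    edges: dict mapping a directed pair (u, v) to the 2-D offset of v
    relative to u. Returns each reachable robot's position relative to
    the leader.
    """
    pos = {leader: (0.0, 0.0)}
    queue = deque([leader])
    while queue:
        u = queue.popleft()
        for (a, b), (dx, dy) in edges.items():
            if a == u and b not in pos:
                # b's position = u's position + agreed pairwise offset
                pos[b] = (pos[u][0] + dx, pos[u][1] + dy)
                queue.append(b)
    return pos

# One-way chain: A sees B, B sees C, C sees D (edges are directed).
edges = {("A", "B"): (5.0, 0.0),
         ("B", "C"): (0.0, 3.0),
         ("C", "D"): (2.0, -1.0)}
print(propagate_from_leader("A", edges))
# D ends up at (7.0, 2.0) relative to leader A
```

The key point the sketch illustrates: no robot needs to see the leader directly, and no link needs to be two-way; offsets simply accumulate along the chain.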

Why This is a Big Deal

  • It's Flexible: You don't need expensive sensors on every robot. You can mix and match cheap and expensive sensors.
  • It's Robust: If a sensor breaks, the system doesn't crash. It just relies on the remaining data pairs.
  • It's Minimalist: It works with the weakest possible connection (a simple chain), whereas old methods required a complex web of connections.

The Real-World Test

The authors didn't just write math; they tested it with five tiny drones (Crazyflies) in a lab.

  • Some drones had sensors; others relied on internal movement tracking.
  • They formed a formation (like a flying V-shape).
  • Result: The drones successfully figured out where they were relative to the leader and held their formation, even with the mismatched sensors.

Summary in a Nutshell

This paper teaches a group of robots how to find their way in the dark without needing a perfect team. Instead of requiring every robot to carry a super-sensor, it uses a clever "pair-up" strategy. By combining movement data with whatever limited sensing they have, the robots build a shared map of their world, proving that even a mismatched team can stay together and find their way home.