Degeneracy-Resilient Teach and Repeat for Geometrically Challenging Environments Using FMCW Lidar

This paper presents a degeneracy-resilient Teach and Repeat navigation system using FMCW lidar that overcomes the limitations of traditional ICP-based methods in geometrically sparse environments by leveraging Doppler velocity for stable odometry and curvature-aware scan-to-map localization.

Katya M. Papais, Wenda Zhao, Timothy D. Barfoot

Published Thu, 12 Ma

Imagine you are teaching a robot to drive a specific route, like a delivery truck that needs to make the same trip every day. This is called "Teach and Repeat." You drive the route once (the "Teach" phase), and the robot memorizes the landmarks. Later, the robot drives it again on its own (the "Repeat" phase), trying to stay exactly on the path it learned.

Usually, robots use Lidar (a laser scanner) to see the world. They look for sharp corners, trees, and buildings to know where they are. But what happens if the robot has to drive across a flat, empty airport runway or a desert? There are no trees, no buildings, and the ground looks exactly the same for miles.

In these "boring" places, standard robot navigation systems get confused. It's like trying to navigate a city using only a map of a giant, empty white room. The robot loses its way, drifts off course, and crashes. This is called geometric degeneracy.

This paper introduces a new, super-smart robot system that can handle these boring, flat places without getting lost. Here is how it works, using some simple analogies:

1. The "Speedometer" Vision (FMCW Lidar)

Standard Lidar is like a camera that takes a picture of how far away things are. It doesn't know if things are moving until it takes the next picture.

The robot in this paper uses a special FMCW Lidar. Think of this like a super-powered speedometer for every single point in its scan.

  • How it works: Every laser return carries a Doppler shift. When the robot moves forward, the ground and objects ahead appear to "rush" toward the sensor, and everything behind appears to "rush" away. Even if the terrain is flat and featureless, each return reports how fast that patch of surface is approaching or receding, so the robot can feel its own motion directly.
  • The Analogy: Imagine you are in a car with your eyes closed. You can't see the road, but you can feel the wind on your face. If you turn left, the wind hits your left ear harder. This robot feels the "wind" of its own movement hitting every point in its laser scan. This tells it exactly how fast and in what direction it is moving, even if it sees nothing but flat ground.
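The "wind" idea above is really a tiny least-squares problem: if the world is static, each return's measured radial speed is just the sensor's own velocity projected (negatively) onto that point's viewing direction, so stacking one equation per point recovers the full 3D velocity. A minimal illustrative sketch in Python/NumPy (function and variable names are mine, not the paper's code):

```python
import numpy as np

def estimate_sensor_velocity(points, radial_speeds):
    """Estimate the sensor's 3D velocity from per-point Doppler returns.

    For a static scene, each FMCW return satisfies
        v_r = -(p / |p|) . v_sensor,
    one linear equation per point. Stacking them gives an overdetermined
    system solved in the least-squares sense. (Illustrative sketch only.)
    """
    # Unit viewing direction of every return
    directions = points / np.linalg.norm(points, axis=1, keepdims=True)
    # v_r = -d . v  =>  solve  (-D) v = v_r
    v, *_ = np.linalg.lstsq(-directions, radial_speeds, rcond=None)
    return v
```

Because every single return contributes one equation, even a perfectly flat scene with nothing but ground points pins down the velocity, which is exactly why this works where geometry-based matching fails.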

2. The "Curiosity" Filter (Curvature Downsampling)

When a robot scans a flat field, it gets millions of laser points. Most of them are just boring, flat grass. Processing all of them is a waste of time and confuses the computer.

The authors created a smart filter that acts like a curious artist.

  • Standard method (uniform downsampling): Thins the point cloud evenly everywhere, so a rare landmark is just as likely to be thrown away as a patch of boring grass.
  • This robot's method: It looks at the ground and asks, "Is this flat? Yes? Ignore it. Is this a rock? Is this a bump? Yes? Keep it!"
  • The Analogy: Imagine you are looking at a vast, flat white wall with a single red dot on it. A normal scanner might accidentally paint over the red dot while trying to clean up the white wall. This robot's filter knows to zoom in on the red dot (the interesting rocks) and ignore the rest of the white wall. This helps the robot find its "landmarks" even when they are very sparse.
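One common way to build such a filter is a LOAM-style curvature score along each scan line: sum the difference vectors from a point to its neighbors, and keep the point only when that sum is large (on flat ground the differences cancel; at bumps and rocks they don't). A hedged sketch, assuming points arrive ordered along a scan line; names and the threshold are illustrative, not the paper's:

```python
import numpy as np

def curvature_keep_mask(scan_line, window=5, threshold=0.05):
    """Flag points whose local neighborhood bends (rocks, edges),
    discarding flat-ground points.

    Curvature proxy: the norm of the sum of difference vectors to the
    2*window neighbors along the scan line, normalized by range. On a
    locally flat, evenly sampled surface the differences cancel to ~0;
    near a bump they do not. (Sketch; the paper's criterion may differ.)
    """
    n = len(scan_line)
    keep = np.zeros(n, dtype=bool)
    for i in range(window, n - window):
        diffs = scan_line[i - window:i + window + 1] - scan_line[i]
        c = (np.linalg.norm(diffs.sum(axis=0))
             / (2 * window * np.linalg.norm(scan_line[i])))
        keep[i] = c > threshold
    return keep
```

Run on a mostly flat scan line with a small raised bump, the mask fires only around the bump, which is the "red dot on the white wall" behaviour described above.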

3. The "Smart Map" Check (Degeneracy-Aware Localization)

When the robot tries to match its current view to the map it memorized, it usually tries to fit the puzzle pieces together perfectly. But in a flat field, the puzzle pieces look identical. If the robot tries to force a fit, it might slide sideways and think it's in the right place when it's actually 10 meters off.

This new system is humble. It knows when it's confused.

  • The Analogy: Imagine you are walking in a foggy forest. You see a tree and think, "That's the oak I passed earlier." But wait, there are 50 identical oaks.
    • Old Robot: "I'm sure! I'm next to the oak!" (It walks confidently into a tree).
    • New Robot: "Hmm, these trees all look the same. I can't be 100% sure which way is North. So, I will trust my speedometer (the FMCW data) to tell me I'm moving forward, but I will stop guessing about my left/right position until I see something unique."
  • It only updates its position in the directions it is sure about, and relies on its speedometer for the directions it is unsure about. This prevents it from spiraling out of control.
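This "partial trust" can be made concrete with the well-known solution-remapping idea: eigendecompose the scan-matching Hessian, accept the scan-matching update only along eigen-directions with large eigenvalues, and fall back to the odometry (Doppler) increment along the weakly constrained ones. A sketch under those assumptions; the paper's exact formulation, names, and threshold may differ:

```python
import numpy as np

def degeneracy_aware_update(H, g, odom_delta, eig_threshold=100.0):
    """Blend a scan-matching update with an odometry update by direction.

    Eigen-directions of the Gauss-Newton Hessian H with small eigenvalues
    are geometrically degenerate (e.g., sliding along a runway). There we
    keep the odometry increment; in well-constrained directions we keep
    the scan-matching increment. (Illustrative sketch only.)
    """
    w, V = np.linalg.eigh(H)
    well = w > eig_threshold                     # constrained directions
    dx_icp = np.linalg.lstsq(H, -g, rcond=None)[0]
    # Projectors onto the constrained and degenerate subspaces
    P_good = V[:, well] @ V[:, well].T
    P_bad = np.eye(len(w)) - P_good
    return P_good @ dx_icp + P_bad @ odom_delta
```

With a Hessian that is stiff in two directions and nearly flat in a third, the function returns the scan-matching answer for the first two components and the odometry answer for the third, instead of letting the ill-conditioned direction blow up the whole estimate.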

The Big Test: The Airport Runway

The researchers tested this on a flat airport runway. They even added a few fake rocks to make it slightly interesting.

  • The Old Robots: Tried to match the flat ground, got confused, and failed to finish the route.
  • The New Robot: Used its "speedometer vision" to know it was moving, used its "curiosity filter" to find the few rocks, and admitted when it was unsure. It successfully drove the whole loop, staying on the path.

Why This Matters

This technology is a game-changer for places where GPS doesn't work and the landscape is boring, like:

  • Mars: The Martian surface is often flat and dusty.
  • Mines: Underground tunnels can be featureless.
  • Disaster Zones: After an earthquake, the ground might be flattened.

In short, this paper gives robots a "sixth sense" for movement and a "smart eye" that knows when to trust the map and when to trust its own motion, allowing them to navigate safely even in the most boring, empty places on Earth (or other planets).