Efficient Autonomous Navigation of a Quadruped Robot in Underground Mines on Edge Hardware

This paper presents a fully autonomous navigation stack for a quadruped robot running on low-power edge hardware, with no GPUs and no network connectivity. Combining LiDAR-inertial odometry, map-based localization, and classical planning algorithms, the system navigated complex underground mine environments with a 100% success rate across 20 trials.

Yixiang Gao, Kwame Awuah-Offei

Published 2026-03-06

Imagine you are trying to guide a four-legged robot dog through a pitch-black, narrow, and bumpy underground mine. There is no GPS (so no turn-by-turn directions), no Wi-Fi, no internet, and it's so dark you can't see your hand in front of your face. Most robots would get lost, confused, or require a supercomputer the size of a refrigerator to figure out where to go.

This paper describes a clever solution that lets a Boston Dynamics "Spot" robot do this job all by itself, using a tiny, low-power computer (like a high-end laptop) strapped to its back.

Here is the story of how they did it, explained with some everyday analogies.

1. The Problem: The "Dark Maze" Challenge

Underground mines are the ultimate test for robots. They are like a maze made of rock, with:

  • No Light: Cameras (which most robots use) are useless here. It's like trying to drive a car with your eyes closed.
  • No GPS: You can't ask for directions because the satellite signal doesn't reach underground.
  • No Internet: You can't send the data to a cloud server to solve the problem; the robot has to think for itself.
  • Rough Terrain: The ground is uneven, with rocks, holes, and low ceilings.

Most modern robots try to solve this with "AI" (Artificial Intelligence) that needs massive, expensive graphics cards (GPUs) and huge amounts of training data. But the researchers wanted something simpler, cheaper, and more reliable that could run on a small computer.

2. The Solution: The "Old-School" Smart Navigator

Instead of using a "black box" AI that learns by trial and error, the team built a navigation system using classic, logical rules. Think of it like giving the robot a very strict, step-by-step instruction manual rather than teaching it to "guess" the way.

The system works in four main steps, like a relay race:

Step A: Feeling the Way (LiDAR + IMU)

Since the robot can't see, it uses a LiDAR sensor. Imagine a bat using echolocation, but instead of sound, it shoots out invisible laser beams 10 times a second to map the walls and floor in 3D.

  • The Analogy: It's like walking through a dark room with a cane, tapping the walls to know where you are.
  • The Boost: They also added an IMU (an inertial measurement unit: a gyroscope plus an accelerometer, like the motion sensors in your phone that know which way is up). This helps the robot know exactly how it's tilting or turning, even if the laser beams get a little jumpy.
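To make the LiDAR+IMU fusion idea concrete, here is a minimal Python sketch of one classic way to combine the two signals: a complementary filter, which trusts the smooth IMU integration over short timescales and the drift-free (but jumpier) LiDAR estimate over long ones. This is an illustrative toy for a single heading angle, not the authors' actual odometry pipeline; the bias and rates below are made up for the demo.

```python
def fuse_yaw(prev_yaw, gyro_rate, dt, scan_yaw, alpha=0.98):
    """Complementary filter: short-term trust the gyro integration,
    long-term pull toward the drift-free LiDAR scan-match heading."""
    imu_yaw = prev_yaw + gyro_rate * dt        # dead-reckon from the gyro
    return alpha * imu_yaw + (1.0 - alpha) * scan_yaw

# Toy simulation: the robot stands still, but the gyro has a
# +0.05 rad/s bias. Scan matching reports the true yaw (0.0) at 10 Hz.
yaw_imu_only, yaw_fused = 0.0, 0.0
for _ in range(600):                           # 60 s at 10 Hz
    yaw_imu_only += 0.05 * 0.1                 # gyro-only estimate drifts
    yaw_fused = fuse_yaw(yaw_fused, 0.05, 0.1, scan_yaw=0.0)

print(round(yaw_imu_only, 2))   # drifts to about 3.0 rad
print(abs(yaw_fused) < 0.3)     # fused estimate stays near the truth
```

The gyro-only heading wanders off without bound, while the fused one settles close to the true value: exactly the "jumpy lasers, steady IMU" trade the Boost bullet describes.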

Step B: The "Check-In" System (Localization)

Even with a cane, you can get lost if you walk too far (this is called "drift"). To fix this, the robot carries a digital map of the mine in its memory.

  • The Analogy: Imagine you are walking in a dark forest, but you have a mental picture of the trees. Every few steps, you stop, look at the trees around you, and say, "Ah, that tree looks like the one on my map. I am here."
  • How it works: The robot constantly compares its laser scan of the current tunnel against the pre-loaded map. This corrects any mistakes and keeps it on the right track without needing a GPS.
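The map check-in above can be sketched as a brute-force scan-to-map alignment: slide the current scan along the tunnel and keep the shift that best re-aligns it with the stored wall points. Real systems use far more efficient matchers (e.g. ICP or correlative scan matching); this Python toy, with made-up wall coordinates, just shows the principle of correcting drift against a pre-loaded map.

```python
def correct_drift(scan_pts, map_pts, search=range(-3, 4), tol=0.5):
    """Try each candidate shift along the tunnel axis and return the
    one where the most scan points land on stored map points."""
    def score(dx):
        return sum(
            any(abs((sx + dx) - mx) <= tol and abs(sy - my) <= tol
                for mx, my in map_pts)
            for sx, sy in scan_pts)
    return max(search, key=score)

# Stored map: two straight tunnel walls, 1 m either side of the centerline.
wall = ([(float(x), 1.0) for x in range(10)]
        + [(float(x), -1.0) for x in range(10)])

# The odometry has drifted 2 m backwards, so the scan appears shifted.
drifted_scan = [(x - 2.0, 1.0) for x in range(10)]

print(correct_drift(drifted_scan, wall))  # best correction: shift +2 m
```

Finding that "+2 m" correction every few scans is what keeps the dead-reckoned position from wandering, with no GPS involved.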

Step C: Reading the Floor (Terrain Segmentation)

The robot needs to know what is safe to walk on. Is that pile of rocks a path, or is it a wall?

  • The Analogy: Imagine the robot is wearing special glasses that turn the floor green (safe to walk) and the walls/rocks red (do not walk).
  • How it works: A mathematical filter looks at the laser data and instantly decides: "This is flat ground," or "This is a wall." It ignores the ceiling (which is low in mines) so the robot doesn't get confused by overhead pipes.
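The "green floor / red wall / ignored ceiling" filter can be sketched as a simple per-point rule on height and surface slope. This is a hypothetical simplification of the paper's segmentation (the thresholds and the `normal_z` feature are illustrative assumptions, not the authors' parameters):

```python
def classify_point(z, normal_z, floor_z=0.0, ceiling_z=2.0, slope_cos=0.9):
    """Label one LiDAR return: 'ignore' for ceiling hits, 'ground' for
    low, nearly-flat surfaces (normal pointing up), else 'obstacle'.
    normal_z is the z-component of the local surface normal (1.0 = flat)."""
    if z > ceiling_z:
        return "ignore"              # overhead rock and pipes: skip them
    if z < floor_z + 0.3 and normal_z > slope_cos:
        return "ground"              # low and flat: safe to step on
    return "obstacle"                # everything else: wall or rock pile

print(classify_point(0.05, 0.99))    # near-floor, flat  -> ground
print(classify_point(0.80, 0.20))    # steep face        -> obstacle
print(classify_point(2.50, -1.00))   # above the ceiling -> ignore
```

Running this rule over every point in a scan paints the "special glasses" picture: a green walkable carpet, red walls, and an invisible ceiling.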

Step D: The Route Planner (Visibility Graph)

Once the robot knows where it is and what is safe, it needs a route to the goal.

  • The Analogy: Instead of drawing a straight line (which might go through a wall), the robot draws a "string" connecting all the open corners of the tunnel. It finds the shortest path along these strings.
  • The Trick: They didn't make the robot calculate this from scratch every time. Before the mission, a human drove the robot through the mine once to create a "roadmap" (a visibility graph). The robot just loads this roadmap and follows it, only adjusting if it sees a new obstacle (like a fallen rock).
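Once the roadmap exists, following it is a textbook shortest-path query. Below is a minimal Dijkstra search over a hypothetical pre-built visibility graph (the node names and distances are invented for illustration; the paper's graph comes from the human-driven mapping run):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a precomputed roadmap: nodes are tunnel corners,
    edges are line-of-sight connections weighted by distance (m)."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal                   # walk back from the goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

roadmap = {                                 # hypothetical mine roadmap
    "entry":    [("junction", 30.0)],
    "junction": [("entry", 30.0), ("drift_A", 50.0), ("drift_B", 40.0)],
    "drift_A":  [("junction", 50.0)],
    "drift_B":  [("junction", 40.0), ("goal", 25.0)],
    "goal":     [("drift_B", 25.0)],
}
print(shortest_path(roadmap, "entry", "goal"))
# -> ['entry', 'junction', 'drift_B', 'goal']
```

Because the graph is tiny compared to the raw point cloud, a query like this runs in microseconds on an Intel NUC, which is why precomputing the roadmap is such an effective trick: the robot only needs to re-plan locally when a new obstacle blocks an edge.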

3. The Results: The Perfect Score

The team tested this system 20 times in a real experimental mine in Missouri. They sent the robot to four different locations, ranging from an easy straight hallway to a deep, dark, winding tunnel.

  • Success Rate: 100%. The robot reached every single goal without human help.
  • Distance: It traveled over 700 meters (about 7 football fields) completely autonomously.
  • Hardware: It did all this on a tiny computer (Intel NUC) that uses very little power, with no GPU and no internet.

Why This Matters

This paper proves that you don't need a supercomputer or a "learning" AI to navigate dangerous places. By using smart, logical rules and good sensors, you can build a robot that is:

  1. Reliable: It doesn't get confused by darkness.
  2. Efficient: It runs on a small battery-friendly computer.
  3. Ready for Industry: It's simple enough to be used in real mines to check for safety hazards without risking human lives.

In short: They taught a robot dog to navigate a pitch-black cave by giving it a laser "cane," a mental map, and a strict set of rules, proving that sometimes the simplest, most logical approach is the most powerful.