InsSo3D: Inertial Navigation System and 3D Sonar SLAM for turbid environment inspection

This paper presents InsSo3D, a robust SLAM framework that fuses 3D sonar point clouds with Inertial Navigation System data to enable accurate, large-scale 3D mapping and drift correction for underwater inspections in turbid environments.

Simon Archieri, Ahmet Cinar, Shu Pan, Jonatan Scharff Willners, Michele Grimaldi, Ignacio Carlucho, Yvan Petillot

Published 2026-03-09

Imagine you are a diver trying to explore a shipwreck, but the water is so muddy and dark that you can't see your own hand in front of your face. If you tried to use a regular camera, you'd be blind. If you tried to use a standard sonar (like a bat's echolocation), you'd get a flat, 2D picture that looks like a shadow puppet show: you'd know something is there, but not whether it's a rock, a fish, or a wall, or how high it sits off the bottom.

This paper introduces InsSo3D, a new "super-sense" for underwater robots that solves this problem. Here is how it works, broken down into simple concepts:

1. The Problem: The "Flat Shadow" vs. The "3D Cloud"

Think of traditional sonar like a flashlight in a foggy room. It tells you how far away an object is and which direction it's in, but it flattens everything into a 2D slice. It's like trying to guess the shape of a car by looking at its shadow on a wall; you might see the outline, but you can't tell if it's a sedan or a truck.

InsSo3D uses a special 3D Sonar. Instead of a flat shadow, this sensor creates a "cloud of dots" (a point cloud) that fills the space, giving the robot height, width, and depth. It's like switching from a 2D sketch to a full 3D hologram.
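The 2D-versus-3D distinction comes down to one extra angle per sonar return. Here is a minimal illustrative sketch (not the paper's actual sensor model): each return carries a range, a horizontal bearing, and a vertical elevation angle, and those three numbers map to a 3D point. A 2D imaging sonar effectively throws the elevation angle away, which is why everything collapses into a flat shadow.

```python
import math

def return_to_point(range_m, bearing_rad, elevation_rad):
    """Convert one sonar return (spherical coordinates) to a 3D point.

    A 2D imaging sonar effectively assumes elevation = 0, flattening
    the scene into a plane; a 3D sonar keeps the elevation angle, so
    objects at different heights become distinct points in the cloud.
    """
    x = range_m * math.cos(elevation_rad) * math.cos(bearing_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(bearing_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# Two returns at the same range and bearing but different heights:
low = return_to_point(10.0, 0.0, 0.0)
high = return_to_point(10.0, 0.0, math.radians(30))
# A 2D sonar would draw both as the same blob; in 3D they differ in z.
```

Repeating this conversion over thousands of returns per ping is what fills the space with the "cloud of dots" described above.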

2. The Challenge: The Robot Gets Lost

Even with a 3D map, robots have a problem: Drift.
Imagine walking through a dark cave with your eyes closed, counting your steps. After 100 steps, you might think you are in a straight line, but you've actually drifted slightly left. After 1,000 steps, you might be miles off course. This is "odometry drift." In murky water, the robot's internal compass and speed sensors get confused by magnetic interference or water currents, making the map look like a twisted, stretched piece of rubber.
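The "counting your steps" analogy can be made numerical. This toy example (my illustration, not from the paper) integrates equal-length steps while the true heading picks up a tiny unmodeled bias each step, such as a gyro bias. Each individual step error is negligible, yet the position error compounds:

```python
import math

def dead_reckon_error(n_steps, step_len=1.0, heading_bias_rad=0.001):
    """Return the position error after dead-reckoning n_steps.

    The robot believes it walks straight along x, but its true heading
    drifts by a tiny constant bias every step. The error between the
    believed endpoint and the true endpoint grows with path length.
    """
    x = y = heading = 0.0
    for _ in range(n_steps):
        heading += heading_bias_rad      # unmodeled per-step drift
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    believed_x = n_steps * step_len      # "I walked in a straight line"
    return math.hypot(x - believed_x, y)

# With a bias of ~0.06 degrees per step, the error after 1,000 steps is
# far more than 10x the error after 100 steps: drift compounds.
```

This is exactly why the rubber-sheet distortion described above gets worse the longer the mission runs, and why a correction mechanism is needed.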

3. The Solution: The "Memory Lane" System

InsSo3D fixes this by acting like a very organized librarian who never forgets a book's location. It uses a two-step process:

  • The Frontend (The Immediate Memory): As the robot moves, it constantly compares its new 3D sonar "cloud" with the one it took just a second ago. It's like looking at a new photo and saying, "Okay, I moved a little bit to the right." It uses a feature-based matching method (called CFEAR) to line up distinctive features between scans while ignoring noise, bubbles, and stray echoes.
  • The Backend (The Long-Term Memory): As the robot explores, it builds small "chunks" of the map (sub-maps). When the robot loops back around and sees a place it visited earlier, the system says, "Wait a minute! I've been here before!" This is called Loop Closure.
    • Analogy: Imagine you are walking through a maze. You think you are in a new hallway, but you recognize a specific crack in the wall. Suddenly, you realize you've walked in a circle. InsSo3D uses this realization to instantly "snap" the map back into place, correcting all the previous drift errors.
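The "snap back into place" step can be sketched on a toy pose chain. Real backends, including (per the paper) InsSo3D's, formulate this as a pose-graph optimization over sub-maps; the simplified linear redistribution below is only my illustration of the effect. Odometry accumulates error pose by pose, so when the robot recognizes its starting point, later poses (which absorbed more drift) receive proportionally more correction:

```python
def close_loop(odometry_positions):
    """Spread the loop-closure error back along a chain of 2D positions.

    The path truly ends where it started, so whatever offset remains at
    the final pose is pure accumulated drift. Each pose is corrected in
    proportion to how far along the path it lies (later = more drift).
    """
    n = len(odometry_positions) - 1
    err_x, err_y = odometry_positions[-1]   # should be (0, 0) at closure
    corrected = []
    for i, (x, y) in enumerate(odometry_positions):
        frac = i / n                        # fraction of path traversed
        corrected.append((x - frac * err_x, y - frac * err_y))
    return corrected

# A square loop whose odometry has drifted 0.4 m by the time the robot
# returns "home":
drifty = [(0, 0), (10, 0), (10, 10), (0, 10), (0.4, 0.0)]
fixed = close_loop(drifty)
# fixed[-1] lands back at (0, 0); intermediate poses shift slightly.
```

A real pose graph also corrects orientations and weights each constraint by its uncertainty, but the intuition is the same: one recognized landmark retroactively tightens the whole trajectory.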

4. The Result: A Clear Picture in Murky Water

The researchers tested this in two places:

  1. A giant water tank: A very tricky environment with concrete walls that bounce sonar signals around (like an echo chamber).
  2. A flooded quarry: A real-world, large-scale underwater site.

The Outcome:

  • Accuracy: Even after a 50-minute mission, the robot's map was only off by about 21 cm (roughly 8 inches). That's incredibly precise for a robot swimming in dark, muddy water.
  • Scale: It successfully mapped an area the size of a tennis court (10m x 20m) with high detail.
  • Speed: It works fast enough to keep up with the robot's movement, processing data in real-time.

Why This Matters

Before this, mapping underwater in murky conditions was like trying to draw a picture of a room while wearing a blindfold and a foggy mask. InsSo3D gives the robot "super-vision." It allows autonomous underwater vehicles (AUVs) to safely inspect underwater pipelines, shipwrecks, or oil rigs without needing clear water or human divers.

In short, InsSo3D is the robot's way of saying, "I can't see, but I can feel the shape of the world around me, and I know exactly where I am."