Outlier-robust Autocovariance Least Squares Estimation via Iteratively Reweighted Least Squares

This paper proposes ALS-IRLS, an outlier-robust algorithm that combines an innovation-level adaptive thresholding mechanism with an IRLS-based Huber cost function. Compared to conventional methods, it significantly improves the accuracy of noise covariance estimation and state filtering in the presence of measurement outliers.

Jiahong Li, Fang Deng

Published Tue, 10 Ma

Imagine you are trying to teach a robot to walk through a foggy, noisy room. To do this, the robot uses a "Kalman Filter," which is like a smart guesser. It predicts where the robot will be next, checks its sensors to see where it actually is, and then blends those two pieces of information to make the best guess possible.

However, for this guesser to work perfectly, it needs to know two things:

  1. How much the robot wobbles on its own (Process Noise).
  2. How much the sensors lie or glitch (Measurement Noise).

In the real world, we rarely know these numbers exactly. So, we have to estimate them by watching the robot move. This is where the Autocovariance Least Squares (ALS) method comes in. It's like a detective looking at a history of sensor data to figure out the noise levels.
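To make the "smart guesser" concrete, here is a minimal one-dimensional Kalman filter sketch. The random-walk model, the loop, and all numeric values are illustrative assumptions, not the paper's actual system; `Q` and `R` are exactly the two noise levels that ALS tries to estimate from data.

```python
def kalman_step(x, P, z, Q=0.01, R=0.25):
    """One predict/update cycle for a scalar random-walk model.

    x, P : current state estimate and its variance
    z    : new sensor measurement
    Q, R : process and measurement noise variances (illustrative values;
           these are the quantities ALS estimates from recorded data)
    """
    # Predict: the robot drifts on its own, so uncertainty grows by Q.
    x_pred, P_pred = x, P + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred / (P_pred + R)      # how much to trust the sensor (0..1)
    innovation = z - x_pred        # the "surprise": measurement minus prediction
    x_new = x_pred + K * innovation
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Run a few steps on made-up measurements.
x, P = 0.0, 1.0
for z in [0.1, -0.2, 0.05, 0.15]:
    x, P = kalman_step(x, P, z)
```

Note how the gain `K` depends entirely on `Q` and `R`: if the estimated noise levels are wrong, the blend between prediction and sensor is wrong, which is why outlier-corrupted noise estimates make the filter stumble.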

The Problem: The "Bad Apple" Effect

The old detective (standard ALS) is very sensitive to "bad apples." Imagine you are trying to calculate the average height of a group of people. If one person is a giant (an outlier caused by a sensor glitch or a sudden jump), the old detective gets confused. It thinks, "Wow, everyone must be huge!" and calculates a massive, wrong average.

In our robot example, a single sensor glitch makes the old detective think the sensors are incredibly unreliable. It then tells the robot to ignore its own movement predictions and trust the sensors blindly, causing the robot to stumble and fall.

The Solution: The "Smart Filter" (ALS-IRLS)

This paper introduces a new, super-smart detective called ALS-IRLS. It uses a "two-tier" strategy to handle the bad apples, like a bouncer at a club and a security guard inside.

Tier 1: The Bouncer (Innovation-Level Thresholding)

Before the detective even starts crunching numbers, it puts on a pair of "smart glasses."

  • The Metaphor: Imagine the robot is walking, and every step it takes is a "prediction." The sensor tells it where it is. The difference between the prediction and the sensor is the "innovation" (the surprise).
  • The Action: If the robot predicts it's on the floor, but the sensor suddenly says it's on the ceiling, the "Bouncer" says, "That's impossible! That's a glitch!" and throws that specific data point out of the room immediately.
  • The Result: The detective never even sees the giant bad apples. It only looks at the normal, healthy data.
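The "bouncer" step above can be sketched as a simple gate on the innovation: any surprise far larger than its predicted spread is discarded before estimation. The 3-sigma rule and scalar setup here are illustrative assumptions, not the paper's exact adaptive threshold.

```python
def gate(innovations, S, n_sigma=3.0):
    """Keep only innovations within n_sigma standard deviations of zero.

    innovations : list of (measurement - prediction) surprises
    S           : predicted innovation variance; sqrt(S) is one "sigma"
    n_sigma     : cutoff (3.0 is a common heuristic, assumed here)
    """
    limit = n_sigma * S ** 0.5
    return [v for v in innovations if abs(v) <= limit]

# sigma = 0.2, so the limit is 0.6; the glitches 8.5 and -7.1 are thrown out.
clean = gate([0.1, -0.3, 8.5, 0.2, -7.1], S=0.04)
```

The estimator downstream only ever sees `clean`, so the giant outliers never distort the noise statistics in the first place.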

Tier 2: The Security Guard (Iteratively Reweighted Least Squares)

Sometimes, a bad apple sneaks past the bouncer, or a piece of data is just "suspicious" but not obviously broken.

  • The Metaphor: Imagine the detective is now trying to solve a puzzle. Most pieces fit perfectly. But one piece looks a little warped.
  • The Action: Instead of forcing that warped piece to fit (which would ruin the whole picture), the Security Guard says, "We'll use that piece, but we'll hold it very lightly so it doesn't pull the whole puzzle out of shape."
  • The Magic: The detective does this iteratively. It solves the puzzle, sees which pieces are still pulling things out of shape, and then makes them even lighter. It repeats this until the picture is perfect. This is the "Iteratively Reweighted Least Squares" (IRLS) part.
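The "hold it lightly" loop above is the essence of IRLS with a Huber cost: solve a weighted least-squares problem, recompute weights from the residuals, and repeat. As a stand-in for the paper's full least-squares problem, this sketch robustly fits a single mean; the Huber threshold `delta` and the data are assumptions for illustration.

```python
def huber_weight(r, delta=1.0):
    # Full weight for small residuals; large residuals get weight delta/|r|,
    # so outliers are used but "held lightly".
    return 1.0 if abs(r) <= delta else delta / abs(r)

def irls_mean(data, delta=1.0, iters=20):
    """Iteratively reweighted least squares for a robust mean."""
    mu = sum(data) / len(data)               # ordinary LS solution to start
    for _ in range(iters):
        w = [huber_weight(x - mu, delta) for x in data]
        mu = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return mu

# One gross outlier (50.0) barely moves the robust estimate off ~1.0.
mu = irls_mean([1.0, 1.2, 0.9, 1.1, 50.0])
```

Compare this with the ordinary average of the same data (10.84): a single bad apple drags the plain mean far away, while the reweighted estimate settles near the bulk of the data.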

Why This Matters

The paper tested this new method against the old one and some other "robust" methods. Here is what happened:

  1. Accuracy: The old method was off by a huge margin (like estimating a 5-foot person as 50 feet tall). The new method was almost perfect, reducing the error by a factor of 100.
  2. The "Oracle" Goal: In science, there is an "Oracle" (a magical being that knows the true noise levels perfectly). The new method got the robot's performance to within 9% of this magical Oracle.
  3. Better than the Competition: Other methods tried to fix the problem by changing how the robot thinks (using complex math to handle glitches). But this paper showed that if you just fix the data first (by removing the glitches), the robot can use simple, standard thinking and still perform amazingly well.

The Bottom Line

Think of the old method as trying to bake a cake while someone keeps throwing rocks into the batter. You end up with a rock-filled mess.

The new ALS-IRLS method is like having a chef who:

  1. Inspects the ingredients and throws out the rocks before they hit the bowl.
  2. Stirs gently, making sure any tiny pebbles that slipped through don't ruin the texture.

The result? A perfect cake (a robot that walks smoothly) even in a very messy, noisy kitchen. This proves that cleaning your data is often more powerful than trying to build a smarter algorithm to cope with dirty data.