Evaluating randomized smoothing as a defense against adversarial attacks in trajectory prediction

This paper proposes and evaluates randomized smoothing as a simple, computationally efficient defense that makes trajectory prediction models substantially more robust to adversarial attacks without compromising their accuracy in standard settings.

Julian F. Schumann, Eduardo Figueiredo, Frederik Baymler Mathiesen, Luca Laurenti, Jens Kober, Arkady Zgonnikov

Published 2026-03-12

Here is an explanation of the paper using simple language and creative analogies.

The Big Picture: The "Tricky Driver" Problem

Imagine you are driving a self-driving car. To drive safely, the car's computer needs to guess what other drivers will do next. Will that car turn left? Will that pedestrian step off the curb?

The paper starts by pointing out a scary flaw: These prediction computers are easily tricked.

Just like a magician can fool your eyes, a "bad actor" (an adversarial attacker) can make tiny, almost invisible changes to how another car moves. To a human, the car looks like it's driving normally. But to the self-driving car's computer, those tiny changes look like a signal to predict something completely wrong—like thinking a car is going to drive straight into a wall when it's actually just turning.

If the self-driving car believes this wrong prediction, it might slam on the brakes or swerve dangerously, causing an accident.

The Solution: The "Blindfolded Judge" (Randomized Smoothing)

The authors propose a defense mechanism called Randomized Smoothing. To understand this, let's use an analogy.

Imagine you are a judge trying to guess the outcome of a race based on a runner's starting position.

  • The Vulnerable Way: You look at the runner's exact starting spot. If a sneaky person nudges the runner's shoe just a millimeter to the left, you might get confused and predict they will run a totally different path.
  • The Randomized Smoothing Way: Instead of looking at the runner's exact spot, you put on a pair of fuzzy, blurry glasses. You look at the runner's position, but you also imagine them standing in a hundred slightly different spots around that original spot (some a bit left, some a bit right, some a bit forward). You make a prediction for each of those imaginary spots, and then you take the average of all those predictions.

Why does this work?
If the sneaky person tries to nudge the runner just a tiny bit to trick you, it doesn't matter much. Because you are looking at hundreds of slightly different positions, that tiny nudge gets "drowned out" by the noise. The average prediction stays stable and accurate, ignoring the tiny trick.
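The "fuzzy glasses" trick can be sketched in a few lines of Python. Everything here is illustrative: `smoothed_prediction` is a generic randomized-smoothing wrapper, and `predict_trajectory` is a toy stand-in for a real trajectory prediction model, not the models used in the paper.

```python
import numpy as np

def smoothed_prediction(predict_fn, history, sigma=0.1, n_samples=100, rng=None):
    """Randomized smoothing: average predictions over noisy copies of the input.

    predict_fn : maps an observed trajectory (T, 2) to a predicted one (H, 2).
    sigma      : std. dev. of the Gaussian noise ("how blurry the glasses are").
    """
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_samples):
        noisy = history + rng.normal(0.0, sigma, size=history.shape)
        preds.append(predict_fn(noisy))
    return np.mean(preds, axis=0)

# Toy stand-in model: extrapolate the last observed step.
def predict_trajectory(history, horizon=5):
    step = history[-1] - history[-2]
    return history[-1] + step * np.arange(1, horizon + 1)[:, None]

history = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])      # straight-line motion
nudged = history + np.array([[0, 0], [0, 0], [0.01, 0.0]])    # tiny "sneaky nudge"
clean = smoothed_prediction(predict_trajectory, history, n_samples=500, rng=0)
attacked = smoothed_prediction(predict_trajectory, nudged, n_samples=500, rng=0)
# The averaged predictions stay close despite the nudge: it is drowned out.
```

The key design choice is `sigma`: too small and the glasses aren't blurry enough to wash out an attack; too large and the averaged prediction blurs away real detail.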

How They Tested It

The researchers took two popular self-driving prediction models (think of them as two different "brains") and tested them on real-world driving data. They then:

  1. Attacked them: They used a computer program to create those "sneaky nudges" to see if they could break the models.
  2. Applied the defense: They made the models use the "fuzzy glasses" method (Randomized Smoothing) to make their predictions.
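A "sneaky nudge" attack like the one in step 1 can be sketched as a small optimization: find a barely visible change to the observed trajectory that drags the prediction toward a wrong path. This is only an illustration of the idea; the names (`adversarial_nudge`, `predict`) and the finite-difference gradient are assumptions, not the paper's actual attack, which would typically use model gradients via autograd.

```python
import numpy as np

def adversarial_nudge(predict_fn, history, target, eps=0.05, steps=20, lr=0.01):
    """Search for a tiny perturbation of the observed trajectory that drags
    the model's prediction toward a wrong `target` path.
    Finite differences stand in for real gradient-based attacks."""
    delta = np.zeros_like(history)
    for _ in range(steps):
        grad = np.zeros_like(history)
        for idx in np.ndindex(history.shape):
            bump = np.zeros_like(history)
            bump[idx] = 1e-4
            loss_hi = np.sum((predict_fn(history + delta + bump) - target) ** 2)
            loss_lo = np.sum((predict_fn(history + delta - bump) - target) ** 2)
            grad[idx] = (loss_hi - loss_lo) / 2e-4
        # Step toward the wrong target, but keep the nudge imperceptibly small.
        delta = np.clip(delta - lr * grad, -eps, eps)
    return delta

# Toy stand-in model: extrapolate the last observed step.
def predict(history, horizon=3):
    step = history[-1] - history[-2]
    return history[-1] + step * np.arange(1, horizon + 1)[:, None]

history = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
wrong_target = predict(history) + 0.3    # a path the car is NOT actually taking
nudge = adversarial_nudge(predict, history, wrong_target)
# `nudge` is at most a few centimeters per point, yet it pulls the
# prediction measurably toward the wrong path.
```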

The Results: Stronger Without Being Slower

The results were very promising:

  • The Defense Worked: With Randomized Smoothing, the models became much harder to trick. Even when the "bad actors" applied their invisible nudges to the observed trajectories, the self-driving car's predictions stayed accurate.
  • No Penalty for Normal Driving: Usually, when you make a system more secure, it gets slower or less accurate in normal situations. But here, the models were just as good (or even slightly better) at predicting normal traffic as they were before.
  • It's Cheap: This method doesn't require retraining the whole computer brain from scratch. It's like adding a software filter that is easy to install and doesn't cost much computing power.

The Two Types of "Fuzzy Glasses"

The paper tried two different ways to apply this "fuzziness":

  1. Position Smoothing: Blurring the location of the car (e.g., "Is the car at 10 meters or 10.1 meters?").
  2. Control Smoothing: Blurring the actions of the car (e.g., "Is the car pressing the gas pedal a little harder or a little softer?").

They found that which method worked best depended on which "brain" (model) they were using, but both methods successfully stopped the attacks.
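The two variants differ only in where the noise is injected. Here is a minimal sketch under simplifying assumptions: "controls" are taken to be per-step accelerations recovered by differencing the positions, a toy stand-in for the vehicle dynamics model a real implementation would use.

```python
import numpy as np

def position_smoothing_noise(history, sigma, rng):
    """Variant 1: blur the positions directly."""
    return history + rng.normal(0.0, sigma, size=history.shape)

def control_smoothing_noise(history, sigma, rng, dt=1.0):
    """Variant 2: blur a control-space view of the trajectory (per-step
    accelerations, i.e. the "gas pedal"), then integrate back to positions."""
    vel = np.diff(history, axis=0) / dt                  # (T-1, 2) velocities
    acc = np.diff(vel, axis=0) / dt                      # (T-2, 2) accelerations
    acc = acc + rng.normal(0.0, sigma, size=acc.shape)   # noise in control space
    vel = np.concatenate([vel[:1], vel[:1] + np.cumsum(acc, axis=0) * dt])
    return np.concatenate([history[:1], history[:1] + np.cumsum(vel, axis=0) * dt])
```

With zero noise both functions return the trajectory unchanged. The practical difference is that control-space noise keeps the first observed position fixed and lets perturbations accumulate through the dynamics, so the two variants blur a trajectory in quite different ways, which may be why the better choice depends on the model.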

The Takeaway

This paper shows that we can make self-driving cars much safer against hackers or tricky situations without making them drive clumsily. By using Randomized Smoothing, we essentially tell the car's computer: "Don't panic over tiny details. Look at the big picture, average out the noise, and make a steady guess."

It's a simple, inexpensive, and effective shield that keeps our future autonomous vehicles from being fooled by the smallest of tricks.