Natural Adversaries: Fuzzing Autonomous Vehicles with Realistic Roadside Object Placements

This paper introduces TrashFuzz, a black-box fuzzing algorithm that manipulates the realistic placement of common roadside objects to generate adversarial scenarios. These scenarios cause autonomous vehicles to misperceive traffic signals and violate traffic laws, demonstrating significant vulnerabilities in the Apollo system without relying on unnatural adversarial patches.

Yang Sun, Haoyu Wang, Christopher M. Poskitt, Jun Sun

Published 2026-03-05

Imagine you are teaching a robot to drive a car. You want to make sure it's safe, so you test it by throwing obstacles in its path. Most researchers do this by putting up weird, glowing, or strangely shaped signs that no human would ever see in real life. It's like testing a robot chef by throwing a glowing, neon-colored tomato at it. If the robot gets confused, sure, you've found a bug, but it's not a realistic problem.

This paper introduces a new way to test self-driving cars called TRASHFUZZ. Instead of using weird, fake objects, the researchers ask: "What if we just move the normal, boring stuff we see on every street—like trash cans, benches, and trees—to slightly different spots? Could that confuse the robot?"

Here is the breakdown of their work using some everyday analogies:

1. The Problem: The Robot vs. The Real World

Self-driving cars (Autonomous Vehicles or AVs) rely on cameras and sensors to "see" the road. They are trained to follow rules, just like human drivers. However, the rules for where to put trash cans, trees, and signs were written for humans, not robots.

  • The Analogy: Imagine a human driver sees a trash can on the sidewalk and thinks, "That's just a bin." But a robot might look at that same bin and think, "Is that a pedestrian? Is that a car? Is that a giant rock?"
  • The Issue: Previous tests used "adversarial patches" (like putting a sticker on a stop sign to make the robot think it's a speed limit sign). But in the real world, nobody puts stickers on stop signs. We need to know if normal things can break the robot.

2. The Solution: TRASHFUZZ (The "Trash Can Tester")

The researchers built a tool called TRASHFUZZ. Think of it as a very picky, super-fast food critic who keeps rearranging ordinary ingredients until a recipe fails.

  • The Rules: The tool has a strict rulebook (based on real city laws) that says, "You can move the trash can, but you can't put it in the middle of the road, and you can't put it on top of a tree." It must look like a normal street.
  • The Method: The tool uses a "Greedy Search." Imagine you are trying to find the perfect spot to trip a robot dog. You don't just throw a rock randomly. You move the rock an inch to the left. Did the dog stumble? Good. Now move it an inch to the right. Did it stumble more? Keep moving it that way.
  • The Goal: The tool moves normal objects (bins, benches, trees) around, checking if the car starts to make illegal moves, like running a red light or stopping in the middle of the road for no reason.
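The rulebook-plus-greedy-search loop described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in for illustration: `violation_score` is a toy scoring function (the real tool runs Apollo in the LGSVL simulator and checks actual traffic-law violations), and `is_legal` is a toy version of the placement rulebook.

```python
import random

def violation_score(placements):
    """Toy stand-in for a simulator run: pretend the AV gets more
    confused the closer objects cluster near a traffic light at (10, 0).
    In TRASHFUZZ this score would come from observing real rule
    violations (e.g., running a red light) in simulation."""
    light = (10.0, 0.0)
    return sum(1.0 / (1.0 + (x - light[0]) ** 2 + (y - light[1]) ** 2)
               for x, y in placements)

def is_legal(placement, road_y=(-2.0, 2.0)):
    """Toy rulebook constraint: objects must stay off the roadway."""
    _, y = placement
    return not (road_y[0] < y < road_y[1])

def greedy_fuzz(placements, steps=200, nudge=0.5, seed=0):
    """Greedily nudge one object at a time, keeping only legal moves
    that make the AV more confused (higher score)."""
    rng = random.Random(seed)
    best = list(placements)
    best_score = violation_score(best)
    for _ in range(steps):
        candidate = list(best)
        i = rng.randrange(len(candidate))
        x, y = candidate[i]
        candidate[i] = (x + rng.uniform(-nudge, nudge),
                        y + rng.uniform(-nudge, nudge))
        if not is_legal(candidate[i]):
            continue  # reject placements that break the rulebook
        score = violation_score(candidate)
        if score > best_score:  # greedy: keep only improvements
            best, best_score = candidate, score
    return best, best_score

# Start with a bin and a bench on opposite sidewalks, then fuzz.
start = [(0.0, 3.0), (5.0, -3.0)]
best, score = greedy_fuzz(start)
```

This captures the "move the rock an inch, see if the dog stumbles" idea: random legal nudges, a score from running the scenario, and hill-climbing toward the most confusing arrangement.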

3. The Experiment: Breaking the Robot

They tested this on Apollo, a famous self-driving car system, using a video game simulator (LGSVL) that looks like the real world.

  • The Result: They found that by simply arranging normal trash cans and benches in specific, legal ways, they could trick the car into breaking 15 out of 24 traffic laws they tested.
  • The Scary Part: In one case, they arranged a few innocent-looking trash cans near a traffic light. The car got so confused by the clutter that it thought a Red Light was Green. It almost drove right into traffic!
  • The "Overload" Analogy: Imagine you are trying to listen to a song, but someone starts whispering 50 different things in your ear at once. You get overwhelmed and stop listening to the music. The same thing happened to the car's "brain": the clutter overloaded it until it could no longer read the traffic light correctly.

4. Why This Matters

This paper proves that we don't need evil hackers with glowing stickers to break self-driving cars. We just need to look at how we design our streets.

  • The Takeaway: The rules for where we put trash cans and trees were written for human eyes. They might be perfect for us, but they are a nightmare for robots.
  • The Fix: We might need to change city planning. Maybe trash cans need to be a specific shape or color so robots don't get confused. Or maybe we need to test cars with "real world" messiness, not just perfect, clean test tracks.

Summary

TRASHFUZZ is like a "stress test" for self-driving cars that uses only the boring, everyday objects already on our streets. It showed that just by legally moving a few trash cans around, we can make a self-driving car run a red light or stop in the middle of the road. This tells us that to make self-driving cars safe, we need to design our cities with robots in mind, not just humans.