PRAM-R: A Perception-Reasoning-Action-Memory Framework with LLM-Guided Modality Routing for Adaptive Autonomous Driving

This paper introduces PRAM-R, a unified framework that leverages an LLM-guided router and hierarchical memory within an asynchronous dual-loop architecture to dynamically optimize sensor modality usage, significantly reducing computational costs and routing instability while maintaining high trajectory accuracy in autonomous driving.

Yi Zhang, Xian Zhang, Saisi Zhao, Yinglei Song, Chengdong Wu, Nenad Petrovic, Alois Knoll

Published 2026-03-05

Imagine you are driving a car, but instead of just you, there's a team of experts inside the vehicle helping you navigate. This team includes a Camera (the eyes), a LiDAR (a laser scanner that sees 3D shapes), a Radar (which sees through fog and rain), and a Map (the brain's memory of the road).

In most self-driving cars today, all these experts are shouting at the computer 24/7, even when they aren't needed. It's like having a choir of 100 people singing in a small room; it's loud, exhausting, and often redundant. This wastes battery power and slows down the car's computer.

This paper introduces PRAM-R, a smarter way to run a self-driving car. Think of it as upgrading the car's "brain" to be a smart manager who knows exactly who to listen to, when to listen, and what to remember.

Here is how PRAM-R works, broken down into simple parts:

1. The Two-Loop System: The "Fast Reflex" vs. The "Slow Thinker"

PRAM-R uses a clever two-speed system, like a human brain:

  • The Fast Loop (Reflexes): This is the car's immediate reaction. It happens super fast (like catching a ball before you even think about it). It handles steering and braking right now.
  • The Slow Loop (Deliberation): This is the car's "thinking" mode. It runs a bit slower. It looks at the big picture, checks the weather, looks at the map, and asks: "Hey, do we really need the laser scanner right now? It's foggy, so maybe just the radar and camera are enough."

The Analogy: Imagine you are walking through a dark forest.

  • The Fast Loop is your feet moving quickly to avoid tripping.
  • The Slow Loop is your brain pausing for a second to say, "Okay, it's foggy, I can't see far. I should turn on my flashlight (Camera) and rely on my hearing (Radar), but I don't need to scan the whole forest with a laser (LiDAR) because it's too thick."
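The two-speed idea can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the tick ratio, the `slow_deliberate` rules, and all function names are assumptions made for the sketch. The point is only the shape: the fast loop acts on every tick with whatever sensor plan it has, while the slow loop revises that plan once every N ticks.

```python
SLOW_EVERY = 10  # assumed ratio: slow loop runs once per 10 fast ticks

def slow_deliberate(conditions):
    """Slow loop: pick which sensors to enable given coarse scene conditions."""
    if conditions == "fog":
        return {"camera", "radar"}           # skip LiDAR in dense fog
    return {"camera", "radar", "lidar"}      # default: everything on

def fast_step(active_sensors):
    """Fast loop: immediate control using whatever sensors are enabled now."""
    return {"sensors_used": sorted(active_sensors), "action": "steer/brake"}

def run(conditions_per_tick):
    plan = {"camera", "radar", "lidar"}
    log = []
    for t, cond in enumerate(conditions_per_tick):
        if t % SLOW_EVERY == 0:              # slow loop: occasional re-planning
            plan = slow_deliberate(cond)
        log.append(fast_step(plan))          # fast loop: fires on every tick
    return log

log = run(["clear"] * 10 + ["fog"] * 10)
```

Because the fast loop never waits on the slow loop, a slow (or even stalled) deliberation step cannot delay steering and braking; the car just keeps using the last plan it was handed.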

2. The "Smart Manager" (The LLM Router)

At the heart of this system is a Large Language Model (LLM). Think of this as a very experienced, wise tour guide sitting in the driver's seat.

  • Instead of just crunching numbers, this guide reads the situation. It looks at the sensor data and says, "The camera is blurry because it's raining, so let's trust the radar more. The LiDAR is working great, so let's keep it on."
  • It decides which sensors to turn ON and which to turn OFF (or dim down) to save energy. It's like a smart home system that turns off lights in empty rooms but keeps them on in the kitchen.
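To make the router's interface concrete, here is a minimal stand-in. In PRAM-R the decision is made by an LLM reading the situation; in this sketch a hand-written rule table plays the LLM's role, and the specific conditions and weight values are invented for illustration. The output shape matches the idea in the text: sensors get turned down or off rather than all running at full volume.

```python
def route(scene):
    """Map plain-language scene observations to per-sensor trust weights.

    A weight of 1.0 means "fully on", 0.0 means "switched off". In the real
    system an LLM produces this decision; these rules are a toy substitute.
    """
    weights = {"camera": 1.0, "lidar": 1.0, "radar": 1.0}
    if scene.get("rain"):
        weights["camera"] = 0.3   # blurry camera: down-weight, don't drop
    if scene.get("fog"):
        weights["lidar"] = 0.0    # dense fog scatters the laser: turn it off
    return weights

weights = route({"rain": True, "fog": False})
```

Returning weights rather than hard on/off switches mirrors the "dim down" behavior described above: a rain-blurred camera still contributes a little, while the radar carries most of the load.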

3. The "Memory" (The Notebook)

One of the biggest problems with AI is that it often forgets what happened five seconds ago. PRAM-R has a Hierarchical Memory, which is like a multi-level notebook:

  • Short-term Memory: "I just saw a red light 2 seconds ago."
  • Mid-term Memory: "I've been driving in the rain for 10 minutes; the road is slippery."
  • Long-term Memory: "I've driven this specific intersection 50 times before; I know the traffic light is tricky here."

Why is this cool?
Without memory, the AI might panic every time the rain starts, thinking it's a new problem. With memory, it remembers, "Oh, I've handled rain before. I know to switch to Radar mode." This stops the car from constantly flipping sensors on and off (which is called "oscillation" in the paper) and keeps the ride smooth.
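A three-tier memory like this can be sketched with nothing but standard-library containers. The tier sizes and method names below are assumptions for the sketch, not values from the paper; what matters is that each tier forgets at a different rate, and that checking "have I seen this condition recently?" is what lets the router avoid oscillating.

```python
from collections import deque

class HierarchicalMemory:
    """Toy three-tier memory: tiers differ only in how fast they forget."""

    def __init__(self):
        self.short = deque(maxlen=5)    # last few events ("red light 2s ago")
        self.mid = deque(maxlen=50)     # this drive's conditions ("rain, 10 min")
        self.long = {}                  # per-location counts ("this intersection x50")

    def observe(self, event, location=None):
        self.short.append(event)
        self.mid.append(event)
        if location is not None:
            self.long[location] = self.long.get(location, 0) + 1

    def seen_before(self, event):
        """True if this condition is already in recent memory.

        The router can use this to keep its current sensor plan instead of
        re-deciding from scratch -- the anti-oscillation behavior above.
        """
        return event in self.short or event in self.mid

mem = HierarchicalMemory()
mem.observe("rain", location="tricky_intersection")
```

With this in place, the second burst of rain is a remembered condition rather than a brand-new emergency, so the sensor plan stays put instead of flip-flopping.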

4. The Results: Smarter, Faster, and Cheaper

The researchers tested this system in two ways:

  1. Simulated Stress Tests: They threw fake problems at the car (sudden sensor failures, heavy rain, flickering lights).
    • Result: The system was 87% more stable. It didn't panic and flip switches wildly; it stayed calm and stuck to the best sensors.
  2. Real-World Test (nuScenes Dataset): They tested it on real driving data.
  • Result: The car reduced its sensor usage by 6% on average (saving energy and compute) while driving just as safely and accurately as systems that run every sensor all the time.

The Big Picture

PRAM-R is about efficiency. It teaches the self-driving car to be a "smart consumer" of its own sensors. Instead of burning energy by using everything all the time, it uses its "brain" (the LLM) and its "memory" to pick the perfect tool for the job at hand.

It's the difference between a student who studies every single page of a textbook for every exam (wasting time and energy) versus a student who knows exactly which chapters to review based on the specific questions being asked (smart, fast, and effective).