All-day Multi-scenes Lifelong Vision-and-Language Navigation with Tucker Adaptation

To address catastrophic forgetting in lifelong vision-and-language navigation across diverse environments, this paper proposes Tucker Adaptation (TuKA), a method that decouples multi-hierarchical knowledge into shared subspaces and scenario-specific experts via high-order tensor decomposition. Built on TuKA, the resulting AlldayWalker agent consistently outperforms state-of-the-art baselines.

Xudong Wang, Gan Li, Zhiyu Liu, Yao Wang, Lianqing Liu, Zhi Han

Published 2026-03-17

Imagine you are teaching a robot dog how to navigate your house. You give it a simple command: "Go to the kitchen, turn left, and stop at the fridge."

In a perfect world, the robot learns this once and remembers it forever. But in the real world, things get messy. What if you ask it to do the same thing at 3:00 PM when the sun is blazing through the window (overexposure)? Or at 3:00 AM when it's pitch black (low-light)? Or on a foggy day when the air is thick with dust (scattering)?

If you try to teach the robot to handle the "dark" scenario, it often forgets how to handle the "bright" scenario. This is called catastrophic forgetting. It's like a student who studies for a math test, passes it, but then immediately forgets how to read because they studied for a history test right after.

This paper introduces a solution called AlldayWalker, a robot brain designed to learn everything without forgetting anything, no matter the time of day or the weather.

Here is how it works, explained with simple analogies:

1. The Problem: The "Two-Dimensional" Trap

Most current robot learning methods (like a popular fine-tuning technique called LoRA, short for Low-Rank Adaptation) are like flat spreadsheets.

  • Imagine a spreadsheet where you have one column for "Shared Knowledge" (like how to walk) and one column for "Specific Knowledge" (like how to walk in the dark).
  • The problem is that real life is more complex. You need to know how to walk in the dark AND in the kitchen AND in the living room.
  • A flat spreadsheet gets messy when you try to add too many columns. It can't easily separate "Darkness" from "Kitchen" from "Fog." It just sees a jumbled mess, so the robot gets confused and forgets old skills.
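To make the "flat spreadsheet" concrete, here is a minimal NumPy sketch of a LoRA-style update. All shapes and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimal LoRA-style sketch (illustrative; not the paper's implementation).
# A frozen weight matrix W is adapted by adding a low-rank update B @ A.
rng = np.random.default_rng(0)
d, r = 8, 2                        # model dimension and adapter rank

W = rng.standard_normal((d, d))    # frozen pretrained weight
A = rng.standard_normal((r, d))    # trainable "down" projection
B = rng.standard_normal((d, r))    # trainable "up" projection

W_adapted = W + B @ A              # the flat, two-dimensional update

# The whole update is one matrix of rank at most r: there is no separate
# axis for "scene" vs "lighting", so every new condition competes for the
# same small budget and can overwrite what earlier conditions learned.
assert np.linalg.matrix_rank(B @ A) <= r
```

Because the adapter lives in a single d×d matrix of low rank, conditions like "dark" and "kitchen" have no dedicated dimensions of their own; they all share the same flat slot.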

2. The Solution: The "Rubik's Cube" of Knowledge

The authors propose a new method called TuKA (Tucker Adaptation). Instead of a flat spreadsheet, imagine a Rubik's Cube (a 3D or even 4D puzzle).

  • The Core (The Center of the Cube): This holds the Shared Knowledge. It's the robot's basic brain: "How to walk," "What a door looks like," "How to follow a voice." This part stays the same for everyone.
  • The Layers (The Slices):
    • One slice handles Scenes (Kitchen, Bedroom, Living Room).
    • Another slice handles Environments (Sunny, Dark, Foggy, Bright).
  • The Magic: Because this is a 3D/4D cube, the robot can twist and turn the layers independently. It can say, "I need the 'Kitchen' slice AND the 'Dark' slice," without messing up the 'Living Room' or 'Sunny' slices.

This allows the robot to decouple (separate) its knowledge. It learns that "Darkness" is a specific setting, and "Kitchen" is a specific setting, and it can combine them instantly without overwriting its memory of "Sunny Living Room."
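The cube analogy maps onto a Tucker decomposition: one shared core tensor contracted with a small factor matrix per axis. The sketch below is a rough guess at the general shape of such an update, using made-up dimensions and names; the paper's actual TuKA parameterization may differ:

```python
import numpy as np

# Hedged sketch of the "Rubik's cube" idea: a Tucker-style weight update
# built from a shared core tensor and per-axis factor matrices.
rng = np.random.default_rng(1)
d_in, d_out = 8, 8           # weight-update dimensions
n_scenes, n_envs = 3, 4      # e.g. {kitchen, bedroom, living room} x {sunny, dark, foggy, bright}
r1, r2, r3, r4 = 2, 2, 2, 2  # Tucker ranks, one per axis

G = rng.standard_normal((r1, r2, r3, r4))      # shared core: common knowledge
U_in = rng.standard_normal((d_in, r1))         # input-feature factor
U_out = rng.standard_normal((d_out, r2))       # output-feature factor
U_scene = rng.standard_normal((n_scenes, r3))  # one row per scene
U_env = rng.standard_normal((n_envs, r4))      # one row per environment

def delta_w(scene: int, env: int) -> np.ndarray:
    """Compose the weight update for one (scene, environment) pair by
    contracting the shared core with the selected factor rows."""
    return np.einsum("abcd,ia,jb,c,d->ij",
                     G, U_in, U_out, U_scene[scene], U_env[env])

dark_kitchen = delta_w(scene=0, env=1)
sunny_living = delta_w(scene=2, env=0)
# Every combination reuses the same core G; editing the "dark" row of U_env
# leaves the ingredients of the "sunny living room" update untouched.
```

The key structural point the analogy is making: "scene" and "environment" each get their own tensor axis, so updating one slice does not have to overwrite another.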

3. The Learning Strategy: The "Library" Approach

The paper also introduces a strategy called DKIL (Decoupled Knowledge Incremental Learning). Think of this as a very organized librarian.

  • The Shared Librarian (Core Tensor): There is one main librarian who knows the rules of the library (how to walk, how to listen). This librarian never changes, ensuring the robot doesn't lose its basic common sense.
  • The Specialized Assistants (Experts):
    • There is an assistant for "Low Light."
    • There is an assistant for "Overexposure."
    • There is an assistant for "The Kitchen."
  • The Process: When the robot enters a new room in the dark, it doesn't fire up the whole brain. It just calls the "Low Light Assistant" and the "Kitchen Assistant."
  • The Safety Net: If the robot learns something new about the "Low Light" assistant, the system makes sure it doesn't accidentally erase what the "Sunny" assistant knows. It uses a "consistency check" to ensure the new learning fits perfectly alongside the old learning.
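The librarian-and-assistants picture can be sketched as a frozen shared core plus a registry of per-condition experts. The orthogonality "consistency check" below is an illustrative stand-in (my assumption) for whatever constraint the paper actually enforces:

```python
import numpy as np

# Hedged sketch of the DKIL idea: a frozen shared core plus a registry of
# per-condition experts. The orthogonality check is an illustrative
# stand-in for the paper's actual consistency constraint.
rng = np.random.default_rng(2)
dim, rank = 8, 2

shared_core = rng.standard_normal((rank, rank))  # the "librarian": never updated
experts: dict[str, np.ndarray] = {}              # name -> factor matrix (dim x rank)

def add_expert(name: str, factor: np.ndarray, tol: float = 1e-6) -> None:
    """Register a new expert only if it does not overlap existing ones."""
    for old_name, old in experts.items():
        overlap = np.abs(old.T @ factor).max()
        if overlap > tol:
            raise ValueError(f"{name} interferes with {old_name} (overlap={overlap:.3f})")
    experts[name] = factor

# Build mutually orthogonal expert factors via a QR decomposition,
# so each new assistant occupies a fresh subspace of its own.
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
add_expert("low_light", Q[:, 0:2])
add_expert("overexposure", Q[:, 2:4])
add_expert("kitchen", Q[:, 4:6])

def navigate_update(active: list[str]) -> np.ndarray:
    """Combine only the experts relevant to the current situation."""
    return sum(experts[name] @ shared_core @ experts[name].T for name in active)

update = navigate_update(["low_light", "kitchen"])  # "sunny" experts untouched
```

Only the experts named in `active` contribute to the update, and because each one lives in its own orthogonal subspace, training a new one cannot silently erase an old one.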

4. The Result: The "All-Day Walker"

The team built an agent called AlldayWalker. They tested it in a simulated world where they could change the lighting from bright noon to pitch black, or add fog and glare.

  • Old Robots: When they tried to learn the "Dark" task, they forgot the "Bright" task. Their success rate dropped to near zero.
  • AlldayWalker: It learned the dark task, the bright task, the foggy task, and the kitchen task. When tested on all of them later, it remembered everything. It didn't forget the bright days just because it learned the dark nights.

Why This Matters

This isn't just about robots walking in houses. It's about creating AI that can evolve.

  • Current AI: Like a student who forgets last week's lesson when they study for today's test.
  • AlldayWalker: Like a wise elder who accumulates knowledge over a lifetime, remembering every season, every room, and every weather condition, getting smarter and more adaptable every day without losing a single memory.

In short, the paper teaches us how to build robots that don't just learn one thing at a time, but learn everything at once, keeping their knowledge organized in a high-dimensional "Rubik's Cube" so they can navigate our messy, changing world all day long.
