On the Superimposed Noise Accumulation Problem in Sequential Knowledge Editing of Large Language Models

This paper identifies the "superimposed noise accumulation problem" as the cause of declining success rates in sequential knowledge editing of large language models and proposes DeltaEdit, a method using dynamic orthogonal constraints to mitigate knowledge conflicts and significantly improve editing performance.

Ding Cao, Yuchen Cai, Yuqing Huang, Xuesong He, Rongxi Guo, Guiquan Liu, Guangzhong Sun

Published 2026-04-01

Imagine you have a brilliant, well-read librarian named LLM (Large Language Model). This librarian knows millions of facts about the world. But sometimes, facts change. Maybe a new president is elected, or a celebrity changes their phone number. You need to tell the librarian, "Hey, update your records!"

This is called Knowledge Editing.

The Problem: The "Whispering Gallery" Effect

In the past, if you wanted to update one fact, you could just whisper the new info to the librarian, and they'd remember it. But what if you need to update 3,000 facts in a row?

The paper argues that existing methods for doing this are like trying to fix a leaky roof by stacking buckets on top of each other.

  1. The First Bucket: You fix the first leak (update Fact #1). It works great.
  2. The Second Bucket: You fix the second leak (Fact #2). But to do it, you accidentally knock over the first bucket. Now water is splashing everywhere.
  3. The 3,000th Bucket: By the time you get to the end, the librarian is so confused by all the splashing water (noise) from the previous 2,999 buckets that they can't remember any of the new facts. They start repeating nonsense or forgetting everything.

The authors call this the "Superimposed Noise Accumulation Problem."

The Analogy:
Imagine the librarian is trying to hear a specific song (the correct answer) on the radio.

  • Correct Knowledge: The song you want to hear.
  • Irrelevant Knowledge: Static noise from other stations.
  • The Problem: Every time you try to tune in to a new song (edit a fact), the old static noise gets louder and louder. Eventually, the static is so loud that you can't hear the new song at all. The librarian starts hallucinating because the "noise" of old updates is drowning out the new instructions.

The Investigation: Why is the noise so loud?

The researchers broke down the "update" into two parts:

  1. The "Push" (Influence Vector): How hard you push the new fact into the librarian's brain.
  2. The "Trigger" (Activation Vector): What makes the librarian decide to listen to that push.

They found that existing methods were great at controlling the "Push," but they were terrible at controlling the "Trigger."

  • The Mistake: When you ask the librarian about a new fact, their brain accidentally lights up with old, irrelevant facts too. It's like trying to tell someone "The capital of France is Paris," but their brain is also screaming "The capital of France is London!" and "The capital of France is Berlin!" at the same time.
  • The Result: The librarian gets confused by the conflicting signals and gives a wrong answer.
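The push/trigger picture above can be sketched with a tiny numpy experiment. A common abstraction for this kind of edit (illustrative only, not the paper's exact equations) is a rank-1 update: the weight matrix gains `influence @ activation.T`. When a later edit's "trigger" overlaps with an earlier one, recalling the old fact picks up contamination from the new edit:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden dimension (illustrative)

# Edit #1 writes a rank-1 update: the "push" is the influence
# vector v1, the "trigger" is the activation vector a1.
a1 = rng.normal(size=(d, 1))
v1 = rng.normal(size=(d, 1))
W = v1 @ a1.T  # store Fact #1

# Edit #2 has a trigger that partially overlaps with a1.
a2 = a1 + 0.5 * rng.normal(size=(d, 1))  # correlated trigger
v2 = rng.normal(size=(d, 1))
W += v2 @ a2.T  # store Fact #2 on top of Fact #1

# Recall Fact #1 by applying its trigger.
recalled = W @ a1
clean = (v1 @ a1.T) @ a1        # what a lone edit would return
noise = recalled - clean        # contamination from edit #2
print(np.linalg.norm(noise) > 0)  # True: the edits interfere
```

With only two edits the noise is a small wobble; stack thousands of correlated triggers and it becomes the "static" that drowns out the song.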

The Solution: DeltaEdit (The "Noise-Canceling Headphones")

The authors created a new method called DeltaEdit. Think of it as giving the librarian a pair of high-tech, noise-canceling headphones that only let the new fact in while blocking out the static from all the previous updates.

How it works (The "Orthogonal" Strategy):
Imagine you are painting a wall.

  • Old Method: You keep painting over the same spot. The colors mix, get muddy, and eventually, the wall looks like a brown mess.
  • DeltaEdit: Before you paint a new color, you check the wall. If the new color would mix with the old colors, you find a fresh, empty space on the wall (a mathematical "null space") to paint it.
  • The Magic: DeltaEdit ensures that every new fact is stored in a direction that is completely perpendicular (at a 90-degree angle) to all the old facts. This way, the new fact doesn't interfere with the old ones, and the old ones don't interfere with the new one.
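The "fresh, empty space on the wall" is, mathematically, the null space of the old activation vectors. Here is a toy numpy sketch of that idea (an illustrative projection, not the paper's exact algorithm): project the new trigger so it is orthogonal to every previous trigger before storing the update.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # hidden dimension (illustrative)

# Activations that triggered the previous edits (one per column).
A_old = rng.normal(size=(d, 3))

# Projector onto the null space of the old triggers:
# P = I - A (A^T A)^{-1} A^T removes any component along them.
P = np.eye(d) - A_old @ np.linalg.inv(A_old.T @ A_old) @ A_old.T

# New edit: paint in the empty spot by projecting its trigger.
a_new = rng.normal(size=(d, 1))
a_perp = P @ a_new                 # perpendicular to all old triggers
v_new = rng.normal(size=(d, 1))
delta = v_new @ a_perp.T           # rank-1 update, stored orthogonally

# The new update stays silent whenever an old trigger fires:
print(np.allclose(delta @ A_old, 0))  # True
```

Because `a_perp` is at a 90-degree angle to every old trigger, applying `delta` to any old activation gives (numerically) zero: the new paint never touches the old colors.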

The Results: A Cleaner Library

The researchers tested this on two different "librarians" (AI models: GPT-2 and Llama-3).

  • The Competition: Other methods (like AlphaEdit) started failing after about 1,000 updates. The models became confused and started making up nonsense.
  • DeltaEdit: Even after 3,000 updates, DeltaEdit kept the librarian sharp.
    • Success Rate: It improved editing success by 16.8% compared to the best existing method.
    • Memory: The librarian didn't forget how to do other things (like writing poetry or solving math problems) while learning the new facts.

The Takeaway

Updating an AI's memory is hard because every new piece of information tends to mess up the old ones, creating a pile-up of confusion (noise).

DeltaEdit solves this by acting like a smart organizer. Instead of shoving new facts into a crowded closet where they knock things over, it finds a perfectly empty shelf for each new fact, ensuring that the library stays organized, accurate, and reliable, no matter how many books you add.
