Semantic-Guided Dynamic Sparsification for Pre-Trained Model-based Class-Incremental Learning

This paper proposes Semantic-Guided Dynamic Sparsification (SGDS), a class-incremental learning method that mitigates catastrophic forgetting while preserving plasticity. SGDS sculpts class-specific sparse activation subspaces that let similar classes share knowledge while preventing interference between dissimilar ones, outperforming existing parameter-constrained approaches.

Ruiqi Liu, Boyu Diao, Zijia An, Runjie Shao, Zhulin An, Fei Wang, Yongjun Xu

Published 2026-02-17

Imagine you are a master chef who has spent years perfecting a classic French recipe book (this is your Pre-Trained Model). Now, you want to learn to cook dishes from entirely new cuisines—say, Thai and Mexican—without forgetting how to make that perfect French soufflé.

The problem? If you try to learn the new recipes by rewriting your old book, you might accidentally erase the French instructions. If you just keep the old book closed and write a tiny, separate note for the new recipes, you might run out of space or get confused about which note to use.

This is the challenge of Class-Incremental Learning (CIL): teaching an AI to learn new things without forgetting the old, all while being efficient.

The Old Way: The "Rigid Filing Cabinet"

Most current methods try to solve this by building a Rigid Filing Cabinet.

  • They take your existing knowledge (the French book) and freeze it.
  • For every new cuisine, they add a small, separate drawer (an Adapter).
  • To make sure you don't mix up the Thai spices with the French herbs, they force these new drawers to be completely orthogonal (at a 90-degree angle) to everything else.

The Flaw: Imagine trying to fit a giant, flexible yoga mat into a tiny, rigid box. By forcing the new drawers to be perfectly rigid and separate, you limit how much the AI can actually learn and adapt. You are sacrificing plasticity (the ability to bend and learn) for stability (not forgetting).

The New Way: SGDS (The "Smart Traffic Controller")

The paper proposes a new method called SGDS (Semantic-Guided Dynamic Sparsification). Instead of forcing the drawers to be rigid, SGDS acts like a Smart Traffic Controller for the AI's thoughts (activations).

Here is how it works, using a simple analogy:

1. The "Semantic Strategy" (The GPS)

Before the AI learns a new dish, SGDS asks: "Is this new dish similar to something we already know?"

  • Scenario A (Similar): If you are learning to make "Pad Thai" and you already know "Pad See Ew," SGDS says, "Great! Let's use the same kitchen counter space for both." It encourages the AI to share its mental space for similar things.
  • Scenario B (Different): If you are learning to make "Sushi" (Japanese) and you already know "Tacos" (Mexican), SGDS says, "No overlap! These are too different. Let's build a completely new, empty room for Sushi."
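The "is this similar to something we know?" decision can be sketched as a similarity check between class prototypes (mean feature vectors). This is an illustrative sketch only: the function name `route_new_class`, the use of plain cosine similarity, and the `threshold` value are assumptions, not the paper's exact criterion.

```python
import numpy as np

def route_new_class(new_proto, old_protos, threshold=0.6):
    """Decide whether a new class shares an existing activation
    subspace (Scenario A) or gets a fresh one (Scenario B), based on
    cosine similarity of class prototypes. Threshold is illustrative."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = [cos(new_proto, p) for p in old_protos]
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return ("share", best)  # similar enough: reuse that class's space
    return ("new", None)        # too different: allocate a new subspace
```

So "Pad Thai" (a prototype close to "Pad See Ew") routes to `("share", …)`, while "Sushi" next to only "Tacos" routes to `("new", None)`.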

2. "Semantic Exploration" (Drawing the Map)

SGDS guides the AI's thoughts into specific "lanes" or "rooms."

  • If the tasks are similar, it aligns their paths so they walk together.
  • If the tasks are different, it forces them into orthogonal subspaces. Think of this like building a new room that has no doors connecting to the old rooms. This ensures that when you think about Sushi, you don't accidentally knock over the Tacos.
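The "room with no connecting doors" idea is, in linear-algebra terms, projection onto the orthogonal complement of an old task's subspace. The sketch below is a generic illustration of that operation, not the paper's actual loss; `orthogonalize` and `alignment_score` are hypothetical helper names, and `old_basis` is assumed to have orthonormal columns.

```python
import numpy as np

def orthogonalize(new_dirs, old_basis):
    """Remove the components of new activation directions that lie
    inside an old task's subspace, leaving only directions orthogonal
    to it. old_basis: (d, k) matrix with orthonormal columns."""
    proj = old_basis @ (old_basis.T @ new_dirs)
    return new_dirs - proj

def alignment_score(new_dirs, old_basis):
    """Fraction of new_dirs' energy that falls inside the old
    subspace: ~1 means the paths 'walk together', 0 means the rooms
    share no doors."""
    proj = old_basis @ (old_basis.T @ new_dirs)
    return float(np.linalg.norm(proj) / np.linalg.norm(new_dirs))
```

For similar tasks the method would keep alignment high; for dissimilar tasks it would drive it toward zero.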

3. "Activation Compaction" (The Vacuum Cleaner)

This is the secret sauce. Even if you have a new room for Sushi, you don't want it to be a giant, empty warehouse where thoughts get lost.

  • SGDS uses a Vacuum Cleaner (Targeted Sparsification) to suck out all the unnecessary clutter.
  • It forces the AI to use only the most important neurons (the core ingredients) for that specific task.
  • By making the "Sushi room" very compact and sparse, it leaves a huge amount of empty space (a "Null Space") around it. This empty space is a Sanctuary where future tasks (like learning Italian) can be built without ever bumping into the Sushi or Tacos.
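The "vacuum cleaner" corresponds to activation sparsification: keep only the strongest neurons for a task and zero the rest, so most of the layer stays empty for future tasks. This is a generic top-k masking sketch with an L1 penalty as a stand-in for the paper's compaction objective; `keep_ratio` and both function names are illustrative assumptions.

```python
import numpy as np

def compact_activations(acts, keep_ratio=0.1):
    """Keep only the top-k largest-magnitude activations per row and
    zero everything else, compacting the task's neural footprint."""
    k = max(1, int(acts.shape[-1] * keep_ratio))
    # indices of the k strongest activations in each row
    idx = np.argsort(np.abs(acts), axis=-1)[..., -k:]
    mask = np.zeros_like(acts)
    np.put_along_axis(mask, idx, 1.0, axis=-1)
    return acts * mask

def sparsity_penalty(acts):
    """L1 penalty: adding this to the task loss pushes activations
    toward a sparse, compact code."""
    return float(np.abs(acts).mean())
```

The neurons that survive the mask form the compact "Sushi room"; everything zeroed out becomes the null-space sanctuary left for later tasks.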

Why is this better?

  • The Old Way (Rigid Constraints): Like trying to park cars in a garage where every car is bolted to the floor. It's stable, but you can't move anything, and you can't fit new cars easily.
  • The New Way (SGDS): Like a dynamic parking lot with smart sensors. It groups similar cars together, but for unique cars, it clears out a specific, compact spot and leaves the rest of the lot wide open for future arrivals.

The Result

The researchers tested this on difficult image datasets (like recognizing animals or objects).

  • Performance: SGDS beat state-of-the-art methods by a significant margin.
  • Privacy: Because it doesn't need to store old images (exemplars) to remember them, it's perfect for privacy-sensitive situations like healthcare.
  • Efficiency: It learns new things faster and forgets less, all while using fewer computing resources.

In a nutshell: Instead of locking the AI's brain into rigid boxes to prevent confusion, SGDS gently guides its thoughts into organized, compact, and separate "mental rooms," leaving plenty of room for the future.
