GraphKeeper: Graph Domain-Incremental Learning via Knowledge Disentanglement and Preservation

The paper proposes GraphKeeper, a novel framework for Graph Domain-Incremental Learning that addresses catastrophic forgetting through knowledge disentanglement and deviation-free preservation, achieving state-of-the-art performance across multiple graph domains while remaining compatible with various graph foundation models.

Zihao Guo, Qingyun Sun, Ziwei Zhang, Haonan Yuan, Huiping Zhuang, Xingcheng Fu, Jianxin Li

Published Wed, 11 Ma

Imagine you are a master chef who has spent years perfecting recipes for different types of cuisine: Italian, Japanese, and Mexican. You have a "Graph Foundation Model," which is like your super-talented brain that understands food.

Now, imagine a new challenge: You need to keep learning new recipes from entirely new cultures (like Ethiopian or Peruvian) without forgetting how to cook your old ones.

This is the problem of Graph Domain-Incremental Learning. In the world of AI, "graphs" are networks of connected data (like social networks, chemical molecules, or traffic systems). Usually, AI learns new things by overwriting old memories, a phenomenon called "Catastrophic Forgetting." It's like your brain suddenly forgetting how to make pasta because you just learned how to make sushi.

The paper introduces GraphKeeper, a new system designed to be the ultimate "Memory Chef" that never forgets. Here is how it works, broken down into three simple tricks:

1. The "Specialized Aprons" (Domain-Specific PEFT)

The Problem: When you try to learn a new cuisine, you might accidentally change the way you chop vegetables for your old recipes. Your brain gets confused, and your Italian sauce starts tasting like Japanese soy sauce. This is called an "Embedding Shift."

The GraphKeeper Solution: Instead of rewriting your entire brain, GraphKeeper gives you a specialized apron for each cuisine.

  • When you learn Italian, you put on the "Italian Apron."
  • When you learn Japanese, you switch to the "Japanese Apron."
  • Your core brain (the pre-trained model) stays frozen and untouched.
  • The Result: You can learn new recipes without messing up the old ones because the "Italian Apron" doesn't touch the "Japanese Apron." They stay separate.
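The "apron" idea can be sketched in a few lines of toy numpy. This is my illustration, not the paper's exact PEFT module: the low-rank (LoRA-style) adapter shape and all names here are hypothetical, but the key property is the same, since the backbone is frozen and each domain only ever touches its own adapter, updating one domain cannot change another's outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen backbone: a pre-trained projection that is never updated.
W_frozen = rng.normal(size=(16, 16))

# One lightweight low-rank adapter ("apron") per domain; only these train.
def new_adapter(dim=16, rank=2):
    # A starts at zero, so a fresh adapter leaves the backbone untouched.
    return {"A": np.zeros((dim, rank)), "B": rng.normal(size=(rank, dim)) * 0.01}

adapters = {"italian": new_adapter(), "japanese": new_adapter()}

def forward(x, domain):
    ad = adapters[domain]
    # Backbone output plus the domain's own low-rank correction.
    return x @ W_frozen + x @ ad["A"] @ ad["B"]

x = rng.normal(size=(4, 16))

# "Training" the japanese apron (here just a dummy update) cannot
# affect the italian apron or the frozen backbone.
adapters["japanese"]["A"] += 0.1
```

After the update, `forward(x, "italian")` still equals the pure backbone output, while `forward(x, "japanese")` has moved, which is exactly the separation the apron analogy describes.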

2. The "Soundproof Rooms" (Disentanglement)

The Problem: Even with different aprons, your brain might still get confused. Maybe the "Italian" and "French" sections of your memory start bleeding into each other. You might call a risotto a "French stew." This is Semantic Confusion.

The GraphKeeper Solution: GraphKeeper builds soundproof rooms inside your kitchen.

  • Intra-domain: It makes sure all the Italian dishes are grouped tightly together in their own room, so they are easy to find.
  • Inter-domain: It pushes the "Italian Room" far away from the "Japanese Room" so they never accidentally bump into each other.
  • The Result: Every type of graph (or cuisine) has its own clear, distinct space in your memory. No mixing up!
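The two "soundproofing" objectives can be written as a pair of simple loss terms. This is a minimal sketch of the intra/inter idea with made-up embeddings, not the paper's actual loss functions: pull each domain's points toward their own centroid, and push different domains' centroids at least a margin apart.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy embeddings for two domains (hypothetical stand-ins for GNN outputs).
italian = rng.normal(loc=0.0, size=(20, 8))
japanese = rng.normal(loc=3.0, size=(20, 8))

def intra_loss(Z):
    # Tighter cluster -> lower loss: mean squared distance to the centroid.
    return np.mean(np.sum((Z - Z.mean(axis=0)) ** 2, axis=1))

def inter_loss(Z1, Z2, margin=10.0):
    # Hinge on centroid distance: zero once the "rooms" are far enough apart.
    d = np.linalg.norm(Z1.mean(axis=0) - Z2.mean(axis=0))
    return max(0.0, margin - d)

total = intra_loss(italian) + intra_loss(japanese) + inter_loss(italian, japanese)
```

Minimizing `total` simultaneously shrinks each room and spreads the rooms out, which is the whole point of the disentanglement step.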

3. The "Unchanging Menu" (Deviation-Free Knowledge Preservation)

The Problem: Usually, when an AI learns something new, it also changes its "Decision Boundary"—basically, it rewrites the rules for how it decides what something is. It's like if, after learning sushi, you suddenly decided that all round food is "sushi," even though you used to know a pizza was a pizza.

The GraphKeeper Solution: GraphKeeper separates the Cooking (making the embeddings) from the Menu (making the final decision).

  • It uses a clever math trick (Ridge Regression) to update the "Menu" without ever touching the "Cooking" part.
  • Think of it like adding a new page to a recipe book without erasing the old pages. The rules for identifying old dishes stay exactly the same, even as new dishes are added.
  • The Result: The AI's judgment remains stable. It knows a pizza is a pizza, even after learning about sushi.
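The "new page, same old pages" property falls out of ridge regression's closed form. The sketch below is my bookkeeping, not the paper's exact update, but it shows the trick: keep running sums of the (frozen) embeddings, and re-solve the classifier analytically after each domain. The incremental answer is identical to training on all domains jointly, so old decisions never drift.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_classes, lam = 8, 4, 1e-2

# Running statistics accumulated across domains (the "menu" bookkeeping).
G = lam * np.eye(dim)           # regularized Gram matrix: lam*I + sum Z^T Z
C = np.zeros((dim, n_classes))  # cross term:              sum Z^T Y

def one_hot(labels, k):
    return np.eye(k)[labels]

def add_domain(Z, Y):
    # Fold a new domain's frozen embeddings into the running sums, then
    # re-solve the ridge regression in closed form: W = G^{-1} C.
    global G, C
    G = G + Z.T @ Z
    C = C + Z.T @ Y
    return np.linalg.solve(G, C)

Z1, y1 = rng.normal(size=(30, dim)), rng.integers(0, 2, 30)
Z2, y2 = rng.normal(size=(30, dim)), rng.integers(2, 4, 30)

W_after_1 = add_domain(Z1, one_hot(y1, n_classes))
W_after_2 = add_domain(Z2, one_hot(y2, n_classes))

# Reference: ridge regression on both domains at once.
Z_all = np.vstack([Z1, Z2])
Y_all = np.vstack([one_hot(y1, n_classes), one_hot(y2, n_classes)])
W_joint = np.linalg.solve(lam * np.eye(dim) + Z_all.T @ Z_all, Z_all.T @ Y_all)
```

Because `W_after_2` matches `W_joint` exactly, adding the sushi page changed nothing about how the pizza page is read beyond what joint training itself would do.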

What if you don't know the cuisine? (Domain-Aware Discrimination)

Sometimes, you get a dish and you don't know if it's Italian or Peruvian.

  • GraphKeeper's Trick: It uses a "magic magnifying glass" (High-Dimensional Random Mapping) to look at the dish. This magnifying glass stretches the features so that even similar-looking dishes look very different under the lens.
  • It then compares the dish to a "Prototype" (a perfect example) of every cuisine it knows. It picks the closest match and puts on the correct apron.
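The magnifying-glass step can be sketched as a fixed random projection plus nearest-prototype lookup. Everything here is illustrative: the projection size, the ReLU, and the toy "cuisines" are my assumptions, not the paper's exact mapping, but the mechanism, expand the features, then pick the closest stored prototype, is the same.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed random projection into a higher-dimensional space, where domain
# clusters are easier to tell apart (a rough analogue of the paper's
# high-dimensional random mapping; 8 -> 64 is an arbitrary toy choice).
R = rng.normal(size=(8, 64))

def expand(z):
    return np.maximum(0.0, z @ R)  # project, then ReLU

# One prototype (mean of expanded embeddings) per known domain.
italian = rng.normal(loc=0.0, size=(20, 8))
japanese = rng.normal(loc=3.0, size=(20, 8))
prototypes = {"italian": expand(italian).mean(axis=0),
              "japanese": expand(japanese).mean(axis=0)}

def guess_domain(z):
    # Nearest prototype in the expanded space decides which "apron" to wear.
    e = expand(z)
    return min(prototypes, key=lambda d: np.linalg.norm(e - prototypes[d]))
```

A dish that looks Italian lands nearest the Italian prototype, so the model quietly puts on the right apron before making any prediction.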

The Bottom Line

GraphKeeper is like a super-intelligent librarian who:

  1. Never shreds old books when adding new ones (No Forgetting).
  2. Keeps books from different genres on completely separate, organized shelves (No Confusion).
  3. Uses a special catalog system that updates without rewriting the whole library (Stable Decisions).

The paper shows that this method works remarkably well, beating other AI models by a wide margin (6.5% to 16.6% better) and letting a model learn continuously across many different types of data without losing its mind. It's a major step toward AI that can truly learn and grow like a human, rather than just memorizing and then forgetting.