A LINDDUN-based Privacy Threat Modeling Framework for GenAI

This paper introduces a novel, LINDDUN-based privacy threat modeling framework specifically designed for Generative AI systems. It expands the existing threat taxonomy with new categories and examples, derived from a systematic literature review and validated through a case study on an AI agent system.

Qianying Liao, Jonah Bellemans, Laurens Sion, Xue Jiang, Dmitrii Usynin, Xuebing Zhou, Dimitri Van Landuyt, Lieven Desmet, Wouter Joosen

Published Mon, 09 Ma

Imagine you've just invited a super-smart, incredibly chatty robot into your home. This robot, powered by Generative AI (GenAI), can write emails, plan your vacation, and even help you with your job. It's amazing, but it's also a bit like a child who has read the entire internet: it's brilliant, but it doesn't always know what's private, what's a secret, or when to stop talking.

This paper is about building a specialized "Privacy Safety Manual" for these new robots, because the old safety manuals didn't quite fit.

Here is the breakdown of the paper using simple analogies:

1. The Problem: The Old Map Doesn't Fit the New Territory

For years, software engineers used a standard map called LINDDUN to find privacy holes in their apps. Think of LINDDUN as a classic "Home Security Checklist." It tells you to check for unlocked doors (Data Disclosure) or people peeking through windows (Linking).

But GenAI is different. It's not just a house; it's a hallucinating, memory-having conversational partner.

  • The Old Map: "Check if the door is locked."
  • The New Reality: The robot might accidentally tell a stranger your credit card number because it "remembered" it from a training book, or it might convince you to share too much personal info because it's too polite to say "no."

The authors realized that the old checklist was missing a whole new section on how these "thinking" machines behave.

2. The Solution: A Two-Pronged Approach

To fix this, the researchers didn't just guess. They used a "Top-Down" and "Bottom-Up" strategy, like building a new safety manual by reading every book on the subject and testing it in a real house.

  • Top-Down (The Library): They read hundreds of research papers (the "State of the Art") to see what hackers and scientists were already saying about AI privacy. They found 58 different ways AI can leak secrets.
  • Bottom-Up (The Lab): They built a fake HR Chatbot (a robot that helps employees with vacation days and salary info) and tried to break it. They asked: "Can we trick this robot into revealing someone's salary?" "Can we make it forget a secret?"

By combining the library research with the real-world testing, they created a new, expanded safety manual.

3. The New "Threats" (The Scary Stuff)

The paper identifies six main ways AI can leak secrets, which they call Common Attacker Models (CAMs). Here are the analogies:

  • CAM1 (User-to-System): You tell the robot a secret, and it writes it down in a notebook it shares with everyone. Analogy: You whisper a secret to a friend, but that friend is actually a spy.
  • CAM2 (System-to-User): The robot accidentally spills the beans about other people's secrets. Analogy: You ask the robot for the weather, and it replies, "It's sunny, just like John's birthday party last week," revealing John's personal information.
  • CAM3 & CAM4 (The Training Leak): The robot was trained on a secret book, and now it's trying to recite that book to you, or a new version of the robot is trying to remember the old book's secrets.
  • CAM5 (The Agent Leak): The robot has a "hand" (tools) that can open your files. It might accidentally hand your private diary to a stranger because it was confused.
  • CAM6 (The Ghost in the Machine): Even if you delete the data, the robot's "brain" (its internal math) still holds a ghostly echo of your secret that can be reconstructed.

4. The Three Big New Dangers

The paper highlights three specific behaviors of GenAI that old safety manuals missed:

  1. The "Stochastic" Surprise: GenAI is like a dice-rolling storyteller. Every time you ask the same question, it gives a slightly different answer. This makes it hard to predict what it will say next. It might accidentally invent a lie (hallucination) that sounds so real it becomes a privacy leak.
  2. The "AI Illiteracy" Trap: We treat AI like a human friend. We chat with it, trust it, and overshare. But it's not a friend; it's a database with a personality. The paper notes that because people don't understand how the robot works, they don't realize they are handing over their keys.
  3. The "Gaslighting" Risk: Because the robot is probabilistic, it might tell you "Yes, I approved your vacation" today, and tomorrow say, "I never said that." It can manipulate your memory of events, which is a huge privacy and trust issue.

5. The Result: A Better Safety Manual

The authors updated the LINDDUN framework. They didn't throw it away; they just added a new "GenAI Wing" to the building.

  • New Rules: They added rules about "Hallucinations" (fake data that looks real) and "Unintervenability" (you can't fix a lie the robot told you because the robot doesn't have a record of it).
  • The Knowledge Base: They created a massive list of 100 new examples of how AI can fail privacy-wise. Think of this as a "Hall of Shame" for AI privacy bugs, so engineers can check their own robots against it.

6. Why This Matters

The paper's conclusion boils down to: "Don't reinvent the wheel; just upgrade the tires."

Instead of creating a brand-new, confusing system from scratch, they took the trusted LINDDUN system and specialized it for AI. They tested it on a complex "Multi-Agent" system (a team of robots working together) and showed that it works in practice.

In a nutshell:
This paper gives software engineers a specialized magnifying glass to find the unique privacy bugs in Generative AI. It warns us that these robots are not just tools; they are unpredictable, memory-having entities that need a new kind of security guard—one that understands how they think, how they lie, and how they might accidentally spill your secrets.