A Queueing-Theoretic Framework for Dynamic Attack Surfaces: Data-Integrated Risk Analysis and Adaptive Defense

This paper proposes a queueing-theoretic framework that models the temporal evolution of cyber-attack surfaces as a backlog of vulnerabilities. It demonstrates how AI-driven automation amplifies exploit risk, and it validates a reinforcement-learning-based adaptive defense strategy that reduces active vulnerabilities by over 90% while accounting for heavy-tailed patching times and resource constraints.

Original authors: Jihyeon Yun, Abdullah Yasin Etcibasi, Ming Shi, C. Emre Koksal

Published 2026-04-14

This is an AI-generated explanation of the paper below. It is not written or endorsed by the authors. For technical accuracy, refer to the original paper.

Imagine your organization's computer network is a massive, bustling warehouse.

In this warehouse, "vulnerabilities" (security holes) are like broken boxes that keep arriving on the conveyor belt. Some arrive because someone found a crack in the design; others appear because a new supplier sent a defective product.

The "attack surface" is simply the pile of broken boxes sitting on the floor waiting to be fixed. If the pile gets too high, a thief (the hacker) can easily grab one, break it open, and steal your goods.

This paper proposes a new way to manage this warehouse using three main ideas: a Queueing Model, the AI Factor, and a Smart Robot Manager.

1. The Warehouse Analogy: A Queueing System

The authors treat the pile of broken boxes not as a static list, but as a dynamic line (queue).

  • Arrivals: New vulnerabilities keep showing up.
  • Departures: Boxes leave the line in two ways:
    1. The Good Way: Your team patches them (fixes the box).
    2. The Bad Way: Hackers exploit them (steal the box).

The key insight is that this isn't a steady stream. Sometimes, a hundred broken boxes arrive at once (a "burst"), and your team can only fix a few. This creates a backlog. The paper uses math (queueing theory) to prove that if you don't keep up with the arrivals, the pile grows dangerously large, and the risk of theft skyrockets.
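The queue dynamics above can be sketched as a toy simulation. This is a minimal illustration, not the paper's model: the per-time-step arrival, patch, and exploit probabilities below are made-up values chosen only to show the race between the two kinds of departures.

```python
import random

def simulate_backlog(p_arrive, p_patch, p_exploit, steps=20_000, seed=1):
    """Toy vulnerability queue: each step one new 'broken box' may arrive,
    and every open box is independently patched (good departure) or
    exploited (bad departure). All probabilities are illustrative."""
    rng = random.Random(seed)
    backlog = patched = exploited = 0
    backlog_sum = 0
    for _ in range(steps):
        if rng.random() < p_arrive:          # a new vulnerability arrives
            backlog += 1
        still_open = 0
        for _ in range(backlog):
            u = rng.random()
            if u < p_patch:                  # defenders fix it first
                patched += 1
            elif u < p_patch + p_exploit:    # attackers win the race
                exploited += 1
            else:
                still_open += 1              # the box lingers in the pile
        backlog = still_open
        backlog_sum += backlog
    return patched, exploited, backlog_sum / steps

# Defenders patch 5x faster than attackers exploit, yet some thefts still occur.
patched, exploited, avg_backlog = simulate_backlog(0.3, 0.10, 0.02)
```

Even with a stable queue, a fixed fraction of boxes (roughly `p_exploit / (p_patch + p_exploit)`) is stolen before anyone can fix it, which is the "risk" the queue's height controls.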

2. The "AI Amplifier" Twist

The paper asks: What happens if both the thieves and the warehouse managers get AI superpowers?

  • Symmetric AI: Imagine both the hackers and your team get robots that work 4x faster. You might think, "Great! We are 4x faster, so we are safe!"
    • The Surprise: The paper shows that even if you are 4x faster, the number of successful thefts actually goes up. Why? Because the thieves are also 4x faster. They are grabbing the broken boxes before your team can even touch them. The "race" just happens in fast-forward, but the thieves still win more often because they are aggressive.
  • Asymmetric AI: If only the hackers get AI robots, and your team stays human-speed, the warehouse gets overwhelmed almost instantly. The pile of broken boxes explodes, and the theft rate goes through the roof.
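The symmetric case can be checked in the same toy-queue style. Multiplying every probability, attacker and defender alike, by 4 (the numbers below are illustrative, not from the paper) leaves each box's chance of being stolen unchanged, but quadruples arrivals and therefore roughly quadruples the count of successful thefts:

```python
import random

def count_exploits(p_arrive, p_patch, p_exploit, steps=20_000, seed=7):
    """Toy queue: each open box is patched or exploited per step;
    returns how many boxes the attackers steal in total."""
    rng = random.Random(seed)
    backlog = exploited = 0
    for _ in range(steps):
        if rng.random() < p_arrive:
            backlog += 1
        still_open = 0
        for _ in range(backlog):
            u = rng.random()
            if u < p_patch:
                pass                          # patched in time
            elif u < p_patch + p_exploit:
                exploited += 1                # stolen before the fix
            else:
                still_open += 1
        backlog = still_open
    return exploited

human_speed = count_exploits(0.10, 0.05, 0.01)   # baseline rates
symmetric_ai = count_exploits(0.40, 0.20, 0.04)  # everything 4x faster
```

The race simply runs in fast-forward: the defenders' per-box win probability is unchanged, so more arrivals translate directly into more absolute losses.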

3. The "Heavy-Tailed" Problem (The Stubborn Boxes)

The researchers looked at real data from open-source software (like the code behind Google's tools) and found something scary: Patching times are "heavy-tailed."

In a normal world, if you have a broken box, you fix it in 10 minutes. In the real world, most boxes are fixed quickly, but some boxes sit there for months or years because they are hard to fix, or no one is assigned to them.

The paper proves that these "stubborn boxes" create a Long-Range Dependence. This means that a vulnerability discovered today can still be a threat next year. The system doesn't "forget" past mistakes quickly. This creates a persistent, lingering risk that static security plans (which assume everything resets every day) completely miss.
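The heavy-tail effect is easy to see numerically. The snippet below compares exponential ("normal world") patch times against Pareto ("heavy-tailed") patch times with the same mean; the parameters are illustrative and not fitted to the paper's data.

```python
import random

def pareto_fix_time(rng, alpha, xm):
    # Inverse-CDF sample from a Pareto(alpha) distribution with scale xm.
    return xm / (1.0 - rng.random()) ** (1.0 / alpha)

rng = random.Random(42)
alpha, xm = 1.5, 1.0
mean_fix = alpha * xm / (alpha - 1)              # mean patch time = 3.0 units

# Two worlds with the SAME average patch time...
light = [rng.expovariate(1.0 / mean_fix) for _ in range(100_000)]
heavy = [pareto_fix_time(rng, alpha, xm) for _ in range(100_000)]

# ...but only the heavy-tailed world produces "stubborn boxes" that stay
# open for hundreds of time units, keeping risk alive long after discovery.
worst_light, worst_heavy = max(light), max(heavy)
```

An exponential tail forgets the past quickly; a Pareto tail with alpha below 2 has infinite variance, which is the kind of slowly-decaying service time that produces the long-range dependence the paper identifies.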

4. The Solution: A Smart Robot Manager (Reinforcement Learning)

Since the arrival of broken boxes is unpredictable and the "stubborn boxes" linger, you can't just hire a fixed number of workers. You need a Smart Robot Manager (an AI algorithm) that watches the pile in real-time.

  • The Problem: You have a limited budget (a fixed number of workers). If you switch workers around too often, you waste time and money retraining them (this is called "switching cost").
  • The Solution: The authors built a Reinforcement Learning (RL) algorithm. Think of this as a robot that learns by trial and error.
    • It watches the pile grow.
    • It decides: "Should I send 5 workers now? Or wait until the pile gets bigger?"
    • It balances the cost of moving workers around against the risk of a theft.
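The RL controller itself is beyond this post, but the trade-off it learns can be sketched with a much simpler stand-in: a threshold (hysteresis) policy that re-staffs only when the pile crosses a limit, so worker churn stays rare. Everything below — burst sizes, patch probability, thresholds — is an illustrative assumption, not the paper's algorithm, and unlike the paper this sketch does not enforce a fixed total budget.

```python
import random

def simulate(policy, steps=20_000, seed=3):
    """Bursty toy backlog; `policy(backlog, workers)` picks next step's
    staffing. Each worker patches one box per step with probability 0.5.
    Returns (average backlog, total worker reassignments)."""
    rng = random.Random(seed)
    backlog, workers, switches, backlog_sum = 0, 2, 0, 0
    for _ in range(steps):
        if rng.random() < 0.05:                   # occasional burst of arrivals
            backlog += rng.randint(5, 15)
        new_workers = policy(backlog, workers)
        switches += abs(new_workers - workers)    # proxy for "switching cost"
        workers = new_workers
        fixed = sum(1 for _ in range(workers) if rng.random() < 0.5)
        backlog = max(0, backlog - fixed)
        backlog_sum += backlog
    return backlog_sum / steps, switches

def static_policy(backlog, workers):
    return 2                                      # fixed staffing, never adapts

def hysteresis_policy(backlog, workers):
    if backlog > 8:
        return 6                                  # surge when the pile is dangerous
    if backlog <= 2:
        return 2                                  # back to baseline when calm
    return workers                                # otherwise, don't churn workers

avg_static, _ = simulate(static_policy)
avg_adaptive, n_switches = simulate(hysteresis_policy)
```

Even this crude rule cuts the average pile sharply versus static staffing while keeping reassignments infrequent; the paper's RL agent learns a far better version of the same trade-off under a hard resource budget.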

The Results:
When they tested this robot manager on real-world data:

  • It reduced the average number of broken boxes (active vulnerabilities) by over 90% compared to standard methods.
  • It did this without spending more money; it simply spent the existing budget more intelligently, focusing effort exactly when the pile was getting dangerous.
  • It smoothed out the chaos, preventing those massive, scary spikes in risk.

The Big Takeaway

Cybersecurity isn't about building a higher wall; it's about managing the flow.

  1. Vulnerabilities arrive in unpredictable bursts.
  2. Fixing them takes a long time for some, creating a lingering risk.
  3. AI speeds up both attacks and defenses, often making the situation worse if you aren't careful.
  4. The best defense isn't a static plan; it's an adaptive strategy that dynamically shifts resources to where the danger is highest, right now.

By treating the attack surface as a living, breathing queue rather than a static list, this paper gives defenders a mathematical way to stay one step ahead of the thieves, even when the thieves are using AI.
