Where Do LLM-based Systems Break? A System-Level Security Framework for Risk Assessment and Treatment
This paper proposes a goal-driven, system-level security framework that integrates system modeling, Attack-Defense Trees, and CVSS scoring to assess and mitigate risks in LLM-based systems. Through a healthcare case study, it demonstrates that diverse threats often converge on shared system choke points, so targeted defenses at those points can effectively reduce exploitability.