Where Do LLM-based Systems Break? A System-Level Security Framework for Risk Assessment and Treatment

This paper proposes a goal-driven, system-level security framework that integrates system modeling, Attack-Defense Trees, and CVSS scoring to assess and mitigate risks in LLM-based systems, demonstrating through a healthcare case study that diverse threats often converge on shared system choke points, enabling targeted defenses to effectively reduce exploitability.

Neha Nagaraja, Hayretdin Bahsi · 2026-03-10 · cs

Do Machines Fail Like Humans? A Human-Centred Out-of-Distribution Spectrum for Mapping Error Alignment

This paper proposes a human-centred out-of-distribution spectrum that redefines perceptual difficulty based on human accuracy to enable principled comparisons of model-human error alignment, revealing that while vision-language models show the most consistent alignment across conditions, the relative performance of CNNs and ViTs depends on the specific regime of perceptual challenge.

Binxia Xu, Xiaoliang Luo, Luke Dickens, Robert M. Mok · 2026-03-10 · cs

Give Them an Inch and They Will Take a Mile: Understanding and Measuring Caller Identity Confusion in MCP-Based AI Systems

This paper reveals that MCP-based AI systems lack caller identity authentication, a fundamental weakness in which persistent authorization states and missing per-tool checks allow untrusted callers to gain unauthorized access to sensitive operations.

Yuhang Huang, Boyang Ma, Biwei Yan, Xuelong Dai, Yechao Zhang, Minghui Xu, Kaidi Xu, Yue Zhang · 2026-03-10 · cs

A Unified View of Drifting and Score-Based Models

This paper establishes a unified theoretical framework demonstrating that drifting models, which optimize kernel-based mean-shift discrepancies, are mathematically equivalent to score-matching objectives on kernel-smoothed distributions, thereby precisely connecting them to diffusion models and clarifying their relationship with Distribution Matching Distillation.

Chieh-Hsin Lai, Bac Nguyen, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yuki Mitsufuji, Stefano Ermon, Molei Tao · 2026-03-10 · cs.LG

SketchGraphNet: A Memory-Efficient Hybrid Graph Transformer for Large-Scale Sketch Corpus Recognition

This paper introduces SketchGraphNet, a memory-efficient hybrid graph transformer that models free-hand sketches as structured graphs to achieve state-of-the-art recognition accuracy on the newly constructed 3.44-million-sample SketchGraph benchmark while significantly reducing computational resource requirements.

Shilong Chen, Mingyuan Li, Zhaoyang Wang, Zhonglin Ye, Haixing Zhao · 2026-03-10 · cs

Neural Dynamics-Informed Pre-trained Framework for Personalized Brain Functional Network Construction

This paper proposes a neural dynamics-informed pre-trained framework that overcomes the limitations of traditional atlas-based methods by extracting personalized neural activity representations to guide brain parcellation and correlation estimation, achieving superior performance in constructing personalized brain functional networks across heterogeneous scenarios.

Hongjie Jiang, Yifei Tang, Shuqiang Wang · 2026-03-10 · cs.LG

How Long Can Unified Multimodal Models Generate Images Reliably? Taming Long-Horizon Interleaved Image Generation via Context Curation

This paper introduces UniLongGen, a training-free inference strategy that improves long-horizon interleaved image generation by dynamically curating context to discard accumulated visual noise, thereby overcoming the reliability collapse caused by dense visual token interference in unified multimodal models.

Haoyu Chen, Qing Liu, Yuqian Zhou, He Zhang, Zhaowen Wang, Mengwei Ren, Jingjing Ren, Xiang Wang, Zhe Lin, Lei Zhu · 2026-03-10 · cs

DreamSAC: Learning Hamiltonian World Models via Symmetry Exploration

DreamSAC is a framework that improves extrapolative generalization in physics simulations by combining two components: an unsupervised symmetry exploration strategy that actively probes conservation laws via a Hamiltonian-based curiosity bonus, and a Hamiltonian world model that learns invariant physical states from raw observations through a novel contrastive objective.

Jinzhou Tang, Fan Feng, Minghao Fu, Wenjun Lin, Biwei Huang, Keze Wang · 2026-03-10 · cs.LG

Targeted Speaker Poisoning Framework in Zero-Shot Text-to-Speech

This paper introduces a Speech Generation Speaker Poisoning (SGSP) framework to address privacy risks in zero-shot text-to-speech: it modifies trained models to prevent generation of specific speaker identities while preserving utility for all others, demonstrating effective protection for up to 15 speakers but revealing scalability limits at larger set sizes due to identity overlap.

Thanapat Trachu, Thanathai Lertpetchpun, Sai Praneeth Karimireddy, Shrikanth Narayanan · 2026-03-10 · cs