AttriGuard: Defeating Indirect Prompt Injection in LLM Agents via Causal Attribution of Tool Invocations

The paper proposes AttriGuard, a novel runtime defense that mitigates Indirect Prompt Injection in LLM agents by employing parallel counterfactual tests to causally attribute tool invocations to user intent rather than untrusted external observations, thereby achieving near-perfect attack success rate reduction with minimal utility loss.

Yu He, Haozhe Zhu, Yiming Li, Shuo Shao, Hongwei Yao, Zhihao Liu, Zhan QinThu, 12 Ma💻 cs

Layered Performance Analysis of TLS 1.3 Handshakes: Classical, Hybrid, and Pure Post-Quantum Key Exchange

This paper presents a laboratory study analyzing the performance impact of traditional, hybrid, and pure post-quantum TLS 1.3 key exchange algorithms across multiple layers of stateful HTTP transactions, utilizing a load-balanced architecture to statistically evaluate latency and throughput variations under different response sizes.

David Gómez-Cambronero, Daniel Munteanu, Ana Isabel González-TablasThu, 12 Ma💻 cs

SPARK: Jailbreaking T2V Models by Synergistically Prompting Auditory and Recontextualized Knowledge

This paper introduces SPARK, a jailbreak framework that exploits cross-modal associations in text-to-video models by combining neutral scene anchors, latent auditory triggers, and stylistic modulators to generate semantically unsafe videos that bypass safety guardrails while maintaining a benign appearance.

Zonghao Ying, Moyang Chen, Nizhang Li, Zhiqiang Wang, Wenxin Zhang, Quanchen Zou, Zonglei Jing, Aishan Liu, Xianglong LiuMon, 09 Ma💻 cs

Agent Tools Orchestration Leaks More: Dataset, Benchmark, and Mitigation

This paper identifies and systematically studies "Tools Orchestration Privacy Risk" (TOP-R), a novel vulnerability where autonomous agents inadvertently synthesize sensitive information from non-sensitive tool fragments, and addresses it by introducing the TOP-Bench benchmark, the H-Score metric, and effective mitigation strategies that significantly improve the safety-utility trade-off.

Yuxuan Qiao, Dongqin Liu, Hongchang Yang, Wei Zhou, Songlin HuMon, 09 Ma🤖 cs.AI

Traversal-as-Policy: Log-Distilled Gated Behavior Trees as Externalized, Verifiable Policies for Safe, Robust, and Efficient Agents

This paper proposes "Traversal-as-Policy," a framework that distills sandboxed execution logs into verifiable Gated Behavior Trees to replace implicit LLM policies with explicit, state-conditioned macro traversals, thereby significantly improving success rates, eliminating safety violations, and reducing computational costs across diverse autonomous agent benchmarks.

Peiran Li, Jiashuo Sun, Fangzhou Lin, Shuo Xing, Tianfu Fu, Suofei Feng, Chaoqun Ni, Zhengzhong TuMon, 09 Ma🤖 cs.AI

Privacy-Preserving Collaborative Medical Image Segmentation Using Latent Transform Networks

This paper introduces PPCMI-SF, a privacy-preserving collaborative framework that utilizes client-specific latent transforms and server-side mapping to achieve high-accuracy, real-time medical image segmentation across heterogeneous institutions while effectively resisting inversion and membership inference attacks without sharing raw data.

Saheed Ademola Bello, Muhammad Shahid Jabbar, Muhammad Sohail Ibrahim, Shujaat KhanMon, 09 Ma💻 cs