Poisoning with A Pill: Circumventing Detection in Federated Learning

This paper introduces "Poisoning with A Pill," a three-stage augmentation framework that enhances the stealth and effectiveness of federated learning poisoning attacks by strategically injecting malicious updates into a tiny, novel subnet structure, thereby bypassing existing detection defenses and significantly increasing model error rates across diverse FL scenarios.

Hanxi Guo, Hao Wang, Tao Song, Tianhang Zheng, Yang Hua, Haibing Guan, Xiangyu Zhang2026-04-14🤖 cs.LG

A Multiparty Homomorphic Encryption Approach to Confidential Federated Kaplan Meier Survival Analysis

This paper introduces a privacy-preserving federated framework using threshold CKKS homomorphic encryption to enable multi-institutional Kaplan-Meier survival analysis that achieves high-fidelity results comparable to centralized data while preventing the reconstruction of sensitive individual records through encrypted aggregation and threshold decryption.

Narasimha Raghavan Veeraragavan, Svetlana Boudko, Jan Franz Nygård2026-04-14📊 stat

Property-Preserving Hashing for 1\ell_1-Distance Predicates: Applications to Countering Adversarial Input Attacks

This paper introduces the first property-preserving hashing construction for 1\ell_1-distance predicates, offering a highly efficient and robust method to detect perceptually similar images under adversarial attacks by forcing attackers to introduce significant noise that degrades image quality to evade detection.

Hassan Asghar, Chenhan Zhang, Dali Kaafar2026-04-14🤖 cs.LG

Look Twice before You Leap: A Rational Framework for Localized Adversarial Anonymization

This paper introduces Rational Localized Adversarial Anonymization (RLAA), a fully localized and training-free framework that employs an Attacker-Arbitrator-Anonymizer architecture to overcome the privacy paradox of remote APIs and the utility collapse of local models by replacing greedy adversarial strategies with a rationality-gated mechanism that optimizes the trade-off between privacy gain and utility cost.

Donghang Duan, Xu Zheng, Yuefeng He, Chong Mu, Leyi Cai, Lizong Zhang2026-04-14💬 cs.CL

ADAM: A Systematic Data Extraction Attack on Agent Memory via Adaptive Querying

This paper introduces ADAM, a novel privacy attack that leverages data distribution estimation and entropy-guided querying to systematically extract sensitive information from LLM agent memory, achieving significantly higher success rates than existing methods and highlighting critical vulnerabilities in current agent designs.

Xingyu Lyu, Jianfeng He, Ning Wang, Yidan Hu, Tao Li, Danjue Chen, Shixiong Li, Yimin Chen2026-04-14🤖 cs.AI

Backdoors in RLVR: Jailbreak Backdoors in LLMs From Verifiable Reward

This paper introduces a novel backdoor attack on Reinforcement Learning with Verifiable Rewards (RLVR) frameworks, demonstrating that injecting a small amount of poisoned training data with asymmetric reward signals can effectively implant a trigger that forces large language models to generate harmful responses while maintaining benign task performance.

Weiyang Guo, Zesheng Shi, Zeen Zhu, Yuan Zhou, Min Zhang, Jing Li2026-04-14🤖 cs.AI

Like a Hammer, It Can Build, It Can Break: Large Language Model Uses, Perceptions, and Adoption in Cybersecurity Operations on Reddit

This mixed-methods study analyzes Reddit discussions to reveal that while cybersecurity practitioners increasingly adopt large language models for productivity and efficiency, their autonomy remains constrained by persistent concerns regarding reliability, verification overhead, and security risks.

Souradip Nath, Chih-Yi Huang, Aditi Ganapathi, Kashyap Thimmaraju, Jaron Mink, Gail-Joon Ahn2026-04-14🤖 cs.AI

Encrypted clones can leak: Classification of informative subsets in Quantum Encrypted Cloning

This paper classifies subsets of encrypted-clone storage registers into authorized, non-informative, and partially informative categories, revealing that intermediate unauthorized subsets can leak parity-dependent information about the input state, thereby exposing a structural confidentiality limitation in quantum encrypted cloning.

Gabriele Gianini, Omar Hasan, Corrrado Mio, Stelvio Cimato, Ernesto Damiani2026-04-14⚛️ quant-ph