Certifying the Right to Be Forgotten: Primal-Dual Optimization for Sample and Label Unlearning in Vertical Federated Learning

This paper proposes FedORA, a primal-dual optimization framework that enables efficient and theoretically certified sample and label unlearning in Vertical Federated Learning by introducing a novel uncertainty-promoting loss function and adaptive strategies to minimize computational overhead while preserving model utility.
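
The abstract does not spell out FedORA's uncertainty-promoting loss. One common way to realize such an objective is a KL-divergence-to-uniform penalty on forget-set predictions, pushing unlearned samples toward maximal uncertainty; the sketch below is an illustrative assumption, not the paper's actual loss:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_to_uniform(p):
    """KL(p || uniform): zero iff the prediction is maximally uncertain."""
    k = len(p)
    return sum(pi * math.log(pi * k) for pi in p if pi > 0)

confident = softmax([5.0, 0.0, 0.0])   # model still "remembers" the sample
uncertain = softmax([0.0, 0.0, 0.0])   # fully unlearned: uniform prediction
print(kl_to_uniform(confident) > kl_to_uniform(uncertain))  # True
```

Minimizing such a penalty on forget-set samples, while keeping the ordinary task loss on retained data, is the generic shape of uncertainty-promoting unlearning objectives.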

Yu Jiang, Xindi Tong, Ziyao Liu, Xiaoxi Zhang, Kwok-Yan Lam, Chee Wei Tan · Tue, 10 Ma · cs.LG

Latent Sculpting for Zero-Shot Generalization: A Manifold Learning Approach to Out-of-Distribution Anomaly Detection

The paper proposes "Latent Sculpting," a hierarchical two-stage architecture that combines a Transformer-based encoder with a Binary Latent Sculpting loss and a Masked Autoregressive Flow to enforce explicit geometric boundaries on benign data, achieving robust zero-shot generalization and high detection rates for out-of-distribution cyberattacks on the CIC-IDS-2017 benchmark.

Rajeeb Thapa Chhetri, Saurab Thapa, Avinash Kumar, Zhixiong Chen · Tue, 10 Ma · cs.LG

Revisiting the LiRA Membership Inference Attack Under Realistic Assumptions

This paper re-evaluates the state-of-the-art LiRA membership inference attack under realistic conditions, demonstrating that prior studies significantly overestimate its effectiveness due to overconfident models, improper threshold calibration, and unrealistic priors; reliable privacy auditing therefore requires protocols that reflect practical training practices and reproducibility constraints.
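
For context, LiRA (the Likelihood Ratio Attack) scores a target example by fitting Gaussians to logit-scaled confidences from shadow models trained with and without that example, then taking a log-likelihood ratio. A minimal pure-Python sketch on synthetic confidences (the numbers and helper names are illustrative; the paper's point is that how such scores are calibrated and thresholded drives the reported attack success):

```python
import math
import statistics

def logit_scale(p, eps=1e-6):
    # LiRA works on logit-scaled confidences so Gaussian fits are reasonable
    p = min(max(p, eps), 1 - eps)
    return math.log(p / (1 - p))

def gaussian_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def lira_score(target_conf, in_confs, out_confs):
    """Log-likelihood ratio: higher means more likely a training member."""
    x = logit_scale(target_conf)
    xin = [logit_scale(c) for c in in_confs]
    xout = [logit_scale(c) for c in out_confs]
    mu_in, sd_in = statistics.mean(xin), statistics.stdev(xin)
    mu_out, sd_out = statistics.mean(xout), statistics.stdev(xout)
    return gaussian_logpdf(x, mu_in, sd_in) - gaussian_logpdf(x, mu_out, sd_out)

# Synthetic shadow-model confidences for one target example
in_confs = [0.97, 0.95, 0.99, 0.96, 0.98]   # shadow models trained WITH the example
out_confs = [0.60, 0.55, 0.70, 0.65, 0.50]  # shadow models trained WITHOUT it
print(lira_score(0.97, in_confs, out_confs) > 0)  # True: looks like a member
```

Turning this score into a membership decision requires picking a threshold, and that calibration step is exactly where the paper argues unrealistic assumptions inflate reported results.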

Najeeb Jebreel, Mona Khalil, David Sánchez, Josep Domingo-Ferrer · Tue, 10 Ma · cs.LG

Trusting What You Cannot See: Auditable Fine-Tuning and Inference for Proprietary AI

The paper introduces AFTUNE, a framework that ensures the integrity of cloud-based large language model fine-tuning and inference by employing a lightweight recording and spot-check mechanism to generate verifiable execution traces, thereby enabling clients to practically audit proprietary AI processes without prohibitive overhead.
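
The summary does not detail AFTUNE's recording mechanism. A generic way to get verifiable execution traces with cheap auditing is to hash-chain each step into a tamper-evident digest and let the client re-execute a random subset; the sketch below is a textbook illustration under that assumption, not the paper's protocol:

```python
import hashlib
import random

def record_step(prev_digest: bytes, step_payload: bytes) -> bytes:
    """Chain each execution step into a tamper-evident digest."""
    return hashlib.sha256(prev_digest + step_payload).digest()

def build_trace(steps):
    digests, d = [], b"\x00" * 32  # genesis digest
    for s in steps:
        d = record_step(d, s)
        digests.append(d)
    return digests

def spot_check(steps, digests, k=2, seed=0):
    """Client re-executes k randomly chosen steps and checks the chain."""
    rng = random.Random(seed)
    for i in rng.sample(range(len(steps)), k):
        prev = b"\x00" * 32 if i == 0 else digests[i - 1]
        if record_step(prev, steps[i]) != digests[i]:
            return False
    return True

steps = [b"load-weights", b"apply-lora-update", b"run-inference"]
trace = build_trace(steps)
print(spot_check(steps, trace))               # True: honest trace passes
tampered = list(steps)
tampered[1] = b"skip-update"
print(spot_check(tampered, trace, k=3))       # False: checking every step catches the edit
```

The appeal of spot-checking is that the client audits only k of n steps, so the verification cost stays far below full re-execution while still making undetected tampering improbable.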

Heng Jin, Chaoyu Zhang, Hexuan Yu, Shanghao Shi, Ning Zhang, Y. Thomas Hou, Wenjing Lou · Tue, 10 Ma · cs.LG

Explainable and Hardware-Efficient Jamming Detection for 5G Networks Using the Convolutional Tsetlin Machine

This paper proposes and validates a hardware-efficient, explainable Convolutional Tsetlin Machine (CTM) for real-time 5G jamming detection that achieves accuracy comparable to convolutional neural networks while significantly reducing training time and memory usage, and enables deterministic FPGA deployment.

Vojtech Halenka, Mohammadreza Amini, Per-Arne Andersen, Ole-Christoffer Granmo, Burak Kantarci · Tue, 10 Ma · cs.LG

Enhanced Rényi Entropy-Based Post-Quantum Key Agreement with Provable Security and Information-Theoretic Guarantees

This paper introduces an enhanced post-quantum key agreement protocol that achieves provable, information-theoretic security against quantum adversaries through Rényi entropy-based techniques, distributed polynomial commitments, and quantum-resistant commitments, offering $2^{128}$ security guarantees without relying on computational hardness assumptions.
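
The abstract gives no protocol details, but the role Rényi entropy plays in information-theoretic key agreement is easy to illustrate: min-entropy, the alpha-to-infinity limit of the Rényi family, lower-bounds how many uniform key bits can be extracted from a raw secret. A small illustrative sketch:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(p) in bits, for alpha > 0, alpha != 1."""
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

def min_entropy(p):
    """H_infinity, the alpha -> infinity limit; bounds extractable key bits."""
    return -math.log2(max(p))

uniform = [1 / 256] * 256
print(renyi_entropy(uniform, 2))  # 8.0 bits: collision entropy of a uniform byte
print(min_entropy(uniform))       # 8.0 bits
biased = [0.5] + [0.5 / 255] * 255
print(min_entropy(biased))        # 1.0 bit: bias collapses extractable randomness
```

Because min-entropy rather than Shannon entropy controls extractable key length, a source can look random on average yet certify very few key bits, which is why entropy-based security analyses work with the Rényi family.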

Ruopengyu Xu, Chenglian Liu · Tue, 10 Ma · quant-ph

A Bipartite Quantum Key Distribution Protocol Based on Indefinite Causal Order

The paper proposes a bipartite quantum key distribution protocol that leverages indefinite causal order, specifically utilizing a process matrix resource to achieve an 85.35% raw bit-matching probability between Alice and Bob, which is sufficient for standard error correction and practical implementation.
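
The quoted 85.35% bit-matching probability can be sanity-checked against the Shannon limit for information reconciliation: the raw bit error rate is 1 − 0.8535 ≈ 14.65%, and correcting it leaks at least h(QBER) bits per raw bit, which is below 1, so standard error correction is feasible in principle (the paper's actual key-rate analysis for the process-matrix protocol will of course differ):

```python
import math

def binary_entropy(q):
    """Shannon binary entropy h(q) in bits."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

match = 0.8535                    # raw bit-matching probability from the abstract
qber = 1 - match                  # ~14.65% bit error rate
ec_cost = binary_entropy(qber)    # minimum reconciliation leakage, bits per raw bit
print(round(qber, 4))             # 0.1465
print(round(ec_cost, 3))          # 0.601
print(ec_cost < 1.0)              # True: error correction leaves bits to work with
```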

Mateusz Lesniak, Ryszard Kukulski, Paulina Lewandowska, Grzegorz Rajchel-Mieldzioc, Michał Wronski · Tue, 10 Ma · quant-ph

Kraken: Higher-order EM Side-Channel Attacks on DNNs in Near and Far Field

This paper introduces "Kraken," a novel higher-order electromagnetic side-channel attack that successfully extracts DNN parameters from specialized GPU Tensor Cores in the near field and demonstrates significant weight leakage from Large Language Models at a distance of 100 cm, marking the first time such specialized hardware has been compromised via physical side channels.

Peter Horvath, Ilia Shumailov, Lukasz Chmielewski, Lejla Batina, Yuval Yarom · Thu, 12 Ma · cs

Forging the Unforgeable: On the Feasibility of Counterfeit Watermarks in Backdoor-Based Dataset Ownership Verification

This paper challenges the reliability of backdoor-based dataset ownership verification by demonstrating that attackers can forge statistically indistinguishable watermarks using a lightweight variational autoencoder, thereby undermining the validity of current watermarking schemes as standalone evidence for copyright infringement claims.

Zhiying Li, Zhi Liu, Dongjie Liu, Shengda Zhuo, Guanggang Geng, Zhaoxin Fan, Shanxiang Lyu, Xiaobo Jin, Jian Weng · Thu, 12 Ma · cs