SCITUNE: Aligning Large Language Models with Human-Curated Scientific Multimodal Instructions

The paper introduces SciTune, a framework that aligns large language models with human-curated scientific multimodal instructions, resulting in a model (LLaMA-SciTune) that significantly outperforms state-of-the-art systems on scientific visual and language benchmarks, even surpassing human performance in certain categories.

Sameera Horawalavithana, Sai Munikoti, Ian Stewart, Henry Kvinge, Karl Pazdernik · 2026-04-14 · 💬 cs.CL

CROP: Conservative Reward for Model-based Offline Policy Optimization

This paper proposes CROP, a model-based offline reinforcement learning algorithm that introduces a conservative reward estimator to mitigate distribution shift and overestimation by minimizing both estimation error and the rewards of random actions, achieving competitive performance through a streamlined objective.

Hao Li, Xiao-Hu Zhou, Shu-Hai Li, Mei-Jiang Gui, Xiao-Liang Xie, Shi-Qi Liu, Shuang-Yi Wang, Zhen-Qiu Feng, Zeng-Guang Hou · 2026-04-14 · 🤖 cs.LG
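The conservative reward objective summarized above can be sketched as a toy loss: fit the reward model on logged data while pushing down its predictions on random, out-of-distribution actions. Everything here (the linear reward model, `phi`, `crop_loss`, `beta`) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(s, a):
    """Hypothetical state-action feature map for a linear reward model."""
    return np.concatenate([s, a, s * a])

def crop_loss(w, states, actions, rewards, beta=0.5, n_rand=8):
    """Toy conservative reward loss: squared estimation error on the
    logged dataset plus the mean predicted reward of random actions."""
    data_feats = np.stack([phi(s, a) for s, a in zip(states, actions)])
    est_err = np.mean((data_feats @ w - rewards) ** 2)
    # Penalize predicted rewards of random (likely OOD) actions so the
    # downstream policy is not lured toward unsupported behavior.
    rand_pred = []
    for s in states:
        rand_a = rng.uniform(-1.0, 1.0, size=(n_rand, actions.shape[1]))
        rand_pred.append(np.mean([phi(s, a) @ w for a in rand_a]))
    return est_err + beta * float(np.mean(rand_pred))

# Tiny synthetic dataset (state and action dims equal so s * a is valid).
n, d = 16, 2
states = rng.normal(size=(n, d))
actions = rng.uniform(-1.0, 1.0, size=(n, d))
rewards = rng.normal(size=n)
loss0 = crop_loss(np.zeros(3 * d), states, actions, rewards)
```

With a zero weight vector the random-action penalty vanishes, so the loss reduces to the mean squared reward, which makes the two terms easy to inspect separately.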

Training-Free Multi-User Generative Semantic Communications via Null-Space Diffusion Sampling

This paper proposes a training-free, multi-user generative semantic communication framework that leverages null-space diffusion sampling to transmit only essential bits for receivers to regenerate missing information, thereby optimizing OFDMA systems for next-generation GenAI-based communications.

Eleonora Grassucci, Jinho Choi, Jihong Park, Riccardo F. Gramaccioni, Giordano Cicchetti, Danilo Comminiello · 2026-04-14 · ⚡ eess
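The null-space idea behind this framework can be illustrated with the standard range/null-space decomposition used in diffusion-based restoration: the receiver keeps its generated content in the null space of the measurement operator and replaces the range-space component with what was actually transmitted. The matrix `A`, `nullspace_refine`, and the toy dimensions below are assumptions for illustration, not the paper's system model.

```python
import numpy as np

def nullspace_refine(x_gen, A, y):
    """Make a generated sample consistent with the received measurement
    y = A @ x: keep the generator's null-space component and take the
    range-space component from y, i.e. x = A^+ y + (I - A^+ A) x_gen."""
    A_pinv = np.linalg.pinv(A)
    return A_pinv @ y + (np.eye(A.shape[1]) - A_pinv @ A) @ x_gen

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 8))   # hypothetical compressive measurement matrix
x_true = rng.normal(size=8)
y = A @ x_true                # the few "essential bits" sent over the channel
x_gen = rng.normal(size=8)    # stand-in for a diffusion model's sample
x_hat = nullspace_refine(x_gen, A, y)
```

Because the range-space component is pinned to `y`, the refined sample reproduces the transmitted measurements exactly while the generator fills in the rest.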

Poisoning with A Pill: Circumventing Detection in Federated Learning

This paper introduces "Poisoning with A Pill," a three-stage attack-augmentation framework that makes federated learning poisoning attacks stealthier and more effective. By strategically injecting malicious updates into a tiny, novel subnet structure, the attack bypasses existing detection defenses and significantly increases model error rates across diverse FL scenarios.

Hanxi Guo, Hao Wang, Tao Song, Tianhang Zheng, Yang Hua, Haibing Guan, Xiangyu Zhang · 2026-04-14 · 🤖 cs.LG
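The core stealth idea, confining the malicious payload to a tiny "pill" subnet so the overall update still looks benign, can be sketched in a few lines. This is a heavily simplified illustration of the concept (random index selection, a single flat parameter vector); the function name, `subnet_frac`, and the selection strategy are all assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def pill_update(benign_update, malicious_delta, subnet_frac=0.02):
    """Illustrative sketch: overwrite only a tiny random subnet of a
    benign update with malicious values, leaving the remaining ~98% of
    coordinates untouched so aggregate statistics stay near-benign."""
    u = benign_update.copy()
    k = max(1, int(subnet_frac * u.size))
    idx = rng.choice(u.size, size=k, replace=False)  # the "pill" subnet
    u[idx] = malicious_delta[idx]
    return u, idx

benign = rng.normal(scale=0.01, size=1000)
malicious = rng.normal(scale=0.01, size=1000)
poisoned, idx = pill_update(benign, malicious)
```

Defenses that score whole-update norms or distances see an update that is almost identical to an honest one, which is precisely why subnet-level injection is hard to detect.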

FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual Teacher

FedQUIT is a novel on-device federated unlearning algorithm that leverages a quasi-competent virtual teacher-student framework to effectively remove client data contributions from a global model without requiring additional assumptions beyond standard FedAvg, while significantly reducing communication and computational overhead compared to retraining.

Alessio Mora, Lorenzo Valerio, Paolo Bellavista, Andrea Passarella · 2026-04-14 · 🤖 cs.LG
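One way to picture a "quasi-competent virtual teacher" for unlearning is a teacher that suppresses exactly the signal to be forgotten: on the forget data, zero out the probability the global model assigns to the true class and renormalize, then distill the client model toward those targets. This is a sketch under assumptions (the suppression rule, `virtual_teacher_targets`), not FedQUIT's exact construction.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def virtual_teacher_targets(global_probs, true_labels):
    """Illustrative 'quasi-competent' teacher on forget data: remove the
    true-class probability mass and renormalize, so distilling toward
    these targets erases the class-specific contribution while keeping
    the model's relative beliefs about the other classes."""
    t = global_probs.copy()
    t[np.arange(len(t)), true_labels] = 0.0
    return t / t.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
logits = rng.normal(size=(4, 5))   # global model outputs on forget data
labels = np.array([0, 2, 4, 1])    # labels whose contribution is unlearned
targets = virtual_teacher_targets(softmax(logits), labels)
```

Because the teacher only edits the global model's own outputs, this kind of scheme needs no access to other clients' data, which fits the summary's claim of no assumptions beyond standard FedAvg.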