Effective Resistance Rewiring: A Simple Topological Correction for Over-Squashing

This paper introduces Effective Resistance Rewiring (ERR), a parameter-free topological correction that iteratively adds and removes edges based on global effective resistance to alleviate over-squashing in Graph Neural Networks, and shows that combining ERR with normalization techniques balances improved long-range signal propagation against oversmoothing.

Bertran Miquel-Oliver, Manel Gil-Sorribes, Victor Guallar, Alexis Molina · 2026-03-13 · cs.LG

Delayed Backdoor Attacks: Exploring the Temporal Dimension as a New Attack Surface in Pre-Trained Models

This paper introduces Delayed Backdoor Attacks (DBA), a novel threat paradigm that decouples trigger exposure from malicious activation via a temporal dimension, enabling the use of common words as triggers and demonstrating the feasibility of the DND prototype which remains dormant before achieving near-perfect attack success rates while evading current defenses.

Zikang Ding, Haomiao Yang, Meng Hao, Wenbo Jiang, Kunlan Xiang, Runmeng Du, Yijing Liu, Ruichen Zhang, Dusit Niyato · 2026-03-13 · cs.AI

Learning Transferable Sensor Models via Language-Informed Pretraining

This paper introduces SLIP, an open-source framework that leverages language-informed pretraining with a flexible patch-embedder and cross-attention mechanism to learn transferable sensor representations capable of handling diverse configurations and achieving superior zero-shot performance in classification, captioning, and question answering across 11 datasets.

Yuliang Chen, Arvind Pillai, Yu Yvonne Wu, Tess Z. Griffin, Lisa Marsch, Michael V. Heinz, Nicholas C. Jacobson, Andrew Campbell · 2026-03-13 · cs.AI

Normative Common Ground Replication (NormCoRe): Replication-by-Translation for Studying Norms in Multi-agent AI

This paper introduces NormCoRe, a novel methodological framework that systematically translates human subject experiments into multi-agent AI environments to study normative coordination, demonstrating through a distributive justice replication that AI agents' normative judgments differ from human baselines and are sensitive to foundation model and persona choices.

Luca Deck, Simeon Allmendinger, Lucas Müller, Niklas Kühl · 2026-03-13 · cs.AI

HomeSafe-Bench: Evaluating Vision-Language Models on Unsafe Action Detection for Embodied Agents in Household Scenarios

This paper introduces HomeSafe-Bench, a comprehensive benchmark for evaluating unsafe action detection in household scenarios using 438 diverse cases, and proposes HD-Guard, a hierarchical dual-brain architecture that balances real-time inference efficiency with deep multimodal reasoning for safety monitoring.

Jiayue Pu, Zhongxiang Sun, Zilu Zhang, Xiao Zhang, Jun Xu · 2026-03-13 · cs.AI

LABSHIELD: A Multimodal Benchmark for Safety-Critical Reasoning and Planning in Scientific Laboratories

This paper introduces LABSHIELD, a multimodal benchmark grounded in OSHA and GHS standards that evaluates the safety awareness and reasoning capabilities of large language models in laboratory settings, revealing a significant performance gap in hazard identification and safety-critical planning compared to general-domain tasks.

Qianpu Sun, Xiaowei Chi, Yuhan Rui, Ying Li, Kuangzhi Ge, Jiajun Li, Sirui Han, Shanghang Zhang · 2026-03-13 · cs.AI

BTZSC: A Benchmark for Zero-Shot Text Classification Across Cross-Encoders, Embedding Models, Rerankers and LLMs

This paper introduces BTZSC, a comprehensive benchmark of 22 datasets designed to systematically evaluate and compare the zero-shot text classification capabilities of NLI cross-encoders, embedding models, rerankers, and instruction-tuned LLMs, revealing that modern rerankers currently achieve state-of-the-art performance while embedding models offer the best accuracy-latency trade-off.

Ilias Aarab · 2026-03-13 · cs.CL
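The embedding-model route that BTZSC finds offers the best accuracy-latency trade-off reduces zero-shot classification to nearest-label retrieval: embed the document and each label description once, then pick the label with the highest cosine similarity. A minimal sketch of that scoring step, with plain NumPy vectors standing in for a real sentence encoder (an assumption; any embedding model would slot in):

```python
import numpy as np

def zero_shot_classify(doc_emb, label_embs):
    """Return the index of the label description whose embedding is
    most cosine-similar to the document embedding."""
    doc = doc_emb / np.linalg.norm(doc_emb)
    labels = label_embs / np.linalg.norm(label_embs, axis=1, keepdims=True)
    return int(np.argmax(labels @ doc))   # cosine similarity after normalization
```

Because label embeddings are computed once and reused, inference cost per document is a single encoder pass plus a matrix-vector product — the source of the latency advantage over cross-encoders and rerankers, which must score every (document, label) pair jointly.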

Can RL Improve Generalization of LLM Agents? An Empirical Study

This paper empirically demonstrates that Reinforcement Fine-Tuning (RFT) enables LLM agents to generalize well across task difficulties within a single environment but struggles with cross-environment transfer due to interface and semantic shifts, and that sequential and mixture training strategies effectively mitigate forgetting and improve overall generalization.

Zhiheng Xi, Xin Guo, Jiaqi Liu, Jiazheng Zhang, Yutao Fan, Zhihao Zhang, Shichun Liu, Mingxu Chai, Xiaowei Shi, Yitao Zhai, Xunliang Cai, Tao Gui, Qi Zhang, Xuanjing Huang · 2026-03-13 · cs.AI

An Intent of Collaboration: On Agencies between Designers and Emerging (Intelligent) Technologies

This paper argues that to maintain creative agency while collaborating with emerging intelligent technologies like LLMs, designers must engage in introspection, develop a structural understanding of the technology's capabilities, and deliberately adjust the human-technology working relationship.

Pei-Ying Lin, Julie Heij, Iris Borst, Britt Joosten, Kristina Andersen, Wijnand IJsselsteijn · 2026-03-13 · cs.AI

Slow-Fast Inference: Training-Free Inference Acceleration via Within-Sentence Support Stability

The paper proposes Slow-Fast Inference (SFI), a training-free framework that accelerates long-context autoregressive decoding by dynamically alternating between low-cost fast steps using stable sparse memory and occasional slow steps that refresh context at semantic boundaries, achieving significant throughput gains without compromising generation quality.

Xingyu Xie, Zhaochen Yu, Yue Liao, Tao Wang, Kim-Chuan Toh, Shuicheng Yan · 2026-03-13 · cs.LG
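The slow/fast alternation in SFI is a scheduling idea: most decoding steps are cheap "fast" steps over a sparse cached support, with occasional "slow" steps that rebuild the full context. A toy sketch of that schedule — using a simple stability check as a stand-in for the paper's semantic-boundary trigger, which is an assumption on our part:

```python
def schedule_steps(support_scores, threshold=0.5):
    """Label each decoding step 'fast' (reuse sparse memory) or
    'slow' (refresh full context). A step goes slow whenever the
    cached support's stability score drops below the threshold,
    modeling a semantic boundary where the old support no longer holds."""
    labels = []
    for score in support_scores:
        labels.append('fast' if score >= threshold else 'slow')
    return labels
```

The throughput gain then falls out of the ratio: if only one step in k triggers a slow refresh, the amortized per-token cost approaches that of the sparse fast step alone.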