When Robots Obey the Patch: Universal Transferable Patch Attacks on Vision-Language-Action Models

This paper introduces UPA-RFAS, a unified framework that generates universal and transferable physical adversarial patches to effectively attack diverse Vision-Language-Action (VLA) models across unknown architectures, finetuned variants, and sim-to-real shifts by leveraging robust feature alignment, a two-phase min-max optimization, and VLA-specific attention and semantic losses.

Hui Lu, Yi Yu, Yiming Yang, Chenyu Yi, Qixin Zhang, Bingquan Shen, Alex C. Kot, Xudong Jiang · Wed, 11 Ma · cs.AI
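The core optimization behind a universal adversarial patch can be illustrated independently of the paper's VLA-specific losses. The sketch below is not UPA-RFAS itself: it is a minimal gradient-ascent loop, on toy linear classifiers, in which one shared patch is optimized to raise the average loss over an ensemble of surrogate models (a simple stand-in for the transferability objective); all names and shapes are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def universal_patch(images, labels, models, patch_len=8, lr=0.5, steps=200, eps=1.0):
    """Gradient-ascent sketch of a universal patch: one shared perturbation,
    added to the first `patch_len` pixels of every image, optimized to raise
    the average cross-entropy loss across an ensemble of linear models."""
    patch = np.zeros(patch_len)
    idx = np.arange(patch_len)
    for _ in range(steps):
        grad = np.zeros(patch_len)
        for W in models:                    # expectation over surrogate models
            x = images.copy()
            x[:, idx] += patch
            p = softmax(x @ W)
            g_logits = p - np.eye(W.shape[1])[labels]  # d(NLL)/d(logits)
            grad += (g_logits @ W.T)[:, idx].mean(axis=0)
        patch += lr * grad / len(models)    # ascend the average loss
        patch = np.clip(patch, -eps, eps)   # keep the patch bounded
    return patch
```

A real patch attack would replace the linear models with deep surrogates, restrict the patch to a spatial region, and add physical-world constraints, but the min-over-models / max-over-patch structure is the same.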

Structured Matrix Scaling for Multi-Class Calibration

This paper proposes a structured matrix scaling approach for multi-class calibration that leverages theoretical insights from logistic regression, combined with structured regularization and robust optimization, to effectively manage the bias-variance tradeoff and achieve substantial performance gains over existing methods while providing an open-source implementation.

Eugène Berta, David Holzmüller, Michael I. Jordan, Francis Bach · Wed, 11 Ma · cs.AI
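Matrix scaling itself is a standard post-hoc calibration technique: learn a linear map of the logits that minimizes negative log-likelihood on held-out data. The sketch below is not the paper's estimator; it is a minimal NumPy version with one simple structured regularizer (shrinking the matrix toward the identity), just to make the setting concrete. All names and hyperparameters are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_matrix_scaling(logits, labels, l2=1e-2, lr=0.1, steps=500):
    """Learn W, b minimizing the NLL of softmax(logits @ W.T + b) by gradient
    descent, with an L2 penalty pulling W toward the identity (a simple
    structured prior; W = t*I alone would recover temperature scaling)."""
    n, k = logits.shape
    W, b = np.eye(k), np.zeros(k)
    Y = np.eye(k)[labels]                       # one-hot targets
    for _ in range(steps):
        p = softmax(logits @ W.T + b)
        g = (p - Y) / n                         # d(NLL)/d(calibrated logits)
        W -= lr * (g.T @ logits + l2 * (W - np.eye(k)))
        b -= lr * g.sum(axis=0)
    return W, b
```

Because the objective is the logistic-regression loss in the logits, it is convex in (W, b), so plain gradient descent with a modest step size converges reliably.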

GraphKeeper: Graph Domain-Incremental Learning via Knowledge Disentanglement and Preservation

The paper proposes GraphKeeper, a novel framework for Graph Domain-Incremental Learning that addresses catastrophic forgetting through knowledge disentanglement and deviation-free preservation, achieving state-of-the-art performance across multiple graph domains while remaining compatible with various graph foundation models.

Zihao Guo, Qingyun Sun, Ziwei Zhang, Haonan Yuan, Huiping Zhuang, Xingcheng Fu, Jianxin Li · Wed, 11 Ma · cs.AI

SynHLMA: Synthesizing Hand Language Manipulation for Articulated Object with Discrete Human Object Interaction Representation

This paper introduces SynHLMA, a novel framework that synthesizes hand manipulation sequences for articulated objects by aligning natural language instructions with a discrete human-object interaction representation, thereby enabling robust grasp generation, prediction, and interpolation for applications in embodied AI and robotics.

Wang zhi, Yuyan Liu, Liu Liu, Li Zhang, Ruixuan Lu, Dan Guo · Wed, 11 Ma · cs.AI

From Spatial to Actions: Grounding Vision-Language-Action Model in Spatial Foundation Priors

FALCON addresses the spatial reasoning limitations of existing 2D-based vision-language-action models by leveraging spatial foundation models to inject rich 3D geometric priors directly into the action head, achieving state-of-the-art performance across diverse simulation and real-world tasks without requiring architectural changes or specialized sensors.

Zhengshen Zhang, Hao Li, Yalun Dai, Zhengbang Zhu, Lei Zhou, Chenchen Liu, Dong Wang, Francis E. H. Tay, Sijin Chen, Ziwei Liu, Yuxiao Liu, Xinghang Li, Pan Zhou · Wed, 11 Ma · cs.AI

RL-100: Performant Robotic Manipulation with Real-World Reinforcement Learning

RL-100 is a unified real-world reinforcement learning framework that combines diffusion visuomotor policies with a clipped PPO objective and consistency distillation to achieve 100% success across 1,000 diverse robotic manipulation trials, matching or surpassing human experts while demonstrating robust zero-shot generalization and continuous deployment in dynamic environments.

Kun Lei, Huanyu Li, Dongjie Yu, Zhenyu Wei, Lingxiao Guo, Zhennan Jiang, Ziyu Wang, Shiyu Liang, Huazhe Xu · Wed, 11 Ma · cs.AI
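The clipped PPO objective that RL-100 builds on is standard and compact enough to state directly. The sketch below is the textbook clipped surrogate, not RL-100's full training loop (which additionally involves diffusion policies and consistency distillation); `eps` is the usual clip range.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped PPO surrogate: take the pessimistic (elementwise minimum) of
    the unclipped and clipped importance-ratio objectives, so updates that
    move the ratio far outside [1-eps, 1+eps] get no extra credit.
    `ratio` is pi_new(a|s) / pi_old(a|s); returns a loss to *minimize*."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.minimum(unclipped, clipped).mean()
```

For example, with a positive advantage the objective stops rewarding ratios above 1 + eps, and with a negative advantage it stops rewarding ratios below 1 - eps, which is what keeps each policy update conservative.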

RECODE: Reasoning Through Code Generation for Visual Question Answering

The paper introduces RECODE, an agentic framework that enhances visual question answering by reverse-engineering structured visuals into executable code through iterative generation and selection, transforming ambiguous perceptual tasks into verifiable symbolic reasoning problems; the approach significantly outperforms existing methods.

Junhong Shen, Mu Cai, Bo Hu, Ameet Talwalkar, David A Ross, Cordelia Schmid, Alireza Fathi · Wed, 11 Ma · cs.AI

NavSpace: How Navigation Agents Follow Spatial Intelligence Instructions

This paper introduces the NavSpace benchmark to systematically evaluate the spatial intelligence of navigation agents through six task categories and 1,228 trajectory-instruction pairs, revealing limitations in current models and proposing SNav, a new spatially intelligent navigation model that outperforms existing agents on both the benchmark and real robot tests.

Haolin Yang, Yuxing Long, Zhuoyuan Yu, Zihan Yang, Minghan Wang, Jiapeng Xu, Yihan Wang, Ziyan Yu, Wenzhe Cai, Lei Kang, Hao Dong · Wed, 11 Ma · cs.AI

Latent Speech-Text Transformer

The Latent Speech-Text Transformer (LST) improves the efficiency and performance of auto-regressive speech-text models by aggregating speech tokens into latent patches, which aligns sequence granularity with text, reduces computational costs, and achieves significant accuracy gains across speech and text benchmarks.

Yen-Ju Lu, Yashesh Gaur, Wei Zhou, Benjamin Muller, Jesus Villalba, Najim Dehak, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Srinivasan Iyer, Duc Le · Wed, 11 Ma · cs.AI

VSSFlow: Unifying Video-conditioned Sound and Speech Generation via Joint Learning

VSSFlow introduces a unified flow-matching framework that seamlessly integrates Video-to-Sound and Visual Text-to-Speech generation through a disentangled condition aggregation mechanism, demonstrating that joint learning can surpass specialized state-of-the-art baselines without performance degradation.

Xin Cheng, Yuyue Wang, Xihua Wang, Yihan Wu, Kaisi Guan, Yijing Chen, Peng Zhang, Xiaojiang Liu, Meng Cao, Ruihua Song · Wed, 11 Ma · cs.AI
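The flow-matching training objective underlying frameworks like VSSFlow is simple to state. The sketch below is the generic conditional flow-matching loss with a straight-line probability path, not VSSFlow's video-conditioned architecture: on the interpolant x_t = (1-t)x0 + t*x1, the regression target for the learned velocity field is x1 - x0. The function name and signature are illustrative.

```python
import numpy as np

def flow_matching_loss(v_theta, x0, x1, rng):
    """Conditional flow-matching loss on the straight path between noise x0
    and data x1: sample t ~ U[0,1], form x_t = (1-t)*x0 + t*x1, and regress
    the velocity model v_theta(x_t, t) onto the constant target x1 - x0."""
    t = rng.random((x0.shape[0], 1))        # one time per example
    xt = (1.0 - t) * x0 + t * x1
    target = x1 - x0                        # velocity of the straight path
    pred = v_theta(xt, t)
    return ((pred - target) ** 2).mean()
```

At sampling time, integrating dx/dt = v_theta(x, t) from t = 0 to 1 carries noise to data; conditioning (here, on video) would simply add extra inputs to v_theta.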