DeReCo: Decoupling Representation and Coordination Learning for Object-Adaptive Decentralized Multi-Robot Cooperative Transport

This paper introduces DeReCo, a novel multi-agent reinforcement learning framework that decouples representation and coordination learning through a three-stage training strategy, overcoming the bidirectional interference between the two learning processes and thereby enabling sample-efficient and robust decentralized cooperative transport across objects with diverse shapes and physical properties.

Kazuki Shibata, Ryosuke Sota, Shandil Dhiresh Bosch, Yuki Kadokawa, Tsurumine Yoshihisa, Takamitsu Matsubara
Tue, 10 Ma · cs

Adaptive Vision-Based Control of Redundant Robots with Null-Space Interaction for Human-Robot Collaboration

This paper proposes a novel adaptive vision-based control scheme with null-space interaction for redundant robots that ensures stable, safe, and effective human-robot collaboration in unknown environments by decoupling primary task execution from interactive adjustments, as validated through augmented reality experiments and Lyapunov stability analysis.

Xiangjie Yan, Chen Chen, Xiang Li
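The null-space decoupling this entry describes has a well-known textbook form (not necessarily the paper's exact control law): the primary task velocity is tracked through the Jacobian pseudoinverse, while any interactive adjustment is projected into the Jacobian's null space so it cannot disturb the task. A minimal NumPy sketch with illustrative values:

```python
import numpy as np

# Standard pseudoinverse-based redundancy resolution (illustrative
# numbers, not the paper's controller): a 2-DoF task on a 3-DoF arm.
J = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.2]])        # task Jacobian (2 x 3)
x_dot = np.array([0.1, -0.05])          # desired task-space velocity
q_dot_sec = np.array([0.0, 0.0, 1.0])   # e.g. human-guided secondary motion

J_pinv = np.linalg.pinv(J)
N = np.eye(3) - J_pinv @ J              # null-space projector
q_dot = J_pinv @ x_dot + N @ q_dot_sec  # combined joint velocity command

# The projected secondary motion produces zero task-space velocity,
# so the primary task is tracked exactly:
assert np.allclose(J @ N @ q_dot_sec, np.zeros(2))
assert np.allclose(J @ q_dot, x_dot)
```

Because `J @ N = 0` for any full-row-rank Jacobian, the secondary (interactive) term can be chosen freely without affecting primary task execution, which is the decoupling property the summary refers to.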

Vector Field Augmented Differentiable Policy Learning for Vision-Based Drone Racing

This paper introduces DiffRacing, a novel framework that enhances differentiable policy learning for vision-based drone racing by integrating vector fields to provide stable gradient signals for balancing high-speed gate traversal with obstacle avoidance, while employing a differentiable Delta Action Model to enable robust sim-to-real transfer without explicit system identification.

Yang Su, Feng Yu, Yu Hu, Xinze Niu, Linzuo Zhang, Fangyu Sun, Danping Zou

Dual-Horizon Hybrid Internal Model for Low-Gravity Quadrupedal Jumping with Hardware-in-the-Loop Validation

This paper introduces a Dual-Horizon Hybrid Internal Model that enables stable, continuous quadrupedal jumping under lunar gravity using only proprioceptive sensing, validated through the MATRIX hardware-in-the-loop testbed which emulates reduced gravity and lunar terrain in real time.

Haozhe Xu, Yifei Zhao, Wenhao Feng, Zhipeng Wang, Hongrui Sang, Cheng Cheng, Xiuxian Li, Zhen Yin, Bin He

VORL-EXPLORE: A Hybrid Learning Planning Approach to Multi-Robot Exploration in Dynamic Environments

VORL-EXPLORE is a hybrid learning and planning framework for multi-robot exploration in dynamic environments that couples task allocation with motion execution via a shared navigability fidelity signal, enabling adaptive arbitration between global and reactive policies to prevent bottlenecks and ensure robust, collision-free coverage.

Ning Liu, Sen Shen, Zheng Li, Sheng Liu, Dongkun Han, Shangke Lyu, Thomas Braunl

RAPID: Redundancy-Aware and Compatibility-Optimal Edge-Cloud Partitioned Inference for Diverse VLA models

The paper introduces RAPID, a novel edge-cloud collaborative inference framework designed to optimize the deployment of Vision Language Action models by addressing visual noise interference and step-wise task redundancy, thereby achieving up to a 1.73x speedup with minimal overhead.

Zihao Zheng, Sicheng Tian, Hangyu Cao, Chenyue Li, Jiayu Chen, Maoliang Li, Xinhao Sun, Hailong Zou, Guojie Luo, Xiang Chen

RoboRouter: Training-Free Policy Routing for Robotic Manipulation

RoboRouter is a training-free framework that enhances robotic manipulation performance by intelligently routing diverse, off-the-shelf policies to the most suitable one for each task based on semantic representations and historical execution data, achieving significant success rate improvements in both simulation and real-world settings without requiring additional model training.

Yiteng Chen, Zhe Cao, Hongjia Ren, Chenjie Yang, Wenbo Li, Shiyi Wang, Yemin Wang, Li Zhang, Yanming Shao, Zhenjun Zhao, Huiping Zhuang, Qingyao Wu

Choose What to Observe: Task-Aware Semantic-Geometric Representations for Visuomotor Policy

This paper proposes a task-aware observation interface that canonicalizes raw RGB inputs into unified semantic-geometric representations using segmentation and depth injection, thereby significantly enhancing the robustness of visuomotor policies to out-of-distribution appearance shifts without requiring policy retraining.

Haoran Ding, Liang Ma, Yaxun Yang, Wen Yang, Tianyu Liu, Anqing Duan, Xiaodan Liang, Dezhen Song, Ivan Laptev, Yoshihiko Nakamura

Relating Reinforcement Learning to Dynamic Programming-Based Planning

This paper bridges the gap between dynamic programming-based planning and reinforcement learning by developing a derandomized RL variant, mathematically analyzing the conditions under which their differing formulations (such as cost minimization versus reward maximization and goal termination versus infinite-horizon discounting) are equivalent, and advocating for the optimization of true cost over arbitrary parameters.

Filip V. Georgiev, Kalle G. Timperi, Basak Sakçak, Steven M. LaValle
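The cost-minimization versus reward-maximization equivalence this entry mentions can be seen in a toy example: on the same MDP with the same discount factor, value iteration that minimizes discounted cost c produces exactly the negation of value iteration that maximizes reward r = -c. A small sketch on an arbitrary 3-state MDP (illustrative, not the paper's construction):

```python
import numpy as np

# Tiny deterministic MDP: P[s, a] is the next state, C[s, a] the stage
# cost. State 2 is absorbing with zero cost (a "goal" state).
P = np.array([[1, 2],
              [2, 0],
              [2, 2]])
C = np.array([[1.0, 4.0],
              [0.5, 2.0],
              [0.0, 0.0]])
gamma = 0.9

def value_iteration(stage, combine, iters=200):
    """Run discounted value iteration with the given stage signal and
    Bellman backup operator (min over actions for cost, max for reward)."""
    V = np.zeros(3)
    for _ in range(iters):
        Q = stage + gamma * V[P]    # Q[s, a] via the next-state table
        V = combine(Q, axis=1)
    return V

V_cost = value_iteration(C, np.min)     # DP-style cost minimization
V_reward = value_iteration(-C, np.max)  # RL-style reward maximization, r = -c

# The two formulations agree up to a sign flip:
assert np.allclose(V_reward, -V_cost)
```

This sign-flip equivalence is the elementary case; the paper's analysis concerns the harder mismatches, such as goal-terminating shortest-path formulations versus infinite-horizon discounting, where the two framings are not trivially interchangeable.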

Reasoning Knowledge-Gap in Drone Planning via LLM-based Active Elicitation

This paper introduces MINT, a novel framework that enhances human-AI drone collaboration by using large language models to actively elicit minimal, targeted information from operators to resolve environmental uncertainties, thereby significantly improving task success rates while reducing the need for frequent human intervention.

Zeyu Fang, Beomyeol Yu, Cheng Liu, Zeyuan Yang, Rongqian Chen, Yuxin Lin, Mahdi Imani, Tian Lan

Uncertainty Mitigation and Intent Inference: A Dual-Mode Human-Machine Joint Planning System

This paper proposes a dual-mode human-robot joint planning system that combines an LLM-assisted active elicitation mechanism with real-time intent inference to mitigate task-relevant knowledge gaps and capture latent human intent, significantly reducing interaction costs and execution time in open-world environments.

Zeyu Fang, Yuxin Lin, Cheng Liu, Beomyeol Yu, Zeyuan Yang, Rongqian Chen, Taeyoung Lee, Mahdi Imani, Tian LanTue, 10 Ma💻 cs