Distilled Circuits: A Mechanistic Study of Internal Restructuring in Knowledge Distillation

This paper uses mechanistic interpretability to show that knowledge distillation does more than compress teacher models into smaller students: it fundamentally restructures their internal circuits, reorganizing components, discarding others, and concentrating computation on fewer of them. The authors argue that new metrics are needed to quantify these internal functional shifts beyond mere output similarity.

Reilly Haskins, Benjamin Adams · 2026-03-10 · cs.LG
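For background on the technique this paper dissects: the standard distillation objective (the classic Hinton-style formulation, not the paper's new circuit metrics) matches the student's temperature-softened output distribution to the teacher's. A minimal sketch in plain Python:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients keep a comparable magnitude across T."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Identical logits give zero loss; mismatched logits give a positive loss.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0
print(distillation_loss([2.0, 0.5, -1.0], [0.0, 0.5, 2.0]) > 0)  # True
```

In practice this KL term is combined with a cross-entropy term on the true labels; the paper's contribution is analyzing what this training objective does to the student's internal circuits, which the loss itself does not reveal.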

EgoDex: Learning Dexterous Manipulation from Large-Scale Egocentric Video

To address data scarcity in imitation learning for dexterous manipulation, this paper introduces EgoDex, the largest and most diverse dataset of its kind: 829 hours of egocentric video captured with Apple Vision Pro, with precise, native 3D hand and finger tracking, accompanied by benchmarks for training and evaluating manipulation policies.

Ryan Hoque, Peide Huang, David J. Yoon, Mouli Sivapurapu, Jian Zhang · 2026-03-10 · cs.LG

Online Decision-Focused Learning

This paper introduces the first provably convergent online algorithms for decision-focused learning in dynamic environments by regularizing non-differentiable objectives and employing perturbation techniques to handle non-convexity, thereby establishing static and dynamic regret bounds and demonstrating superior performance over standard benchmarks.

Aymeric Capitaine, Maxime Haddouche, Eric Moulines, Michael I. Jordan, Etienne Boursier, Alain Durmus · 2026-03-10 · cs.LG
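The perturbation idea mentioned in the summary has a well-known generic form (perturbed optimizers in the style of Berthet et al.; the sketch below is background, not the paper's algorithm): a non-differentiable argmax decision is smoothed by averaging it over random perturbations of its input, yielding a probability vector that varies smoothly with the parameters.

```python
import random

def argmax_index(values):
    """Index of the largest entry."""
    return max(range(len(values)), key=lambda i: values[i])

def perturbed_argmax(values, sigma=0.5, n_samples=1000, seed=0):
    """Monte-Carlo estimate of a smoothed argmax: average the one-hot
    argmax over Gaussian-perturbed copies of the input. Unlike the hard
    argmax, the result changes smoothly as `values` changes."""
    rng = random.Random(seed)
    counts = [0] * len(values)
    for _ in range(n_samples):
        noisy = [v + sigma * rng.gauss(0, 1) for v in values]
        counts[argmax_index(noisy)] += 1
    return [c / n_samples for c in counts]

# Close scores split the probability mass; a clearly dominated option
# gets almost none.
probs = perturbed_argmax([1.0, 0.9, -2.0])
```

Smoothing of this kind is what makes regret analysis tractable when the downstream decision step is itself an optimizer rather than a differentiable layer.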

MAS-ZERO: Designing Multi-Agent Systems with Zero Supervision

MAS-ZERO is a novel, self-evolved inference-time framework that automatically designs, critiques, and refines multi-agent system configurations for specific tasks without requiring a validation set, achieving significant performance improvements over manual and existing automatic baselines across reasoning, coding, and agentic benchmarks.

Zixuan Ke, Austin Xu, Yifei Ming, Xuan-Phi Nguyen, Ryan Chin, Caiming Xiong, Shafiq Joty · 2026-03-10 · cs.LG

HDLxGraph: Bridging Large Language Models and HDL Repositories via HDL Graph Databases

The paper proposes HDLxGraph, a novel framework that integrates Abstract Syntax Trees and Data Flow Graphs into Retrieval Augmented Generation to overcome structural and vocabulary mismatches in Hardware Description Language tasks, while also introducing the HDLSearch benchmark to demonstrate significant improvements in search, debugging, and code completion accuracy over existing baselines.

Pingqing Zheng, Jiayin Qin, Fuqi Zhang, Niraj Chitla, Zishen Wan, Shang Wu, Yu Cao, Caiwen Ding, Yang (Katie) Zhao · 2026-03-10 · cs.LG

The Cell Must Go On: Agar.io for Continual Reinforcement Learning

This paper introduces AgarCL, a research platform based on the non-episodic game Agar.io designed to advance continual reinforcement learning by providing a complex, dynamic environment where standard algorithms and existing continual learning methods face significant challenges beyond the traditional stability-plasticity dilemma.

Mohamed A. Mohamed, Kateryna Nekhomiazh, Vedant Vyas, Marcos M. Jose, Andrew Patterson, Marlos C. Machado · 2026-03-10 · cs.LG

X-MethaneWet: A Cross-scale Global Wetland Methane Emission Benchmark Dataset for Advancing Science Discovery with AI

This paper introduces X-MethaneWet, the first cross-scale global wetland methane benchmark dataset combining physics-based simulations and real-world observations, and demonstrates how deep learning models enhanced by transfer learning can significantly improve methane flux prediction and climate modeling.

Yiming Sun, Shuo Chen, Shengyu Chen, Chonghao Qiu, Licheng Liu, Youmi Oh, Sparkle L. Malone, Gavin McNicol, Qianlai Zhuang, Chris Smith, Yiqun Xie, Xiaowei Jia · 2026-03-10 · cs.LG

VISTA: Vision-Language Inference for Training-Free Stock Time-Series Analysis

The paper introduces VISTA, a novel training-free framework that leverages Vision-Language Models to predict stock prices by jointly analyzing textual data and line charts through zero-shot prompting, achieving significant performance improvements over traditional statistical and text-only baselines.

Tina Khezresmaeilzadeh, Parsa Razmara, Seyedarmin Azizi, Mohammad Erfan Sadeghi, Erfan Baghaei Potraghloo · 2026-03-10 · cs.LG

ViTaPEs: Visuotactile Position Encodings for Cross-Modal Alignment in Multimodal Transformers

The paper introduces ViTaPEs, a transformer-based architecture that employs a novel two-stage positional encoding strategy to effectively fuse visual and tactile modalities, achieving state-of-the-art performance and zero-shot generalization across diverse recognition and robotic grasping tasks without relying on pre-trained vision-language models.

Fotios Lygerakis, Ozan Özdenizci, Elmar Rückert · 2026-03-10 · cs.LG
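ViTaPEs' two-stage visuotactile positional encoding is the paper's own design; as generic background on what positional encodings inject into a transformer, here is the standard fixed sinusoidal table (Vaswani et al.), which such schemes typically build on or replace:

```python
import math

def sinusoidal_positions(num_positions, dim):
    """Standard fixed sinusoidal positional encodings: even channels use
    sine, odd channels cosine, with geometrically spaced wavelengths so
    each position gets a unique, smoothly varying signature."""
    table = []
    for pos in range(num_positions):
        row = []
        for i in range(dim):
            angle = pos / (10000 ** (2 * (i // 2) / dim))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        table.append(row)
    return table

# Position 0 encodes as alternating [0, 1, 0, 1, ...].
pe = sinusoidal_positions(4, 8)
```

Multimodal encoders like the one described above must additionally decide how positions from two modalities (vision and touch) relate, which is exactly what a modality-aware, multi-stage scheme addresses and a single shared table like this one does not.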