CauKer: Classification Time Series Foundation Models Can Be Pretrained on Synthetic Data

The paper introduces CauKer, a novel algorithm that combines Gaussian Process kernel composition with Structural Causal Models to generate diverse, causally coherent synthetic time series, enabling sample-efficient pre-training of classification foundation models that exhibit clear scaling laws across varying dataset sizes and model capacities.

Shifeng Xie, Vasilii Feofanov, Ambroise Odonnat, Lei Zan, Marius Alonso, Jianfeng Zhang, Themis Palpanas, Lujia Pan, Keli Zhang, Ievgen Redko · 2026-03-10 · 🤖 cs.LG
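The core generation idea can be illustrated with a toy sketch: draw each synthetic series from a Gaussian process whose kernel is a random sum/product composition of base kernels. The kernel bank, composition rules, and hyperparameters below are illustrative assumptions, not CauKer's actual configuration (which additionally routes such components through a structural causal model for causal coherence).

```python
import numpy as np

def rbf(t, length_scale=0.2):
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def periodic(t, period=0.25, length_scale=0.5):
    d = np.abs(t[:, None] - t[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length_scale ** 2)

def linear(t, c=0.0):
    return (t[:, None] - c) * (t[None, :] - c)

def sample_composed_gp(n_points=128, rng=None):
    """Draw one series from a randomly composed GP kernel (sums/products)."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 1.0, n_points)
    bank = [rbf, periodic, linear]
    # Start from one base kernel, then compose with 1-2 more via + or *.
    K = bank[rng.integers(len(bank))](t)
    for _ in range(rng.integers(1, 3)):
        k2 = bank[rng.integers(len(bank))](t)
        K = K + k2 if rng.random() < 0.5 else K * k2
    K += 1e-6 * np.eye(n_points)  # jitter for numerical stability
    return rng.multivariate_normal(np.zeros(n_points), K)

series = sample_composed_gp(rng=np.random.default_rng(0))
print(series.shape)  # (128,)
```

Sums and elementwise products of positive semi-definite kernels remain valid kernels, which is what makes random composition a cheap source of diverse series shapes.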

GraphProp: Training the Graph Foundation Models using Graph Properties

GraphProp is a two-phase framework for training graph foundation models that first learns structural generalization by predicting graph invariants and then leverages these representations as positional encodings to enhance cross-domain performance in graph-level tasks, particularly outperforming existing methods in scenarios with limited data or missing node attributes.

Ziheng Sun, Qi Feng, Lehao Lin, Chris Ding, Jicong Fan · 2026-03-10 · 🤖 cs.LG
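The first phase amounts to supervising a model on cheap structural targets. A minimal sketch of such targets, assuming simple invariants like edge and triangle counts (which invariants GraphProp actually predicts is not specified here):

```python
import numpy as np

def graph_invariants(A):
    """A: symmetric 0/1 adjacency matrix. Returns a vector of simple
    structural invariants usable as a pretraining regression target."""
    deg = A.sum(1)
    n_edges = deg.sum() / 2
    # trace(A^3) counts each triangle 6 times (3 vertices x 2 directions).
    n_triangles = np.trace(A @ A @ A) / 6
    return np.array([A.shape[0], n_edges, n_triangles, deg.mean(), deg.max()])

# Triangle graph: 3 nodes, 3 edges, 1 triangle.
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(graph_invariants(A))  # [3. 3. 1. 2. 2.]
```

Because such invariants are attribute-free, a model trained to predict them learns purely structural representations, which is why the approach helps when node attributes are missing.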

Video-EM: Event-Centric Episodic Memory for Long-Form Video Understanding

Video-EM introduces a training-free, event-centric episodic memory framework that enhances long-form video understanding by orchestrating an LLM to localize, segment, and refine query-relevant moments into a compact, temporally coherent event timeline, thereby overcoming the context limitations of existing Video-LLMs without requiring architectural changes.

Yun Wang, Long Zhang, Jingren Liu, Jiaqi Yan, Zhanjie Zhang, Jiahao Zheng, Ao Ma, Run Ling, Xun Yang, Dapeng Wu, Xiangyu Chen, Xuelong Li · 2026-03-10 · 💻 cs

Entropy-Driven Curriculum for Multi-Task Training in Human Mobility Prediction

This paper proposes a unified training framework that combines entropy-driven curriculum learning, which sequences training from simple to complex trajectories based on Lempel-Ziv compression, with multi-task learning to simultaneously optimize location, distance, and direction predictions, thereby achieving state-of-the-art performance and significantly faster convergence in human mobility prediction.

Tianye Fang, Xuanshu Luo, Martin Werner · 2026-03-10 · 🤖 cs.LG
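The curriculum ordering can be sketched directly: score each trajectory (as a symbol sequence of visited locations) by a Lempel-Ziv phrase count and train on low-complexity trajectories first. The LZ78-style parsing below is a stand-in; the paper's exact complexity estimator may differ.

```python
def lz_complexity(seq):
    """Phrase count of a simple LZ78-style parsing: each new phrase is
    the shortest prefix-extension not seen before. Regular sequences
    parse into few phrases, irregular ones into many."""
    phrases, phrase, count = set(), (), 0
    for sym in seq:
        phrase = phrase + (sym,)
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ()
    if phrase:  # trailing partial phrase
        count += 1
    return count

trajectories = [
    ["A", "B", "C", "A", "C", "B"],  # varied
    ["A", "A", "A", "A", "A", "A"],  # highly regular
    ["A", "B", "A", "B", "A", "B"],  # periodic
]
curriculum = sorted(trajectories, key=lz_complexity)  # simple -> complex
print([lz_complexity(t) for t in curriculum])  # [3, 4, 5]
```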

Improving the Resilience of Quadrotors in Underground Environments by Combining Learning-based and Safety Controllers

This paper proposes a hybrid control framework that enhances quadrotor resilience in underground environments by using a normalizing flow-based prior as a runtime monitor to dynamically switch between a learning-based controller for efficiency and a safety controller for collision avoidance when encountering out-of-distribution scenarios.

Isaac Ronald Ward, Mark Paral, Kristopher Riordan + 1 more · 2026-03-10 · ⚡ eess
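The runtime-monitor pattern is simple to sketch: score each observation under a density model fit to in-distribution data, and fall back to the safety controller when log-likelihood drops below a threshold. A diagonal Gaussian stands in here for the paper's normalizing flow; the controllers and threshold are illustrative assumptions.

```python
import numpy as np

class DensityMonitor:
    """Toy in-distribution density model (diagonal Gaussian)."""
    def fit(self, X):
        self.mu, self.sigma = X.mean(0), X.std(0) + 1e-8
        return self

    def log_likelihood(self, x):
        z = (x - self.mu) / self.sigma
        return float(-0.5 * np.sum(z**2 + np.log(2 * np.pi * self.sigma**2)))

def select_action(x, monitor, learned_ctrl, safety_ctrl, threshold):
    if monitor.log_likelihood(x) < threshold:
        return safety_ctrl(x)   # out-of-distribution: be conservative
    return learned_ctrl(x)      # in-distribution: use the efficient policy

rng = np.random.default_rng(0)
monitor = DensityMonitor().fit(rng.normal(0, 1, size=(1000, 4)))
in_dist, ood = np.zeros(4), np.full(4, 8.0)
print(monitor.log_likelihood(in_dist) > monitor.log_likelihood(ood))  # True
```

A normalizing flow plays the same role as the Gaussian here, but can represent the multi-modal observation distributions a quadrotor actually encounters.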

Efficient Construction of Implicit Surface Models From a Single Image for Motion Generation

This paper introduces Fast Image-to-Neural Surface (FINS), a lightweight framework that efficiently reconstructs high-fidelity implicit surfaces and signed distance fields (SDFs) from a single image within seconds by leveraging multi-resolution hash grids and pre-trained foundation models, outperforming existing methods in speed and accuracy for robotics applications.

Wei-Teng Chu, Tianyi Zhang, Matthew Johnson-Roberson, Weiming Zhi · 2026-03-10 · 💻 cs

Generative Evolutionary Meta-Solver (GEMS): Scalable Surrogate-Free Multi-Agent Reinforcement Learning

The paper introduces Generative Evolutionary Meta-Solver (GEMS), a scalable, surrogate-free multi-agent reinforcement learning framework that replaces explicit policy populations with a compact generator and latent anchors to achieve significantly faster training, lower memory usage, and higher rewards than traditional methods like PSRO while maintaining game-theoretic guarantees.

Alakh Sharma, Gaurish Trivedi, Kartikey Singh Bhandari, Yash Sinha, Dhruv Kumar, Pratik Narang, Jagat Sesh Challa · 2026-03-10 · 🤖 cs.LG

Mapping Overlaps in Benchmarks through Perplexity in the Wild

This paper introduces "benchmark signatures"—sets of salient tokens from in-the-wild corpora whose perplexity predicts model performance—to reveal nuanced overlaps and distinct capacities across 89 LLM benchmarks, offering a robust alternative to raw performance correlations for understanding the landscape of LLM abilities and the divergence between machine and human semantic organization.

Siyang Wu, Honglin Bao, Sida Li, Ari Holtzman, James A. Evans · 2026-03-10 · 💬 cs.CL
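The signature-extraction step can be caricatured with synthetic data: given per-model perplexities on candidate corpus tokens and per-model benchmark accuracies, keep the tokens whose perplexity best predicts accuracy. Plain correlation ranking stands in for whatever selection procedure the paper actually uses.

```python
import numpy as np

def signature_tokens(ppl, scores, k=2):
    """ppl: (n_models, n_tokens) perplexities; scores: (n_models,) accuracies.
    Returns indices of the k tokens whose perplexity correlates most
    strongly (in absolute value) with benchmark score."""
    ppl_c = ppl - ppl.mean(0)
    s_c = scores - scores.mean()
    corr = (ppl_c * s_c[:, None]).sum(0) / (
        np.linalg.norm(ppl_c, axis=0) * np.linalg.norm(s_c) + 1e-12
    )
    return np.argsort(-np.abs(corr))[:k]

rng = np.random.default_rng(1)
scores = rng.random(20)                       # 20 hypothetical models
ppl = rng.normal(size=(20, 5))                # 5 candidate tokens
ppl[:, 3] = -scores + 0.01 * rng.normal(size=20)  # token 3 tracks accuracy
print(signature_tokens(ppl, scores))          # token 3 ranks first
```

Because the signature is read off in-the-wild perplexity rather than benchmark answers, it stays informative even when raw performance correlations between benchmarks are confounded.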

Your Agent May Misevolve: Emergent Risks in Self-evolving LLM Agents

This paper introduces and empirically validates the concept of "misevolution," demonstrating that self-evolving LLM agents face widespread, emergent risks across model, memory, tool, and workflow pathways that can lead to safety degradation and unintended vulnerabilities, thereby highlighting an urgent need for new safety paradigms.

Shuai Shao, Qihan Ren, Chen Qian, Boyi Wei, Dadi Guo, Jingyi Yang, Xinhao Song, Linfeng Zhang, Weinan Zhang, Dongrui Liu, Jing Shao · 2026-03-10 · 🤖 cs.LG