How Far Can Unsupervised RLVR Scale LLM Training?

This paper provides a comprehensive theoretical and empirical analysis of unsupervised reinforcement learning with verifiable rewards (unsupervised RLVR). It finds that intrinsic-reward methods are fundamentally limited by a confidence-correctness alignment ceiling that causes model collapse, and suggests that external rewards grounded in computational asymmetries may offer a scalable alternative.

Bingxiang He, Yuxin Zuo, Zeyuan Liu, Shangziqi Zhao, Zixuan Fu, Junlin Yang, Cheng Qian, Kaiyan Zhang, Yuchen Fan, Ganqu Cui, Xiusi Chen, Youbang Sun, Xingtai Lv, Xuekai Zhu, Li Sheng, Ran Li, Huan-ang Gao, Yuchen Zhang, Bowen Zhou, Zhiyuan Liu, Ning Ding · 2026-03-10 · cs.LG

Impermanent: A Live Benchmark for Temporal Generalization in Time Series Forecasting

This paper introduces Impermanent, a live benchmark that evaluates time-series forecasting models on continuously updated GitHub activity data to assess their robustness against temporal shifts and distributional changes, addressing the limitations of static evaluation protocols that risk data contamination and inflated performance claims.

Azul Garza, Renée Rosillo, Rodrigo Mendoza-Smith, David Salinas, Andrew Robert Williams, Arjun Ashok, Mononito Goswami, José Martín Juárez · 2026-03-10 · cs.LG

Expert-Aided Causal Discovery of Ancestral Graphs

This paper introduces Ancestral GFlowNet (AGFN), a diversity-seeking reinforcement learning algorithm that enables distributional inference over ancestral graphs by iteratively refining its policy through Bayesian aggregation of both ex-ante and uncertain ex-post expert feedback, ultimately converging to the true causal structure even when expert responses are noisy or conflicting.

Tiago da Silva, Bruna Bazaluk, Eliezer de Souza da Silva, António Góis, Salem Lahlou, Dominik Heider, Samuel Kaski, Diego Mesquita, Adèle Helena Ribeiro · 2026-03-09 · cs.LG

Estimation of Energy-dissipation Lower-bounds for Neuromorphic Learning-in-memory

This paper derives model-agnostic theoretical lower bounds on the energy-to-solution metric of ideal neuromorphic learning-in-memory optimizers by analyzing their out-of-equilibrium thermodynamics, demonstrating how matching memory dynamics to optimization processes can overcome energy bottlenecks associated with memory writes and consolidation in large-scale AI workloads.

Zihao Chen, Faiek Ahsan, Johannes Leugering, Gert Cauwenberghs, Shantanu Chakrabartty · 2026-03-09 · cs.AI
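For a sense of scale on energy lower bounds: the classical Landauer limit, E ≥ k_B·T·ln 2 per erased bit, is the textbook thermodynamic floor that dynamics-aware bounds like the ones in this paper refine. The sketch below computes that baseline only; it is background context, not the paper's derived bound, and the function name is our own.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact under the 2019 SI)

def landauer_energy_joules(bits, temperature_k=300.0):
    """Minimum dissipation to erase `bits` bits at temperature T (Kelvin).

    Classical Landauer limit: E = bits * k_B * T * ln(2).
    """
    return bits * K_B * temperature_k * math.log(2)
```

At room temperature (300 K) this gives roughly 2.87e-21 J per bit, many orders of magnitude below the write energy of practical memory devices, which is why tighter, device-aware bounds are of interest.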

Make VLM Recognize Visual Hallucination on Cartoon Character Image with Pose Information

This paper proposes a pose-aware in-context visual learning (PA-ICVL) framework that enhances Vision-Language Models' ability to detect semantic structural visual hallucinations in non-photorealistic cartoon images by integrating pose information alongside RGB data, achieving significant performance improvements over RGB-only baselines.

Bumsoo Kim, Wonseop Shin, Kyuchul Lee, Yonghoon Jung, Sanghyun Seo · 2026-03-09 · cs.AI

Combining X-Vectors and Bayesian Batch Active Learning: Two-Stage Active Learning Pipeline for Speech Recognition

This paper proposes a novel two-stage active learning pipeline for automatic speech recognition that combines unsupervised x-vector clustering with a supervised Bayesian batch selection method to efficiently identify diverse and informative samples, thereby significantly reducing labeling effort while improving model performance across various test conditions.

Ognjen Kundacina, Vladimir Vincan, Dragisa Miskovic · 2026-03-09 · eess
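The two-stage idea above (unsupervised clustering for sample diversity, then an uncertainty-driven batch pick within clusters) can be sketched generically. Everything below is a hypothetical illustration on toy 2-D points, not the paper's implementation, which clusters x-vector speaker embeddings and uses a Bayesian batch-selection criterion; the function names and the predictive-entropy score are our own stand-ins.

```python
import math
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Coordinate-wise mean of a non-empty list of points."""
    n = len(pts)
    return tuple(sum(x) / n for x in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Stage 1: tiny k-means, grouping samples into diverse clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist2(p, centers[c]))
            clusters[j].append(p)
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return clusters

def entropy(probs):
    """Predictive entropy: a simple stand-in for a Bayesian batch score."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_batch(clusters, predict, per_cluster=1):
    """Stage 2: within each cluster, pick the most uncertain samples."""
    batch = []
    for cluster in clusters:
        ranked = sorted(cluster, key=lambda p: entropy(predict(p)), reverse=True)
        batch.extend(ranked[:per_cluster])
    return batch
```

A toy `predict` mapping each point to class probabilities is enough to exercise the pipeline; the design point is that clustering enforces diversity while the per-cluster uncertainty ranking targets informativeness, so neither criterion dominates the labeling budget.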