Multi-Agent DRL for V2X Resource Allocation: Disentangling Challenges and Benchmarking Solutions

This paper addresses the lack of systematic evaluation in Multi-Agent Deep Reinforcement Learning for C-V2X resource allocation by introducing a disentangled benchmark suite of interference games and diverse datasets that isolate individual challenges; it identifies policy robustness and generalization across vehicular topologies as the primary hurdle and shows that actor-critic methods outperform value-based approaches.

Siyuan Wang, Lei Lei, Pranav Maheshwari, Sam Bellefeuille, Kan Zheng, Dusit Niyato · 2026-03-10 · cs.LG

Scaling Strategy, Not Compute: A Stand-Alone, Open-Source StarCraft II Benchmark for Accessible Reinforcement Learning Research

To address the complexity gap between StarCraft II's full game and its mini-games, this paper introduces the Two-Bridge Map Suite, an open-source, lightweight benchmark that isolates tactical navigation and combat skills to enable accessible reinforcement learning research under realistic compute budgets.

Sourav Panda, Shreyash Kale, Tanmay Ambadkar, Abhinav Verma, Jonathan Dodge · 2026-03-10 · cs.LG

Consensus is Not Verification: Why Crowd Wisdom Strategies Fail for LLM Truthfulness

The paper demonstrates that, unlike in domains with external verifiers, scaling inference compute through crowd-wisdom strategies fails to improve LLM truthfulness in unverified settings: correlated model errors and the inability to distinguish social prediction from truth verification cause aggregation to reinforce shared misconceptions rather than identify correct answers.
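A minimal sketch of this failure mode, under an invented toy setup (the TRUTH/MISCONCEPTION labels and the 60% shared-error rate are assumptions for illustration, not the paper's data): when every sample carries the same systematic bias, adding more votes makes the majority more confidently wrong.

```python
import random

random.seed(0)

TRUTH = "A"
MISCONCEPTION = "B"

def sample_answer(p_shared_error=0.6):
    # Every call shares the same systematic bias toward the misconception,
    # a crude stand-in for correlated errors across models and samples.
    return MISCONCEPTION if random.random() < p_shared_error else TRUTH

def majority_vote(n_samples):
    votes = [sample_answer() for _ in range(n_samples)]
    return max(set(votes), key=votes.count)

for n in (1, 5, 25, 125):
    hits = sum(majority_vote(n) == TRUTH for _ in range(2000))
    print(f"samples={n:4d}  truthful-vote rate={hits / 2000:.3f}")
```

Because the bias is shared rather than independent, the truthful-vote rate falls toward zero as the number of aggregated samples grows, instead of averaging out as crowd-wisdom intuition would predict.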

Yegor Denisov-Blanch, Joshua Kazdan, Jessica Chudnovsky, Rylan Schaeffer, Sheng Guan, Soji Adeshina, Sanmi Koyejo · 2026-03-10 · cs.LG

Annealed Co-Generation: Disentangling Variables via Progressive Pairwise Modeling

This paper proposes Annealed Co-Generation (ACG), a framework that replaces high-dimensional joint diffusion modeling with a low-dimensional, pairwise approach coupled through a three-stage annealing process to achieve efficient and consistent multivariate co-generation for scientific applications like flow-field completion and antibody generation.

Hantao Zhang, Jieke Wu, Mingda Xu, Xiao Hu, Yingxuan You, Pascal Fua · 2026-03-10 · cs.LG

Evo: Autoregressive-Diffusion Large Language Models with Evolving Balance

The paper introduces Evo, a novel large language model that unifies autoregressive and diffusion-based generation within a continuous evolutionary latent framework, enabling adaptive balancing of planning and refinement to achieve state-of-the-art performance across diverse benchmarks while maintaining fast inference speeds.

Junde Wu, Minhao Hu, Jiayuan Zhu, Yuyuan Liu, Tianyi Zhang, Kang Li, Jingkun Chen, Jiazhen Pan, Min Xu, Yueming Jin · 2026-03-10 · cs.LG

Distilling and Adapting: A Topology-Aware Framework for Zero-Shot Interaction Prediction in Multiplex Biological Networks

This paper proposes a novel topology-aware framework that leverages domain-specific foundation models, a graph tokenizer for multiplex connectivity, and knowledge distillation to achieve robust zero-shot interaction prediction in multiplex biological networks, outperforming state-of-the-art methods.
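As a rough illustration of the knowledge-distillation component, here is a generic soft-label distillation loss in the style of Hinton et al., assuming PyTorch; the paper's actual objective, graph tokenizer, and teacher foundation models are not reproduced here.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Standard soft-label distillation: match the student's tempered
    # distribution to the teacher's. Not the paper's exact formulation.
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Toy usage: logits over {no interaction, interaction} for 4 candidate edges.
student = torch.randn(4, 2, requires_grad=True)
teacher = torch.randn(4, 2)
distillation_loss(student, teacher).backward()
```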

Alana Deng, Sugitha Janarthanan, Yan Sun, Zihao Jing, Pingzhao Hu · 2026-03-10 · cs.LG

Reward Under Attack: Analyzing the Robustness and Hackability of Process Reward Models

This paper reveals that state-of-the-art Process Reward Models (PRMs) are systematically exploitable by adversarial optimization, functioning primarily as fluency detectors rather than reasoning verifiers because their scores track stylistic changes rather than ground-truth accuracy, and it releases a diagnostic framework and benchmark to expose these vulnerabilities.
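A hypothetical sketch of why this dissociation matters in practice: best-of-N selection under a reward model that scores fluency rather than correctness. The candidates, scores, and labels below are invented to illustrate the mechanism, not taken from the paper.

```python
# (text, reward-model score, actually correct?) -- all values invented.
candidates = [
    ("terse but correct derivation",   0.42, True),
    ("polished, confident, and wrong", 0.88, False),
    ("verbose, hedged, correct",       0.57, True),
]

# Best-of-N picks the highest-reward candidate.
text, reward, correct = max(candidates, key=lambda c: c[1])
print(f"selected: {text!r} (reward={reward}, correct={correct})")
# A fluency-tracking PRM selects the polished wrong answer, so best-of-N
# amplifies rather than filters the error.
```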

Rishabh Tiwari, Aditya Tomar, Udbhav Bamba, Monishwaran Maheswaran, Heng Yang, Michael W. Mahoney, Kurt Keutzer, Amir Gholami · 2026-03-10 · cs.LG

From ARIMA to Attention: Power Load Forecasting Using Temporal Deep Learning

This paper empirically demonstrates that a Transformer model using self-attention outperforms traditional ARIMA and recurrent approaches (LSTM, BiLSTM) in short-term power load forecasting on PJM data, achieving a MAPE of 3.8% and highlighting the effectiveness of attention-based architectures for capturing complex temporal patterns.
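For reference, the 3.8% figure is a Mean Absolute Percentage Error. A minimal sketch of the metric, assuming NumPy and invented toy numbers rather than the paper's PJM data:

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean Absolute Percentage Error, in percent. Assumes y_true contains
    # no zeros, which holds for aggregate power load measured in MW.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Toy hourly loads in MW, invented for illustration:
actual   = [31200, 29800, 30500, 33100]
forecast = [30150, 30900, 29700, 34400]
print(f"MAPE = {mape(actual, forecast):.1f}%")
```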

Suhasnadh Reddy Veluru, Sai Teja Erukude, Viswa Chaitanya Marella · 2026-03-10 · cs.LG