Real-Time Aligned Reward Model beyond Semantics

This paper introduces R2M, a novel lightweight RLHF framework that mitigates reward overoptimization by leveraging real-time policy hidden states to dynamically align the reward model with the policy's evolving distribution, rather than relying solely on static semantic representations.

Zixuan Huang, Xin Xia, Yuxi Ren, Jianbin Zheng, Xuefeng Xiao, Hongyan Xie, Li Huaqiu, Songshi Liang, Zhongxiang Dai, Fuzhen Zhuang, Jianxin Li, Yikun Ban, Deqing Wang · 2026-03-10 · 💻 cs
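
For flavor, a minimal PyTorch sketch of the core idea: a reward head that conditions on the policy's live hidden states in addition to a static semantic embedding. All module names, dimensions, and the fusion scheme are illustrative assumptions, not R2M's actual design:

```python
import torch
import torch.nn as nn

class HiddenStateRewardHead(nn.Module):
    """Toy reward head that also consumes the policy's live hidden states.

    Instead of scoring text with a frozen semantic encoder alone, the head
    sees the current policy's pooled representations, so the reward model
    can track the policy's shifting distribution during RLHF.
    """

    def __init__(self, sem_dim: int, policy_dim: int, hidden: int = 256):
        super().__init__()
        self.proj = nn.Linear(sem_dim + policy_dim, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, sem_emb: torch.Tensor, policy_hidden: torch.Tensor) -> torch.Tensor:
        # sem_emb:       (batch, sem_dim)    static semantic embedding of the response
        # policy_hidden: (batch, policy_dim) pooled hidden state from the *current* policy
        fused = torch.cat([sem_emb, policy_hidden], dim=-1)
        return self.score(torch.relu(self.proj(fused))).squeeze(-1)

# Usage: at each RLHF step, pool the policy's last-layer states and feed
# them alongside the usual semantic embedding.
head = HiddenStateRewardHead(sem_dim=768, policy_dim=1024)
rewards = head(torch.randn(4, 768), torch.randn(4, 1024))
print(rewards.shape)  # torch.Size([4])
```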

Impact of LLMs news Sentiment Analysis on Stock Price Movement Prediction

This paper evaluates the impact of LLM-based news sentiment analysis on stock price prediction, demonstrating that DeBERTa outperforms other models and that an ensemble approach achieves 80% accuracy, while sentiment features provide modest improvements to various time-series forecasting architectures.

Walid Siala (SnT, University of Luxembourg, Luxembourg), Ahmed Khanfir (RIADI, ENSI, University of Manouba, Tunisia; SnT, University of Luxembourg, Luxembourg), Mike Papadakis (SnT, University of Luxembourg, Luxembourg) · 2026-03-10 · 💻 cs
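
As a rough illustration of the setup, not the paper's pipeline, here is a toy example of folding a per-day sentiment score into lagged-price features before fitting a forecaster; the data, lag count, and model choice are all invented:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy inputs: daily close prices and a per-day sentiment score in [-1, 1]
# (in the paper this score would come from an LLM reading the day's news).
prices = np.cumsum(rng.normal(0, 1, 300)) + 100.0
sentiment = rng.uniform(-1, 1, 300)

LAGS = 5
X, y = [], []
for t in range(LAGS, len(prices) - 1):
    lagged = prices[t - LAGS:t]                          # price history
    X.append(np.concatenate([lagged, [sentiment[t]]]))   # + sentiment feature
    y.append(prices[t + 1])                              # next-day target

X, y = np.asarray(X), np.asarray(y)
split = int(0.8 * len(X))
model = Ridge().fit(X[:split], y[:split])
print("R^2 on held-out days:", model.score(X[split:], y[split:]))
```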

Do Schwartz Higher-Order Values Help Sentence-Level Human Value Detection? A Study of Hierarchical Gating and Calibration

This paper investigates whether Schwartz higher-order values improve sentence-level human value detection, finding that while hierarchical gating offers limited benefits, calibration techniques and hybrid ensembles significantly boost performance, suggesting the value hierarchy is more effective as an inductive bias than a rigid routing mechanism.

Víctor Yeste, Paolo Rosso · 2026-03-10 · 🤖 cs.LG
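
A toy PyTorch sketch of hierarchical gating: scores for the four higher-order values gate the logits of the fine-grained values nested under them. The grouping follows the standard Schwartz hierarchy; every other detail (dimensions, gate form, value count) is an illustrative assumption:

```python
import torch
import torch.nn as nn

class HierarchicalValueGate(nn.Module):
    """Toy sentence-level value detector with higher-order gating.

    A shared sentence encoding is scored against Schwartz's four
    higher-order values; those scores gate the fine-grained value logits.
    Calibration (e.g., temperature scaling of the final logits) would be
    applied after this module.
    """

    GROUPS = {  # higher-order value -> indices of fine-grained values
        0: [0, 1],       # openness to change
        1: [2, 3],       # self-enhancement
        2: [4, 5, 6],    # conservation
        3: [7, 8, 9],    # self-transcendence
    }

    def __init__(self, dim: int = 768, n_values: int = 10):
        super().__init__()
        self.higher = nn.Linear(dim, len(self.GROUPS))
        self.fine = nn.Linear(dim, n_values)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.higher(h))   # (batch, 4) group scores
        logits = self.fine(h)                   # (batch, 10) fine-grained logits
        gated = logits.clone()
        for g, idxs in self.GROUPS.items():
            gated[:, idxs] = logits[:, idxs] * gate[:, g:g + 1]
        return gated

model = HierarchicalValueGate()
print(model(torch.randn(2, 768)).shape)  # torch.Size([2, 10])
```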

Diffusion-Guided Pretraining for Brain Graph Foundation Models

This paper proposes a unified diffusion-guided pretraining framework for brain graph foundation models that overcomes the limitations of existing methods by using diffusion to preserve semantic connectivity patterns during augmentation and to enable topology-aware global reconstruction, thereby achieving robust and transferable representations across diverse neuroimaging datasets.

Xinxu Wei, Rong Zhou, Lifang He, Yu Zhang · 2026-03-10 · 🤖 cs.LG
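
A heavily simplified sketch of the augmentation half of the idea, assuming a plain Gaussian forward-diffusion over a connectivity matrix; the noise schedule and scale below are invented for illustration and are not the paper's:

```python
import torch

def diffusion_augment(conn: torch.Tensor, t: float, beta: float = 0.02) -> torch.Tensor:
    """Toy forward-diffusion augmentation of a brain connectivity matrix.

    Gaussian noise is blended in at strength t (0 = clean, 1 = fully
    noised), perturbing edge weights smoothly instead of dropping edges
    outright, so global connectivity structure degrades gracefully.
    """
    alpha_bar = (1.0 - beta) ** (t * 1000)       # crude cumulative schedule
    noise = torch.randn_like(conn)
    noisy = alpha_bar ** 0.5 * conn + (1 - alpha_bar) ** 0.5 * noise
    return 0.5 * (noisy + noisy.T)               # keep the matrix symmetric

conn = torch.rand(90, 90)                        # e.g., 90-region connectivity
conn = 0.5 * (conn + conn.T)
aug = diffusion_augment(conn, t=0.3)
recon_loss = torch.mean((aug - conn) ** 2)       # topology-aware losses would go here
print(float(recon_loss))
```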

SToRM: Supervised Token Reduction for Multi-modal LLMs toward efficient end-to-end autonomous driving

This paper proposes SToRM, a novel framework that employs a lightweight importance predictor, supervised training with pseudo-labels, and an anchor-context merging module to significantly reduce visual token redundancy in multi-modal LLMs for autonomous driving, achieving up to 30x computational savings while maintaining end-to-end performance comparable to using all tokens.

Seo Hyun Kim, Jin Bok Park, Do Yeon Koo, Hogun Park, Il Yong Chun · 2026-03-10 · 💻 cs
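
An illustrative PyTorch sketch of the score-then-merge pattern: a small predictor ranks visual tokens, the top-k survive as anchors, and the remainder collapse into a single context token. All sizes and the scorer architecture are assumptions, not SToRM's:

```python
import torch
import torch.nn as nn

class TokenReducer(nn.Module):
    """Toy visual-token reducer: score, keep anchors, merge the rest.

    A small MLP predicts per-token importance (in the paper it is trained
    with pseudo-labels); the top-k tokens are kept as anchors and the
    remaining tokens are averaged into one context token, so the LLM sees
    k + 1 tokens instead of N.
    """

    def __init__(self, dim: int = 1024, keep: int = 32):
        super().__init__()
        self.keep = keep
        self.scorer = nn.Sequential(nn.Linear(dim, 128), nn.GELU(), nn.Linear(128, 1))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, N, dim) visual tokens from the vision encoder
        scores = self.scorer(tokens).squeeze(-1)                 # (batch, N)
        top = scores.topk(self.keep, dim=1).indices              # anchor indices
        mask = torch.ones_like(scores, dtype=torch.bool).scatter(1, top, False)
        anchors = torch.gather(
            tokens, 1, top.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
        denom = mask.sum(1, keepdim=True).unsqueeze(-1).clamp(min=1)
        context = (tokens * mask.unsqueeze(-1)).sum(1, keepdim=True) / denom
        return torch.cat([anchors, context], dim=1)              # (batch, keep + 1, dim)

reducer = TokenReducer()
print(reducer(torch.randn(2, 576, 1024)).shape)  # torch.Size([2, 33, 1024])
```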

Accelerating Robotic Reinforcement Learning with Agent Guidance

This paper introduces Agent-guided Policy Search (AGPS), a framework that replaces human supervisors with a multimodal agent acting as a semantic world model to provide precise corrective guidance, thereby significantly improving sample efficiency and scalability in robotic reinforcement learning compared to traditional Human-in-the-Loop methods.

Haojun Chen, Zili Zou, Chengdong Ma, Yaoxiang Pu, Haotong Zhang, Yuanpei Chen, Yaodong Yang · 2026-03-10 · 💻 cs
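
The loop structure, in a deliberately toy sketch: the agent slots in where the human supervisor would, inspecting each action and optionally correcting it. The agent below is a dummy rule so the loop runs; in AGPS it would be a multimodal model acting as a semantic world model:

```python
import random

def agent_correction(observation, action):
    """Stand-in for the multimodal agent: judge the policy's action and
    propose a correction when it looks implausible. Purely illustrative."""
    if abs(action) > 1.0:                     # "implausible" per the agent
        return max(-1.0, min(1.0, action))    # clipped corrective action
    return None                               # no intervention needed

def rollout_with_guidance(policy, env_step, horizon=100):
    obs, data = 0.0, []
    for _ in range(horizon):
        act = policy(obs)
        fix = agent_correction(obs, act)      # agent replaces the human supervisor
        act = fix if fix is not None else act
        nxt, rew = env_step(obs, act)
        data.append((obs, act, rew, fix is not None))  # corrected steps can be
        obs = nxt                                      # upweighted when training
    return data

policy = lambda o: random.gauss(0, 1.5)
env_step = lambda o, a: (o + a * 0.1, -abs(o))
print(sum(d[3] for d in rollout_with_guidance(policy, env_step)), "interventions")
```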

To Mix or To Merge: Toward Multi-Domain Reinforcement Learning for Large Language Models

This paper introduces M2RL, a comprehensive study comparing mixed multi-task training versus separate training with model merging for multi-domain Reinforcement Learning with Verifiable Rewards (RLVR), revealing that reasoning-intensive domains exhibit synergistic effects with minimal interference and providing mechanistic insights through extensive experiments.

Haoqing Wang, Xiang Long, Ziheng Li, Yilong Xu, Tingguang Li, Yehui Tang · 2026-03-10 · 💻 cs
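
For the merging arm of the comparison, the simplest baseline is parameter averaging of per-domain checkpoints; a minimal sketch (the weighting scheme is illustrative, and the alternative arm is one model trained on the mixed multi-domain data):

```python
import torch

def merge_models(state_dicts, weights=None):
    """Toy weight-space merge of per-domain checkpoints via (optionally
    weighted) parameter averaging, the simplest of the merging schemes a
    mix-vs-merge comparison would cover."""
    weights = weights or [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    return merged

# Usage with toy "models" fine-tuned on different domains:
math_sd = {"w": torch.tensor([1.0, 2.0]), "b": torch.tensor([0.5])}
code_sd = {"w": torch.tensor([3.0, 0.0]), "b": torch.tensor([-0.5])}
print(merge_models([math_sd, code_sd]))  # {'w': tensor([2., 1.]), 'b': tensor([0.])}
```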

SkillsBench: Benchmarking How Well Agent Skills Work Across Diverse Tasks

The paper introduces SkillsBench, a comprehensive benchmark demonstrating that while curated agent skills significantly boost LLM performance across diverse domains—often allowing smaller models to match larger ones—self-generated skills offer no benefit and effects vary widely by task.

Xiangyi Li, Wenbo Chen, Yimin Liu, Shenghan Zheng, Xiaokun Chen, Yifeng He, Yubo Li, Bingran You, Haotian Shen, Jiankai Sun, Shuyi Wang, Binxu Li, Qunhong Zeng, Di Wang, Xuandong Zhao, Yuanli Wang, Roey Ben Chaim, Zonglin Di, Yipeng Gao, Junwei He, Yizhuo He, Liqiang Jing, Luyang Kong, Xin Lan, Jiachen Li, Songlin Li, Yijiang Li, Yueqian Lin, Xinyi Liu, Xuanqing Liu, Haoran Lyu, Ze Ma, Bowei Wang, Runhui Wang, Tianyu Wang, Wengao Ye, Yue Zhang, Hanwen Xing, Yiqi Xue, Steven Dillmann, Han-chung Lee · 2026-03-10 · 💻 cs
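
Mechanically, the intervention being benchmarked reduces to prompt assembly; a hypothetical sketch of injecting a curated skill document into an agent's system message (the message schema below is the generic chat format, not the benchmark's harness):

```python
def build_prompt(task: str, skill_doc: str | None) -> list[dict]:
    """Assemble an agent prompt with an optional skill document.

    The setup prepends a curated skill (procedures, tips, domain
    conventions) to the system message; the benchmark's finding is that
    curated skills help while self-generated ones do not.
    """
    system = "You are a capable task-solving agent."
    if skill_doc:
        system += "\n\n## Skill\n" + skill_doc
    return [{"role": "system", "content": system},
            {"role": "user", "content": task}]

skill = "When editing spreadsheets, confirm the header row before writing."
for messages in (build_prompt("Fix the Q3 totals.", None),
                 build_prompt("Fix the Q3 totals.", skill)):
    print(messages[0]["content"][:80], "...")
```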

A Geometric Taxonomy of Hallucinations in LLMs

This paper proposes a geometric taxonomy of LLM hallucinations into three distinct types (unfaithfulness, confabulation, and factual error) and introduces corresponding detection metrics, the Semantic Grounding Index and Directional Grounding Index, which effectively identify unfaithful and confabulated outputs while revealing that apparent signals for factual errors in existing benchmarks often stem from stylistic annotation confounds rather than genuine geometric distinctions.

Javier Marín · 2026-03-10 · 💬 cs.CL
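
The summary does not define the two indices, so the sketch below is only a stand-in conveying the flavor of embedding-geometry grounding scores; the paper's actual definitions operate over model representations and will differ:

```python
import numpy as np

def semantic_grounding_index(resp: np.ndarray, ctx: np.ndarray) -> float:
    """Illustrative stand-in: cosine similarity between a response
    embedding and its source-context embedding. Low score = the response
    drifts away from its context, suggesting unfaithfulness."""
    return float(resp @ ctx / (np.linalg.norm(resp) * np.linalg.norm(ctx)))

def directional_grounding_index(resp: np.ndarray, ctx: np.ndarray) -> float:
    """Illustrative directional variant: the signed magnitude of the
    response along the context direction (unnormalized projection)."""
    unit_ctx = ctx / np.linalg.norm(ctx)
    return float(resp @ unit_ctx)

rng = np.random.default_rng(1)
ctx = rng.normal(size=384)
faithful = ctx + 0.1 * rng.normal(size=384)   # stays near the context
confabulated = rng.normal(size=384)           # unmoored from the context
print(semantic_grounding_index(faithful, ctx),
      semantic_grounding_index(confabulated, ctx))
```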

Can a Lightweight Automated AI Pipeline Solve Research-Level Mathematical Problems?

This paper demonstrates that a lightweight, automated AI pipeline integrating next-generation large language models with citation-based verification can successfully generate and solve sophisticated, research-grade mathematical problems, including previously unpublished questions, with verified results and open-sourced tools.

Lve Meng (University of Science and Technology of China; Zhongguancun Academy), Weilong Zhao (Université Paris Cité), Yanzhi Zhang (Zhongguancun Academy), Haoxiang Guan (Zhongguancun Academy), Jiyan He (Zhongguancun Academy) · 2026-03-10 · 🔢 math
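
A structural sketch of such a generate-then-verify loop with citation checking; the LLM call is stubbed out and every name, DOI, and return format below is hypothetical:

```python
def generate_solution(problem: str) -> dict:
    """Stub for the LLM call; a real pipeline would query a frontier model.
    Returns a claimed solution plus the references it leans on."""
    return {"answer": "The bound is O(n log n).",
            "citations": ["doi:10.0000/example.2024.001"]}

def verify_citations(citations: list[str], known_corpus: set[str]) -> bool:
    """Citation-based verification: every cited work must resolve to a
    real entry; unverifiable citations flag the solution for rejection."""
    return all(c in known_corpus for c in citations)

def solve(problem: str, corpus: set[str], max_attempts: int = 3):
    for _ in range(max_attempts):
        cand = generate_solution(problem)
        if verify_citations(cand["citations"], corpus):
            return cand                 # accepted: all references check out
    return None                         # escalate or mark unsolved

corpus = {"doi:10.0000/example.2024.001"}
print(solve("Sharpen the sorting lower bound under model X.", corpus))
```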