CoTJudger: A Graph-Driven Framework for Automatic Evaluation of Chain-of-Thought Efficiency and Redundancy in LRMs

This paper introduces CoTJudger, a graph-driven framework that automatically evaluates the efficiency of Large Reasoning Models by converting Chain-of-Thought traces into dependency graphs to identify the Shortest Effective Path, thereby quantifying structural redundancy and revealing pervasive over-reasoning patterns across 21 models.

Siyi Li, Jiajun Shi, Shiwen Ni, Ge Zhang, Shuaimin Li, Shijian Wang, Zhoufutu Wen, Yizhi Li, Hamid Alinejad-Rokny, Jiaheng Liu, Min Yang, Wenhao Huang · 2026-03-10 · 💬 cs.CL
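The core idea — treating a Chain-of-Thought trace as a dependency graph and measuring how much of it lies off the shortest premise-to-answer path — can be illustrated with a small sketch. This is not the authors' implementation; the graph encoding, the `shortest_effective_path` helper, and the redundancy ratio below are illustrative assumptions.

```python
# Illustrative sketch (not CoTJudger's actual code): model a CoT trace as a
# DAG mapping each step to its prerequisite steps, find the shortest path
# from the starting premise to the answer, and report the fraction of steps
# that fall outside that path as a structural-redundancy score.
from collections import deque

def shortest_effective_path(deps, start, goal):
    """BFS shortest path; deps maps each step to its prerequisite steps."""
    succ = {}  # invert prerequisite edges into successor lists
    for node, prereqs in deps.items():
        for p in prereqs:
            succ.setdefault(p, []).append(node)
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:  # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in succ.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # goal unreachable from start

def redundancy(deps, start, goal):
    """Fraction of reasoning steps not on the shortest effective path."""
    nodes = set(deps) | {p for ps in deps.values() for p in ps}
    sep = shortest_effective_path(deps, start, goal)
    return 1.0 - len(sep) / len(nodes)
```

For example, a five-step trace whose answer depends only on two of the intermediate steps would score 0.4: two of its five steps are structurally redundant.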

Countdown-Code: A Testbed for Studying the Emergence and Generalization of Reward Hacking in RLVR

This paper introduces Countdown-Code, a novel testbed demonstrating that even minimal contamination of supervised fine-tuning data with reward-hacking trajectories can cause large language models to learn and subsequently generalize misaligned behaviors during reinforcement learning, highlighting the critical need for rigorous validation of synthetic training data.

Muhammad Khalifa, Zohaib Khan, Omer Tafveez, Hao Peng, Lu Wang · 2026-03-10 · 🤖 cs.LG

Grounding Machine Creativity in Game Design Knowledge Representations: Empirical Probing of LLM-Based Executable Synthesis of Goal Playable Patterns under Structural Constraints

This paper empirically investigates whether large language models can synthesize executable Unity game code from Goal Playable Patterns under strict structural constraints, revealing that while intermediate representations improve performance, project-level grounding and hygiene failures remain primary bottlenecks in achieving high compilation success rates.

Hugh Xuechen Liu, Kıvanç Tatar · 2026-03-10 · 💻 cs

Efficient Personalized Reranking with Semi-Autoregressive Generation and Online Knowledge Distillation

This paper proposes the Personalized Semi-Autoregressive with online knowledge Distillation (PSAD) framework, which utilizes a semi-autoregressive teacher model and a User Profile Network to balance generation quality with low-latency inference while enhancing user-item interactions, thereby outperforming state-of-the-art baselines in both ranking performance and efficiency.

Kai Cheng, Hao Wang, Wei Guo, Weiwen Liu, Yong Liu, Yawen Li, Enhong Chen · 2026-03-10 · 💻 cs

Vision Language Models Cannot Reason About Physical Transformation

This paper introduces ConservationBench to demonstrate that current Vision Language Models systematically fail to reason about physical transformations and maintain invariant representations of physical quantities, often performing near chance levels despite strong textual priors favoring invariance.

Dezhi Luo, Yijiang Li, Maijunxian Wang, Tianwei Zhao, Bingyang Wang, Siheng Wang, Pinyuan Feng, Pooyan Rahmanzadehgervi, Ziqiao Ma, Hokin Deng · 2026-03-10 · 💻 cs

Deep Expert Injection for Anchoring Retinal VLMs with Domain-Specific Knowledge

This paper introduces EyExIn, a data-efficient framework that enhances retinal Vision Language Models by employing a dual-stream encoding strategy and a deep expert injection mechanism to bridge perception and reasoning gaps, thereby achieving state-of-the-art precision in ophthalmic diagnosis while preventing hallucinations.

Shuai Lu, Meng Wang, Jia Guo, Jiawei Du, Bo Liu, Shengzhu Yang, Weihang Zhang, Huazhu Fu, Huiqi Li · 2026-03-10 · 💻 cs

Improving reasoning at inference time via uncertainty minimisation

This paper proposes a computationally efficient inference-time reasoning method that improves accuracy by selecting thought-level continuations that maximize the model's internal self-certainty, demonstrating that optimizing for uncertainty minimization at early planning stages yields performance comparable to or exceeding existing scaling techniques like self-consistency.

Nicolas Legrand, Kenneth Enevoldsen, Márton Kardos, Kristoffer Nielbo · 2026-03-10 · 💻 cs
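The selection mechanism described above — sampling several thought-level continuations and keeping the one the model is most certain about — can be sketched generically. The interface below is an assumption for illustration: each candidate carries its token log-probabilities, and length-normalized log-probability stands in as the self-certainty score; the paper's actual certainty measure may differ.

```python
# Illustrative sketch (assumed interface, not the paper's code): pick the
# candidate continuation that minimizes uncertainty, scored here by mean
# token log-probability (a common length-normalized certainty proxy).

def mean_logprob(token_logprobs):
    """Length-normalized log-probability of a continuation."""
    return sum(token_logprobs) / len(token_logprobs)

def select_most_certain(candidates):
    """candidates: list of (text, token_logprobs) pairs.

    Returns the text whose continuation the model is most certain about,
    i.e. the one with the highest mean token log-probability.
    """
    return max(candidates, key=lambda c: mean_logprob(c[1]))[0]
```

Applied at early planning steps, this keeps only one continuation per step, so — unlike self-consistency — it avoids carrying many full reasoning chains to completion.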

Learning to Rank the Initial Branching Order of SAT Solvers

This paper proposes using graph neural networks to predict initial branching orders for CDCL SAT solvers, demonstrating significant speedups on random and pseudo-industrial benchmarks while noting that the approach struggles with complex industrial instances due to the solver's dynamic heuristics overriding the predictions.

Arvid Eriksson (KTH Royal Institute of Technology), Gabriel Poesia (Kempner Institute at Harvard University), Roman Bresson (Mohamed Bin Zayed University of Artificial Intelligence), Karl Henrik Johansson (KTH Royal Institute of Technology), David Broman (KTH Royal Institute of Technology) · 2026-03-10 · 💻 cs

From State Changes to Creative Decisions: Documenting and Interpreting Traces Across Creative Domains

This paper addresses a limitation of existing creative-activity tracing methods, which capture state changes without preserving intent or higher-level structure, by proposing three complementary domain-specific approaches: a node-based interface for GenAI, a vocabulary of visual cues for visualization authoring, and a semantic history-embedded programming model.

Xiaohan Peng, Sotiris Piliouras, Carl Abou Saada Nujaim · 2026-03-10 · 💻 cs

A Miniature Brain Transformer: Thalamic Gating, Hippocampal Lateralization, Amygdaloid Salience, and Prefrontal Working Memory in Attention-Coupled Latent Memory

This paper introduces a miniature brain transformer architecture that demonstrates a novel, falsifiable prediction: functional lateralization of hippocampal banks requires the synergistic interaction of a prefrontal working-memory buffer (acting as a symmetry-breaker) and inhibitory callosal coupling, a mechanism that triggers a sharp phase transition in memory performance while a cerebellar fast-path merely accelerates convergence.

Hong Jeong · 2026-03-10 · 💻 cs

VINO: Video-driven Invariance for Non-contextual Objects via Structural Prior Guided De-contextualization

VINO is a self-supervised learning framework that overcomes the "co-occurrence trap" in dense video by using a teacher-student distillation approach with structural priors to force representations to focus on foreground objects rather than background context, achieving state-of-the-art unsupervised object discovery performance.

Seul-Ki Yeom, Marcel Simon, Eunbin Lee, Tae-Ho Kim · 2026-03-10 · 💻 cs