Let's Verify Math Questions Step by Step

This paper introduces MathQ-Verify, a novel five-stage pipeline that rigorously filters ill-posed or under-specified mathematical questions through format validation, formalization, contradiction detection, and completeness checks, achieving state-of-the-art performance in curating reliable datasets for training large language models.

Chengyu Shen, Zhen Hao Wong, Runming He, Hao Liang, Meiyi Qiang, Zimo Meng, Zhengyang Zhao, Bohan Zeng, Zhengzhou Zhu, Bin Cui, Wentao Zhang · 2026-03-11 · 🤖 cs.AI
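The staged filtering idea can be illustrated with a minimal sketch. Everything below is hypothetical: the stage names follow the abstract, but the heuristics inside each stage are illustrative placeholders, not MathQ-Verify's actual checks.

```python
from typing import Callable, Optional

def format_valid(q: str) -> bool:
    # Placeholder heuristic: reject empty text or text that is not a question.
    return bool(q.strip()) and q.strip().endswith("?")

def no_contradiction(q: str) -> bool:
    # Placeholder: a real system would formalize the question and check
    # its constraints for mutual consistency.
    return "both even and odd" not in q.lower()

def complete(q: str) -> bool:
    # Placeholder: check that the question actually asks for something.
    return any(w in q.lower() for w in ("find", "what", "how many", "prove"))

# Stages run in order; a question is kept only if every stage passes.
STAGES: list[Callable[[str], bool]] = [format_valid, no_contradiction, complete]

def verify(question: str) -> Optional[str]:
    """Return the name of the first failing stage, or None if all pass."""
    for stage in STAGES:
        if not stage(question):
            return stage.__name__
    return None
```

A well-posed question passes every stage (`verify("What is 2 + 2?")` returns `None`), while a self-contradictory one is rejected at the contradiction stage.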

UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Language Models

The paper introduces UltraEdit, a training-, subject-, and memory-free approach for lifelong language model editing that achieves unprecedented scalability and efficiency by computing parameter shifts in a single step, enabling 7B models to be edited on consumer GPUs with over 2 million updates while outperforming existing methods in speed, memory usage, and accuracy.

Xiaojie Gu, Ziying Huang, Jia-Chen Gu, Kai Zhang · 2026-03-11 · 🤖 cs.AI

Embodied AI with Foundation Models for Mobile Service Robots: A Systematic Review

This paper presents the first systematic review of integrating foundation models into mobile service robotics, analyzing how these technologies address core challenges in perception and control, enabling applications in domestic and healthcare settings while discussing ethical implications and outlining future directions for safe, scalable, and trustworthy deployment.

Matthew Lisondra, Beno Benhabib, Goldie Nejat · 2026-03-11 · 💬 cs.CL

Cooperative Game-Theoretic Credit Assignment for Multi-Agent Policy Gradients via the Core

This paper proposes CORA, a cooperative game-theoretic credit assignment method that utilizes core allocation and coalition sampling to effectively distribute global advantages among agents in multi-agent reinforcement learning, thereby overcoming the limitations of uniform sharing and enhancing coordinated optimal behavior.

Mengda Ji, Genjiu Xu, Keke Jia, Zekun Duan, Yong Qiu, Jianjun Ge, Mingqiang Li · 2026-03-11 · 🤖 cs.AI
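The core condition that CORA builds on can be checked directly for a small cooperative game. This is only a sketch of the standard core definition (efficiency plus coalitional rationality), not CORA's RL training procedure, which additionally uses coalition sampling to keep the check tractable.

```python
from itertools import combinations

def in_core(allocation, value, players):
    """Check whether `allocation` (player -> payoff) lies in the core of the
    cooperative game with characteristic function `value`."""
    # Efficiency: the grand coalition's value is fully distributed.
    if abs(sum(allocation[p] for p in players) - value(frozenset(players))) > 1e-9:
        return False
    # Coalitional rationality: no proper coalition can do better on its own.
    for r in range(1, len(players)):
        for coalition in combinations(players, r):
            if sum(allocation[p] for p in coalition) < value(frozenset(coalition)) - 1e-9:
                return False
    return True
```

For a three-player game where only the grand coalition creates value, the equal split `{a: 1/3, b: 1/3, c: 1/3}` is in the core; an allocation that fails to distribute the full value is not.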

OPENXRD: A Comprehensive Benchmark Framework for LLM/MLLM XRD Question Answering

The paper introduces OPENXRD, a comprehensive benchmark framework featuring 217 expert-curated X-ray diffraction questions that evaluates how large language and multimodal models assimilate domain-specific context, revealing that mid-sized models benefit most from high-quality reference materials while very large models often exhibit saturation or interference.

Ali Vosoughi, Ayoub Shahnazari, Yufeng Xi, Zeliang Zhang, Griffin Hess, Chenliang Xu, Niaz Abdolrahim · 2026-03-11 · 🤖 cs.AI

On the mechanical creation of mathematical concepts

The paper models mathematical problem-solving as a belief-update loop and distinguishes implicit concept formation, which optimizes search within a fixed vocabulary, from explicit concept creation, which introduces new moves to resolve otherwise unsolvable problems; it argues that while current AI excels at the former, achieving the latter is essential for machines to replicate the distinctive nature of mathematical discovery.

Asvin G · 2026-03-11 · 🤖 cs.AI

Debiasing International Attitudes: LLM Agents for Simulating US-China Perception Changes

This study introduces an LLM-agent framework to simulate U.S. citizens' attitudes toward China from 2005 to 2025, demonstrating that while subjective news framing has a modest impact on negative attitudes, a "devil's advocate" agent is the most effective mechanism for debiasing opinions and producing more human-like cognitive outcomes.

Nicholas Sukiennik, Yichuan Xu, Yuqing Kan, Jinghua Piao, Yuwei Yan, Chen Gao, Yong Li · 2026-03-11 · 🤖 cs.AI

Personalized Feature Translation for Expression Recognition: An Efficient Source-Free Domain Adaptation Method

This paper proposes SFDA-PFT, a lightweight source-free domain adaptation method that utilizes a pretrained translator to map subject-specific style features in the latent space, enabling effective facial expression recognition on unlabeled neutral target data without requiring source data or unstable image synthesis.

Masoumeh Sharafi, Soufiane Belharbi, Muhammad Osama Zeeshan, Houssem Ben Salem, Ali Etemad, Alessandro Lameiras Koerich, Marco Pedersoli, Simon Bacon, Eric Granger · 2026-03-11 · 🤖 cs.AI

EgoCross: Benchmarking Multimodal Large Language Models for Cross-Domain Egocentric Video Question Answering

This paper introduces EgoCross, a comprehensive benchmark comprising 1,000 QA pairs across four challenging domains (surgery, industry, extreme sports, and animal perspective) to evaluate and expose the poor cross-domain generalization capabilities of current Multimodal Large Language Models in egocentric video question answering.

Yanjun Li, Yuqian Fu, Tianwen Qian, Qi'ao Xu, Silong Dai, Danda Pani Paudel, Luc Van Gool, Xiaoling Wang · 2026-03-11 · 🤖 cs.AI

TaoSR1: The Thinking Model for E-commerce Relevance Search

TaoSR1 is a framework that enables direct deployment of Large Language Models for e-commerce relevance search through a three-stage training pipeline (Chain-of-Thought fine-tuning, DPO, and GRPO) that mitigates reasoning errors and hallucinations, achieving superior performance in both offline benchmarks and online human evaluations.

Chenhe Dong, Shaowei Yao, Pengkun Jiao, Jianhui Yang, Yiming Jin, Zerui Huang, Xiaojiang Zhou, Dan Ou, Haihong Tang, Bo Zheng · 2026-03-11 · 🤖 cs.AI