COACH meets QUORUM: A Framework and Pipeline for Aligning User, Expert and Developer Perspectives in LLM-generated Health Counselling

This paper introduces QUORUM, a unified evaluation framework, and COACH, an LLM-driven pipeline, to generate and assess personalized health counseling for cancer patients, demonstrating that while stakeholders converge on the system's relevance and quality, they diverge on nuances like tone and error sensitivity, thereby highlighting the critical need for multi-perspective evaluation in trustworthy patient-centered NLP systems.

Yee Man Ng, Bram van Dijk, Pieter Beynen, Otto Boekesteijn, Joris Jansen, Gerard van Oortmerssen, Max van Duijn, Marco Spruit · 2026-03-10 · cs.CL

Revealing Behavioral Plasticity in Large Language Models: A Token-Conditional Perspective

This paper introduces Token-Conditioned Reinforcement Learning (ToCoRL), a framework that leverages the intrinsic behavioral plasticity of Large Language Models to internalize and stabilize inference-time adaptations, enabling precise control over behavioral modes like switching from reasoning to direct answering without degrading overall capabilities.

Liyuan Mao, Le Yu, Jing Zhou, Chujie Zheng, Bowen Yu, Chang Gao, Shixuan Liu, An Yang, Weinan Zhang, JunYang Lin · 2026-03-10 · cs.LG
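The token-conditioning idea behind this entry can be illustrated with a toy sketch (all names and the dispatch logic here are hypothetical, not ToCoRL's actual RL training objective): a control token prepended to the prompt selects a behavioral mode, and a mode-conditional reward scores whether the response style matches the requested mode.

```python
# Toy sketch of token-conditioned behavior modes (hypothetical names,
# not the paper's implementation): a leading control token picks how
# the "model" responds, and a reward checks mode consistency.

MODE_TOKENS = {"<think>": "reasoning", "<direct>": "direct"}

def respond(prompt: str) -> str:
    """Dispatch on a leading control token to pick a behavioral mode."""
    for token, mode in MODE_TOKENS.items():
        if prompt.startswith(token):
            question = prompt[len(token):].strip()
            if mode == "reasoning":
                return f"Step 1: parse '{question}'. Step 2: solve. Answer: 42"
            return "42"  # direct mode: answer only, no visible reasoning
    return "42"  # default mode when no control token is present

def mode_reward(prompt: str, response: str) -> float:
    """Reward = 1 when the response style matches the requested mode."""
    wants_reasoning = prompt.startswith("<think>")
    has_reasoning = response.startswith("Step 1")
    return 1.0 if wants_reasoning == has_reasoning else 0.0
```

In the actual framework the reward would drive RL updates so the mode switch is internalized rather than hard-coded; the sketch only shows the conditioning interface.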

Sandpiper: Orchestrated AI-Annotation for Educational Discourse at Scale

The paper introduces Sandpiper, a mixed-initiative system that integrates interactive researcher dashboards with agentic LLMs to enable scalable, privacy-preserving, and rigorous qualitative analysis of large-scale educational discourse while mitigating hallucinations and ensuring methodological consistency.

Daryl Hedley, Doug Pietrzak, Jorge Dias, Ian Burden, Bakhtawar Ahtisham, Zhuqian Zhou, Kirk Vanacore, Josh Marland, Rachel Slama, Justin Reich, Kenneth Koedinger, René Kizilcec · 2026-03-10 · cs.CL

A prospective clinical feasibility study of a conversational diagnostic AI in an ambulatory primary care clinic

This prospective feasibility study demonstrates that a conversational AI system (AMIE) can safely and effectively conduct clinical history-taking and generate diagnostic suggestions in a real-world urgent care setting, achieving high patient satisfaction and diagnostic accuracy comparable to primary care providers while requiring no real-time human intervention.

Peter Brodeur, Jacob M. Koshy, Anil Palepu, Khaled Saab, Ava Homiar, Roma Ruparel, Charles Wu, Ryutaro Tanno, Joseph Xu, Amy Wang, David Stutz, Hannah M. Ferrera, David Barrett, Lindsey Crowley, Jihyeon Lee, Spencer E. Rittner, Ellery Wulczyn, Selena K. Zhang, Elahe Vedadi, Christine G. Kohn, Kavita Kulkarni, Vinay Kadiyala, Sara Mahdavi, Wendy Du, Jessica Williams, David Feinbloom, Renee Wong, Tao Tu, Petar Sirkovic, Alessio Orlandi, Christopher Semturs, Yun Liu, Juraj Gottweis, Dale R. Webster, Joëlle Barral, Katherine Chou, Pushmeet Kohli, Avinatan Hassidim, Yossi Matias, James Manyika, Rob Fields, Jonathan X. Li, Marc L. Cohen, Vivek Natarajan, Mike Schaekermann, Alan Karthikesalingam, Adam Rodman · 2026-03-10 · cs.LG

Fanar-Sadiq: A Multi-Agent Architecture for Grounded Islamic QA

Fanar-Sadiq is a bilingual multi-agent system that addresses hallucination and source misattribution in Islamic queries by routing diverse requests to specialized modules for grounded retrieval, exact scripture lookup, and deterministic legal calculations, demonstrating high effectiveness and widespread public adoption.

Ummar Abbas, Mourad Ouzzani, Mohamed Y. Eltabakh, Omar Sinan, Gagan Bhatia, Hamdy Mubarak, Majd Hawasly, Mohammed Qusay Hashim, Kareem Darwish, Firoj Alam · 2026-03-10 · cs.CL
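The routing-plus-deterministic-calculation pattern described in this entry can be sketched minimally (the keyword router and function names are illustrative assumptions, not Fanar-Sadiq's architecture; the zakat rule of 2.5% of zakatable wealth at or above the nisab threshold is a standard deterministic calculation of the kind such a module would handle):

```python
def route(query: str) -> str:
    """Illustrative keyword router mapping a query to a specialized module."""
    q = query.lower()
    if "zakat" in q:
        return "calculator"          # deterministic legal calculation
    if "verse" in q or "surah" in q:
        return "scripture_lookup"    # exact scripture retrieval
    return "grounded_retrieval"      # general grounded QA

def zakat_due(wealth: float, nisab: float) -> float:
    """Deterministic zakat: 2.5% of wealth when it meets the nisab threshold."""
    return round(wealth * 0.025, 2) if wealth >= nisab else 0.0
```

Routing exact lookups and calculations away from the generative model is what removes hallucination risk for those query classes: the LLM never produces the scripture text or the number itself.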

Drift-to-Action Controllers: Budgeted Interventions with Online Risk Certificates

The paper introduces Drift2Act, a controller that reframes distribution drift monitoring as constrained decision-making by combining sensing with online risk certificates to dynamically select cost-effective interventions or safety-preserving escalations, thereby achieving near-zero safety violations and rapid recovery under realistic resource constraints.

Ismail Lamaakal, Chaymae Yahyati, Khalid El Makkaoui, Ibrahim Ouahbi, Yassine Maleh · 2026-03-10 · cs.LG
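The controller pattern in this entry, budgeted intervention selection gated by an online risk certificate, can be sketched as follows. The Hoeffding-style bound, the greedy cheapest-first rule, and all names are illustrative assumptions, not Drift2Act's actual algorithm:

```python
import math

def risk_upper_bound(failures: int, n: int, delta: float = 0.05) -> float:
    """Hoeffding-style upper confidence bound on the failure rate."""
    if n == 0:
        return 1.0  # no observations yet: assume worst case
    return failures / n + math.sqrt(math.log(1 / delta) / (2 * n))

def choose_intervention(failures, n, interventions, budget, risk_cap=0.2):
    """Pick the cheapest affordable intervention whose predicted
    post-intervention risk bound stays under risk_cap; fall back to a
    safety-preserving escalation when none qualifies.

    interventions: list of (name, cost, risk_reduction) tuples.
    """
    bound = risk_upper_bound(failures, n)
    if bound <= risk_cap:
        return "no-op"  # certificate says we are safe; spend nothing
    for name, cost, reduction in sorted(interventions, key=lambda x: x[1]):
        if cost <= budget and bound - reduction <= risk_cap:
            return name
    return "escalate"  # budget exhausted or drift too severe
```

For example, with 3 failures in 20 recent samples the bound exceeds 0.4, so a cheap recalibration (reduction 0.1) is skipped in favor of a costlier retrain (reduction 0.3) when the budget allows it.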

OfficeQA Pro: An Enterprise Benchmark for End-to-End Grounded Reasoning

The paper introduces OfficeQA Pro, a challenging enterprise benchmark using a massive corpus of U.S. Treasury Bulletins to demonstrate that current frontier AI agents struggle significantly with grounded, multi-document reasoning, achieving low accuracy even with direct document access and benefiting notably from structured document representations.

Krista Opsahl-Ong, Arnav Singhvi, Jasmine Collins, Ivan Zhou, Cindy Wang, Ashutosh Baheti, Owen Oertell, Jacob Portes, Sam Havens, Erich Elsen, Michael Bendersky, Matei Zaharia, Xing Chen · 2026-03-10 · cs.CL

How Far Can Unsupervised RLVR Scale LLM Training?

This paper provides a comprehensive theoretical and empirical analysis of unsupervised reinforcement learning with verifiable rewards (URLVR), revealing that intrinsic reward methods are fundamentally limited by a confidence-correctness alignment ceiling that causes model collapse, while suggesting that external rewards grounded in computational asymmetries may offer a scalable alternative.

Bingxiang He, Yuxin Zuo, Zeyuan Liu, Shangziqi Zhao, Zixuan Fu, Junlin Yang, Cheng Qian, Kaiyan Zhang, Yuchen Fan, Ganqu Cui, Xiusi Chen, Youbang Sun, Xingtai Lv, Xuekai Zhu, Li Sheng, Ran Li, Huan-ang Gao, Yuchen Zhang, Bowen Zhou, Zhiyuan Liu, Ning Ding · 2026-03-10 · cs.LG
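The confidence-correctness ceiling this entry describes can be made concrete with a small sketch (the entropy-based reward below is one common family of intrinsic rewards, chosen here for illustration; the paper's analysis covers the general case):

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution, in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def intrinsic_reward(probs):
    """Confidence-based intrinsic reward: 0 for a uniform distribution,
    approaching 1 as the model grows certain. Note it never consults
    correctness: a confidently wrong answer earns the same reward as a
    confidently right one, which is the alignment ceiling in miniature."""
    max_h = math.log(len(probs))
    return 1.0 - entropy(probs) / max_h
```

Optimizing such a reward pushes probability mass onto whatever answer the model already prefers, so once confidence and correctness decouple, training reinforces errors and the model can collapse.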