Nwāchā Munā: A Devanagari Speech Corpus and Proximal Transfer Benchmark for Nepal Bhasha ASR

This paper introduces Nwāchā Munā, the first manually transcribed Devanagari speech corpus for the endangered Nepal Bhasha language, and demonstrates that proximal cross-lingual transfer from Nepali achieves automatic speech recognition performance comparable to large multilingual models while being far more computationally efficient.

Rishikesh Kumar Sharma, Safal Narshing Shrestha, Jenny Poudel, Rupak Tiwari, Arju Shrestha, Rupak Raj Ghimire, Bal Krishna Bal (2026-03-10, cs.CL)

KCoEvo: A Knowledge Graph Augmented Framework for Evolutionary Code Generation

KCoEvo is a knowledge graph-augmented framework that addresses the challenges of API-driven code evolution by decomposing migration into path retrieval and informed generation stages, significantly improving accuracy and execution success over standard LLM baselines through structured reasoning and synthetic supervision.

Jiazhen Kang, Yuchen Lu, Chen Jiang, Jinrui Liu, Tianhao Zhang, Bo Jiang, Ningyuan Sun, Tongtong Wu, Guilin Qi (2026-03-10, cs.CL)

StyleBench: Evaluating Speech Language Models on Conversational Speaking Style Control

This paper introduces StyleBench, a multi-turn dialogue benchmark designed to systematically evaluate and quantify the ability of speech language models to control conversational speaking styles across emotion, speed, volume, and pitch dimensions, revealing performance gaps between current models and highlighting directions for future improvement.

Haishu Zhao, Aokai Hao, Yuan Ge, Zhenqiang Hong, Tong Xiao, Jingbo Zhu (2026-03-10, cs.CL)

KohakuRAG: A simple RAG framework with hierarchical document indexing

KohakuRAG is an open-source, hierarchical RAG framework that achieves state-of-the-art performance on the WattBot 2025 Challenge by preserving document structure through a four-level tree representation, enhancing retrieval via LLM-powered query planning, and stabilizing outputs with ensemble voting, thereby outperforming existing methods in precision and citation accuracy.

Shih-Ying Yeh, Yueh-Feng Ku, Ko-Wei Huang, Buu-Khang Tu (2026-03-10, cs.CL)

Scalable Training of Mixture-of-Experts Models with Megatron Core

This paper presents Megatron Core, a scalable and production-ready open-source framework that addresses the coupled memory, communication, and computation challenges of Mixture-of-Experts (MoE) training through integrated system-level optimizations, enabling high-performance training of models ranging from billions to trillions of parameters on large-scale GPU clusters.

Zijie Yan, Hongxiao Bai, Xin Yao, Dennis Liu, Tong Liu, Hongbin Liu, Pingtian Li, Evan Wu, Shiqing Fan, Li Tao, Robin Zhang, Yuzhong Wang, Shifang Xu, Jack Chang, Xuwen Chen, Kunlun Li, Yan Bai, Gao Deng, Nan Zheng, Vijay Anand Korthikanti, Abhinav Khattar, Ethan He, Soham Govande, Sangkug Lym, Zhongbo Zhu, Qi Zhang, Haochen Yuan, Xiaowei Ren, Deyu Fu, Tailai Ma, Shunkang Zhang, Jiang Shao, Ray Wang, Santosh Bhavani, Xipeng Li, Chandler Zhou, David Wu, Yingcan Wei, Ashwath Aithal, Michael Andersch, Mohammad Shoeybi, Jiajie Yao, June Yang (all NVIDIA; 2026-03-10, cs.LG)

Large Language Model for Discrete Optimization Problems: Evaluation and Step-by-step Reasoning

This paper evaluates the capabilities of various large language models, including Llama-3 and ChatGPT, in solving diverse discrete optimization problems using natural language datasets, revealing that while stronger models generally perform better, Chain-of-Thought reasoning is not universally effective and data augmentation can improve performance on simpler tasks despite introducing instability.

Tianhao Qian, Guilin Qi, Z. Y. Wu, Ran Gu, Xuanyi Liu, Canchen Lyu (2026-03-10, cs.CL)

3ViewSense: Spatial and Mental Perspective Reasoning from Orthographic Views in Vision-Language Models

To address the "spatial intelligence gap" where Vision-Language Models struggle with elementary 3D tasks despite strong logical reasoning, the paper introduces 3ViewSense, a framework that leverages an engineering-inspired "Simulate-and-Reason" mechanism to ground spatial understanding in orthographic views, significantly improving performance on occlusion-heavy counting and view-consistent reasoning benchmarks.

Shaoxiong Zhan, Yanlin Lai, Zheng Liu, Hai Lin, Shen Li, Xiaodong Cai, Zijian Lin, Wen Huang, Hai-Tao Zheng (2026-03-10, cs.CL)

Whitening Reveals Cluster Commitment as the Geometric Separator of Hallucination Types

This paper demonstrates that applying PCA-whitening to GPT-2-small embeddings reveals cluster commitment as the geometric separator distinguishing hallucination types, specifically resolving the previously indistinguishable "wrong-well convergence" and "coverage gap" failures while identifying the inability to separate "center-drift" from "wrong-well convergence" as a model capacity limitation rather than a measurement artifact.

Matic Korun (2026-03-10, cs.CL)
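As context for the entry above: PCA whitening is the generic transform the paper applies to GPT-2-small embeddings before its cluster analysis. A minimal NumPy sketch of the transform itself (the paper's hallucination-type analysis is not reproduced here, and the function name is illustrative):

```python
import numpy as np

def pca_whiten(X, eps=1e-12):
    """PCA-whiten the rows of X: rotate into the principal-component
    basis and rescale each component to unit variance, so the
    empirical covariance of the result is the identity."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = Xc.T @ Xc / (len(X) - 1)           # d x d sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric eigendecomposition
    return Xc @ eigvecs / np.sqrt(eigvals + eps)

# Demo on synthetic correlated data (not real embeddings).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))
Xw = pca_whiten(X)
cov_w = np.cov(Xw, rowvar=False)
print(np.allclose(cov_w, np.eye(8), atol=1e-6))  # True: identity covariance
```

After whitening, distances are no longer dominated by a few high-variance embedding directions, which is why cluster geometry (e.g. how tightly points commit to a cluster) becomes easier to compare across regions of the space.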

QuadAI at SemEval-2026 Task 3: Ensemble Learning of Hybrid RoBERTa and LLMs for Dimensional Aspect-Based Sentiment Analysis

The QuadAI system for SemEval-2026 Task 3 achieves superior performance in dimensional aspect-based sentiment regression by employing an ensemble learning framework that combines a hybrid RoBERTa encoder with large language models, leveraging the complementary strengths of both architectures to significantly reduce RMSE and improve correlation scores.

A. J. W. de Vink, Filippos Karolos Ventirozos, Natalia Amat-Lefort, Lifeng Han (2026-03-10, cs.CL)

Breaking Training Bottlenecks: Effective and Stable Reinforcement Learning for Coding Models

This paper introduces MicroCoder-GRPO, an enhanced Group Relative Policy Optimization framework featuring innovations like conditional truncation masking and diversity-driven temperature selection, alongside a challenging new dataset and robust evaluator, to overcome training bottlenecks in modern coding models and achieve significant performance gains on LiveCodeBench v6.

Zongqian Li, Shaohan Huang, Zewen Chi, Yixuan Su, Lexin Zhou, Li Dong, Nigel Collier, Furu Wei (2026-03-10, cs.LG)

Scaling Data Difficulty: Improving Coding Models via Reinforcement Learning on Fresh and Challenging Problems

This paper introduces MicroCoder, a high-quality dataset of curated, recent, and challenging competitive programming problems processed through a four-stage framework with automatic difficulty filtering, which significantly boosts coding model performance on unseen hard tasks compared to existing baselines.

Zongqian Li, Tengchao Lv, Shaohan Huang, Yixuan Su, Qinzheng Sun, Qiufeng Yin, Ying Xin, Scarlett Li, Lei Cui, Nigel Collier, Furu Wei (2026-03-10, cs.LG)

Dual-Metric Evaluation of Social Bias in Large Language Models: Evidence from an Underrepresented Nepali Cultural Context

This study evaluates seven state-of-the-art large language models in the underrepresented Nepali cultural context using a Dual-Metric Bias Assessment framework, revealing that while explicit agreement with biased statements is measurable, implicit generative bias is distinct, follows a non-linear relationship with temperature, and is poorly predicted by agreement metrics, thereby highlighting the critical need for culturally grounded datasets and evaluation strategies.

Ashish Pandey, Tek Raj Chhetri (2026-03-10, cs.CL)

Benchmarking Large Language Models for Quebec Insurance: From Closed-Book to Retrieval-Augmented Generation

This paper introduces the AEPC-QA benchmark to evaluate 51 large language models on Quebec insurance regulations, finding that inference-time reasoning and retrieval-augmented generation can significantly boost accuracy, but that retrieval also tends to cause context distraction and that generalist models outperform specialized French-fine-tuned ones, highlighting critical stability challenges for autonomous deployment.

David Beauchemin, Richard Khoury (2026-03-10, cs.CL)

AI Steerability 360: A Toolkit for Steering Large Language Models

The paper introduces AI Steerability 360, an open-source, Hugging Face-native Python toolkit that provides a unified interface for composing, evaluating, and comparing diverse large language model steering methods across input, structural, state, and output control surfaces.

Erik Miehling, Karthikeyan Natesan Ramamurthy, Praveen Venkateswaran, Irene Ko, Pierre Dognin, Moninder Singh, Tejaswini Pedapati, Avinash Balakrishnan, Matthew Riemer, Dennis Wei, Inge Vejsbjerg, Elizabeth M. Daly, Kush R. Varshney (2026-03-10, cs.CL)

SynPlanResearch-R1: Encouraging Tool Exploration for Deep Research with Synthetic Plans

The paper introduces SynPlanResearch-R1, a framework that synthesizes tool-use trajectories to encourage deeper exploration during supervised fine-tuning, thereby overcoming the limitations of reinforcement learning with verifiable rewards and significantly improving research agent performance across multiple benchmarks.

Hansi Zeng, Zoey Li, Yifan Gao, Chenwei Zhang, Xiaoman Pan, Tao Yang, Fengran Mo, Jiacheng Lin, Xian Li, Jingbo Shang (2026-03-10, cs.CL)