GNNs for Time Series Anomaly Detection: An Open-Source Framework and a Critical Evaluation

This paper introduces an open-source framework for graph neural network (GNN)-based time series anomaly detection, built to enable reproducible experimentation and critical evaluation. The authors demonstrate that GNNs improve both detection performance and interpretability, while highlighting the need for standardized metrics and thresholding strategies.

Federico Bello, Gonzalo Chiarlone, Marcelo Fiori, Gastón García González, Federico Larroca · Wed, 11 Ma · cs.AI
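
The summary describes the GNN detection pipeline only at a high level. As a rough illustration of the forecasting-and-thresholding pattern such detectors typically follow (not the framework's actual API), here is a minimal numpy sketch that stands in a one-hop neighbour average for a trained GNN; the data and every name are illustrative assumptions.

```python
# Minimal sketch of the forecasting-based GNN anomaly-detection pattern:
# predict each sensor from its graph neighbours, score timesteps by the
# prediction residual, and flag scores above a threshold.
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 5                                   # timesteps, sensors
base = np.cumsum(rng.normal(size=(T, 1)), axis=0)
X = base + 0.3 * rng.normal(size=(T, N))        # correlated multivariate series
X[300, 2] += 15.0                               # inject a point anomaly on sensor 2

# "Graph learning" stand-in: connect sensors whose series correlate strongly.
A = (np.abs(np.corrcoef(X.T)) > 0.5).astype(float)
np.fill_diagonal(A, 0.0)
deg = A.sum(axis=1).clip(min=1.0)

# One-hop message passing as the forecaster: predict each sensor at t from its
# neighbours at t-1 (a real framework would train a GNN for this step).
pred = (X[:-1] @ A) / deg
residual = np.abs(X[1:] - pred).max(axis=1)     # per-timestep anomaly score

# Thresholding strategy: flag scores beyond mean + 3 std of the score series.
thresh = residual.mean() + 3.0 * residual.std()
print("flagged timesteps:", np.where(residual > thresh)[0] + 1)
```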

EsoLang-Bench: Evaluating Genuine Reasoning in Large Language Models via Esoteric Programming Languages

The paper introduces EsoLang-Bench, a benchmark built on esoteric programming languages to probe whether large language models genuinely reason. It exposes a dramatic gap between the models' high scores on standard benchmarks and their near-zero accuracy on tasks that require acquiring a new language through documentation and experimentation rather than through memorization.

Aman Sharma, Paras Chopra · Wed, 11 Ma · cs.AI

ActiveUltraFeedback: Efficient Preference Data Generation using Active Learning

The paper introduces ActiveUltraFeedback, an efficient active-learning pipeline that combines uncertainty estimates with novel selection strategies such as Double Reverse Thompson Sampling to generate high-quality preference data. It enables large language models to achieve superior alignment performance with as little as one-sixth of the annotated data that static baselines require.

Davit Melikidze, Marian Schneider, Jessica Lam, Martin Wertich, Ido Hakimi, Barna Pásztor, Andreas Krause · Wed, 11 Ma · cs.AI
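
Double Reverse Thompson Sampling is the paper's own contribution and its details are not given here. The sketch below shows only the generic Thompson-sampling idea it builds on, applied to choosing which preference pair to annotate next; the Beta posteriors and the Bradley-Terry annotator are illustrative assumptions, not the paper's pipeline.

```python
# Generic Thompson sampling for active preference-pair selection: keep a
# posterior over each candidate response's quality and annotate the pair whose
# sampled qualities are closest at the top (the most informative contest).
import numpy as np

rng = np.random.default_rng(1)
n_responses = 8
# Beta posterior over "this response wins a comparison" for each candidate.
alpha = np.ones(n_responses)
beta = np.ones(n_responses)

def pick_pair():
    # Thompson step: draw one plausible quality per response from its
    # posterior, then stage the sampled best against the runner-up.
    samples = rng.beta(alpha, beta)
    j, i = np.argsort(samples)[-2:]
    return int(i), int(j)

true_quality = rng.random(n_responses) + 0.1    # hidden ground truth for the demo
for _ in range(200):
    i, j = pick_pair()
    p_i = true_quality[i] / (true_quality[i] + true_quality[j])  # Bradley-Terry annotator
    w, l = (i, j) if rng.random() < p_i else (j, i)
    alpha[w] += 1.0                              # update the winner's posterior
    beta[l] += 1.0                               # and the loser's

print("posterior means:", np.round(alpha / (alpha + beta), 2))
print("true ranking:   ", np.argsort(-true_quality))
```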

A Multi-Prototype-Guided Federated Knowledge Distillation Approach in AI-RAN Enabled Multi-Access Edge Computing System

This paper proposes Multi-Prototype-Guided Federated Knowledge Distillation (MP-FedKD) for AI-RAN enabled Multi-Access Edge Computing systems. By integrating self-knowledge distillation, a conditional hierarchical agglomerative clustering strategy, and a novel loss function, the approach addresses non-IID data challenges, mitigates the information loss caused by single-prototype averaging, and outperforms state-of-the-art baselines on accuracy and error metrics.

Luyao Zou, Hayoung Oh, Chu Myaet Thwal, Apurba Adhikary, Seohyeon Hong, Zhu Han · Wed, 11 Ma · cs.LG
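
The core multi-prototype idea can be illustrated generically: rather than averaging all of a class's embeddings into one prototype, cluster them and keep one prototype per cluster. The sketch below uses plain agglomerative clustering as a stand-in for the paper's conditional hierarchical strategy; the data and names are synthetic.

```python
# Contrast single- vs multi-prototype construction for one class whose
# embeddings have two distinct modes (as happens under non-IID client data).
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(2)
feats = np.vstack([rng.normal(loc=-2.0, size=(50, 16)),
                   rng.normal(loc=+2.0, size=(50, 16))])

single_prototype = feats.mean(axis=0)        # classic single-average prototype

k = 2                                        # number of prototypes to keep
labels = AgglomerativeClustering(n_clusters=k).fit_predict(feats)
multi_prototypes = np.stack([feats[labels == c].mean(axis=0) for c in range(k)])

# The single prototype collapses toward the origin, far from every actual
# sample; the multi-prototypes land on the two real modes.
print("single prototype norm:", np.linalg.norm(single_prototype).round(2))
print("multi prototype norms:", np.linalg.norm(multi_prototypes, axis=1).round(2))
```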

What is Missing? Explaining Neurons Activated by Absent Concepts

This paper identifies that deep neural networks frequently drive neuron activations through the absence of concepts, a phenomenon largely overlooked by standard explainable-AI methods. It proposes simple extensions to attribution and feature-visualization techniques that reveal these "missing" concepts and leverage them for better model interpretation and debiasing.

Robin Hesse, Simone Schaub-Meyer, Janina Hesse, Bernt Schiele, Stefan Roth · Wed, 11 Ma · cs.LG
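
A toy example makes the phenomenon concrete: a neuron that fires because a feature is absent is invisible to gradient-times-input attribution (a near-zero input earns near-zero credit), while the signed gradient exposes the negative evidence. The hand-built "network" below is an illustrative stand-in, not one of the paper's models.

```python
# A neuron that fires when a concept is ABSENT, and two attribution views.
import numpy as np

def neuron(x):
    # Fires strongly when feature 0 ("stripes") is absent: relu(1 - x0).
    return max(0.0, 1.0 - x[0])

def grad(x, eps=1e-5):
    # Finite-difference gradient of the neuron's activation w.r.t. the input.
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy(); xp[i] += eps
        g[i] = (neuron(xp) - neuron(x)) / eps
    return g

x = np.array([0.0, 0.7])            # stripes absent, another feature present
g = grad(x)
print("activation:      ", neuron(x))   # 1.0 -> neuron is active
print("gradient * input:", g * x)       # ~[0, 0] -> the absence is invisible
print("signed gradient: ", g)           # [-1, 0] -> negative evidence revealed
```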

Good Reasoning Makes Good Demonstrations: Implicit Reasoning Quality Supervision via In-Context Reinforcement Learning

This paper introduces In-Context RLVR, which uses a model's own in-context learning ability to measure "Demonstration Utility" via Evidence Gain. The resulting signal implicitly reweights rewards during Reinforcement Learning with Verifiable Rewards (RLVR) training, prioritizing high-quality reasoning traces over solutions that are correct but reach the answer through flawed reasoning.

Tiehua Mei, Minxuan Lv, Leiyu Pan, Zhenpeng Su, Hongru Hou, Hengrui Chen, Ao Xu, Deqing Yang · Wed, 11 Ma · cs.LG
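
One plausible reading of "Evidence Gain", consistent with this summary but not necessarily the paper's exact definition, is the increase in the model's log-probability of the verified answer when the candidate trace is prepended as a demonstration. A hedged sketch, with a toy stand-in for the LM scoring call:

```python
# Evidence Gain as demonstration utility, used to reweight a verifier reward.
# logprob() is a stand-in for a real LM scoring call (e.g. summing token
# log-probs of `answer` given `context`); the paper may define this differently.
from typing import Callable

def evidence_gain(logprob: Callable[[str, str], float],
                  demo: str, query: str, answer: str) -> float:
    """Utility of `demo`: log p(answer | demo + query) - log p(answer | query)."""
    return logprob(demo + "\n" + query, answer) - logprob(query, answer)

def reweighted_reward(verifier_reward: float, gain: float, lam: float = 0.5) -> float:
    # Correct-but-unhelpful traces (gain <= 0) earn less than correct traces
    # whose reasoning actually transfers as a demonstration.
    return verifier_reward * (1.0 + lam * max(gain, 0.0))

# Toy scorer so the sketch runs: pretends the demo helps iff it mentions the
# key step. A real implementation would query the policy model itself.
def toy_logprob(context: str, answer: str) -> float:
    return -1.0 if "carry the 1" in context else -3.0

g = evidence_gain(toy_logprob, demo="... carry the 1 ...", query="17+25?", answer="42")
print("evidence gain:", g)                      # 2.0
print("reward:", reweighted_reward(1.0, g))     # 2.0
```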

A Unified Hierarchical Multi-Task Multi-Fidelity Framework for Data-Efficient Surrogate Modeling in Manufacturing

This paper proposes a novel hierarchical multi-task multi-fidelity (H-MT-MF) framework for Gaussian process-based surrogate modeling. By unifying inter-task information sharing with fidelity-dependent uncertainty handling, it significantly improves prediction accuracy and data efficiency in manufacturing systems with heterogeneous data sources.

Manan Mehta, Zhiqiao Dong, Yuhang Yang, Chenhui Shao · Wed, 11 Ma · cs.LG
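
Two of the ingredients the framework unifies can be shown in their simplest generic form: pooling heterogeneous sources by adding a fidelity indicator to the GP inputs, and fidelity-dependent uncertainty via a larger noise term on low-fidelity samples. The sketch below uses scikit-learn's per-sample `alpha` for the latter; the paper's hierarchical multi-task structure is not reproduced, and all names are illustrative.

```python
# Multi-fidelity GP surrogate in its most basic form: fidelity as an input
# feature plus fidelity-dependent observation noise.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def f_hi(x):                                     # expensive ground truth
    return np.sin(6 * x)

def f_lo(x):                                     # cheap, biased simulator
    return np.sin(6 * x) + 0.4 * (x - 0.5)

x_lo = rng.random(30); x_hi = rng.random(5)      # many cheap, few expensive points
X = np.column_stack([np.r_[x_lo, x_hi],          # input value
                     np.r_[np.zeros(30), np.ones(5)]])   # fidelity flag
y = np.r_[f_lo(x_lo), f_hi(x_hi)]
noise = np.r_[np.full(30, 0.05), np.full(5, 1e-4)]       # trust high fidelity more

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.2, 1.0]),
                              alpha=noise).fit(X, y)

# Predict at the high-fidelity level; the cheap data still shapes the surface.
xq = np.linspace(0, 1, 5)
mean, std = gp.predict(np.column_stack([xq, np.ones(5)]), return_std=True)
print(np.round(mean, 2), np.round(f_hi(xq), 2))
```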

GAST: Gradient-aligned Sparse Tuning of Large Language Models with Data-layer Selection

The paper proposes GAST, a novel parameter-efficient fine-tuning method that unifies data-layer selection and layer-sparse strategies, adaptively matching impactful data points with the specific model layers they influence most. This overcomes the limitations of existing single-dimension approaches and achieves superior performance.

Kai Yao, Zhenghan Song, Kaixin Wu, Mingjie Zhong, Danzhao Cheng, Zhaorui Tan, Yixin Ji, Penglei Gao · Wed, 11 Ma · cs.LG
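
The gradient-alignment signal the summary describes can be sketched generically: compare each layer's gradient on a single data point with a reference gradient and route updates to the best-aligned layers. GAST's actual selection rule and sparsity scheme are in the paper; the code below is only an illustration with made-up names and a toy model.

```python
# Per-(sample, layer) gradient alignment against a reference batch gradient.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(),
                      nn.Linear(8, 8), nn.ReLU(),
                      nn.Linear(8, 1))
loss_fn = nn.MSELoss()
X, y = torch.randn(32, 8), torch.randn(32, 1)

def layer_grads(xb, yb):
    # Flattened gradient per parameter tensor for one forward/backward pass.
    model.zero_grad()
    loss_fn(model(xb), yb).backward()
    return {n: p.grad.flatten().clone() for n, p in model.named_parameters()}

ref = layer_grads(X, y)                       # reference direction per layer

sample_g = layer_grads(X[:1], y[:1])          # one data point's gradient
align = {n: torch.cosine_similarity(g, ref[n], dim=0).item()
         for n, g in sample_g.items()}

# Tune only the layers this point aligns with; freeze the rest for this step.
k = 2
chosen = sorted(align, key=align.get, reverse=True)[:k]
print({n: round(a, 2) for n, a in align.items()})
print("layers selected for this sample:", chosen)
```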