DevBench: A Realistic, Developer-Informed Benchmark for Code Generation Models

DevBench is a realistic, telemetry-driven benchmark comprising 1,800 instances across six languages that evaluates LLMs on code completion tasks with a focus on ecological validity, contamination-free assessment, and detailed diagnostic insights to guide practical model selection and development.

Pareesa Ameneh Golnari, Adarsh Kumarappan, Wen Wen, Xiaoyu Liu, Gabriel Ryan, Yuting Sun, Shengyu Fu, Elsie Nallipogu · 2026-03-10 · cs.LG
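
The digest doesn't spell out DevBench's scoring; for reference, the unbiased pass@k estimator that code-generation benchmarks conventionally report (Chen et al., 2021) is a one-liner, and is not necessarily DevBench's exact metric:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n = samples generated per task,
    c = samples passing the task's tests, k = evaluation budget."""
    if n - c < k:
        return 1.0  # fewer failures than the budget: guaranteed pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 200 samples, 37 passing, scored at k=1 and k=10
print(pass_at_k(200, 37, 1), pass_at_k(200, 37, 10))
```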

ELSA: Efficient LLM-Centric Split Aggregation for Privacy-Aware Hierarchical Federated Learning over the Network Edge

ELSA is a novel framework that integrates split learning and hierarchical federated learning with client clustering, dynamic model splitting, and privacy-preserving communication sketches to enable efficient, robust, and privacy-aware fine-tuning of large language models on resource-constrained edge networks.

Xiaohong Yang, Tong Xie, Minghui Liwang, Chikai Shang, Yang Lu, Zhenzhen Jiao, Liqun Fu, Seyyedali Hosseinalipour · 2026-03-10 · cs.LG
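
Mechanically, "dynamic model splitting" means choosing a per-client cut layer between a client-side head and a server-side tail. A minimal sketch, where the `cut` index is a hypothetical per-client parameter rather than ELSA's actual splitting policy:

```python
import torch.nn as nn

def split_model(layers: nn.Sequential, cut: int):
    """Split a sequential model at layer `cut`: the head runs on the
    client, the tail on an edge/cloud server (split learning)."""
    children = list(layers.children())
    return nn.Sequential(*children[:cut]), nn.Sequential(*children[cut:])

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 10))
# A weaker client might get cut=2, a stronger one cut=4.
client_head, server_tail = split_model(model, cut=2)
```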

Continuous-Flow Data-Rate-Aware CNN Inference on FPGA

This paper proposes a novel data-rate-aware continuous-flow architecture for CNN inference on FPGAs that mitigates hardware underutilization caused by data reduction in pooling and strided convolution layers by interleaving signals and sharing resources, thereby enabling the high-throughput implementation of complex models like MobileNet on a single device.

Tobias Habermann, Michael Mecik, Zhenyu Wang, César David Vera, Martin Kumm, Mario Garrido · 2026-03-10 · cs.LG
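
To see why data-rate awareness frees up hardware: after a 2x2 pooling layer the per-channel sample rate drops 4x, so four channels can time-share a single multiply-accumulate unit. A toy software model of that round-robin interleaving (illustrative only; the paper's design is an FPGA architecture, not Python):

```python
import numpy as np

def interleave(streams):
    """Time-multiplex several equal-rate low-rate streams onto one
    full-rate wire, mimicking how post-pooling channels can share a
    single DSP/MAC block."""
    return np.stack(streams, axis=1).reshape(-1)

# Four channels whose rate dropped 4x after 2x2 pooling:
chans = [np.arange(i, 16, 4) for i in range(4)]
wire = interleave(chans)   # one sample per original clock cycle
assert len(wire) == sum(map(len, chans))
```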

MeanCache: From Instantaneous to Average Velocity for Accelerating Flow Matching Inference

MeanCache is a training-free framework that accelerates Flow Matching inference by replacing instantaneous velocity caching with an average-velocity approach using cached Jacobian-vector products and a trajectory-stability scheduling strategy, achieving significant speedups (up to 4.56X) while maintaining high generation quality across models like FLUX.1 and HunyuanVideo.

Huanlin Gao, Ping Chen, Fuyuan Shi, Ruijia Wu, Li YanTao, Qiang Hui, Yuren You, Ting Lu, Chao Tan, Shaoan Zhao, Zhaoxiang Liu, Fang Zhao, Kai Wang, Shiguo Lian · 2026-03-10 · cs.LG
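
A hedged sketch of the core idea, assuming a standard flow-matching Euler step: one Jacobian-vector product gives the total derivative of the velocity along the trajectory, upgrading the instantaneous velocity to a first-order average velocity; it is this JVP that a MeanCache-style method would cache and reuse on trajectory-stable steps. The toy `v_fn` and step logic below are illustrative, not the paper's exact scheduler:

```python
import torch
from torch.func import jvp

def mean_velocity_step(v_fn, x, t, dt):
    """v_bar ~= v + (dt/2) * dv/dt, where dv/dt = J_x(v) @ v + dv/dt|_x
    is the total derivative along the trajectory, obtained from a
    single JVP with tangent (v, 1)."""
    v, dv_dt = jvp(v_fn, (x, t), (v_fn(x, t), torch.ones_like(t)))
    v_bar = v + 0.5 * dt * dv_dt
    return x + dt * v_bar, dv_dt   # cache dv_dt for reuse on stable steps

v_fn = lambda x, t: -x * t                     # toy velocity field
x, t = torch.randn(4, 2), torch.zeros(4, 1)
x_next, cached_jvp = mean_velocity_step(v_fn, x, t, dt=0.1)
```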

Transferable Graph Condensation from the Causal Perspective

This paper proposes TGCC, a novel causal-invariance-based graph dataset condensation method that extracts domain-invariant features and injects them via spectral contrastive learning to significantly improve performance in cross-task and cross-domain scenarios while maintaining state-of-the-art results in single-task settings.

Huaming Du, Yijie Huang, Su Yao, Yiying Wang, Yueyang Zhou, Jingwen Yang, Jinshi Zhang, Han Ji, Yu Zhao, Guisong Liu, Hegui Zhang, Carl Yang, Gang Kou · 2026-03-10 · cs.LG
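
The spectral contrastive objective TGCC builds on is a standard loss (HaoChen et al., 2021). A minimal sketch, with `z1`/`z2` standing in for embeddings of two positive views of the condensed graph (how TGCC constructs those views is not reproduced here):

```python
import torch

def spectral_contrastive_loss(z1, z2):
    """Spectral contrastive loss: pull positive pairs (z1[i], z2[i])
    together, decorrelate negatives via squared cross inner products."""
    pos = -2.0 * (z1 * z2).sum(dim=1).mean()
    neg = (z1 @ z2.T).pow(2).mean()
    return pos + neg

z1, z2 = torch.randn(32, 64), torch.randn(32, 64)
loss = spectral_contrastive_loss(z1, z2)
```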

FlowSymm: Physics Aware, Symmetry Preserving Graph Attention for Network Flow Completion

FlowSymm is a novel graph attention architecture that recovers missing network flows by combining a physics-aware, symmetry-preserving group-action framework with a lightweight Tikhonov refinement, ensuring exact adherence to local conservation laws while outperforming state-of-the-art methods across transportation, energy, and mobility benchmarks.

Ege Demirci, Francesco Bullo, Ananthram Swami, Ambuj Singh · 2026-03-10 · cs.LG
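
A minimal sketch of a Tikhonov refinement step for flow completion, assuming edge flows `f`, node-edge incidence matrix `B`, and nodal injections `d`; the paper's exact formulation and its group-action machinery are not reproduced here:

```python
import numpy as np

def tikhonov_refine(f_hat, B, d, lam=1e-2):
    """Refine predicted edge flows f_hat toward node conservation
    B @ f = d: minimize ||B f - d||^2 + lam ||f - f_hat||^2, whose
    closed form is (B^T B + lam I)^{-1} (B^T d + lam f_hat)."""
    A = B.T @ B + lam * np.eye(len(f_hat))
    return np.linalg.solve(A, B.T @ d + lam * f_hat)

# Toy 3-node line graph with directed edges (0->1), (1->2):
B = np.array([[-1.0, 0.0], [1.0, -1.0], [0.0, 1.0]])  # incidence
d = np.array([-1.0, 0.0, 1.0])                         # injections
f = tikhonov_refine(np.array([0.8, 1.3]), B, d)
print(f, B @ f - d)  # conservation residual shrinks as lam -> 0
```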

Do Schwartz Higher-Order Values Help Sentence-Level Human Value Detection? A Study of Hierarchical Gating and Calibration

This paper investigates whether Schwartz higher-order values improve sentence-level human value detection, finding that while hierarchical gating offers limited benefits, calibration techniques and hybrid ensembles significantly boost performance, suggesting the value hierarchy is more effective as an inductive bias than a rigid routing mechanism.

Víctor Yeste, Paolo Rosso · 2026-03-10 · cs.LG
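
The summary doesn't name the calibration method used; temperature scaling (Guo et al., 2017) is the usual baseline and fits a single scalar on held-out logits:

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=200, lr=0.05):
    """Temperature scaling: learn one scalar T on a held-out split so
    that softmax(logits / T) is better calibrated."""
    log_t = torch.zeros(1, requires_grad=True)   # T = exp(log_t) > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(logits / log_t.exp(), labels).backward()
        opt.step()
    return log_t.exp().item()

logits = torch.randn(256, 4) * 3.0        # over-confident toy logits
labels = torch.randint(0, 4, (256,))
T = fit_temperature(logits, labels)       # divide logits by T at test time
```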

LatentMem: Customizing Latent Memory for Multi-Agent Systems

This paper introduces LatentMem, a learnable multi-agent memory framework that addresses memory homogenization and information overload by using an experience bank and a memory composer to generate customized, token-efficient latent memories, further optimized via Latent Memory Policy Optimization (LMPO) to significantly enhance multi-agent system performance.

Muxin Fu, Xiangyuan Xue, Yafu Li, Zefeng He, Siyuan Huang, Xiaoye Qu, Yu Cheng, Yang Yang · 2026-03-10 · cs.LG
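
Purely as a data-flow sketch (LatentMem's composer is a learned module and LMPO is not shown), an experience bank that retrieves top-k episodes and composes one fixed-size latent memory per agent query might look like:

```python
import torch
import torch.nn.functional as F

class ExperienceBank:
    """Illustrative experience bank: store episode embeddings and,
    per agent query, compose a single fixed-size latent memory by
    attention over the top-k retrieved entries."""
    def __init__(self, dim=64):
        self.entries = torch.empty(0, dim)

    def add(self, emb):
        self.entries = torch.cat([self.entries, emb])

    def compose(self, query, k=4):
        sims = F.cosine_similarity(self.entries, query, dim=1)
        top = sims.topk(min(k, len(self.entries)))
        w = F.softmax(top.values, dim=0)             # attention weights
        return (w.unsqueeze(1) * self.entries[top.indices]).sum(0)

bank = ExperienceBank()
bank.add(torch.randn(10, 64))              # past episodes
memory = bank.compose(torch.randn(1, 64))  # one token-sized latent
```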

Radial Müntz-Szász Networks: Neural Architectures with Learnable Power Bases for Multidimensional Singularities

This paper introduces Radial Müntz-Szász Networks (RMN), a highly parameter-efficient neural architecture that utilizes learnable radial power bases and a log-primitive to accurately model multidimensional singular fields like $1/r$ and $\log r$, achieving significantly lower error rates than standard MLPs and SIREN on benchmark tasks while providing closed-form gradients for physics-informed learning.

Gnankan Landry Regis N'guessan, Bum Jun Kim · 2026-03-10 · cs.LG
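
A minimal sketch of a learnable radial power basis in the spirit of RMN, assuming the form $\phi(r) = \sum_k a_k r^{\mu_k} + b \log r$ with trainable exponents $\mu_k$; the paper's closed-form gradients and full architecture are not reproduced:

```python
import torch
import torch.nn as nn

class RadialPowerBasis(nn.Module):
    """phi(r) = sum_k a_k * r^mu_k + b * log r, with exponents mu_k
    trained jointly with coefficients, so singular profiles like 1/r
    (mu -> -1) and log r are representable; eps guards r = 0."""
    def __init__(self, n_terms=8, eps=1e-6):
        super().__init__()
        self.mu = nn.Parameter(torch.linspace(-1.0, 1.0, n_terms))
        self.a = nn.Parameter(torch.randn(n_terms) * 0.1)
        self.b = nn.Parameter(torch.zeros(1))
        self.eps = eps

    def forward(self, x):                          # x: (batch, dim)
        r = x.norm(dim=1, keepdim=True).clamp_min(self.eps)
        powers = r ** self.mu                      # (batch, n_terms)
        return powers @ self.a.unsqueeze(1) + self.b * r.log()

net = RadialPowerBasis()
y = net(torch.randn(16, 3))                        # (16, 1)
```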

SDFed: Bridging Local Global Discrepancy via Subspace Refinement and Divergence Control in Federated Prompt Learning

SDFed is a heterogeneous federated prompt learning framework that addresses local-global discrepancies by combining a fixed-length global prompt with variable-length local prompts, enhanced by subspace refinement and divergence control strategies to improve performance and robustness in privacy-sensitive, resource-constrained multi-party settings.

Yicheng Di, Wei Yuan, Tieke He, Yuan Liu, Hongzhi Yin · 2026-03-10 · cs.LG
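
As a rough sketch of the prompt layout only: each client shares a fixed-length global prompt (server-aggregated) and keeps a variable-length local prompt. The L2 divergence penalty below is an assumed stand-in for the paper's subspace-refinement and divergence-control losses, not its actual objective:

```python
import torch
import torch.nn as nn

class ClientPrompt(nn.Module):
    """Fixed-length global prompt + variable-length local prompt,
    prepended to token embeddings; a simple L2 penalty discourages
    local/global drift (illustrative, not SDFed's exact losses)."""
    def __init__(self, dim=512, global_len=8, local_len=4):
        super().__init__()
        self.global_prompt = nn.Parameter(torch.randn(global_len, dim) * 0.02)
        self.local_prompt = nn.Parameter(torch.randn(local_len, dim) * 0.02)

    def forward(self, token_embs):                 # (batch, seq, dim)
        p = torch.cat([self.global_prompt, self.local_prompt])
        p = p.expand(token_embs.size(0), -1, -1)
        return torch.cat([p, token_embs], dim=1)

    def divergence_penalty(self):
        diff = self.local_prompt.mean(0) - self.global_prompt.mean(0)
        return diff.pow(2).sum()

cp = ClientPrompt()
out = cp(torch.randn(2, 16, 512))   # (2, 8 + 4 + 16, 512)
reg = cp.divergence_penalty()
```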