Transferable Graph Condensation from the Causal Perspective

This paper proposes TGCC, a novel causal-invariance-based graph dataset condensation method that extracts domain-invariant features and injects them via spectral contrastive learning to significantly improve performance in cross-task and cross-domain scenarios while maintaining state-of-the-art results in single-task settings.

Huaming Du, Yijie Huang, Su Yao, Yiying Wang, Yueyang Zhou, Jingwen Yang, Jinshi Zhang, Han Ji, Yu Zhao, Guisong Liu, Hegui Zhang, Carl Yang, Gang Kou · 2026-03-10 · cs.LG

FlowSymm: Physics Aware, Symmetry Preserving Graph Attention for Network Flow Completion

FlowSymm is a novel graph attention architecture that recovers missing network flows by combining a physics-aware, symmetry-preserving group-action framework with a lightweight Tikhonov refinement, ensuring exact adherence to local conservation laws while outperforming state-of-the-art methods across transportation, energy, and mobility benchmarks.

Ege Demirci, Francesco Bullo, Ananthram Swami, Ambuj Singh · 2026-03-10 · cs.LG
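The abstract does not spell out FlowSymm's Tikhonov refinement step, but Tikhonov-regularized least squares for completing missing flows under local conservation constraints is a standard formulation. A minimal sketch of that generic step (the function name, toy graph, and regularization weight are illustrative, not taken from the paper):

```python
import numpy as np

def complete_flows(B, f_obs, obs_idx, lam=1e-6):
    """Recover unobserved edge flows on a conservation-constrained network.

    B       : (n_interior_nodes, n_edges) incidence matrix; rows correspond to
              nodes where inflow equals outflow, so a valid flow f has B @ f = 0.
    f_obs   : observed flow values for the edges listed in obs_idx.
    obs_idx : indices of the observed edges.
    lam     : Tikhonov weight; a small lam keeps the solve well-posed when
              the conservation equations alone are rank-deficient.
    """
    n_edges = B.shape[1]
    unk_idx = np.setdiff1d(np.arange(n_edges), obs_idx)
    B_obs, B_unk = B[:, obs_idx], B[:, unk_idx]
    # Minimize ||B_obs @ f_obs + B_unk @ f_unk||^2 + lam * ||f_unk||^2
    rhs = -B_unk.T @ (B_obs @ f_obs)
    A = B_unk.T @ B_unk + lam * np.eye(len(unk_idx))
    f_unk = np.linalg.solve(A, rhs)
    f = np.empty(n_edges)
    f[obs_idx], f[unk_idx] = f_obs, f_unk
    return f

# Toy path graph 1 -> 2 -> 3: conservation at interior node 2 forces f1 = f2.
B = np.array([[1.0, -1.0]])  # row for node 2: +inflow on edge 0, -outflow on edge 1
f = complete_flows(B, f_obs=np.array([5.0]), obs_idx=np.array([0]))
```

For the toy graph, the recovered flow on the unobserved edge is driven to match the observed inflow, as conservation requires.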

Do Schwartz Higher-Order Values Help Sentence-Level Human Value Detection? A Study of Hierarchical Gating and Calibration

This paper investigates whether Schwartz higher-order values improve sentence-level human value detection, finding that while hierarchical gating offers limited benefits, calibration techniques and hybrid ensembles significantly boost performance, suggesting the value hierarchy is more effective as an inductive bias than a rigid routing mechanism.

Víctor Yeste, Paolo Rosso · 2026-03-10 · cs.LG
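The abstract does not say which calibration techniques were used; temperature scaling is the most common post-hoc choice for classifier calibration, so a generic sketch may be useful (the grid search and toy data below are illustrative, not from the paper):

```python
import numpy as np

def nll(logits, labels, T):
    """Mean negative log-likelihood of labels under softmax(logits / T)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.linspace(0.25, 4.0, 376)):
    """Pick the single temperature minimizing validation NLL (grid search)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy overconfident model: logits imply ~98% confidence, but the empirical
# accuracy on this validation set is only 80%, so the fitted T exceeds 1.
logits = np.tile([4.0, 0.0], (10, 1))
labels = np.array([0] * 8 + [1] * 2)
T = fit_temperature(logits, labels)
```

A single scalar T is fit on held-out data and then divides all test-time logits; it changes confidence but never the argmax prediction.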

LatentMem: Customizing Latent Memory for Multi-Agent Systems

This paper introduces LatentMem, a learnable multi-agent memory framework that addresses memory homogenization and information overload by using an experience bank and a memory composer to generate customized, token-efficient latent memories, further optimized via Latent Memory Policy Optimization (LMPO) to significantly enhance multi-agent system performance.

Muxin Fu, Xiangyuan Xue, Yafu Li, Zefeng He, Siyuan Huang, Xiaoye Qu, Yu Cheng, Yang Yang · 2026-03-10 · cs.LG

Radial Müntz-Szász Networks: Neural Architectures with Learnable Power Bases for Multidimensional Singularities

This paper introduces Radial Müntz-Szász Networks (RMN), a highly parameter-efficient neural architecture that utilizes learnable radial power bases and a log-primitive to accurately model multidimensional singular fields like $1/r$ and $\log r$, achieving significantly lower error rates than standard MLPs and SIREN on benchmark tasks while providing closed-form gradients for physics-informed learning.

Gnankan Landry Regis N'guessan, Bum Jun Kim · 2026-03-10 · cs.LG

SDFed: Bridging Local Global Discrepancy via Subspace Refinement and Divergence Control in Federated Prompt Learning

SDFed is a heterogeneous federated prompt learning framework that addresses local-global discrepancies by combining a fixed-length global prompt with variable-length local prompts, enhanced by subspace refinement and divergence control strategies to improve performance and robustness in privacy-sensitive, resource-constrained multi-party settings.

Yicheng Di, Wei Yuan, Tieke He, Yuan Liu, Hongzhi Yin · 2026-03-10 · cs.LG

Retrieval Pivot Attacks in Hybrid RAG: Measuring and Mitigating Amplified Leakage from Vector Seeds to Graph Expansion

This paper identifies and formalizes "Retrieval Pivot Attacks" in Hybrid RAG systems, demonstrating how vector-retrieved seeds can inadvertently pivot through knowledge graph links to cause cross-tenant data leakage, and proves that enforcing authorization specifically at the graph expansion boundary effectively mitigates this risk with minimal overhead.

Scott Thornton · 2026-03-10 · cs.LG

Diffusion-Guided Pretraining for Brain Graph Foundation Models

This paper proposes a unified diffusion-guided pretraining framework for brain graph foundation models that overcomes the limitations of existing methods by using diffusion to preserve semantic connectivity patterns during augmentation and to enable topology-aware global reconstruction, thereby achieving robust and transferable representations across diverse neuroimaging datasets.

Xinxu Wei, Rong Zhou, Lifang He, Yu Zhang · 2026-03-10 · cs.LG

Discovering Semantic Latent Structures in Psychological Scales: A Response-Free Pathway to Efficient Simplification

This paper introduces a response-free framework that leverages natural language processing and topic modeling to automatically simplify psychological scales by identifying semantic latent structures, achieving an average 60.5% reduction in item count while preserving psychometric validity and construct alignment.

Bo Wang, Yuxuan Zhang, Yueqin Hu, Hanchao Hou, Kaiping Peng, Shiguang Ni · 2026-03-10 · cs.LG

Benchmark Leakage Trap: Can We Trust LLM-based Recommendation?

This paper identifies and validates the critical issue of benchmark data leakage in LLM-based recommendation systems, demonstrating that exposure to evaluation data during training can artificially inflate performance metrics for domain-relevant leaks while degrading accuracy for irrelevant ones, thereby undermining the reliability of current evaluation practices.

Mingqiao Zhang, Qiyao Peng, Yumeng Wang, Chunyuan Liu, Hongtao Liu · 2026-03-10 · cs.LG

Mean Flow Policy with Instantaneous Velocity Constraint for One-step Action Generation

This paper introduces the Mean Velocity Policy (MVP), a one-step generative policy that employs an Instantaneous Velocity Constraint (IVC) to theoretically guarantee high expressiveness, achieving state-of-the-art performance with significantly faster training and inference than existing flow-based baselines on challenging robotic manipulation tasks.

Guojian Zhan, Letian Tao, Pengcheng Wang, Yixiao Wang, Yiheng Li, Yuxin Chen, Hongyang Li, Masayoshi Tomizuka, Shengbo Eben Li · 2026-03-10 · cs.LG