Hide and Find: A Distributed Adversarial Attack on Federated Graph Learning

The paper proposes FedShift, a two-stage "Hide and Find" distributed adversarial attack on Federated Graph Learning: injected hidden shifters stealthily guide poisoned data toward a target decision boundary, after which perturbations are generated efficiently, yielding superior attack effectiveness, robustness against defenses, and a 90% reduction in convergence time compared to existing methods.

Jinshan Liu, Ken Li, Jiazhe Wei, Bin Shi, Bo Dong · 2026-03-10 · 🤖 cs.LG

Breaking Training Bottlenecks: Effective and Stable Reinforcement Learning for Coding Models

This paper introduces MicroCoder-GRPO, an enhanced Group Relative Policy Optimization framework featuring innovations like conditional truncation masking and diversity-driven temperature selection, alongside a challenging new dataset and robust evaluator, to overcome training bottlenecks in modern coding models and achieve significant performance gains on LiveCodeBench v6.

Zongqian Li, Shaohan Huang, Zewen Chi, Yixuan Su, Lexin Zhou, Li Dong, Nigel Collier, Furu Wei · 2026-03-10 · 🤖 cs.LG

Lindbladian Learning with Neural Differential Equations

This paper introduces a Lindbladian learning method that combines maximum-likelihood estimation on transient Pauli measurements with a neural differential equation framework to robustly infer open-system quantum dynamics, including dissipative mechanisms, across various hardware platforms and noise conditions with high efficiency.
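For context, the object such a method must infer is the generator of the Lindblad master equation, shown here in its standard textbook form (not a formula taken from the paper):

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H, \rho]
  + \sum_k \gamma_k \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2}\left\{ L_k^\dagger L_k,\, \rho \right\} \right)
```

Here $H$ is the system Hamiltonian, the $L_k$ are jump operators, and the $\gamma_k \ge 0$ are dissipation rates; inferring the dissipative mechanisms amounts to learning the $L_k$ and $\gamma_k$ alongside $H$ from measurement data.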

Timothy Heightman, Roman Aseguinolaza Gallo, Edward Jiang, JRM Saavedra, Antonio Acín, Marcin Płodzien · 2026-03-10 · ⚛️ quant-ph

Scaling Data Difficulty: Improving Coding Models via Reinforcement Learning on Fresh and Challenging Problems

This paper introduces MicroCoder, a high-quality dataset of curated, recent, and challenging competitive programming problems processed through a four-stage framework with automatic difficulty filtering, which significantly boosts coding model performance on unseen hard tasks compared to existing baselines.

Zongqian Li, Tengchao Lv, Shaohan Huang, Yixuan Su, Qinzheng Sun, Qiufeng Yin, Ying Xin, Scarlett Li, Lei Cui, Nigel Collier, Furu Wei · 2026-03-10 · 🤖 cs.LG

Fusion Complexity Inversion: Why Simpler Cross View Modules Outperform SSMs and Cross View Attention Transformers for Pasture Biomass Regression

This study demonstrates that for pasture biomass regression on scarce agricultural data, prioritizing high-quality backbone pretraining and utilizing simple local fusion modules significantly outperforms complex global architectures like SSMs and cross-view attention transformers, a phenomenon termed "fusion complexity inversion."

Mridankan Mandal · 2026-03-10 · 🤖 cs.LG

Gradient Iterated Temporal-Difference Learning

This paper introduces Gradient Iterated Temporal-Difference (GTD) learning, a novel algorithm that modifies iterated TD by computing gradients over moving targets to achieve the stability of gradient methods while matching the competitive learning speed of semi-gradient methods across diverse benchmarks like Atari games.
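The semi-gradient vs. full-gradient distinction at the heart of this trade-off can be illustrated with linear value functions: semi-gradient TD treats the bootstrapped target as a constant, while a residual-gradient variant also differentiates through the moving target. This is a minimal sketch of that classical contrast, not the paper's GTD algorithm:

```python
import numpy as np

def semi_gradient_td(w, phi_s, phi_s2, r, gamma, alpha):
    """Semi-gradient TD(0): the bootstrapped target is held fixed,
    so only grad v(s) = phi_s appears in the update."""
    delta = r + gamma * phi_s2 @ w - phi_s @ w  # TD error
    return w + alpha * delta * phi_s

def residual_gradient_td(w, phi_s, phi_s2, r, gamma, alpha):
    """Residual-gradient TD: minimizes delta^2 directly, so the
    gradient also flows through the moving target via phi_s2."""
    delta = r + gamma * phi_s2 @ w - phi_s @ w
    return w + alpha * delta * (phi_s - gamma * phi_s2)

# Toy transition s -> s' with reward 1, one-hot features.
phi_s = np.array([1.0, 0.0])
phi_s2 = np.array([0.0, 1.0])
w = np.zeros(2)
for _ in range(300):
    w = semi_gradient_td(w, phi_s, phi_s2, r=1.0, gamma=0.9, alpha=0.1)
```

The semi-gradient update is faster in practice but can diverge off-policy; the residual-gradient form is stable (it descends a true objective) but is typically slower and biased, which is the tension the paper's approach targets.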

Théo Vincent, Kevin Gerhardt, Yogesh Tripathi, Habib Maraqten, Adam White, Martha White, Jan Peters, Carlo D'Eramo · 2026-03-10 · 🤖 cs.LG

Viewpoint-Agnostic Grasp Pipeline using VLM and Partial Observations

This paper presents an end-to-end, viewpoint-agnostic grasping pipeline for mobile legged manipulators that leverages vision-language models and partial observation compensation to achieve robust, language-guided object selection and safe execution in cluttered environments, outperforming view-dependent baselines with a 90% success rate.

Dilermando Almeida, Juliano Negri, Guilherme Lazzarini, Thiago H. Segreto, Ranulfo Bezerra, Ricardo V. Godoy, Marcelo Becker · 2026-03-10 · 🤖 cs.LG