Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling

This paper proposes a novel variational learning framework for Gaussian Process Latent Variable Models that uses Stochastic Gradient Annealed Importance Sampling to overcome proposal-distribution challenges in high-dimensional latent spaces, achieving tighter variational bounds and better performance than state-of-the-art methods.

Jian Xu, Shian Du, Junmei Yang, Qianli Ma, Delu Zeng, John Paisley · 2026-03-10 · cs.LG
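
For readers unfamiliar with annealed importance sampling, a minimal NumPy sketch of the core mechanism follows. The 1-D bimodal toy target, the linear annealing schedule, and the single Metropolis step per temperature are illustrative assumptions, not the paper's GPLVM objective or its stochastic-gradient variant.

```python
# Minimal Annealed Importance Sampling (AIS) sketch on a 1-D toy target.
import numpy as np

rng = np.random.default_rng(0)

def log_prior(z):           # initial / proposal distribution: N(0, 1), unnormalized
    return -0.5 * z**2

def log_target(z):          # unnormalized bimodal toy target
    return np.logaddexp(-0.5 * (z - 2)**2, -0.5 * (z + 2)**2)

def ais(n_particles=1000, n_steps=50, step_size=0.5):
    betas = np.linspace(0.0, 1.0, n_steps + 1)    # annealing schedule
    z = rng.normal(size=n_particles)              # samples from the prior
    log_w = np.zeros(n_particles)                 # log importance weights
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # incremental weight: ratio of tempered densities at the new temperature
        log_w += (b1 - b0) * (log_target(z) - log_prior(z))
        # one Metropolis step targeting the tempered density at b1
        def log_pi(x):
            return (1 - b1) * log_prior(x) + b1 * log_target(x)
        prop = z + step_size * rng.normal(size=n_particles)
        accept = np.log(rng.uniform(size=n_particles)) < log_pi(prop) - log_pi(z)
        z = np.where(accept, prop, z)
    return z, log_w

z, log_w = ais()
# log normalizing-constant ratio estimate, via a stable log-mean-exp
m = log_w.max()
print("log Z estimate:", np.log(np.mean(np.exp(log_w - m))) + m)
```

The annealing path is what sidesteps the proposal-distribution problem: each intermediate density is close to its neighbor, so the importance weights stay well-behaved even when prior and target are far apart.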

Input-to-State Stable Coupled Oscillator Networks for Closed-form Model-based Control in Latent Space

This paper introduces a novel Coupled Oscillator Network (CON) model that overcomes key limitations in latent-space control by ensuring Lagrangian structure, global input-to-state stability, and an invertible input-force mapping, thereby enabling efficient closed-form control strategies for complex mechanical systems using only raw visual feedback.

Maximilian Stölzle, Cosimo Della Santina · 2026-03-10 · cs.LG
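
For a rough sense of the latent dynamics, the sketch below simulates a small network of damped oscillators with a tanh coupling term. The parameterization (matrices `K`, `D`, `W`) is an assumed simplification for illustration, not the paper's exact CON model or its stability conditions.

```python
# Hedged sketch of coupled-oscillator latent dynamics (illustrative names):
#   x_ddot = -K x - D x_dot - tanh(W x + b) + u
import numpy as np

def con_step(x, v, u, K, D, W, b, dt=1e-2):
    """One explicit-Euler step of the coupled-oscillator ODE."""
    a = -K @ x - D @ v - np.tanh(W @ x + b) + u
    return x + dt * v, v + dt * a

n = 4                                    # latent dimension
rng = np.random.default_rng(1)
K = np.diag(rng.uniform(0.5, 2.0, n))    # positive stiffness: restoring force
D = np.diag(rng.uniform(0.1, 0.5, n))    # positive damping: energy decay
W = 0.1 * rng.standard_normal((n, n))    # weak nonlinear coupling
b = np.zeros(n)

x, v = rng.standard_normal(n), np.zeros(n)
for _ in range(2000):
    u = np.zeros(n)                      # unforced rollout: state should decay
    x, v = con_step(x, v, u, K, D, W, b)
print("||x|| after rollout:", np.linalg.norm(x))
```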

Neural delay differential equations: learning non-Markovian closures for partially known dynamical systems

This paper introduces a constant-lag Neural Delay Differential Equations (NDDEs) framework, inspired by the Mori-Zwanzig formalism, to effectively learn non-Markovian dynamics from partially observed data by identifying memory effects through time delays, outperforming existing methods such as LSTMs and Augmented Neural ODEs (ANODEs) across synthetic, chaotic, and experimental datasets.

Thibault Monsel, Onofrio Semeraro, Lionel Mathelin, Guillaume Charpiat · 2026-03-10 · cs.LG
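
The constant-lag idea is easy to state in code. Below is a minimal sketch, assuming an untrained toy MLP in place of the learned closure `f_theta` and a simple Euler integrator with a history buffer for the delayed state; the paper's training and integration scheme are more sophisticated.

```python
# Minimal constant-lag neural DDE sketch:
#   dz/dt = f_theta(z(t), z(t - tau)),  with a fixed delay tau.
import numpy as np

rng = np.random.default_rng(2)
dim, hidden, tau, dt = 2, 16, 0.5, 0.01
lag = int(tau / dt)                        # delay expressed in integration steps

W1 = 0.3 * rng.standard_normal((hidden, 2 * dim))
W2 = 0.3 * rng.standard_normal((dim, hidden))

def f_theta(z_now, z_lag):
    """Toy closure network on the concatenated current and delayed state."""
    h = np.tanh(W1 @ np.concatenate([z_now, z_lag]))
    return W2 @ h

# constant history function on [-tau, 0], then explicit Euler forward in time
history = [np.array([1.0, 0.0])] * (lag + 1)
for _ in range(1000):
    z_now, z_lag = history[-1], history[-1 - lag]
    history.append(z_now + dt * f_theta(z_now, z_lag))
print("z(T) =", history[-1])
```

The buffer of past states is exactly where the non-Markovian memory lives: the vector field sees not just `z(t)` but also `z(t - tau)`, which Markovian neural ODEs cannot express.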

From Pixels to Predicates: Learning Symbolic World Models via Pretrained Vision-Language Models

This paper proposes a method that leverages pretrained vision-language models to learn compact, abstract symbolic world models from limited visual demonstrations, enabling zero-shot generalization and long-horizon planning for complex robotic tasks across novel objects, environments, and goals.

Ashay Athalye, Nishanth Kumar, Tom Silver, Yichao Liang, Jiuguang Wang, Tomás Lozano-Pérez, Leslie Pack Kaelbling · 2026-03-10 · cs.LG
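
Conceptually, the pipeline grounds symbolic predicates by querying a vision-language model about images. The sketch below is a heavily simplified illustration: `query_vlm` is a hypothetical placeholder for any image-plus-text model call, and the paper's actual predicate invention and planning machinery goes well beyond this.

```python
# Hedged sketch of predicate grounding with a VLM (all names hypothetical).

def query_vlm(image, question: str) -> bool:
    """Placeholder: ask a vision-language model a yes/no question."""
    raise NotImplementedError("plug in a real VLM client here")

def symbolic_state(image, objects, predicates) -> set:
    """Evaluate each candidate binary predicate on each object pair,
    producing a set of ground atoms such as ('On', 'block_a', 'table')."""
    atoms = set()
    for name, template in predicates.items():
        for a in objects:
            for b in objects:
                if a != b and query_vlm(image, template.format(a=a, b=b)):
                    atoms.add((name, a, b))
    return atoms

predicates = {"On": "Is {a} on top of {b}?", "Inside": "Is {a} inside {b}?"}
# state = symbolic_state(img, ["block_a", "block_b", "table"], predicates)
```

Once observations are lifted to sets of ground atoms like these, off-the-shelf symbolic planners can search over long horizons without touching pixels again.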

Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs

This paper demonstrates that integrating Low-Rank Adaptation (LoRA) into federated learning for large language models significantly reduces unintended memorization of sensitive training data across diverse model sizes and domains, while maintaining performance and remaining compatible with other privacy-preserving techniques.

Thierry Bossy, Julien Vignoud, Tahseen Rabbani, Juan R. Troncoso Pastoriza, Martin Jaggi · 2026-03-10 · cs.LG
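
The mechanics of combining LoRA with federated averaging can be sketched in a few lines. This assumes per-client adapters whose factors `A` and `B` are averaged separately, a common simplification (averaging the factors is not identical to averaging the products `B A`), and is not necessarily the paper's exact protocol.

```python
# Minimal sketch of LoRA under federated averaging. Only the low-rank
# factors A, B are trained and aggregated; the base weight W0 stays frozen.
import numpy as np

rng = np.random.default_rng(3)
d, r, n_clients = 64, 4, 5
W0 = rng.standard_normal((d, d))               # frozen pretrained weight

def init_lora():
    # standard LoRA init: A small random, B zero, so the adapter starts as a no-op
    return 0.01 * rng.standard_normal((r, d)), np.zeros((d, r))

def lora_forward(x, A, B, scale=1.0):
    # equivalent to x @ (W0 + scale * B @ A).T, without materializing the sum
    return x @ W0.T + scale * (x @ A.T) @ B.T

# each client trains its own (A, B) locally; the server averages the adapters
clients = [init_lora() for _ in range(n_clients)]
A_avg = np.mean([A for A, _ in clients], axis=0)
B_avg = np.mean([B for _, B in clients], axis=0)
print("aggregated adapter shapes:", A_avg.shape, B_avg.shape)
```

Keeping `W0` frozen and exchanging only the rank-`r` factors is also what makes this communication-cheap: each round moves `2 * r * d` parameters per layer instead of `d * d`.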