Variational Learning of Gaussian Process Latent Variable Models through Stochastic Gradient Annealed Importance Sampling

This paper proposes a novel variational learning framework for Gaussian process latent variable models (GPLVMs) that uses Stochastic Gradient Annealed Importance Sampling to overcome the difficulty of constructing good proposal distributions in high-dimensional latent spaces, achieving tighter variational bounds and superior performance compared to state-of-the-art methods.

Jian Xu, Shian Du, Junmei Yang, Qianli Ma, Delu Zeng, John Paisley · Tue, 10 Ma · cs.LG
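The core idea of annealed importance sampling (AIS) can be illustrated in one dimension. The sketch below is a minimal, generic version with toy Gaussian densities; the paper's setting (high-dimensional GPLVM posteriors, stochastic gradients) is far richer, and the densities, step counts, and step sizes here are illustrative assumptions, not the authors' configuration.

```python
import math, random

# AIS between a broad Gaussian proposal and a narrower, shifted,
# unnormalized Gaussian target. Intermediate densities follow the
# geometric path f_t(x) proportional to p0(x)^(1-beta) * pT(x)^beta.

def log_p0(x):  # proposal: N(0, 2^2), normalized
    return -0.5 * (x / 2.0) ** 2 - math.log(2.0 * math.sqrt(2 * math.pi))

def log_pT(x):  # target: N(1, 0.5^2), unnormalized is fine for AIS
    return -0.5 * ((x - 1.0) / 0.5) ** 2

def ais_log_weight(rng, n_steps=50, mh_steps=5):
    betas = [t / n_steps for t in range(n_steps + 1)]
    x = rng.gauss(0.0, 2.0)  # draw from the proposal
    log_w = 0.0
    for t in range(1, n_steps + 1):
        # weight increment: log f_t(x) - log f_{t-1}(x)
        log_w += (betas[t] - betas[t - 1]) * (log_pT(x) - log_p0(x))

        def log_f(z):  # current annealed density
            return (1 - betas[t]) * log_p0(z) + betas[t] * log_pT(z)

        # a few Metropolis-Hastings moves targeting f_t
        for _ in range(mh_steps):
            prop = x + rng.gauss(0.0, 0.5)
            if math.log(rng.random()) < log_f(prop) - log_f(x):
                x = prop
    return log_w

weights = [ais_log_weight(random.Random(s)) for s in range(200)]
# log-normalizer estimate via log-mean-exp of the AIS weights;
# the true value here is log(0.5 * sqrt(2*pi)) ~= 0.226
m = max(weights)
log_Z = m + math.log(sum(math.exp(w - m) for w in weights) / len(weights))
```

The same weight accumulation underlies the tighter variational bounds the paper reports: averaging AIS weights gives an unbiased estimate of the marginal likelihood, so its logarithm lower-bounds the log evidence more tightly as the number of annealing steps grows.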

Input-to-State Stable Coupled Oscillator Networks for Closed-form Model-based Control in Latent Space

This paper introduces a novel Coupled Oscillator Network (CON) model that overcomes key limitations in latent-space control by ensuring Lagrangian structure, global input-to-state stability, and an invertible input-force mapping, thereby enabling efficient closed-form control strategies for complex mechanical systems using only raw visual feedback.

Maximilian Stölzle, Cosimo Della Santina · Tue, 10 Ma · cs.LG

Neural delay differential equations: learning non-Markovian closures for partially known dynamical systems

This paper introduces a constant-lag Neural Delay Differential Equations (NDDEs) framework, inspired by the Mori-Zwanzig formalism, to effectively learn non-Markovian dynamics from partially observed data by identifying memory effects through time delays, demonstrating superior performance over existing methods like LSTMs and ANODEs across synthetic, chaotic, and experimental datasets.

Thibault Monsel, Onofrio Semeraro, Lionel Mathelin, Guillaume Charpiat · Tue, 10 Ma · cs.LG
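A constant-lag delay differential equation is the backbone object here: the derivative depends on the state at a fixed earlier time, which is how the delay encodes memory of unobserved variables. The sketch below integrates a toy linear DDE with forward Euler; the constant pre-history and the specific equation are illustrative assumptions, not the paper's learned neural vector field.

```python
def integrate_dde(f, x0, tau, dt, n_steps):
    """Forward-Euler integration of dx/dt = f(x(t), x(t - tau)).

    Constant lag tau; the history before t = 0 is held at x0
    (a common simplifying assumption for the initial function)."""
    lag = int(round(tau / dt))  # delay expressed in whole steps
    traj = [x0]
    for k in range(n_steps):
        x_now = traj[-1]
        x_delayed = traj[k - lag] if k >= lag else x0
        traj.append(x_now + dt * f(x_now, x_delayed))
    return traj

# Toy linear DDE dx/dt = -x(t - 0.5): asymptotically stable
# since the lag is below pi/2, so the trajectory decays to zero.
traj = integrate_dde(lambda x, xd: -xd, x0=1.0, tau=0.5, dt=0.01,
                     n_steps=2000)
```

In the NDDE setting, `f` would be a neural network and the trajectory would be fit to partially observed data; the delayed argument is what lets a low-dimensional observed state capture non-Markovian closure effects.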

From Pixels to Predicates: Learning Symbolic World Models via Pretrained Vision-Language Models

This paper proposes a method that leverages pretrained vision-language models to learn compact, abstract symbolic world models from limited visual demonstrations, enabling zero-shot generalization and long-horizon planning for complex robotic tasks across novel objects, environments, and goals.

Ashay Athalye, Nishanth Kumar, Tom Silver, Yichao Liang, Jiuguang Wang, Tomás Lozano-Pérez, Leslie Pack Kaelbling · Tue, 10 Ma · cs.LG

Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs

This paper demonstrates that integrating Low-Rank Adaptation (LoRA) into Federated Learning for Large Language Models significantly reduces unintended memorization of sensitive training data across diverse model sizes and domains, while maintaining performance and offering compatibility with other privacy-preserving techniques.

Thierry Bossy, Julien Vignoud, Tahseen Rabbani, Juan R. Troncoso Pastoriza, Martin Jaggi · Tue, 10 Ma · cs.LG
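The LoRA mechanism the paper builds on replaces full-weight updates with a trainable low-rank correction: the frozen weight W is augmented as W + (alpha / r) * B @ A, where only the small factors A and B are trained (and, in federated learning, only they are communicated). A minimal dependency-free sketch, with illustrative dimensions and the standard zero-initialization of B:

```python
import random

def matmul(X, Y):
    """Plain-list matrix product (rows of X times columns of Y)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

d_out, d_in, r, alpha = 4, 6, 2, 8
rng = random.Random(0)

# Frozen pretrained weight (random stand-in for a real layer).
W = [[rng.gauss(0, 1) for _ in range(d_in)] for _ in range(d_out)]
# Trainable low-rank factors: A is small-random, B starts at zero,
# so the adapter initially contributes nothing.
A = [[rng.gauss(0, 0.01) for _ in range(d_in)] for _ in range(r)]
B = [[0.0] * r for _ in range(d_out)]

delta = matmul(B, A)  # d_out x d_in low-rank update
W_eff = [[w + (alpha / r) * d for w, d in zip(w_row, d_row)]
         for w_row, d_row in zip(W, delta)]
```

Because B is zero-initialized, the effective weight equals W before any training step; fine-tuning then moves only the r * (d_in + d_out) adapter parameters, which is the property the paper links to reduced memorization of sensitive training text.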

Language in the Flow of Time: Time-Series-Paired Texts Weaved into a Unified Temporal Narrative

This paper introduces Texts as Time Series (TaTS), a novel framework that leverages the periodic alignment between paired texts and time series data to enhance multimodal forecasting and imputation performance in existing numerical-only models without requiring architectural changes.

Zihao Li, Xiao Lin, Zhining Liu, Jiaru Zou, Ziwei Wu, Lecheng Zheng, Dongqi Fu, Yada Zhu, Hendrik Hamann, Hanghang Tong, Jingrui He · Tue, 10 Ma · cs.LG