From Pixels to Predicates: Learning Symbolic World Models via Pretrained Vision-Language Models

This paper proposes a method that uses pretrained vision-language models to learn compact, abstract symbolic world models from a small number of visual demonstrations, enabling long-horizon planning and zero-shot generalization to novel objects, environments, and goals in complex robotic tasks.

Ashay Athalye, Nishanth Kumar, Tom Silver, Yichao Liang, Jiuguang Wang, Tomás Lozano-Pérez, Leslie Pack Kaelbling · 2026-03-10 · cs.LG
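
To make the idea concrete, here is a minimal sketch of one way a pretrained VLM could ground symbolic predicates from pixels: ask the model yes/no questions about object tuples in an image and collect the true atoms into a symbolic state that a planner can search over. This is an illustration of the general pattern, not the paper's actual pipeline; `query_vlm` is a hypothetical stand-in for any VLM API.

```python
# Sketch: grounding symbolic predicates with a pretrained VLM.
# `query_vlm` is a hypothetical placeholder, not an API from the paper.

from dataclasses import dataclass
from itertools import permutations

@dataclass(frozen=True)
class Predicate:
    name: str   # e.g. "On"
    arity: int  # number of object arguments

def query_vlm(image, question: str) -> bool:
    """Placeholder for a yes/no query to a pretrained vision-language model."""
    raise NotImplementedError

def ground_state(image, objects, predicates):
    """Evaluate every predicate on all object tuples to get a symbolic state."""
    state = set()
    for pred in predicates:
        for args in permutations(objects, pred.arity):
            question = f'Is "{pred.name}({", ".join(args)})" true in this image?'
            if query_vlm(image, question):
                state.add((pred.name, args))
    return state

# A symbolic planner can then chain learned operators over such states,
# which is what makes long-horizon planning tractable at this abstraction.
```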

Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs

This paper demonstrates that integrating Low-Rank Adaptation (LoRA) into Federated Learning for Large Language Models significantly reduces unintended memorization of sensitive training data across diverse model sizes and domains, while maintaining performance and offering compatibility with other privacy-preserving techniques.

Thierry Bossy, Julien Vignoud, Tahseen Rabbani, Juan R. Troncoso Pastoriza, Martin Jaggi · 2026-03-10 · cs.LG
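
The mechanics of combining LoRA with federated averaging are simple enough to sketch: each client fine-tunes only low-rank adapter matrices on a frozen base weight, and the server averages just those small matrices. The sketch below, in PyTorch, shows this communication pattern under illustrative dimensions; it is not the paper's exact training setup.

```python
# Sketch: federated averaging restricted to LoRA parameters.
# The base weight W stays frozen and is never transmitted; only the
# low-rank adapters (A, B) travel between clients and server.
# Dimensions and rank are illustrative, not from the paper.

import torch

d_out, d_in, r = 64, 64, 4  # layer dims and LoRA rank

def init_lora():
    # Standard LoRA init: A is small random, B is zero,
    # so the adapter starts as a no-op on the pretrained weight.
    return {"A": torch.randn(r, d_in) * 0.01, "B": torch.zeros(d_out, r)}

def adapted_forward(x, W_frozen, lora):
    # Effective weight is W + B @ A; W itself is never updated.
    return x @ (W_frozen + lora["B"] @ lora["A"]).T

def fedavg_lora(client_loras):
    # Server averages only the adapter matrices across clients.
    return {
        key: torch.stack([c[key] for c in client_loras]).mean(dim=0)
        for key in ("A", "B")
    }

W = torch.randn(d_out, d_in)        # frozen pretrained weight
clients = [init_lora() for _ in range(3)]
global_lora = fedavg_lora(clients)  # one aggregation round
y = adapted_forward(torch.randn(2, d_in), W, global_lora)
```

One subtlety of this scheme: averaging A and B separately is not the same as averaging the products B @ A, which is a known wrinkle in federated LoRA aggregation.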

Language in the Flow of Time: Time-Series-Paired Texts Weaved into a Unified Temporal Narrative

This paper introduces Texts as Time Series (TaTS), a framework that exploits the periodic alignment between paired texts and time-series data to improve multimodal forecasting and imputation in existing numerical-only models without requiring architectural changes.

Zihao Li, Xiao Lin, Zhining Liu, Jiaru Zou, Ziwei Wu, Lecheng Zheng, Dongqi Fu, Yada Zhu, Hendrik Hamann, Hanghang Tong, Jingrui He · 2026-03-10 · cs.LG
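
The "texts as time series" idea can be sketched in a few lines: embed each time-stamped text, treat the embeddings as auxiliary channels, and concatenate them with the numeric series so an unmodified numerical forecaster consumes both. This is a simplified illustration of the pattern the summary describes, not the paper's exact fusion scheme; `embed_text` is a hypothetical stand-in for any pretrained sentence encoder.

```python
# Sketch: turning paired texts into an auxiliary time series.
# `embed_text` is a dummy placeholder for a real text encoder.

import numpy as np

def embed_text(text: str, dim: int = 8) -> np.ndarray:
    """Placeholder for a pretrained text encoder (stable within a run)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def texts_as_series(texts, dim: int = 8) -> np.ndarray:
    # One embedding per timestep -> a (T, dim) auxiliary "time series".
    return np.stack([embed_text(t, dim) for t in texts])

T = 48
values = np.sin(np.linspace(0, 6, T)).reshape(T, 1)  # numeric series (T, 1)
texts = [f"report at step {t}" for t in range(T)]    # paired texts
fused = np.concatenate([values, texts_as_series(texts)], axis=1)  # (T, 1+8)
# `fused` can be fed to any numerical-only forecasting model unchanged,
# which is what lets the approach avoid architectural modifications.
```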