Inference-time optimization for experiment-grounded protein ensemble generation

This paper introduces a general inference-time optimization framework that generates experiment-grounded protein ensembles by optimizing latent representations and employing novel sampling schemes. The approach overcomes the limitations of current diffusion-based methods, producing thermodynamically plausible structures with improved agreement with experimental data, and exposes vulnerabilities in existing confidence metrics.

Advaith Maddipatla, Anar Rzayev, Marco Pegoraro + 5 more · 2026-03-06 · 💻 cs
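
A minimal sketch of the general recipe the summary describes: gradient-based refinement of latent representations so that decoded structures better match experimental restraints. The decoder `decode`, the loss `exp_loss`, and all hyperparameters below are hypothetical stand-ins, not the paper's actual interface.

```python
# Inference-time latent optimization sketch (PyTorch).
# `decode` maps latents to an ensemble of conformations and `exp_loss`
# scores disagreement with experimental data; both are assumed to be
# differentiable and are illustrative, not the paper's API.
import torch

def optimize_latents(decode, exp_loss, data, z_init, steps=200, lr=1e-2):
    """Refine a latent ensemble against experimental restraints."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        structures = decode(z)             # latents -> conformations
        loss = exp_loss(structures, data)  # disagreement with experiment
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```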

DiffusionHarmonizer: Bridging Neural Reconstruction and Photorealistic Simulation with Online Diffusion Enhancer

DiffusionHarmonizer is an online, single-step generative framework that leverages a custom data curation pipeline to transform imperfect neural reconstruction renderings into temporally consistent, photorealistic simulations, effectively resolving artifacts and harmonizing inserted dynamic objects for autonomous robot development.

Yuxuan Zhang, Katarína Tóthová, Zian Wang + 7 more · 2026-03-06 · 💻 cs
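
One plausible reading of "online, single-step" enhancement is an SDEdit-style pass: partially noise the imperfect rendering, then denoise it in a single network call. The sketch below illustrates that pattern under assumed names (`denoiser`, `alpha_bar_t`); it is not DiffusionHarmonizer's actual interface.

```python
# Single-step diffusion enhancement sketch (SDEdit-style, PyTorch).
# `denoiser`, the timestep `t`, and the scalar noise-schedule tensor
# `alpha_bar_t` in (0, 1) are assumptions for illustration only.
import torch

@torch.no_grad()
def enhance_frame(denoiser, rendering, alpha_bar_t, t):
    """Noise the rendering to level t, then predict the clean frame once."""
    noise = torch.randn_like(rendering)
    noisy = alpha_bar_t.sqrt() * rendering + (1 - alpha_bar_t).sqrt() * noise
    return denoiser(noisy, t)  # one forward pass, suitable for online use
```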

AOI: Turning Failed Trajectories into Training Signals for Autonomous Cloud Diagnosis

AOI is a secure, trainable multi-agent framework that automates Site Reliability Engineering. It leverages Group Relative Policy Optimization and a read-write-separated architecture to distill expert knowledge into local models and to convert failed trajectories into corrective training signals, achieving state-of-the-art performance on the AIOpsLab benchmark while ensuring data privacy and safe execution.

Pei Yang, Wanyi Chen, Asuka Yuxi Zheng + 11 more · 2026-03-06 · 💻 cs
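
A minimal sketch of the Group Relative Policy Optimization (GRPO) advantage the summary references: rewards from a group of rollouts on the same task are normalized by the group's own mean and standard deviation, removing the need for a learned value baseline. The function name and example rewards are illustrative.

```python
# GRPO group-relative advantage sketch (NumPy).
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Per-rollout advantage = (reward - group mean) / group std."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# e.g. four diagnosis rollouts on one incident, only one fully succeeds;
# the failed trajectories receive negative advantages (corrective signal)
print(grpo_advantages([0.0, 0.2, 0.1, 1.0]))
```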

Why Are Linear RNNs More Parallelizable?

This paper establishes a theoretical foundation for the superior parallelizability of linear RNNs by showing that they correspond to log-depth arithmetic circuits (they are $\mathsf{NC}^1$-complete), whereas nonlinear RNNs can solve $\mathsf{L}$- and $\mathsf{P}$-complete problems, which makes them fundamentally sequential. This explains why linear variants can be parallelized as efficiently as transformers while traditional nonlinear RNNs cannot.

William Merrill, Hongjian Jiang, Yanhong Li + 2 more · 2026-03-06 · 💻 cs
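
A minimal sketch of the intuition behind the result: the linear recurrence $h_t = a_t h_{t-1} + b_t$ composes associatively, so all states can be computed with a log-depth (Hillis-Steele) parallel scan rather than a sequential loop. The NumPy implementation below is illustrative, not the paper's code.

```python
# Log-depth parallel scan for a linear recurrence (NumPy).
# Affine updates (a, b) compose associatively, so recursive doubling
# computes every h_t in O(log n) rounds of parallel elementwise work.
import numpy as np

def linear_rnn_scan(a, b):
    """All states of h_t = a_t * h_{t-1} + b_t (initial state 0)."""
    a, b = a.copy(), b.copy()
    n, shift = len(a), 1
    while shift < n:  # O(log n) rounds, each fully parallelizable
        a_prev = np.concatenate([np.ones(shift), a[:-shift]])
        b_prev = np.concatenate([np.zeros(shift), b[:-shift]])
        b = a * b_prev + b   # compose neighbor's prefix with own update
        a = a * a_prev
        shift *= 2
    return b

a = np.array([0.5, 0.9, 1.1, 0.7])
b = np.array([1.0, 0.0, 2.0, 1.0])
print(linear_rnn_scan(a, b))  # matches the sequential recurrence
```

A nonlinear recurrence $h_t = \sigma(a_t h_{t-1} + b_t)$ breaks this associativity, which is exactly the obstruction the paper formalizes in circuit-complexity terms.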