Regression Models Meet Foundation Models: A Hybrid-AI Approach to Practical Electricity Price Forecasting

This paper introduces FutureBoosting, a hybrid-AI framework that enhances electricity price forecasting by integrating forecasted features from a frozen time series foundation model into a regression model, thereby achieving significant accuracy improvements over state-of-the-art baselines while maintaining interpretability.

Yunzhong Qiu, Binzhu Li, Hao Wei, Shenglin Weng, Chen Wang, Zhongyi Pei, Mingsheng Long, Jianmin Wang · 2026-03-10 · 🤖 cs.LG

Safe Transformer: An Explicit Safety Bit For Interpretable And Controllable Alignment

The paper proposes Safe Transformer, a modular approach that inserts an explicit, interpretable safety bit into pre-trained language models to achieve controllable alignment and near-zero attack success rates through lightweight fine-tuning, addressing the opacity of traditional implicit safety methods.

Jingyuan Feng, Andrew Gambardella, Gouki Minegishi, Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo · 2026-03-10 · 🤖 cs.LG

Don't Freeze, Don't Crash: Extending the Safe Operating Range of Neural Navigation in Dense Crowds

This paper proposes a reinforcement learning approach for dense crowd navigation that generalizes zero-shot to higher crowd densities. By combining density-invariant observation encoding, density-randomized training, and physics-informed proxemic reward shaping, it significantly outperforms existing learning-based and analytical methods in success rate and collision avoidance, without the freezing behavior common in dense crowds.

Jiefu Zhang, Yang Xu, Vaneet Aggarwal · 2026-03-10 · 🤖 cs.LG

Rank-Factorized Implicit Neural Bias: Scaling Super-Resolution Transformer with FlashAttention

This paper proposes Rank-factorized Implicit Neural Bias (RIB), a novel positional bias mechanism that enables the use of hardware-efficient FlashAttention in Super-Resolution Transformers, allowing for significantly larger window sizes and training patches that achieve state-of-the-art performance (35.63 dB PSNR) while reducing training and inference times by 2.1× and 2.9×, respectively.

Dongheon Lee, Seokju Yun, Jaegyun Im, Youngmin Ro · 2026-03-10 · 🤖 cs.LG
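The core compatibility trick behind a rank-factorized additive bias can be illustrated generically: a dense n×n positional bias cannot be fused into a plain QKᵀ kernel, but a rank-r factorization U Vᵀ can be absorbed by concatenating the factors onto the queries and keys. The sketch below is a minimal NumPy illustration of that identity, not the paper's actual RIB construction; all array names and sizes are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 16, 8, 2  # tokens in a window, head dim, bias rank (toy sizes)

Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))

# A full positional bias would be an n x n matrix; a rank-r factorization
# stores only U (n x r) and V (n x r), so the bias never needs to be
# materialized as a dense matrix inside the attention kernel.
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))

# Folding the factors into Q and K yields the same logits as adding the
# explicit bias, which is what makes a plain QK^T kernel (e.g. a
# FlashAttention-style fused kernel) applicable.
logits_factored = np.hstack([Q, U]) @ np.hstack([K, V]).T
logits_explicit = Q @ K.T + U @ V.T

assert np.allclose(logits_factored, logits_explicit)
```

The identity holds because [Q | U][K | V]ᵀ = QKᵀ + UVᵀ by block matrix multiplication, so the bias costs only 2nr extra parameters instead of n² memory in the kernel.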

Stabilizing Reinforcement Learning for Diffusion Language Models

This paper identifies that applying Group Relative Policy Optimization (GRPO) to diffusion language models causes reward collapse due to noisy importance ratio estimates and formulation mismatches, and proposes StableDRL, a reformulated algorithm featuring unconditional clipping and self-normalization to stabilize training and prevent policy drift.

Jianyuan Zhong, Kaibo Wang, Ding Ding, Zijin Feng, Haoli Bai, Yang Xiang, Jiacheng Sun, Qiang Xu · 2026-03-10 · 🤖 cs.LG
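The two stabilizers the summary names, unconditional clipping and self-normalization, can be sketched generically for a policy-gradient objective. The snippet below is a hypothetical illustration of those two ideas on top of importance-weighted advantages; the function name, the ε value, and the exact objective are assumptions, not the paper's StableDRL formulation.

```python
import numpy as np

def clipped_normalized_objective(ratios, advantages, eps=0.2):
    """Illustrative sketch: combine unconditional clipping with
    self-normalized importance weights.

    ratios:     importance ratios pi_new / pi_old per sample
    advantages: group-relative advantages per sample
    """
    # Unconditional clipping: bound every ratio on both sides, not only
    # in the pessimistic direction, so a single noisy ratio estimate
    # cannot dominate the update.
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps)
    # Self-normalization: rescale the weights to sum to 1, keeping the
    # effective step size comparable across batches.
    weights = clipped / clipped.sum()
    return float(np.sum(weights * advantages))

# Toy usage with made-up numbers: the outlier ratio 3.0 is clipped to 1.2.
r = np.array([0.5, 1.1, 3.0, 0.9])
a = np.array([1.0, -0.5, 2.0, 0.0])
obj = clipped_normalized_objective(r, a)  # → 0.6625
```

With ε = 0.2, the clipped ratios are [0.8, 1.1, 1.2, 0.9] (sum 4.0), so the normalized weights are [0.2, 0.275, 0.3, 0.225] and the weighted advantage sum is 0.6625.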

Property-driven Protein Inverse Folding With Multi-Objective Preference Alignment

This paper introduces ProtAlign, a multi-objective preference alignment framework that fine-tunes pretrained inverse folding models to simultaneously optimize diverse developability properties like solubility and thermostability while preserving structural designability, resulting in the enhanced MoMPNN model for practical protein sequence design.

Xiaoyang Hou, Junqi Liu, Chence Shi, Xin Liu, Zhi Yang, Jian Tang · 2026-03-10 · 🤖 cs.LG

Robotic Foundation Models for Industrial Control: A Comprehensive Survey and Readiness Assessment Framework

This paper surveys the landscape of robotic foundation models, identifies eleven key industrial implications to establish a 149-criteria assessment framework, and evaluates 324 models to reveal that current industrial readiness is limited and uneven, necessitating a shift from isolated benchmark successes to systematic integration of safety, real-time performance, and robust system deployment.

David Kube, Simon Hadwiger, Tobias Meisen · 2026-03-10 · 💻 cs

Learning Unbiased Cluster Descriptors for Interpretable Imbalanced Concept Drift Detection

This paper proposes ICD3, an interpretable and robust approach for detecting concept drift in imbalanced streaming data by employing multi-distribution-granular search to identify small concepts and training independent One-Cluster Classifiers for each, thereby overcoming the masking effect of dominant large clusters.

Yiqun Zhang, Zhanpei Huang, Mingjie Zhao, Chuyao Zhang, Yang Lu, Yuzhu Ji, Fangqing Gu, An Zeng · 2026-03-10 · 🤖 cs.LG

Gradient-based Nested Co-Design of Aerodynamic Shape and Control for Winged Robots

This paper introduces a general-purpose, gradient-based nested co-design framework that jointly optimizes the aerodynamic shape and motion planner of winged robots using neural surrogate models for complex flow conditions, demonstrating superior performance and efficiency over evolutionary baselines in tasks like perching and short landing.

Daniele Affinita, Mingda Xu, Benoît Valentin Gherardi, Pascal Fua · 2026-03-10 · 💻 cs