Unmixing microinfrared spectroscopic images of cross-sections of historical oil paintings

This paper proposes an unsupervised CNN autoencoder with a novel weighted spectral angle distance loss to enable blind, automated unmixing of complex ATR-μFTIR hyperspectral images from historical oil painting cross-sections, significantly improving the interpretability and scalability of material analysis compared to traditional manual methods.
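The core of the loss is the spectral angle distance, which compares spectra by angle rather than magnitude. A minimal numpy sketch of a weighted variant follows; the paper's actual per-band weighting scheme is not specified here, so `w` is a hypothetical weight vector:

```python
import numpy as np

def weighted_sad(x, x_hat, w):
    """Weighted spectral angle distance between a measured spectrum x and
    its reconstruction x_hat; w is a per-band weight vector (illustrative,
    not the paper's exact scheme)."""
    xw, xhw = x * np.sqrt(w), x_hat * np.sqrt(w)
    cos = np.dot(xw, xhw) / (np.linalg.norm(xw) * np.linalg.norm(xhw) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

spectrum = np.array([0.2, 0.5, 0.9, 0.4])
scaled = 3.0 * spectrum  # SAD is insensitive to overall intensity scaling
angle = weighted_sad(spectrum, scaled, np.ones(4))
```

The scale insensitivity is what makes angle-based losses attractive for unmixing, where absorbance intensity varies with layer thickness but band shape identifies the material.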

Shivam Pande, Nicolas Nadisic, Francisco Mederos-Henry, Aleksandra Pizurica · 2026-03-10 · cs.LG

High-Resolution Image Reconstruction with Unsupervised Learning and Noisy Data Applied to Ion-Beam Dynamics for Particle Accelerators

This paper presents an unsupervised learning framework utilizing convolutional filtering and neural networks with optimized early-stopping to achieve robust, high-fidelity reconstruction of ion-beam emittance images from noisy data, enabling unprecedented halo resolution beyond seven standard deviations for particle accelerator diagnostics.
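The convolutional filtering stage can be illustrated on a toy beam image; a plain separable box filter stands in for the paper's learned filtering, and the kernel size is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0                     # toy "beam core"
noisy = truth + 0.3 * rng.normal(size=truth.shape)

# separable box-filter smoothing as a stand-in for the paper's
# convolutional filtering stage (5-tap kernel chosen arbitrarily)
k = np.ones(5) / 5.0
smooth = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, noisy)
smooth = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, smooth)
```

Even this naive filter trades edge sharpness for a large reduction in noise variance, which is the regime where halo tails far from the core become recoverable.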

Francis Osswald (IPHC), Mohammed Chahbaoui (UNISTRA), Xinyi Liang (SU) · 2026-03-10 · cs.LG

Soft Equivariance Regularization for Invariant Self-Supervised Learning

This paper proposes Soft Equivariance Regularization (SER), a lightweight, plug-in method that decouples invariance and equivariance objectives by enforcing equivariance on intermediate spatial features while preserving invariance on the final embedding, thereby improving both linear evaluation accuracy and robustness to geometric perturbations without requiring auxiliary heads or transformation labels.
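The decoupling can be sketched in numpy with rotation as the transformation: invariance is enforced on final embeddings, equivariance on intermediate spatial maps. The paper's exact losses and weighting are not given here, so this is only a conceptual sketch:

```python
import numpy as np

def invariance_loss(z1, z2):
    """Cosine-distance invariance on the final embeddings of two views."""
    z1, z2 = z1 / np.linalg.norm(z1), z2 / np.linalg.norm(z2)
    return 1.0 - float(np.dot(z1, z2))

def equivariance_loss(feat, feat_of_rotated, k=1):
    """Soft equivariance on intermediate spatial features: the feature map
    of the rotated image should match the rotated feature map."""
    return float(np.mean((np.rot90(feat, k, axes=(0, 1)) - feat_of_rotated) ** 2))

# with a perfectly equivariant (identity) encoder both terms vanish
feat = np.arange(24.0).reshape(4, 6)
loss = invariance_loss(feat.mean(0), feat.mean(0)) \
     + 0.1 * equivariance_loss(feat, np.rot90(feat, 1, axes=(0, 1)))
```

Applying the two objectives at different depths is what lets the final embedding stay invariant (good for linear probing) while earlier layers retain geometric information.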

Joohyung Lee, Changhun Kim, Hyunsu Kim, Kwanhyung Lee, Juho Lee · 2026-03-10 · cs.LG

On the Generalization Capacities of MLLMs for Spatial Intelligence

This paper argues that RGB-only Multimodal Large Language Models fail to generalize across different cameras due to entangled perspective and object properties, and proposes a Camera-Aware MLLM framework that integrates camera intrinsics, augmented data, and 3D geometric priors to achieve robust, generalizable spatial intelligence.
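Camera conditioning of this kind typically means giving the model access to the intrinsics matrix K so pixel coordinates can be mapped to camera-frame rays. A standard back-projection sketch (intrinsic values are illustrative, not from the paper):

```python
import numpy as np

# back-project a pixel to a normalized camera ray using intrinsics K,
# the kind of geometric conditioning a camera-aware model can consume
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])  # fx, fy, cx, cy (illustrative)

def pixel_to_ray(u, v, K):
    """Map pixel (u, v) to a ray direction in the camera frame."""
    return np.linalg.inv(K) @ np.array([u, v, 1.0])

ray = pixel_to_ray(320.0, 240.0, K)  # principal point maps to the optical axis
```

Without K, the same pixel displacement corresponds to different 3D angles on different cameras, which is exactly the entanglement the paper attributes the generalization failure to.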

Gongjie Zhang, Wenhao Li, Quanhao Qian, Jiuniu Wang, Deli Zhao, Shijian Lu, Ran Xu · 2026-03-10 · cs.LG

Scaling Agentic Capabilities, Not Context: Efficient Reinforcement Finetuning for Large Toolspaces

The paper introduces ATLAS, a reinforcement finetuning framework that enables small language models to effectively navigate large toolspaces by learning adaptive context acquisition and execution strategies, thereby achieving frontier-level performance with significantly reduced parameter and context budgets.

Karan Gupta, Pranav Vajreshwari, Yash Pandya, Raghav Magazine, Akshay Nambi, Ahmed Awadallah · 2026-03-10 · cs.LG

From Statistical Fidelity to Clinical Consistency: Scalable Generation and Auditing of Synthetic Patient Trajectories

This paper presents an integrated pipeline combining knowledge-grounded generative modeling with automated LLM-based auditing to produce clinically consistent, privacy-preserving synthetic patient trajectories that overcome the limitations of existing methods by eliminating clinical inconsistencies while maintaining high statistical fidelity and downstream utility.

Guanglin Zhou, Armin Catic, Motahare Shabestari, Matthew Young, Chaiquan Li, Katrina Poppe, Sebastiano Barbieri · 2026-03-10 · cs.LG

Regression Models Meet Foundation Models: A Hybrid-AI Approach to Practical Electricity Price Forecasting

This paper introduces FutureBoosting, a hybrid-AI framework that enhances electricity price forecasting by integrating forecasted features from a frozen time series foundation model into a regression model, thereby achieving significant accuracy improvements over state-of-the-art baselines while maintaining interpretability.
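The hybrid pattern, feeding forecasted exogenous features into an ordinary regression model alongside lagged targets, can be sketched in numpy. A synthetic signal stands in for the frozen foundation model's forecast, since the paper's model and feature set are not specified here:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
price = np.sin(np.arange(T) / 10) + 0.1 * rng.normal(size=T)

# stand-in for a frozen foundation model's forecast of an exogenous
# driver (e.g. load); hypothetical, not the paper's actual feature
load_forecast = np.cos(np.arange(T) / 10)

# interpretable linear regression over lagged price + forecasted feature
X = np.column_stack([price[:-1], load_forecast[1:]])
y = price[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
```

Because the regression layer stays linear, each coefficient remains directly inspectable, which is the interpretability the summary refers to.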

Yunzhong Qiu, Binzhu Li, Hao Wei, Shenglin Weng, Chen Wang, Zhongyi Pei, Mingsheng Long, Jianmin Wang · 2026-03-10 · cs.LG

Safe Transformer: An Explicit Safety Bit For Interpretable And Controllable Alignment

The paper proposes Safe Transformer, a modular approach that inserts an explicit, interpretable safety bit into pre-trained language models to achieve controllable alignment and near-zero attack success rates through lightweight fine-tuning, addressing the opacity of traditional implicit safety methods.

Jingyuan Feng, Andrew Gambardella, Gouki Minegishi, Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo · 2026-03-10 · cs.LG

Don't Freeze, Don't Crash: Extending the Safe Operating Range of Neural Navigation in Dense Crowds

This paper proposes a reinforcement learning approach for dense crowd navigation that achieves zero-shot generalization to higher crowd densities by combining density-invariant observation encoding, density-randomized training, and physics-informed proxemic reward shaping, thereby significantly outperforming existing learning-based and analytical methods in success rate and collision avoidance without freezing.
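A proxemic shaping term typically penalizes intrusion into a pedestrian's personal space as a function of distance. This sketch uses Hall's classic intimate/personal zone radii as illustrative parameters; the paper's actual reward is not reproduced here:

```python
import numpy as np

def proxemic_penalty(robot_xy, people_xy, intimate=0.45, personal=1.2, w=0.5):
    """Proxemic reward-shaping term (illustrative zone radii and weight):
    zero beyond the personal zone, growing linearly as the robot intrudes,
    and maximal inside the intimate zone."""
    d = np.linalg.norm(np.asarray(people_xy) - np.asarray(robot_xy), axis=1).min()
    if d >= personal:
        return 0.0
    if d <= intimate:
        return -w
    return -w * (personal - d) / (personal - intimate)
```

A smooth, dense penalty of this shape gives the policy a gradient toward keeping socially acceptable distances well before an actual collision occurs, which helps avoid both freezing and crashing.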

Jiefu Zhang, Yang Xu, Vaneet Aggarwal · 2026-03-10 · cs.LG

Rank-Factorized Implicit Neural Bias: Scaling Super-Resolution Transformer with FlashAttention

This paper proposes Rank-factorized Implicit Neural Bias (RIB), a novel positional bias mechanism that enables the use of hardware-efficient FlashAttention in Super-Resolution Transformers, allowing for significantly larger window sizes and training patches that achieve state-of-the-art performance (35.63 dB PSNR) while reducing training and inference times by 2.1× and 2.9×, respectively.
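The parameter-count idea behind a rank-factorized positional bias can be shown in a few lines: instead of a dense N×N bias table, store two rank-r factors. How RIB actually parameterizes the factors and interfaces with FlashAttention's fused kernel is not specified here; this is only the factorization concept:

```python
import numpy as np

rng = np.random.default_rng(0)
N, r = 64, 4  # window tokens and bias rank (illustrative sizes)

# a dense relative-position bias costs N*N entries; a rank-r
# factorization stores only 2*N*r parameters
U, V = rng.normal(size=(N, r)), rng.normal(size=(N, r))
bias = U @ V.T  # recovered N x N positional bias added to attention logits

scores = rng.normal(size=(N, N)) + bias
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)   # standard softmax attention weights
```

Shrinking the bias from O(N²) to O(Nr) parameters is what makes much larger attention windows affordable.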

Dongheon Lee, Seokju Yun, Jaegyun Im, Youngmin Ro · 2026-03-10 · cs.LG