Approximate Nearest Neighbor Search for Modern AI: A Projection-Augmented Graph Approach

This paper introduces Projection-Augmented Graph (PAG), a novel approximate nearest neighbor search framework that integrates projection techniques into graph indexing to simultaneously achieve high query efficiency, fast indexing, low memory usage, and robust scalability across modern AI workloads; PAG outperforms existing methods such as HNSW by up to 5x in query speed while supporting online insertions.

Kejing Lu, Zhenpeng Pan, Jianbin Qin, Yoshiharu Ishikawa, Chuan Xiao · 2026-03-10 · cs.LG
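PAG's projection-augmented index is specific to the paper, but the graph-search backbone it builds on is standard. As a point of reference, here is a minimal best-first search over a proximity graph, the same greedy routine that underlies HNSW-style indexes (the toy dataset and hand-built graph are illustrative, not from the paper):

```python
import heapq
import math

def greedy_graph_search(query, vectors, neighbors, entry, k=3):
    """Best-first beam search over a proximity graph.

    vectors: id -> point, neighbors: id -> list of adjacent ids.
    Returns the k closest visited points (approximate, not exact).
    """
    visited = {entry}
    # Min-heap of candidates to expand; max-heap (negated) of best found.
    candidates = [(math.dist(query, vectors[entry]), entry)]
    best = [(-candidates[0][0], entry)]
    while candidates:
        d, node = heapq.heappop(candidates)
        if d > -best[0][0] and len(best) >= k:
            break  # nearest open candidate is worse than our worst result
        for nb in neighbors[node]:
            if nb in visited:
                continue
            visited.add(nb)
            dn = math.dist(query, vectors[nb])
            if len(best) < k or dn < -best[0][0]:
                heapq.heappush(candidates, (dn, nb))
                heapq.heappush(best, (-dn, nb))
                if len(best) > k:
                    heapq.heappop(best)  # evict current worst result
    return sorted((-d, n) for d, n in best)

# Toy 2-D dataset and a hand-built neighbor graph.
vecs = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (5, 5), 4: (4, 5)}
graph = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 4], 4: [2, 3]}
print(greedy_graph_search((4.5, 5.0), vecs, graph, entry=0, k=2))
# → [(0.5, 3), (0.5, 4)]
```

The search hops from the entry point toward the query along graph edges, which is why edge quality (what PAG's projections reportedly improve) dominates both recall and speed.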

Unmixing microinfrared spectroscopic images of cross-sections of historical oil paintings

This paper proposes an unsupervised CNN autoencoder with a novel weighted spectral angle distance loss to enable blind, automated unmixing of complex ATR-μFTIR hyperspectral images from historical oil painting cross-sections, significantly improving the interpretability and scalability of material analysis compared to traditional manual methods.

Shivam Pande, Nicolas Nadisic, Francisco Mederos-Henry, Aleksandra Pizurica · 2026-03-10 · cs.LG
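The paper's weighted spectral angle distance is its own contribution, but it builds on the classical spectral angle measure used throughout hyperspectral unmixing: the angle between two spectra treated as vectors, which ignores overall intensity and compares only spectral shape. A minimal sketch of the unweighted version (the example spectra are illustrative):

```python
import math

def spectral_angle(x, y):
    """Spectral angle distance (radians) between two spectra.

    Scale-invariant: comparing a spectrum with any positive multiple
    of itself gives an angle of (numerically) zero.
    """
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # Clamp for floating-point safety before arccos.
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))

s = [0.2, 0.5, 0.9]
print(spectral_angle(s, [2 * v for v in s]))  # ≈ 0.0 (scale-invariant)
```

Scale invariance is what makes the measure attractive for paint cross-sections, where illumination and layer thickness change intensity but not the material's spectral signature; a weighted variant would additionally emphasize diagnostically important bands.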

High-Resolution Image Reconstruction with Unsupervised Learning and Noisy Data Applied to Ion-Beam Dynamics for Particle Accelerators

This paper presents an unsupervised learning framework utilizing convolutional filtering and neural networks with optimized early-stopping to achieve robust, high-fidelity reconstruction of ion-beam emittance images from noisy data, enabling unprecedented halo resolution beyond seven standard deviations for particle accelerator diagnostics.

Francis Osswald (IPHC), Mohammed Chahbaoui (UNISTRA), Xinyi Liang (SU) · 2026-03-10 · cs.LG
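The paper's optimized early-stopping criterion is not detailed here; a generic patience-based loop illustrates the underlying idea that, with noisy data and no clean ground truth, training must halt before the network starts fitting the noise (the simulated loss curve below is illustrative, not the paper's):

```python
def train_with_early_stopping(step, val_loss, max_steps=1000, patience=5):
    """Run `step()` until `val_loss()` stops improving for `patience` steps.

    Returns (best_loss, step_at_best). A generic patience criterion;
    unsupervised denoising setups tune this carefully, since training
    too long reconstructs the noise itself.
    """
    best, best_at, bad = float("inf"), 0, 0
    for t in range(1, max_steps + 1):
        step()
        loss = val_loss()
        if loss < best:
            best, best_at, bad = loss, t, 0  # new optimum: reset patience
        else:
            bad += 1
            if bad >= patience:
                break  # no improvement for `patience` steps: stop
    return best, best_at

# Simulated loss: improves until step 10, then degrades (fitting noise).
curve = iter([1.0 / t if t <= 10 else 0.1 + 0.01 * (t - 10)
              for t in range(1, 101)])
print(train_with_early_stopping(step=lambda: None,
                                val_loss=lambda: next(curve)))
# → (0.1, 10)
```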

Soft Equivariance Regularization for Invariant Self-Supervised Learning

This paper proposes Soft Equivariance Regularization (SER), a lightweight, plug-in method that decouples invariance and equivariance objectives by enforcing equivariance on intermediate spatial features while preserving invariance on the final embedding, thereby improving both linear evaluation accuracy and robustness to geometric perturbations without requiring auxiliary heads or transformation labels.

Joohyung Lee, Changhun Kim, Hyunsu Kim, Kwanhyung Lee, Juho Lee · 2026-03-10 · cs.LG
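SER's spatial-feature regularizer is specific to the paper, but the underlying notion of a soft equivariance penalty can be written generically: penalize the gap between transforming-then-encoding and encoding-then-transforming, rather than hard-wiring equivariance into the architecture. In this sketch the maps `scale`, `cumsum`, and `flip` are illustrative stand-ins, not the paper's encoder or augmentations:

```python
def equivariance_residual(f, transform_in, transform_out, x):
    """Squared residual || f(T_in(x)) - T_out(f(x)) ||^2.

    Zero when f is equivariant to the transformation on input x;
    used as a soft penalty added to the training loss.
    """
    a = f(transform_in(x))
    b = transform_out(f(x))
    return sum((p - q) ** 2 for p, q in zip(a, b))

flip = lambda v: list(reversed(v))
scale = lambda v: [2 * x_i for x_i in v]
cumsum = lambda v: [sum(v[: i + 1]) for i in range(len(v))]

# Elementwise scaling commutes with a flip, so the residual is zero...
print(equivariance_residual(scale, flip, flip, [1.0, 2.0, 3.0]))  # → 0.0
# ...but a cumulative sum does not, so the penalty is positive.
print(equivariance_residual(cumsum, flip, flip, [1.0, 2.0, 3.0]))  # → 38.0
```

Because the penalty is a scalar loss term rather than an architectural constraint, it can be attached to intermediate features while the final embedding remains trained for invariance, which matches the decoupling the summary describes.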

On the Generalization Capacities of MLLMs for Spatial Intelligence

This paper argues that RGB-only Multimodal Large Language Models fail to generalize across different cameras due to entangled perspective and object properties, and proposes a Camera-Aware MLLM framework that integrates camera intrinsics, augmented data, and 3D geometric priors to achieve robust, generalizable spatial intelligence.

Gongjie Zhang, Wenhao Li, Quanhao Qian, Jiuniu Wang, Deli Zhao, Shijian Lu, Ran Xu · 2026-03-10 · cs.LG
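The Camera-Aware MLLM itself is not detailed here, but the role of camera intrinsics it leverages is standard pinhole geometry: intrinsics let a model convert raw pixel coordinates into metric viewing rays, disentangling perspective from object properties across different cameras. A minimal sketch of the back-projection K⁻¹·[u, v, 1] (the focal-length and principal-point values are illustrative):

```python
def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project a pixel through pinhole intrinsics.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Returns the (un-normalized) viewing ray direction in camera
    coordinates, i.e. K^{-1} @ [u, v, 1].
    """
    return ((u - cx) / fx, (v - cy) / fy, 1.0)

# The principal point maps to the optical axis regardless of focal length.
print(pixel_to_ray(320, 240, fx=500, fy=500, cx=320, cy=240))
# → (0.0, 0.0, 1.0)
```

Two cameras with different intrinsics render the same ray to different pixels; feeding the intrinsics (or the rays) to the model is what removes that camera-specific entanglement.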

Scaling Agentic Capabilities, Not Context: Efficient Reinforcement Finetuning for Large Toolspaces

The paper introduces ATLAS, a reinforcement finetuning framework that enables small language models to effectively navigate large toolspaces by learning adaptive context acquisition and execution strategies, thereby achieving frontier-level performance with significantly reduced parameter and context budgets.

Karan Gupta, Pranav Vajreshwari, Yash Pandya, Raghav Magazine, Akshay Nambi, Ahmed Awadallah · 2026-03-10 · cs.LG

From Statistical Fidelity to Clinical Consistency: Scalable Generation and Auditing of Synthetic Patient Trajectories

This paper presents an integrated pipeline combining knowledge-grounded generative modeling with automated LLM-based auditing to produce clinically consistent, privacy-preserving synthetic patient trajectories that overcome the limitations of existing methods by eliminating clinical inconsistencies while maintaining high statistical fidelity and downstream utility.

Guanglin Zhou, Armin Catic, Motahare Shabestari, Matthew Young, Chaiquan Li, Katrina Poppe, Sebastiano Barbieri · 2026-03-10 · cs.LG

Regression Models Meet Foundation Models: A Hybrid-AI Approach to Practical Electricity Price Forecasting

This paper introduces FutureBoosting, a hybrid-AI framework that enhances electricity price forecasting by integrating forecasted features from a frozen time series foundation model into a regression model, thereby achieving significant accuracy improvements over state-of-the-art baselines while maintaining interpretability.

Yunzhong Qiu, Binzhu Li, Hao Wei, Shenglin Weng, Chen Wang, Zhongyi Pei, Mingsheng Long, Jianmin Wang · 2026-03-10 · cs.LG

Safe Transformer: An Explicit Safety Bit For Interpretable And Controllable Alignment

The paper proposes Safe Transformer, a modular approach that inserts an explicit, interpretable safety bit into pre-trained language models to achieve controllable alignment and near-zero attack success rates through lightweight fine-tuning, addressing the opacity of traditional implicit safety methods.

Jingyuan Feng, Andrew Gambardella, Gouki Minegishi, Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo · 2026-03-10 · cs.LG