Clinical-Injection Transformer with Domain-Adapted MAE for Lupus Nephritis Prognosis Prediction

This paper proposes a novel multimodal framework, the Clinical-Injection Transformer with a domain-adapted MAE, which integrates routine PAS-stained histopathology images and clinical data to achieve high-accuracy three-class prognosis prediction for pediatric lupus nephritis, addressing previous limitations in data availability and modality integration.

Yuewen Huang, Zhitao Ye, Guangnan Feng, Fudan Zheng, Xia Gao, Yutong Lu · 2026-03-09 · cs.LG

JAWS: Enhancing Long-term Rollout of Neural Operators via Spatially-Adaptive Jacobian Regularization

The paper introduces JAWS, a probabilistic regularization strategy that dynamically modulates Jacobian constraints based on local physical complexity to resolve the contraction-dissipation dilemma, thereby enabling memory-efficient, short-horizon optimization to achieve superior long-term stability and accuracy in neural operator rollouts for dynamical systems.

Fengxiang Nie, Yasuhiro Suzuki · 2026-03-09 · cs.AI

Attention Meets Reachability: Structural Equivalence and Efficiency in Grammar-Constrained LLM Decoding

This paper establishes that while language-equivalent context-free grammars yield identical token masks in grammar-constrained decoding, their structural differences significantly impact computational efficiency by introducing variable state-space blowups and ambiguity costs, leading to fundamental lower bounds on decoding work and new distortion metrics for masked sampling.

Faruk Alpay, Bilge Senturk · 2026-03-09 · cs.LG
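
The core object in the entry above, the token mask, is determined by the language alone: a token is allowed after a prefix iff some completion stays in the language. A toy sketch for the Dyck language of balanced parentheses (the `max_depth` cap and the two-token vocabulary are illustrative assumptions, not the paper's setup):

```python
def dyck_mask(prefix, max_depth=8):
    """Token mask for grammar-constrained decoding over balanced
    parentheses: a token is allowed iff the extended prefix can still
    be completed to a word of the language (depth never negative,
    bounded above by max_depth)."""
    depth = 0
    for ch in prefix:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return {"(": False, ")": False}   # dead prefix, nothing completes it
    return {"(": depth < max_depth, ")": depth > 0}

assert dyck_mask("(()") == {"(": True, ")": True}
assert dyck_mask("") == {"(": True, ")": False}
```

Any grammar generating this same language yields these same masks, which mirrors the paper's equivalence claim; what differs between grammars is how much state must be tracked to compute them.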

An intuitive rearranging of the Yates covariance decomposition for probabilistic verification of forecasts with the Brier score

This paper proposes a simple algebraic rearrangement of the Yates covariance decomposition for the Brier score that decomposes forecast error into three non-negative terms—variance mismatch, correlation deficit, and calibration-in-the-large—thereby making the conditions for optimal probabilistic forecasting transparent.

Bruno Hebling Vieira (Methods of Plasticity Research, Department of Psychology, University of Zurich, Zurich, Switzerland) · 2026-03-09 · cs.LG
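
Read literally, the three non-negative terms in the summary correspond to the exact algebraic identity mean((f−o)²) = (σ_f − σ_o)² + 2σ_f σ_o(1 − ρ) + (f̄ − ō)² under population (ddof=0) statistics. A minimal numerical check, assuming that reading (the paper's own notation is not reproduced in the summary):

```python
import numpy as np

def brier_decomposition(f, o):
    """Split the Brier score mean((f - o)**2) into three non-negative
    terms: variance mismatch, correlation deficit, and
    calibration-in-the-large. Uses population statistics (ddof=0)."""
    f, o = np.asarray(f, float), np.asarray(o, float)
    sf, so = f.std(), o.std()
    rho = np.corrcoef(f, o)[0, 1] if sf > 0 and so > 0 else 0.0
    variance_mismatch = (sf - so) ** 2
    correlation_deficit = 2.0 * sf * so * (1.0 - rho)
    calibration = (f.mean() - o.mean()) ** 2
    return variance_mismatch, correlation_deficit, calibration

rng = np.random.default_rng(0)
o = (rng.random(1000) < 0.3).astype(float)                  # binary outcomes
f = np.clip(0.3 + 0.2 * (o - 0.3) + 0.1 * rng.standard_normal(1000), 0, 1)
bs = np.mean((f - o) ** 2)
terms = brier_decomposition(f, o)
assert np.isclose(bs, sum(terms))                           # exact identity
```

The transparency claim follows directly: the score is minimized term by term when the forecast variance matches the outcome variance, the forecast-outcome correlation is perfect, and the mean forecast equals the base rate.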

IntSeqBERT: Learning Arithmetic Structure in OEIS via Modulo-Spectrum Embeddings

The paper introduces IntSeqBERT, a dual-stream Transformer model that combines continuous log-scale magnitude embeddings with modulo-spectrum embeddings to effectively learn the arithmetic structure of OEIS integer sequences, significantly outperforming standard tokenized baselines in both sequence modeling accuracy and next-term prediction via a probabilistic Chinese Remainder Theorem solver.

Kazuhisa Nakasho · 2026-03-09 · cs.LG
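
The two input channels named in the summary are easy to sketch, though this function and its moduli are illustrative assumptions rather than IntSeqBERT's actual featurizer: a signed log-scale magnitude plus residues modulo small coprime moduli. Residues modulo coprime moduli jointly determine the value modulo their product (here 2·3·5·7·11 = 2310), which is presumably the lever the probabilistic Chinese Remainder Theorem solver pulls for next-term prediction.

```python
import math

def modulo_spectrum(n, moduli=(2, 3, 5, 7, 11)):
    """Hypothetical featurizer in the spirit of the summary: one
    continuous signed log-magnitude channel plus residue channels
    n mod m for small coprime moduli."""
    magnitude = math.copysign(math.log1p(abs(n)), n)
    residues = [n % m for m in moduli]
    return [magnitude] + residues

assert modulo_spectrum(12)[1:] == [0, 0, 2, 5, 1]   # 12 mod 2,3,5,7,11
```

The appeal over plain digit tokenization is that divisibility structure, which dominates many OEIS sequences, becomes linearly readable from the residue channels.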

Autocorrelation effects in a stochastic-process model for decision making via time series

This study employs a stochastic-process model to demonstrate that the optimal autocorrelation of time-series signals for solving multi-armed bandit problems depends on the reward environment, with negative autocorrelation being advantageous in reward-rich settings and positive autocorrelation in reward-poor ones, while performance remains independent of autocorrelation when the sum of winning probabilities equals one.

Tomoki Yamagami, Mikio Hasegawa, Takatomo Mihana, Ryoichi Horisaki, Atsushi Uchida · 2026-03-09 · physics.optics
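
The summary does not specify the stochastic process, but an AR(1) signal is the standard minimal stand-in for a time series with tunable lag-1 autocorrelation: x_t = φ·x_{t−1} + noise has lag-1 autocorrelation ≈ φ, so φ < 0 gives the alternating (negatively correlated) signals and φ > 0 the persistent ones contrasted in the entry. A sketch (decision rules and reward environments are omitted; only the signal generator is shown):

```python
import random

def ar1_series(phi, n, seed=0):
    """AR(1) process x_t = phi * x_{t-1} + N(0, 1) noise.
    Lag-1 autocorrelation is approximately phi."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a sequence."""
    m = sum(xs) / len(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((a - m) ** 2 for a in xs)
    return num / den

assert lag1_autocorr(ar1_series(0.8, 20000)) > 0.7
assert lag1_autocorr(ar1_series(-0.8, 20000)) < -0.7
```

A bandit policy driven by such a signal (e.g. picking arm 0 whenever x_t > 0) then inherits the alternation or persistence of the signal, which is what the study varies against the reward environment.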

Towards Efficient and Stable Ocean State Forecasting: A Continuous-Time Koopman Approach

This paper demonstrates that the Continuous-Time Koopman Autoencoder (CT-KAE) serves as a lightweight, stable, and efficient surrogate model for long-horizon ocean state forecasting, outperforming autoregressive Transformer baselines by maintaining bounded errors and consistent large-scale statistics over 2083-day rollouts while enabling resolution-invariant predictions.

Rares Grozavescu, Pengyu Zhang, Mark Girolami, Etienne Meunier · 2026-03-09 · physics.app-ph
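
The bounded-error property claimed for long rollouts has a simple linear-algebra core: a continuous-time Koopman latent map evolves as z(t) = exp(At)·z0, so if every eigenvalue of A has non-positive real part the trajectory stays bounded for any horizon. The sketch below shows only that mechanism (the CT-KAE encoder/decoder and learned operator are not reproduced; the 2×2 operator is a toy):

```python
import numpy as np

def koopman_rollout(A, z0, ts):
    """Continuous-time linear latent rollout z(t) = exp(A t) z0,
    computed via eigendecomposition (assumes A is diagonalizable).
    With Re(lambda) <= 0 for all eigenvalues, ||z(t)|| never grows,
    unlike compounding autoregressive error."""
    lam, V = np.linalg.eig(A)
    c = np.linalg.solve(V, z0.astype(complex))
    return np.real(np.stack([V @ (np.exp(lam * t) * c) for t in ts]))

# toy latent operator: slow decay plus oscillation -> stable long rollout
A = np.array([[-0.01, -1.0], [1.0, -0.01]])
zs = koopman_rollout(A, np.array([1.0, 0.0]), np.linspace(0.0, 2083.0, 64))
assert np.all(np.linalg.norm(zs, axis=1) <= 1.0 + 1e-9)
```

Because time enters only through exp(At), the same operator can be queried at any t, which is also the source of the resolution-invariance mentioned in the summary.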

When AI Levels the Playing Field: Skill Homogenization, Asset Concentration, and Two Regimes of Inequality

This paper presents a task-based model demonstrating that while generative AI homogenizes individual skills, it can simultaneously increase aggregate inequality by shifting economic value toward concentrated complementary assets, with the net outcome determined by the technology's structure and labor market institutions rather than a single universal verdict.

Xupeng Chen, Shuchen Meng · 2026-03-09 · cs.AI

Learning Optimal Distributionally Robust Individualized Treatment Rules Integrating Multi-Source Data

This paper proposes a prior information-based distributionally robust individualized treatment rule (PDRO-ITR) that integrates multi-source data to address posterior shift by maximizing the worst-case policy value over a covariate-dependent uncertainty set, thereby ensuring robust performance and achieving superior results in simulations and real-world applications.

Wenhai Cui, Wen Su, Xingqiu Zhao · 2026-03-09 · cs.LG
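
The max-min principle behind "maximizing the worst-case policy value over an uncertainty set" can be shown in miniature. This is not the paper's estimator: the uncertainty set here is just a finite list of candidate source distributions, and the policy values are assumed precomputed.

```python
def dro_rule_choice(values_by_rule):
    """Distributionally robust selection: for each candidate treatment
    rule, take its worst-case value across the candidate distributions,
    then pick the rule with the best worst case."""
    return max(values_by_rule, key=lambda rule: min(values_by_rule[rule]))

# rule_a shines on source 1 but collapses on source 2; rule_b is steadier
vals = {"rule_a": [1.0, 0.2], "rule_b": [0.6, 0.5]}
assert dro_rule_choice(vals) == "rule_b"
```

The covariate-dependent uncertainty set in the paper refines this by letting the set of plausible target distributions, and hence the inner minimum, vary with patient covariates.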

Machine Learning for analysis of Multiple Sclerosis cross-tissue bulk and single-cell transcriptomics data

This study presents an end-to-end machine learning pipeline utilizing XGBoost and SHAP explainability to integrate bulk and single-cell transcriptomic data from multiple sclerosis patients, successfully identifying high-performance biomarkers and novel mechanistic pathways involving immune activation, non-canonical checkpoints, and Epstein-Barr virus-related processes.

Francesco Massafra, Samuele Punzo, Silvia Giulia Galfré, Alessandro Maglione, Simone Pernice, Stefano Forti, Simona Rolla, Marco Beccuti, Marinella Clerico, Corrado Priami, Alina Sîrbu · 2026-03-09 · cs.LG

Why Depth Matters in Parallelizable Sequence Models: A Lie Algebraic View

This paper utilizes a Lie-algebraic control perspective to demonstrate that increasing the depth of parallelizable sequence models exponentially reduces approximation error by expanding their expressivity through a tower of Lie algebra extensions, a finding validated by experiments on symbolic and continuous state-tracking tasks.

Gyuryang Heo, Timothy Ngotiaoco, Kazuki Irie, Samuel J. Gershman, Bernardo Sabatini · 2026-03-09 · cs.LG

Koopman Regularized Deep Speech Disentanglement for Speaker Verification

This paper introduces the Deep Koopman Speech Disentanglement Autoencoder (DKSD-AE), a scalable and efficient architecture that leverages Koopman operators and instance normalization to effectively disentangle speaker identity from linguistic content for robust speaker verification without relying on textual supervision or large pretrained models.

Nikos Chazaridis, Mohammad Belal, Rafael Mestre, Timothy J. Norman, Christine Evers · 2026-03-09 · cs.LG

Spatiotemporal Heterogeneity of AI-Driven Traffic Flow Patterns and Land Use Interaction: A GeoAI-Based Analysis of Multimodal Urban Mobility

This study proposes and validates a GeoAI hybrid framework integrating MGWR, Random Forest, and ST-GCN to effectively model the spatiotemporal heterogeneity of multimodal traffic flows and their interaction with land use, demonstrating superior predictive accuracy and revealing distinct urban traffic typologies that underscore the critical role of local morphological context in mobility planning.

Olaf Yunus Laitinen Imanov · 2026-03-09 · cs.AI