Data Collaboration Analysis with Orthonormal Basis Selection and Alignment

This paper introduces Orthonormal Data Collaboration (ODC), a method that enforces orthonormal bases to transform the alignment challenge into a closed-form Orthogonal Procrustes problem, thereby achieving orthogonal concordance, significantly reducing computational complexity, and improving accuracy without compromising privacy or communication efficiency.

Keiyu Nosaka, Yamato Suetake, Yuichi Takano + 1 more · 2026-03-06 · math
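The closed-form alignment step that ODC reduces to is the classical orthogonal Procrustes solution: given two representations A and B, the orthogonal map minimizing ||AQ − B||_F comes from a single SVD. A minimal sketch with illustrative random data (not the paper's collaborative-representation pipeline):

```python
import numpy as np

def procrustes_align(A, B):
    """Closed-form solution of min_Q ||A Q - B||_F over orthogonal Q.
    With the SVD A^T B = U S V^T, the minimizer is Q = U V^T."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# toy check: recover a hidden rotation between two parties' bases
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))           # one party's intermediate representation
Q_true, _ = np.linalg.qr(rng.standard_normal((8, 8)))
B = A @ Q_true                             # the other party's, rotated
Q = procrustes_align(A, B)
print(np.allclose(A @ Q, B))               # True: alignment recovered exactly
```

Because A has full column rank here, the minimizer is unique and equals the hidden rotation, which is what makes the alignment exact in this noiseless toy case.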

Localized Distributional Robustness in Submodular Multi-Task Subset Selection

This paper proposes a novel multi-task subset selection framework that achieves localized distributional robustness by introducing a relative-entropy regularization term, which is proven equivalent to maximizing a monotone composition of submodular functions and can be efficiently solved via greedy algorithms, as validated by experiments on satellite sensor selection and image summarization.

Ege C. Kaya, Abolfazl Hashemi · 2026-03-06 · math
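The greedy routine that the paper's equivalence result makes applicable is the standard one for monotone submodular maximization; a minimal sketch on a coverage function with hypothetical toy sets (not the satellite-selection or summarization objective):

```python
def greedy_submodular(sets, k):
    """Greedy maximization of the monotone submodular coverage function
    f(S) = |union of chosen sets|; the classic (1 - 1/e) guarantee applies."""
    chosen, covered = [], set()
    for _ in range(k):
        gains = {i: len(s - covered) for i, s in enumerate(sets) if i not in chosen}
        best = max(gains, key=gains.get)   # element with largest marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_submodular(sets, k=2)
print(chosen, covered)                     # [2, 0] {1, 2, 3, 4, 5, 6, 7}
```

Each iteration picks the set with the largest marginal coverage gain, which is exactly the structure the greedy guarantee for monotone submodular functions relies on.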

Variational inequalities and smooth-fit principle for singular stochastic control problems in Hilbert spaces

This paper establishes that the value function of infinite-dimensional singular stochastic control problems in Hilbert spaces is a $C^{1,\mathrm{Lip}}(H)$-viscosity solution to a variational inequality and satisfies a second-order smooth-fit principle in the controlled direction under specific spectral conditions, by leveraging connections to optimal stopping and techniques from convex analysis and viscosity theory.

Salvatore Federico, Giorgio Ferrari, Frank Riedel + 1 more · 2026-03-06 · math

Lyapunov Characterization for ISS of Impulsive Switched Systems

This paper establishes necessary and sufficient conditions for the input-to-state stability (ISS) of impulsive switched systems with both stable and unstable modes by introducing time-varying ISS-Lyapunov functions under relaxed mode-dependent average dwell and leave time constraints, while also providing methods to construct decreasing Lyapunov functions and guarantee ISS even with unknown switching signals.

Saeed Ahmed, Patrick Bachmann, Stephan Trenn · 2026-03-06 · math

Curse of Dimensionality in Neural Network Optimization

This paper demonstrates that training shallow neural networks with Lipschitz continuous activation functions to approximate smooth target functions suffers from the curse of dimensionality, as the population risk decays at a rate bounded by a power of time that depends inversely on the input dimension, regardless of whether the optimization is analyzed via empirical or population risk or through 2-Wasserstein gradient flow dynamics.

Sanghoon Na, Haizhao Yang · 2026-03-06 · math

Kernel Based Maximum Entropy Inverse Reinforcement Learning for Mean-Field Games

This paper proposes a kernel-based maximum causal entropy inverse reinforcement learning framework for infinite-horizon stationary mean-field games that models unknown rewards in a reproducing kernel Hilbert space to capture nonlinear structures, proves the algorithm's theoretical consistency via Fréchet differentiability, and demonstrates superior policy recovery performance over linear baselines in traffic routing scenarios while extending the approach to finite-horizon non-stationary settings.

Berkay Anahtarci, Can Deha Kariksiz, Naci Saldi · 2026-03-06 · math
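The RKHS reward model at the heart of the method can be pictured with plain kernel ridge regression: by the representer theorem, an unknown nonlinear function is expressed as a kernel combination over data points. The Gaussian kernel, the 1-D data, and the sine target below are illustrative assumptions, not the paper's IRL objective:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rkhs(X, y, lam=1e-6, gamma=1.0):
    # representer theorem: f(.) = sum_i alpha_i k(x_i, .)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

X = np.linspace(0.0, 3.0, 30).reshape(-1, 1)
y = np.sin(2.0 * X[:, 0])                  # nonlinear "reward" to represent
alpha = fit_rkhs(X, y)
y_hat = rbf_kernel(X, X) @ alpha           # RKHS function evaluated at the data
print("max fit error:", float(np.max(np.abs(y_hat - y))))
```

The point of the RKHS parameterization is that a smooth nonlinear target is captured by a finite kernel expansion, which is what lets such frameworks go beyond linear reward baselines.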

Inertial accelerated primal-dual algorithms for non-smooth convex optimization problems with linear equality constraints

This paper proposes an inertial accelerated primal-dual algorithm derived from a time-scaled second-order differential system to solve non-smooth convex optimization problems with linear equality constraints, establishing fast convergence rates for the primal-dual gap, feasibility violation, and objective residual, and validating its performance through numerical experiments.

Huan Zhang, Xiangkai Sun, Shengjie Li + 1 more · 2026-03-06 · math
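A non-accelerated baseline for this problem class (non-smooth objective with linear equality constraints) is the basic first-order primal-dual iteration; the paper's inertial, time-scaled dynamics are what improve on sketches like this one. Basis-pursuit data and step sizes below are illustrative:

```python
import numpy as np

def primal_dual_l1(A, b, iters=3000):
    """Primal-dual iteration for min ||x||_1 s.t. Ax = b:
      x+   = prox_{tau*||.||_1}(x - tau * A^T lam)   (soft-thresholding)
      lam+ = lam + sigma * (A(2x+ - x) - b)          (dual ascent, extrapolated)
    Step sizes chosen so that tau * sigma * ||A||^2 < 1."""
    m, n = A.shape
    x, lam = np.zeros(n), np.zeros(m)
    tau = sigma = 0.9 / np.linalg.norm(A, 2)
    for _ in range(iters):
        z = x - tau * (A.T @ lam)
        x_new = np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)
        lam = lam + sigma * (A @ (2 * x_new - x) - b)
        x = x_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 20))
x_true = np.zeros(20); x_true[[2, 7, 15]] = [1.0, -2.0, 1.5]
b = A @ x_true
x = primal_dual_l1(A, b)
print("feasibility residual:", np.linalg.norm(A @ x - b))
```

The primal-dual gap, feasibility violation, and objective residual tracked in the paper are exactly the quantities one would monitor for an iteration of this shape.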

A Proximal Stochastic Gradient Method with Adaptive Step Size and Variance Reduction for Convex Composite Optimization

This paper proposes a proximal stochastic gradient algorithm with adaptive step size and variance reduction for convex composite optimization, establishing its strong convergence, the convergence of the gradient error, and an $O(\sqrt{1/k})$ convergence rate under Lipschitz continuity, while validating its effectiveness through numerical experiments on logistic and Lasso regression.

Changjie Fang, Hao Yang, Shenglan Chen · 2026-03-06 · math
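The shape of a proximal stochastic gradient step with variance reduction can be sketched on the Lasso case: an SVRG-style anchor gradient recenters each stochastic gradient before the proximal (soft-thresholding) step. A fixed step size stands in for the paper's adaptive rule, and the toy data are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_svrg_lasso(X, y, lam, step=0.05, epochs=30, seed=0):
    """min_w 0.5/n ||Xw - y||^2 + lam * ||w||_1 via variance-reduced
    proximal stochastic gradient (full anchor gradient once per epoch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_ref = w.copy()
        full_grad = X.T @ (X @ w_ref - y) / n        # anchor (full) gradient
        for _ in range(n):
            i = rng.integers(n)
            g = X[i] * (X[i] @ w - y[i])
            g_ref = X[i] * (X[i] @ w_ref - y[i])
            v = g - g_ref + full_grad                # variance-reduced estimate
            w = soft_threshold(w - step * v, step * lam)
    return w

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 5))
w_true = np.array([1.0, 0.0, -0.5, 0.0, 2.0])
y = X @ w_true
w = prox_svrg_lasso(X, y, lam=0.01)
print("estimate:", np.round(w, 2))
```

Recentering with the anchor gradient is what drives the per-step gradient error to zero, which is the property the convergence analysis of such methods rests on.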

An Accelerated Primal Dual Algorithm with Backtracking for Decentralized Constrained Optimization

This paper proposes D-APDB, a distributed accelerated primal-dual algorithm with backtracking that achieves optimal $\mathcal{O}(1/K)$ convergence for decentralized constrained optimization over undirected networks without requiring prior knowledge of Lipschitz constants, making it the first method of its kind to handle private nonlinear constraints efficiently.

Qiushui Xu, Necdet Serhat Aybat, Mert Gürbüzbalaban · 2026-03-06 · math