An Accelerated Primal Dual Algorithm with Backtracking for Decentralized Constrained Optimization

This paper proposes D-APDB, a distributed accelerated primal-dual algorithm with backtracking that achieves the optimal $\mathcal{O}(1/K)$ convergence rate for decentralized constrained optimization over undirected networks without requiring prior knowledge of Lipschitz constants, making it the first method of its kind to efficiently handle private nonlinear constraints.

Qiushui Xu, Necdet Serhat Aybat, Mert Gürbüzbalaban · 2026-03-06 · math
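The "backtracking" in the title refers to adaptively shrinking a trial step size until a local sufficient-decrease test passes, which is what removes the need to know Lipschitz constants in advance. A minimal sketch of that generic ingredient (a plain gradient step with an Armijo-type quadratic-upper-bound test, not the paper's actual D-APDB primal-dual update):

```python
import numpy as np

def backtracking_step(f, grad_f, x, gamma0=1.0, beta=0.5, max_iter=50):
    """Shrink a trial step size gamma until a sufficient-decrease test holds.
    Illustrative only: a gradient step with an Armijo-type condition, showing
    why no Lipschitz constant is needed; not the paper's D-APDB iteration."""
    g = grad_f(x)
    gamma = gamma0
    for _ in range(max_iter):
        x_new = x - gamma * g
        # Accept if f decreases as a gamma-smooth quadratic model predicts.
        if f(x_new) <= f(x) - 0.5 * gamma * np.dot(g, g):
            return x_new, gamma
        gamma *= beta  # step too aggressive: shrink and retry
    return x - gamma * g, gamma
```

On a quadratic like f(x) = ½‖x‖², the test accepts immediately at gamma = 1; on stiffer objectives the loop halves gamma until the local curvature is matched.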

How Does the ReLU Activation Affect the Implicit Bias of Gradient Descent on High-dimensional Neural Network Regression?

This paper demonstrates that for high-dimensional random data, gradient descent on shallow ReLU networks exhibits an implicit bias that approximates the minimum $L_2$-norm solution with high probability, bridging the gap between worst-case non-existence and exact orthogonality results through a novel primal-dual analysis.

Kuo-Wei Lai, Guanghui Wang, Molei Tao + 1 more · 2026-03-06 · math
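The minimum $L_2$-norm solution the summary refers to has a concrete linear analogue: for an underdetermined system Xw = y, the Moore-Penrose pseudoinverse picks out the interpolant of smallest Euclidean norm. A short illustration of that object (the linear analogue only, not the paper's ReLU-network analysis):

```python
import numpy as np

# For an underdetermined linear system X w = y (more features than samples),
# the pseudoinverse returns the minimum L2-norm interpolant -- the object the
# paper shows ReLU-network gradient descent approximates in high dimension.
rng = np.random.default_rng(0)
n, d = 20, 100                    # n samples, d >> n features
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w_min = np.linalg.pinv(X) @ y     # minimum-norm solution
assert np.allclose(X @ w_min, y)  # it interpolates the data exactly
```

Any other interpolant differs from `w_min` by a null-space component orthogonal to it, so it necessarily has larger norm.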

U-OBCA: Uncertainty-Aware Optimization-Based Collision Avoidance via Wasserstein Distributionally Robust Chance Constraints

This paper introduces U-OBCA, an uncertainty-aware optimization-based collision avoidance framework that utilizes Wasserstein distributionally robust chance constraints to handle polygon-shaped robots and obstacles without geometric simplification, thereby significantly reducing conservatism and improving navigation efficiency in narrow, cluttered environments compared to existing methods.

Zehao Wang, Yuxuan Tang, Han Zhang + 2 more · 2026-03-06 · math

An Efficient Stochastic First-Order Algorithm for Nonconvex-Strongly Concave Minimax Optimization beyond Lipschitz Smoothness

This paper proposes the NSGDA-M algorithm, a stochastic first-order method with momentum and normalization, to solve nonconvex-strongly concave minimax problems under generalized smoothness conditions, achieving an $\mathcal{O}(\epsilon^{-4})$ stochastic gradient complexity for finding an $\epsilon$-stationary point.

Yan Gao, Yongchao Liu · 2026-03-06 · math
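The two ingredients named in the summary, momentum and gradient normalization, combine into a step whose length stays bounded even when gradients are not Lipschitz. A sketch of that single update (the named ingredients only, applied here to a toy minimization; not the paper's full minimax scheme):

```python
import numpy as np

def normalized_momentum_step(g, m, x, beta=0.9, lr=0.05):
    """One momentum-plus-normalization update: average recent (stochastic)
    gradients, then step a fixed distance lr along the averaged direction.
    Sketch of the two NSGDA-M ingredients, not the full minimax method."""
    m = beta * m + (1.0 - beta) * g               # momentum: moving gradient average
    x = x - lr * m / (np.linalg.norm(m) + 1e-12)  # normalized (unit-direction) step
    return x, m

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
x, m = np.array([3.0, 4.0]), np.zeros(2)
for _ in range(300):
    x, m = normalized_momentum_step(2.0 * x, m, x)
```

Because the step length is always exactly `lr`, the iterate moves steadily toward the minimizer regardless of how large the raw gradients are far from it.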

Formal Entropy-Regularized Control of Stochastic Systems

This paper presents a formal control synthesis framework for continuous-state stochastic systems that minimizes a linear combination of system entropy (measured as the KL divergence to the uniform distribution) and cumulative cost. By deriving novel bounds on the entropy difference between continuous distributions and their finite-state abstractions, it enables entropy-aware controllers with rigorous performance guarantees.

Menno van Zutphen, Giannis Delimpaltadakis, Duarte J. Antunes · 2026-03-06 · math

A Second-Order Algorithm Based on Affine Scaling Interior-Point Methods for Nonlinear Minimization with Bound Constraints

This paper extends the homogeneous second-order descent method (HSODM) to bound-constrained nonlinear optimization by proposing the SOBASIP algorithm, which utilizes affine scaling and homogenization techniques to achieve a global iteration complexity of $O(\epsilon^{-3/2})$ for finding $\epsilon$-approximate second-order stationary points and exhibits local superlinear convergence.

Yonggang Pei, Yubing Lin · 2026-03-06 · math