Exponential Convergence of hp-FEM for the Integral Fractional Laplacian on Cuboids

This paper proves and numerically validates that tensor-product hp-finite element approximations for the Dirichlet integral fractional Laplacian on a 3D cuboid with analytic forcing achieve root exponential convergence in the energy norm, specifically bounded by $\exp(-b\sqrt[6]{N})$ in the number of degrees of freedom $N$, by leveraging analytic regularity in weighted Sobolev spaces and geometrically refined meshes.

Björn Bahr, Markus Faustmann, Carlo Marcati, Jens Markus Melenk, Christoph Schwab · Wed, 11 Ma · math
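As a rough illustration of the geometric refinement the result relies on, here is a minimal 1D sketch of a geometrically graded mesh; the grading factor and layer count are assumptions, and the paper's 3D tensor-product construction is far richer.

```python
# Sketch: geometrically graded mesh toward a boundary point at x = 0,
# the 1D building block of tensor-product hp-meshes on a cuboid.
# sigma (grading factor) and L (number of layers) are illustrative.

def geometric_mesh(sigma=0.5, L=10):
    """Return nodes on [0, 1] clustering geometrically toward x = 0."""
    return [0.0] + [sigma ** (L - j) for j in range(L + 1)]

print(geometric_mesh())  # element widths shrink by factor sigma near x = 0
```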

An accelerated direct solver for scalar wave scattering by multiple transmissive inclusions in two dimensions

This paper presents an accelerated direct solver based on boundary integral equations and low-rank proxy approximations that efficiently handles scalar wave scattering by multiple 2D transmissive inclusions, achieving $O(N^{1.5})$ computational complexity and demonstrating superior performance with the PMCHWT formulation over the Burton-Miller approach.

Yasuhiro Matsumoto · Wed, 11 Ma · math
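The low-rank structure that such proxy-based direct solvers exploit can be seen in a few lines: the far-field block of a 2D Helmholtz kernel matrix has rapidly decaying singular values. This is a generic sketch; the geometry and wavenumber below are assumptions, not taken from the paper.

```python
# Sketch: the 2D Helmholtz kernel between two well-separated point
# clusters has rapidly decaying singular values, which is what proxy
# low-rank compression exploits in fast direct BIE solvers.
import numpy as np
from scipy.special import hankel1

k = 5.0                                   # wavenumber (assumed)
rng = np.random.default_rng(0)
src = rng.uniform(0, 1, (200, 2))         # source cluster
trg = rng.uniform(0, 1, (200, 2)) + 5.0   # well-separated target cluster

r = np.linalg.norm(trg[:, None, :] - src[None, :, :], axis=-1)
A = 0.25j * hankel1(0, k * r)             # 2D Helmholtz Green's function

s = np.linalg.svd(A, compute_uv=False)
rank = int(np.sum(s / s[0] > 1e-10))
print(f"numerical rank {rank} of a {A.shape} far-field block")
```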

Four-field mixed finite elements for incompressible nonlinear elasticity

This paper introduces a stable, unconditionally robust four-field mixed finite element method for incompressible nonlinear elasticity that utilizes a discontinuous displacement field to eliminate the need for stabilization in both 2D and 3D, while providing theoretical well-posedness, error estimates, and an efficient postprocessing technique to recover accurate continuous solutions.

Santiago Badia, Wei Li, Ricardo Ruiz-Baier · Wed, 11 Ma · math
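Mixed discretizations of incompressible problems lead to saddle-point linear systems; the following is a generic sketch of that algebraic structure only, a hypothetical two-field toy rather than the paper's four-field discretization.

```python
# Generic saddle-point structure produced by mixed discretizations of
# incompressible problems: K = [[A, B.T], [B, 0]], with A an elliptic
# block and B the discrete divergence constraint. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
nu, npr = 30, 10
M = rng.standard_normal((nu, nu))
A = M @ M.T + nu * np.eye(nu)          # SPD "displacement" block
B = rng.standard_normal((npr, nu))     # full-rank constraint (inf-sup stand-in)

K = np.block([[A, B.T], [B, np.zeros((npr, npr))]])
rhs = np.concatenate([np.ones(nu), np.zeros(npr)])
sol = np.linalg.solve(K, rhs)
u, p = sol[:nu], sol[nu:]
print("divergence residual:", np.abs(B @ u).max())   # constraint satisfied
```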

Overlapping Schwarz Preconditioners for Pose-Graph SLAM in Robotics

This paper investigates additive overlapping Schwarz domain decomposition methods as scalable preconditioners for the large sparse linear systems arising in pose-graph SLAM optimization, demonstrating through numerical experiments and structural analogies to finite element problems that these techniques keep the convergence of the preconditioned conjugate gradient method independent of problem size.

Stephan Köhler, Oliver Rheinbach, Yue Xiang Tee, Sebastian Zug · Wed, 11 Ma · math
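A minimal sketch of a one-level additive overlapping Schwarz preconditioner inside CG, using a 1D Laplacian as a stand-in for the SLAM normal equations; the subdomain size and overlap are illustrative choices.

```python
# One-level additive overlapping Schwarz preconditioner for CG:
# M^{-1} v = sum_i R_i^T A_i^{-1} R_i v over overlapping index blocks.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Overlapping subdomains of size ~60 with overlap ~10 on each side.
blocks = [np.arange(max(0, s - 10), min(n, s + 60))
          for s in range(0, n, 50)]
lus = [spla.splu(A[np.ix_(I, I)].tocsc()) for I in blocks]

def asm(v):
    """Apply the additive Schwarz preconditioner to v."""
    out = np.zeros_like(v)
    for I, lu in zip(blocks, lus):
        out[I] += lu.solve(v[I])
    return out

M = spla.LinearOperator((n, n), matvec=asm)
x, info = spla.cg(A, b, M=M)
print("CG converged:", info == 0)
```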

A finite element continuous data assimilation framework for a Navier–Stokes–Cahn–Hilliard system

This paper develops and analyzes a continuous data assimilation framework using a nudging-based approach and a capped finite element splitting scheme to recover trajectories of a coupled Navier-Stokes-Cahn-Hilliard system with an auxiliary field from coarse spatial observations, demonstrating its effectiveness in synchronizing mismatched initial conditions through numerical experiments.

Tianyu Sun · Wed, 11 Ma · math
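The nudging mechanism can be illustrated on a toy problem: relax an assimilated solve toward coarse cell averages of a reference trajectory. Below is a 1D periodic heat-equation stand-in; the nudging parameter, coarse observation grid, and initial mismatch are assumptions, far simpler than the Navier-Stokes-Cahn-Hilliard system.

```python
# Toy nudging-based continuous data assimilation: u_da starts from the
# wrong initial condition but is relaxed toward coarse observations
# I_h(u_ref) of the reference run, which accelerates synchronization.
import numpy as np

n, dt, mu, steps = 64, 1e-4, 50.0, 20000
x = np.linspace(0, 1, n, endpoint=False)
lap = lambda u: (np.roll(u, 1) - 2 * u + np.roll(u, -1)) * n ** 2
f = np.sin(2 * np.pi * x)                     # forcing keeps dynamics alive

cells = np.minimum((x * 8).astype(int), 7)    # I_h: averages on 8 coarse cells
def I_h(u):
    means = np.bincount(cells, weights=u, minlength=8)
    return (means / np.bincount(cells, minlength=8))[cells]

u_ref = np.sin(2 * np.pi * x)                 # "truth"
u_da = np.zeros(n)                            # mismatched initial condition
for _ in range(steps):
    u_ref = u_ref + dt * (lap(u_ref) + f)
    u_da = u_da + dt * (lap(u_da) + f + mu * (I_h(u_ref) - I_h(u_da)))
print("final mismatch:", np.abs(u_ref - u_da).max())
```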

XConv: Low-memory stochastic backpropagation for convolutional layers

XConv is a drop-in replacement for standard convolutional layers that significantly reduces memory usage during training by storing compressed activations and approximating weight gradients via randomized trace estimation, while maintaining performance comparable to exact gradient methods without imposing architectural constraints or requiring codebase modifications.

Anirudh Thatipelli, Jeffrey Sam, Mathias Louboutin, Ali Siahkoohi, Rongrong Wang, Felix J. Herrmann · Wed, 11 Ma · cs.LG
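The memory saving can be sketched generically: for a layer whose forward pass is a matmul (convolutions reduce to this via im2col), the exact weight gradient $X^T G$ admits an unbiased randomized estimate that only needs a compressed sketch of $X$ at forward time. This is a generic Hutchinson-style illustration, not XConv's exact algorithm; shapes and probe count are assumptions.

```python
# For Y = X @ W the exact weight gradient is X.T @ G. With Gaussian
# probes Z satisfying E[Z @ Z.T] / probes = I, we have
# X.T @ G = E[(X.T @ Z) @ (Z.T @ G)] / probes, so only the small sketch
# X.T @ Z must be kept between forward and backward, not the full X.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out, probes = 4096, 64, 32, 256

X = rng.standard_normal((n, d_in))     # activations (normally stored whole)
G = rng.standard_normal((n, d_out))    # upstream gradient
Z = rng.standard_normal((n, probes))   # random probes

S = X.T @ Z / np.sqrt(probes)          # kept at forward time: d_in x probes
T = Z.T @ G / np.sqrt(probes)          # formed at backward time
approx = S @ T                         # unbiased estimate of X.T @ G
exact = X.T @ G
print("relative error:",
      np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```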

Robust Parameter and State Estimation in Multiscale Neuronal Systems Using Physics-Informed Neural Networks

This paper presents a physics-informed neural network (PINN) framework that robustly reconstructs hidden state variables and estimates biophysical parameters in multiscale neuronal models using only partial, noisy voltage observations, effectively overcoming the convergence failures and sensitivity issues common in traditional numerical methods.

Changliang Wei, Yangyang Wang, Xueyu Zhu · Wed, 11 Ma · cs.LG
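A minimal toy of the same recipe: jointly fit a surrogate network and one unknown parameter of a scalar ODE from noisy observations. The ODE, network size, and optimizer settings below are assumptions standing in for the paper's multiscale neuronal models.

```python
# PINN-style joint state/parameter estimation on du/dt = -k u:
# the network fits the observed trajectory while the physics residual
# pins down the unknown decay rate k.
import torch

torch.manual_seed(0)
k_true = 2.0
t_obs = torch.linspace(0, 2, 40).unsqueeze(1)
u_obs = torch.exp(-k_true * t_obs) + 0.02 * torch.randn_like(t_obs)

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
log_k = torch.zeros((), requires_grad=True)   # learn k > 0 via log-param
opt = torch.optim.Adam(list(net.parameters()) + [log_k], lr=1e-2)

t_col = torch.linspace(0, 2, 100).unsqueeze(1).requires_grad_(True)
for _ in range(3000):
    opt.zero_grad()
    u = net(t_col)
    du = torch.autograd.grad(u, t_col, torch.ones_like(u),
                             create_graph=True)[0]
    loss_pde = ((du + log_k.exp() * u) ** 2).mean()    # physics residual
    loss_data = ((net(t_obs) - u_obs) ** 2).mean()     # data misfit
    (loss_pde + loss_data).backward()
    opt.step()
print("estimated k:", log_k.exp().item())  # should approach 2.0
```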

On the Width Scaling of Neural Optimizers Under Matrix Operator Norms I: Row/Column Normalization and Hyperparameter Transfer

This paper introduces a family of mean-normalized matrix operator norms to derive width-independent smoothness bounds for deep neural networks, leading to the development of MOGA, a row/column-normalized optimizer that enables stable hyperparameter transfer across model widths and outperforms Muon in speed while maintaining competitive performance.

Ruihan Xu, Jiajin Li, Yiping Lu · Wed, 11 Ma · cs.LG
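The summary gives only the flavor of the optimizer, so the following is a hypothetical sketch of a row/column-normalized matrix gradient step, not MOGA's actual update rule from the paper.

```python
# Hypothetical row/column-normalized matrix gradient step: scale each
# row and column of G to unit RMS before the update, so the step size
# is insensitive to layer width. NOT the paper's MOGA algorithm.
import torch

def rc_normalized_update(W, G, lr=1e-2, eps=1e-8):
    G = G / (G.pow(2).mean(dim=1, keepdim=True).sqrt() + eps)  # rows
    G = G / (G.pow(2).mean(dim=0, keepdim=True).sqrt() + eps)  # cols
    return W - lr * G

W = torch.randn(512, 2048)
G = torch.randn_like(W) * 3.0
print(rc_normalized_update(W, G).shape)
```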

Robust Training of Neural Networks at Arbitrary Precision and Sparsity

This paper introduces a unified framework that models quantization and sparsification as additive noise to derive a principled, noise-corrective gradient path, enabling the stable training of neural networks at arbitrary low precisions and sparsity levels without relying on heuristic estimators like the Straight-Through Estimator.

Chengxi Ye, Grace Chu, Yanfeng Liu, Yichi Zhang, Lukasz Lew, Li Zhang, Mark Sandler, Andrew Howard · Wed, 11 Ma · cs.AI
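The additive-noise view itself fits in a few lines: writing $q(w) = w + n(w)$ with the noise term detached makes the forward pass quantized while the gradient flows through $w$ exactly. Note that this baseline is equivalent to the Straight-Through Estimator; the paper's noise-corrective gradient path goes beyond it, and this sketch only records the starting identity.

```python
# Quantization as additive noise: q(w) = w + n(w), with n detached so
# the forward pass sees quantized weights while the backward pass
# differentiates the additive form exactly (d/dw = identity). The
# paper's corrective gradient path refines this STE-equivalent baseline.
import torch

def fake_quantize(w, step=0.05):
    """Uniform quantizer written as w plus detached additive noise."""
    noise = (torch.round(w / step) * step - w).detach()
    return w + noise

w = torch.randn(4, requires_grad=True)
fake_quantize(w).sum().backward()
print(w.grad)   # exact ones: gradient of the additive form
```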