Quantitative convergence of trained single-layer neural networks to Gaussian processes
This paper establishes explicit upper bounds on the quadratic Wasserstein distance between trained single-layer neural networks and their Gaussian process limits. The approximation error decays polynomially with network width, with bounds that account for the influence of architectural parameters and training dynamics.
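The qualitative phenomenon behind this result can be illustrated empirically. The sketch below (an illustrative assumption, not the paper's construction) samples single-layer networks of the form f(x) = n^{-1/2} Σ_i v_i tanh(w_i x) at increasing widths n and estimates a one-dimensional empirical Wasserstein-2 distance between the output distribution at a fixed input and a matched-variance Gaussian; the choices of tanh activation, standard Gaussian initialization, and the quantile-based W2 estimator are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_output(width, x, n_samples=20000):
    # Sample n_samples random single-layer networks at input x:
    # f(x) = (1/sqrt(width)) * sum_i v_i * tanh(w_i * x),
    # with i.i.d. standard Gaussian weights (an assumed initialization).
    w = rng.standard_normal((n_samples, width))
    v = rng.standard_normal((n_samples, width))
    return (v * np.tanh(w * x)).sum(axis=1) / np.sqrt(width)

def w2_to_gaussian(samples):
    # Empirical 1-D Wasserstein-2 distance between the sample distribution
    # and a Gaussian with the same standard deviation, computed by matching
    # sorted quantiles (exact for the 1-D optimal coupling).
    g = np.sort(rng.standard_normal(samples.size) * samples.std())
    s = np.sort(samples)
    return np.sqrt(np.mean((s - g) ** 2))

# At fixed input, the output law approaches a centered Gaussian as width grows;
# the estimated distance shrinks toward the Monte Carlo noise floor.
for width in [4, 64, 1024]:
    print(width, round(w2_to_gaussian(nn_output(width, 1.0)), 4))
```

Note that this only probes the (untrained) initialization; the paper's contribution is to quantify how fast this convergence happens and how it interacts with training, which a finite-sample demo like this cannot resolve below its Monte Carlo noise floor.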