Radial Müntz-Szász Networks: Neural Architectures with Learnable Power Bases for Multidimensional Singularities
This paper introduces Radial Müntz-Szász Networks (RMN), a parameter-efficient neural architecture that uses learnable radial power bases and a log-primitive to accurately model multidimensional singular fields such as $1/r\log r$. On benchmark tasks, RMN achieves substantially lower error than standard MLPs and SIREN while providing closed-form gradients for physics-informed learning.
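To make the central idea concrete, the following is a minimal sketch of a learnable radial power basis with log-augmented terms. The class name, the choice of exponents, the singularity center, and the `eps` clamp are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class RadialMuntzBasis:
    """Hypothetical sketch: radial features r^{lam_k} and r^{lam_k} * log(r)
    around a fixed center, loosely in the spirit of a Müntz-Szász power basis.
    Exponents would be learnable parameters in a real network."""

    def __init__(self, exponents, center, eps=1e-12):
        self.lam = np.asarray(exponents, dtype=float)   # power exponents lam_k
        self.center = np.asarray(center, dtype=float)   # assumed singularity location
        self.eps = eps                                  # clamp to avoid log(0)

    def features(self, x):
        """Return [r^{lam_k}, r^{lam_k} * log r] for each input point."""
        r = np.linalg.norm(np.atleast_2d(x) - self.center, axis=1)
        r = np.maximum(r, self.eps)
        powers = r[:, None] ** self.lam[None, :]        # r^{lam_k}
        log_terms = powers * np.log(r)[:, None]         # r^{lam_k} * log r
        return np.concatenate([powers, log_terms], axis=1)

basis = RadialMuntzBasis(exponents=[-1.0, 0.5], center=[0.0, 0.0])
phi = basis.features(np.array([[1.0, 0.0], [0.5, 0.5]]))
print(phi.shape)  # (2, 4): two points, two powers plus their log terms
```

A downstream linear layer over such features could represent terms like $1/r\log r$ directly, which is one plausible reading of how singular fields are captured with few parameters.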