Beyond Cross-Validation: Adaptive Parameter Selection for Kernel-Based Gradient Descents
This paper proposes a novel, implementable adaptive parameter selection strategy for kernel-based gradient descent. The strategy combines bias-variance analysis with the splitting method and the empirical effective dimension, and it achieves optimal generalization error bounds across diverse kernels, target functions, and error metrics.
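To make the ingredients concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: kernel gradient descent for least squares on a training split, with the stopping iteration (the adaptive parameter) chosen by held-out error on a validation split, plus the empirical effective dimension computed from the kernel matrix. The Gaussian kernel, the synthetic data, the step size, and all function names are assumptions made for this example.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def effective_dimension(K, lam):
    # Empirical effective dimension N(lam) = trace(K (K + n*lam*I)^{-1}),
    # computed via the eigenvalues of the (symmetric) kernel matrix K.
    n = K.shape[0]
    eigs = np.linalg.eigvalsh(K)
    return float(np.sum(eigs / (eigs + n * lam)))

def kernel_gd(K_train, y_train, step, n_iters):
    # Gradient descent on the empirical least-squares risk:
    # alpha_{t+1} = alpha_t - (step / n) * (K alpha_t - y).
    # Returns the whole iterate path so a stopping time can be selected.
    n = len(y_train)
    alpha = np.zeros(n)
    path = []
    for _ in range(n_iters):
        alpha = alpha - (step / n) * (K_train @ alpha - y_train)
        path.append(alpha.copy())
    return path

# Synthetic regression data (an assumption for illustration).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(120, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(120)

# Splitting method: hold out part of the sample for parameter selection.
X_tr, y_tr = X[:80], y[:80]
X_va, y_va = X[80:], y[80:]

K_tr = gaussian_kernel(X_tr, X_tr)
K_va = gaussian_kernel(X_va, X_tr)

path = kernel_gd(K_tr, y_tr, step=1.0, n_iters=200)
val_err = [np.mean((K_va @ a - y_va) ** 2) for a in path]
t_star = int(np.argmin(val_err))  # adaptively selected stopping iteration
```

Here early stopping plays the role of the regularization parameter: the held-out split selects the iteration balancing bias (too few steps) against variance (too many), while `effective_dimension` quantifies the complexity of the kernel at a given regularization level.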