SPPCSO: Adaptive Penalized Estimation Method for High-Dimensional Correlated Data

This paper proposes SPPCSO, an adaptive penalized estimation method that integrates single-parametric principal component regression with L_1 regularization to achieve stable variable selection and robust coefficient estimation in high-dimensional, highly correlated, and noisy datasets, outperforming traditional approaches in both theoretical bounds and practical applications such as gene discovery.

Ying Hu, Hu Yang · Mon, 09 Ma · 🤖 cs.LG
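The paper's SPPCSO method is not reproduced here; as a rough illustration of the general recipe it builds on (principal component regression followed by an L_1 penalty), here is a minimal NumPy sketch on simulated correlated data. The soft-thresholding shortcut, penalty value, and all names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate highly correlated high-dimensional predictors (illustrative sizes).
n, p = 100, 50
latent = rng.normal(size=(n, 5))
X = latent @ rng.normal(size=(5, p)) + 0.1 * rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(size=n)

# Step 1: principal component decomposition (decorrelates the design).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = U * s                                   # principal component scores

# Step 2: L1-penalized regression on the scores. Because Z has orthogonal
# columns, the lasso solution is a closed-form soft-threshold of the
# per-component OLS estimates.
lam = 5.0
ols = (Z.T @ (y - y.mean())) / (s ** 2)
gamma = np.sign(ols) * np.maximum(np.abs(ols) - lam / (s ** 2), 0.0)

# Map the sparse component coefficients back to the original predictors.
beta_hat = Vt.T @ gamma
print("nonzero components:", np.sum(gamma != 0))
```

Note the effect of the penalty: low-variance noise components have small singular values, so their effective threshold lam / s_j^2 is large and they are zeroed out, which is what stabilizes the fit under heavy correlation.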

Certified and accurate computation of function space norms of deep neural networks

This paper presents a framework for the certified and accurate computation of deep neural network function space norms (including L^p, W^{1,p}, and W^{2,p}) by combining interval arithmetic, adaptive refinement, and quadrature to derive guaranteed global bounds from local certificates, thereby enabling reliable error control for PDE applications like PINNs.

Johannes Gründler, Moritz Maibaum, Philipp Petersen · Mon, 09 Ma · 🤖 cs.LG
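As a toy illustration of the certification ingredient described above (interval enclosures on subintervals summed into a guaranteed global bound), the sketch below bounds the L^p norm of a tiny one-hidden-layer ReLU network on [0,1]. This is not the paper's framework: the network, the uniform dyadic subdivision (the paper refines adaptively and uses quadrature), and all names are illustrative assumptions.

```python
def net_interval(lo, hi):
    # Interval enclosure of a fixed toy ReLU network
    # f(x) = sum_i c_i * relu(w_i * x + b_i) on [lo, hi].
    ws, bs, cs = [2.0, -1.0], [-0.5, 1.0], [1.0, 0.5]
    total_lo = total_hi = 0.0
    for w, b, c in zip(ws, bs, cs):
        a1, a2 = w * lo + b, w * hi + b
        u_lo, u_hi = min(a1, a2), max(a1, a2)
        r_lo, r_hi = max(u_lo, 0.0), max(u_hi, 0.0)   # relu is monotone
        if c >= 0:
            total_lo += c * r_lo; total_hi += c * r_hi
        else:
            total_lo += c * r_hi; total_hi += c * r_lo
    return total_lo, total_hi

def certified_lp_bounds(p=2, depth=12):
    # Guaranteed lower/upper bounds on ||f||_{L^p([0,1])}^p by summing
    # enclosures of |f|^p over a uniform dyadic subdivision: each local
    # certificate is an interval, and the sum is a global certificate.
    n = 2 ** depth
    lower = upper = 0.0
    for i in range(n):
        f_lo, f_hi = net_interval(i / n, (i + 1) / n)
        a_lo = 0.0 if f_lo <= 0.0 <= f_hi else min(abs(f_lo), abs(f_hi))
        a_hi = max(abs(f_lo), abs(f_hi))
        lower += (a_lo ** p) / n
        upper += (a_hi ** p) / n
    return lower, upper

lo, hi = certified_lp_bounds()
print(lo, hi)   # lower <= ||f||_p^p <= upper; the gap shrinks with depth
```

Because the bounds are derived from interval arithmetic rather than sampling, they hold for every point of the domain, which is the property that makes such norms usable for rigorous PDE error control.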

Co-Diffusion: An Affinity-Aware Two-Stage Latent Diffusion Framework for Generalizable Drug-Target Affinity Prediction

Co-Diffusion is a novel two-stage latent diffusion framework that enhances generalizable drug-target affinity prediction by aligning embeddings in an affinity-steered manifold and employing modality-specific diffusion as a regularizer, thereby achieving superior zero-shot performance on unseen molecular scaffolds and protein families compared to state-of-the-art methods.

Yining Qian, Pengjie Wang, Yixiao Li, An-Yang Lu, Cheng Tan, Shuang Li, Lijun Liu · Fri, 13 Ma · 📊 stat

Efficient Approximation to Analytic and L^p Functions by Height-Augmented ReLU Networks

This paper introduces a three-dimensional height-augmented ReLU network architecture that achieves significantly more efficient exponential approximation rates for analytic functions and provides the first quantitative, non-asymptotic high-order approximation for general L^p functions, thereby advancing the theoretical foundation for designing parameter-efficient neural networks.

ZeYu Li, FengLei Fan, TieYong Zeng · Fri, 13 Ma · 📊 stat

RIE-Greedy: Regularization-Induced Exploration for Contextual Bandits

This paper proposes RIE-Greedy, a contextual bandit algorithm that leverages the inherent stochasticity of cross-validation-based regularization in black-box models to induce effective exploration, theoretically matching Thompson Sampling in simple cases and empirically outperforming existing methods in large-scale applications.

Tong Li, Thiago de Queiroz Casanova, Eric M. Schwartz, Victor Kostyuk, Dehan Kong, Joseph J. Williams · Fri, 13 Ma · 📊 stat
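RIE-Greedy itself is not reproduced here; the sketch below only illustrates the underlying idea the summary names, that randomness in cross-validation-based regularization can stand in for explicit exploration. A greedy linear contextual bandit refits each arm every round with a ridge penalty picked on a random validation split; the random split perturbs the fitted values and hence the greedy choice. The arm parameters, warm start, and penalty grid are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ridge_fit(X, y, lam):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_lambda(X, y, grid=(0.01, 0.1, 1.0, 10.0)):
    # Choose the penalty on one random 50/50 split. The randomness of the
    # split makes the selected lambda (and thus the greedy action)
    # stochastic: this is the induced exploration in this sketch.
    idx = rng.permutation(len(y))
    tr, va = idx[: len(y) // 2], idx[len(y) // 2:]
    errs = [np.mean((X[va] @ ridge_fit(X[tr], y[tr], lam) - y[va]) ** 2)
            for lam in grid]
    return grid[int(np.argmin(errs))]

# Two-arm linear contextual bandit (parameters are illustrative).
d, T = 3, 400
theta = [np.array([1.0, 0.0, 0.5]), np.array([0.2, 1.0, -0.3])]
data = [([], []) for _ in theta]              # per-arm (contexts, rewards)
reward_sum = 0.0
for t in range(T):
    x = rng.normal(size=d)
    if t < 10:                                # short round-robin warm start
        a = t % 2
    else:                                     # greedy on CV-regularized fits
        est = []
        for Xs, ys in data:
            X, y = np.array(Xs), np.array(ys)
            est.append(x @ ridge_fit(X, y, cv_lambda(X, y)))
        a = int(np.argmax(est))
    r = x @ theta[a] + 0.5 * rng.normal()
    data[a][0].append(x); data[a][1].append(r)
    reward_sum += r

print("average reward:", reward_sum / T)
```

The design point is that no epsilon or posterior sampling appears anywhere: the only non-greedy behavior comes from the resampled validation split inside `cv_lambda`, which is the kind of regularization-induced exploration the paper analyzes.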