Scalable multitask Gaussian processes for complex mechanical systems with functional covariates

This paper introduces a scalable multitask Gaussian process model with a fully separable kernel structure that effectively handles functional covariates and correlated tasks, demonstrating superior accuracy and computational efficiency over single-task approaches in predicting the behavior of complex mechanical systems like riveted assemblies with limited data.

Razak Christophe Sabi Gninkou (UPHF, INSA Hauts-De-France, CERAMATHS), Andrés F. López-Lopera (IMAG, LEMON, UM), Franck Massa (LAMIH, INSA Hauts-De-France, UPHF), Rodolphe Le Riche (LIMOS, UCA [2017-2020], ENSM ST-ETIENNE, CNRS) · Tue, 10 Ma · math
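As a toy illustration of the fully separable kernel structure only (a minimal numpy sketch under assumed conventions, not the authors' model): with an inter-task covariance matrix B and an input kernel k, the joint covariance over all tasks and inputs factors as the Kronecker product B ⊗ K_x, which is what makes multitask inference scale.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    # Squared-exponential kernel on the inputs.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def separable_multitask_kernel(X, B, lengthscale=1.0):
    # Fully separable structure: Cov(f_t(x), f_s(x')) = B[t, s] * k(x, x').
    # The joint covariance over all tasks and inputs is the Kronecker
    # product B ⊗ K_x, so its Cholesky/eigendecomposition factorizes too.
    Kx = rbf_kernel(X, lengthscale)
    return np.kron(B, Kx)

# Toy example: 2 correlated tasks observed at 3 inputs (values assumed).
X = np.array([[0.0], [0.5], [1.0]])
B = np.array([[1.0, 0.8], [0.8, 1.0]])   # inter-task covariance
K = separable_multitask_kernel(X, B)
print(K.shape)  # (6, 6)
```

Because both factors are positive semi-definite, so is their Kronecker product, and solves with K reduce to solves with the much smaller B and K_x.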

Nuisance Function Tuning and Sample Splitting for Optimally Estimating a Doubly Robust Functional

This paper demonstrates that by strategically combining sample splitting with specific nuisance function tuning strategies (such as undersmoothing or oversmoothing), both plug-in and first-order bias-corrected estimators can achieve minimax rates of convergence for doubly robust functionals across all Hölder smoothness classes, overcoming limitations of existing literature.

Sean McGrath, Rajarshi Mukherjee · Tue, 10 Ma · math
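To make the sample-splitting idea concrete, here is a deliberately simplified cross-fitted augmented-IPW sketch for the mean counterfactual outcome E[Y(1)] (a hypothetical toy, not the paper's estimator): nuisances are fit on one fold and the bias-corrected estimating function is evaluated on the held-out fold.

```python
import numpy as np

rng = np.random.default_rng(0)

def aipw_cross_fit(A, Y, n_folds=2):
    # Cross-fitted (sample-split) augmented-IPW estimate of E[Y(1)].
    # The nuisances below are crude placeholders (a constant propensity
    # and a mean outcome model); the paper's point is that how such
    # nuisances are tuned (under- or over-smoothed), combined with sample
    # splitting, governs whether the estimator attains the minimax rate.
    n = len(Y)
    folds = np.array_split(rng.permutation(n), n_folds)
    psi = np.zeros(n)
    for k in range(n_folds):
        test = folds[k]
        train = np.setdiff1d(np.arange(n), test)
        pi_hat = max(A[train].mean(), 1e-3)        # propensity estimate
        mu_hat = Y[train][A[train] == 1].mean()    # outcome regression
        # plug-in term + first-order (doubly robust) bias correction,
        # evaluated only on the held-out fold
        psi[test] = mu_hat + A[test] / pi_hat * (Y[test] - mu_hat)
    return psi.mean()

# Synthetic data: randomized treatment, true E[Y(1)] = 2.0.
n = 20000
A = rng.binomial(1, 0.5, n)
Y = 2.0 * A + rng.normal(0.0, 1.0, n)
est = aipw_cross_fit(A, Y)
```

Fitting nuisances and evaluating the correction on disjoint folds is what removes the empirical-process terms that otherwise constrain the achievable rates.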

Kernel Methods for Some Transport Equations with Application to Learning Kernels for the Approximation of Koopman Eigenfunctions: A Unified Approach via Variational Methods, Green's Functions and the Method of Characteristics

This paper presents a unified framework that proves the equivalence of variational, Green's function, and characteristic-based methods for constructing reproducing kernels, enabling a data-driven, mesh-free approach to learning kernels that accurately approximate Koopman eigenfunctions and solve various linear transport equations.

Boumediene Hamzi, Houman Owhadi, Umesh Vaidya · Tue, 10 Ma · math

From Mice to Trains: Amortized Bayesian Inference on Graph Data

This paper proposes an amortized Bayesian inference framework for graph-structured data that combines permutation-invariant summary networks with neural posterior estimators to enable fast, likelihood-free inference on node, edge, and graph-level parameters, demonstrating its effectiveness through evaluations on synthetic benchmarks and real-world applications in biology and logistics.

Svenja Jedhoff, Elizaveta Semenova, Aura Raulo, Anne Meyer, Paul-Christian Bürkner · Tue, 10 Ma · cs.LG
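The permutation-invariance ingredient can be sketched in a few lines of numpy (a DeepSets-style toy with untrained weights; the actual summary networks, edge handling, and neural posterior estimator are not reproduced here): a per-node embedding, sum pooling, then a readout, so node ordering cannot affect the summary.

```python
import numpy as np

rng = np.random.default_rng(1)

def deepset_summary(node_features, W_phi, W_rho):
    # Permutation-invariant summary in the DeepSets style: a per-node
    # embedding phi, sum pooling over nodes, then a readout rho. Because
    # the pooling is a sum, reordering the graph's nodes cannot change
    # the summary vector fed to the posterior estimator.
    phi = np.tanh(node_features @ W_phi)   # per-node embedding
    pooled = phi.sum(axis=0)               # order-independent pooling
    return np.tanh(pooled @ W_rho)         # readout

# Random graph: 5 nodes, 3 features per node (weights are untrained).
nodes = rng.normal(size=(5, 3))
W_phi = rng.normal(size=(3, 8))
W_rho = rng.normal(size=(8, 4))
s1 = deepset_summary(nodes, W_phi, W_rho)
s2 = deepset_summary(nodes[::-1], W_phi, W_rho)  # same nodes, reordered
```

Here `s1` and `s2` coincide exactly, which is the invariance the amortized posterior relies on.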

Sparse Offline Reinforcement Learning with Corruption Robustness

This paper proposes actor-critic methods with sparse robust estimator oracles to achieve the first non-vacuous guarantees for learning near-optimal policies in high-dimensional sparse offline reinforcement learning under strong data corruption and single-policy concentrability, overcoming the limitations of traditional Least Square Value Iteration approaches in such regimes.

Nam Phuong Tran, Andi Nika, Goran Radanovic, Long Tran-Thanh, Debmalya Mandal · Tue, 10 Ma · cs.LG

Faster Gradient Methods for Highly-Smooth Stochastic Bilevel Optimization

This paper proposes the F²SA-p method, which uses p-th order finite differences to achieve a nearly optimal $\tilde{\mathcal{O}}(p\,\epsilon^{-4-p/2})$ complexity for finding $\epsilon$-stationary points in stochastic bilevel optimization with highly smooth objectives, improving upon previous first-order bounds and matching the fundamental lower bound.

Lesi Chen, Junru Li, El Mahdi Chayti, Jingzhao Zhang · Tue, 10 Ma · cs.LG
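The finite-difference ingredient alone (not F²SA-p itself, which differences lower-level solutions to approximate the hypergradient) can be illustrated with one-dimensional stencils: raising the order p costs extra function evaluations per step but makes the approximation error decay faster in the step size h, the trade-off behind the improved complexity.

```python
import numpy as np

def central_diff(f, x, h=1e-3):
    # Second-order (p = 2) central difference: approximates f'(x) from
    # function values only, with O(h^2) error.
    return (f(x + h) - f(x - h)) / (2.0 * h)

def fourth_order_diff(f, x, h=1e-2):
    # A p = 4 stencil: twice as many evaluations, but O(h^4) error.
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)
```

For f = sin at x = 0.7, the p = 4 stencil with h = 0.01 is already far more accurate than the p = 2 stencil with h = 0.001.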

Synthetic data for ratemaking: imputation-based methods vs adversarial networks and autoencoders

This paper benchmarks Multivariate Imputation by Chained Equations (MICE) against deep generative models like Variational Autoencoders and Conditional Tabular GANs for synthetic ratemaking data, finding that MICE offers a simpler yet high-fidelity alternative that effectively preserves statistical distributions and supports robust Generalized Linear Model training.

Yevhen Havrylenko, Meelis Käärik, Artur Tuttar · Tue, 10 Ma · cs.LG
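The chained-equations core of MICE can be sketched in plain numpy for a numeric matrix (a minimal deterministic toy; real MICE draws imputations stochastically, handles mixed variable types, and produces multiple completed datasets): initialize missing cells with column means, then cycle through the columns, regressing each on the others and refreshing its imputations.

```python
import numpy as np

def mice_numeric(X, n_iter=10):
    # Minimal chained-equations imputation: mean-initialize missing
    # entries, then repeatedly regress each incomplete column on all the
    # others (ordinary least squares) and overwrite its missing values
    # with the fitted predictions.
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(X, j, axis=1)
            A = np.c_[np.ones(len(X)), others]      # design with intercept
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

# Toy data with an exact linear relation y = 2x and one missing y.
X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, np.nan], [4.0, 8.0]])
X_imp = mice_numeric(X)
```

On this toy the missing entry is recovered exactly (y = 6 for x = 3), which is the distribution-preserving behavior the benchmark credits MICE with.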

Mini-batch Estimation for Deep Cox Models: Statistical Foundations and Practical Guidance

This paper establishes the statistical foundations of the mini-batch maximum partial-likelihood estimator (mb-MPLE) for deep Cox models optimized via stochastic gradient descent, proving its consistency and optimal convergence rates while providing practical guidance on hyperparameter tuning and demonstrating its effectiveness in large-scale applications where standard estimation is intractable.

Lang Zeng, Weijing Tang, Zhao Ren, Ying Ding · Tue, 10 Ma · cs.LG
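The quantity mb-MPLE optimizes can be sketched as the Cox negative log partial likelihood evaluated on a mini-batch alone, with risk sets restricted to batch members (a minimal numpy sketch under the no-ties convention, not the paper's full estimator): sorting the batch by descending time turns the risk-set sums into a running log-sum-exp.

```python
import numpy as np

def minibatch_neg_log_partial_likelihood(risk_scores, times, events):
    # Cox negative log partial likelihood on a mini-batch only: each
    # event's risk set contains the batch members still at risk at that
    # event time. risk_scores are the network's log-hazard outputs for
    # the batch; SGD on this loss is the mb-MPLE idea.
    order = np.argsort(-times)               # descending survival time
    scores = risk_scores[order]
    ev = events[order]
    # running log-sum-exp gives the log of the within-batch risk-set sums
    log_risk = np.logaddexp.accumulate(scores)
    return -np.sum((scores - log_risk)[ev == 1])

# Toy batch: three subjects, two observed events, one censored.
times = np.array([2.0, 1.0, 3.0])
events = np.array([1, 1, 0])
scores = np.array([0.1, -0.2, 0.3])
loss = minibatch_neg_log_partial_likelihood(scores, times, events)
```

Because the risk sets never leave the batch, the loss needs no pass over the full dataset, which is what makes the estimator viable at scales where the standard partial likelihood is intractable.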