Refining Cramér-Rao Bound With Multivariate Parameters: An Extrinsic Geometry Perspective

This paper presents a vector generalization of the curvature-corrected Cramér-Rao bound for multivariate parameters in the nonasymptotic regime, utilizing extrinsic geometry and sum-of-squares relaxations to derive directional and matrix-valued refinements that offer more faithful estimation limits than classical second-order corrections, as demonstrated through curved Gaussian and spherical multinomial models.

Sunder Ram Krishnan · Wed, 11 Ma · 📊 stat
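As a baseline for the curvature-corrected refinements discussed above, here is a minimal sketch of the classical Cramér-Rao bound it improves on: for the mean of a Gaussian with known variance, the sample mean attains the bound σ²/n. The model, sample sizes, and seed are illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Classical Cramer-Rao baseline for estimating the mean of N(mu, sigma^2)
# with sigma known: Var(muhat) >= sigma^2 / n, attained by the sample mean.
n, sigma, mu, reps = 50, 1.0, 0.3, 20_000
estimates = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
empirical_var = float(estimates.var())
crb = sigma**2 / n  # classical bound: 0.02
```

In this unbiased, exactly efficient model the empirical variance matches the bound; the paper's directional and matrix-valued corrections target curved models where the classical bound is loose in the nonasymptotic regime.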

Uniform Lorden-type bounds for overshoot moments for standard exponential families: small drift and an exponential correction

This paper establishes uniform Lorden-type moment bounds for the overshoot of random walks with sign-changing increments from standard exponential families in the small-drift regime, demonstrating that these bounds improve to a constant of 1 for large barriers and providing explicit exponential convergence rates interpreted through optimal transport metrics.

El'mira Yu. Kalimulina, Mark Ya. Kelbert · Wed, 11 Ma · 📊 stat
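To make the object being bounded concrete, the following sketch simulates the overshoot of a random walk with sign-changing increments over a barrier: the amount by which the walk exceeds the barrier at its first crossing. The Gaussian increment law, drift, and barrier are illustrative choices, not the exponential-family setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def overshoot(barrier, drift, rng):
    """Overshoot S_tau - b at the first time a Gaussian random walk with
    positive drift (sign-changing increments) crosses the barrier b."""
    s = 0.0
    while s <= barrier:
        s += rng.normal(drift, 1.0)
    return s - barrier

# Small drift, moderately large barrier; average the overshoot over runs.
samples = [overshoot(10.0, 0.2, rng) for _ in range(2_000)]
mean_overshoot = float(np.mean(samples))
```

Lorden-type results bound moments of exactly this overshoot uniformly in the barrier; the paper's contribution is the small-drift regime and the exponential rate at which the bound sharpens for large barriers.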

On the last time and the number of times an estimator is more than epsilon from its target value

This paper establishes the limit distributions for the last occurrence and total count of times a strongly consistent estimator deviates from its target by at least ε as ε → 0, providing a unified framework applicable to parametric and nonparametric settings that yields new optimality results for maximum likelihood estimators and methods for constructing sequential confidence sets.

Nils Lid Hjort, Grete Fenstad · Wed, 11 Ma · 📊 stat
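The two quantities studied above are easy to observe by simulation. The sketch below tracks the running-mean estimator of a standard normal mean and records the last time and the number of times it is at least ε from its target; the estimator, ε, and sample size are illustrative choices, not the paper's general framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def deviation_stats(samples, target, eps):
    """Last time and number of times the running-mean estimator is at
    least eps away from its target, over n = 1..len(samples)."""
    estimates = np.cumsum(samples) / np.arange(1, len(samples) + 1)
    bad = np.abs(estimates - target) >= eps
    if not bad.any():
        return 0, 0
    idx = np.flatnonzero(bad)
    return int(idx[-1]) + 1, int(bad.sum())

last_time, n_errors = deviation_stats(rng.standard_normal(100_000), 0.0, 0.05)
```

By strong consistency both quantities are finite almost surely; the paper characterizes their limit distributions after suitable ε-dependent scaling.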

Second order asymptotics for the number of times an estimator is more than epsilon from its target value

This paper investigates second-order asymptotics for the number of times a strongly consistent estimator deviates from its target by more than ε, introducing a concept of "asymptotic relative deficiency" to distinguish between estimators with identical first-order efficiency and demonstrating that specific finite-sample corrections (such as using n − 1/3 for normal variance) minimize the expected number of such errors.

Nils Lid Hjort, Grete Fenstad · Wed, 11 Ma · 📊 stat

Adaptive and Stratified Subsampling for High-Dimensional Robust Estimation

This paper introduces Adaptive Importance Sampling and Stratified Subsampling estimators that achieve minimax-optimal rates for robust high-dimensional sparse regression under heavy-tailed noise, contamination, and temporal dependence, while also providing fully specified de-biasing procedures for valid confidence intervals and demonstrating superior empirical performance over uniform subsampling.

Prateek Mittal, Joohi Chauhan · Wed, 11 Ma · 🤖 cs.LG
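The basic mechanics of stratified subsampling are simple to sketch: partition the data into strata, draw a uniform subsample within each, and reweight by stratum size. The heavy-tailed data, crude tail/bulk stratification, and subsample size below are all illustrative; the paper's estimators, allocation rules, and de-biasing procedures are substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def stratified_subsample_mean(x, strata, m, rng):
    """Mean estimate from a stratified subsample: draw up to m points
    uniformly within each stratum and reweight by stratum size."""
    n = len(x)
    est = 0.0
    for s in np.unique(strata):
        members = np.flatnonzero(strata == s)
        pick = rng.choice(members, size=min(m, len(members)), replace=False)
        est += (len(members) / n) * x[pick].mean()
    return float(est)

x = rng.standard_t(df=3, size=10_000) + 2.0            # heavy-tailed data
strata = (np.abs(x - np.median(x)) > 2.0).astype(int)  # crude tail/bulk split
est = stratified_subsample_mean(x, strata, 200, rng)
```

Reweighting by stratum size keeps the subsampled estimate unbiased for the full-sample mean, and stratifying on the tails controls the variance contribution of extreme observations, which is the intuition behind its robustness under heavy-tailed noise.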

Kernel Debiased Plug-in Estimation based on the Universal Least Favorable Submodel

This paper introduces ULFS-KDPE, a novel kernel-based estimator that achieves semiparametric efficiency for pathwise differentiable parameters in nonparametric models by constructing a data-adaptive debiasing flow via a universal least favorable submodel, thereby eliminating the need for explicit efficient influence function derivation while ensuring rigorous theoretical guarantees and computational tractability.

Haiyi Chen, Yang Liu, Ivana Malenica · Wed, 11 Ma · 🤖 cs.LG