Is adaptive early stopping possible in statistical inverse problems?

Thursday, June 21, 2018, 15:30-16:30 - Gilles Blanchard - Université de Potsdam

Abstract: (joint work with M. Hoffmann and M. Reiss)
Consider a statistical inverse problem where different estimators $\hat f_1, \ldots, \hat f_K$ are available (ranked in order of increasing "complexity", here identified with variance), and the classical problem of estimator selection. For a (data-dependent) choice of the estimator index $\hat k$, a number of well-known methods achieve oracle adaptivity in a wide range of contexts, for instance penalization methods or Lepski's method.
However, these methods have in common that the estimators for \emph{all} possible values of $k$ have to be computed first, and only then compared to each other in some way to determine the final choice. Contrast this with an "early stopping" approach, where we compute the estimators iteratively for $k = 1, 2, \ldots$ and must decide to stop at some point, without being allowed to compute the subsequent estimators. Is oracle adaptivity possible then? This question is motivated by settings where computing the estimators for larger $k$ incurs a higher computational cost; furthermore, some form of early stopping is most often used in practice.
We propose a precise mathematical formulation of this question in the idealized framework of a Gaussian sequence model with $D$ observed noisy coefficients. In this model, we provide upper and lower bounds on what is achievable using the linear regularized filter estimators commonly used for statistical inverse problems.
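To make the setting concrete, here is a minimal numerical sketch of sequential early stopping in a Gaussian sequence model, using truncation (spectral cut-off) filter estimators. The decaying signal, the noise level, and the discrepancy-principle stopping rule below are all illustrative assumptions, not the specific construction from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 1000                     # number of observed noisy coefficients
eps = 0.05                   # noise level (assumed known here)
theta = 1.0 / np.arange(1, D + 1)          # hypothetical decaying true coefficients
y = theta + eps * rng.standard_normal(D)   # observations y_i = theta_i + eps * xi_i

# Truncation estimator with index k keeps the first k coefficients:
#   hat f^{(k)}_i = y_i for i <= k, and 0 otherwise.
# Its squared residual ||y - hat f^{(k)}||^2 = sum_{i > k} y_i^2 can be
# updated in O(1) per step, so the estimators are computed iteratively
# and we never look at estimators beyond the stopping index.
residual = np.sum(y ** 2)
k_stop = D
for k in range(1, D + 1):
    residual -= y[k - 1] ** 2
    # Discrepancy-principle heuristic (an illustrative choice): stop once
    # the residual drops to its expected pure-noise level (D - k) * eps^2.
    if residual <= (D - k) * eps ** 2:
        k_stop = k
        break

f_hat = np.where(np.arange(1, D + 1) <= k_stop, y, 0.0)
risk = np.sum((f_hat - theta) ** 2)
print(f"stopped at k = {k_stop}, squared error = {risk:.4f}")
```

The point of the sketch is the information constraint: the loop decides to stop using only the residual of the estimators computed so far, in contrast with penalization or Lepski-type selection, which would require all $K$ estimators before comparing them.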

Location: Room 3L15
