Adaptive minimax optimality in statistical inverse problems via SOLIT – Sharp Optimal Lepskii-Inspired Tuning

04/20/2023
by   Housen Li, et al.

We consider statistical linear inverse problems in separable Hilbert spaces and filter-based reconstruction methods of the form f̂_α = q_α(T^*T)T^*Y, where Y is the available data, T the forward operator, (q_α)_α∈𝒜 an ordered filter, and α > 0 a regularization parameter. Whenever such a method is used in practice, α has to be chosen appropriately. Typically, the aim is to find, or at least approximate, the best possible α in the sense that the mean squared error (MSE) 𝔼[‖f̂_α - f^†‖^2] with respect to the true solution f^† is minimized. In this paper, we introduce the Sharp Optimal Lepskiĭ-Inspired Tuning (SOLIT) method, which yields an a posteriori parameter choice rule ensuring adaptive minimax rates of convergence. It depends only on Y, the noise level σ, the operator T, and the filter (q_α)_α∈𝒜, and does not require any problem-dependent tuning of further parameters. We prove an oracle inequality for the corresponding MSE in a general setting and derive the rates of convergence in different scenarios. By a careful analysis we show that no other a posteriori parameter choice rule can yield a better performance in terms of the convergence rate of the MSE. In particular, our results reveal that the common understanding that Lepskiĭ-type methods in inverse problems necessarily lose a logarithmic factor is wrong. In addition, the empirical performance of SOLIT is examined in simulations.
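To make the setting concrete, the following is a minimal numerical sketch of a filter-based reconstruction f̂_α = q_α(T^*T)T^*Y with the Tikhonov filter q_α(λ) = 1/(λ + α), combined with a classical Lepskiĭ-type balancing rule for choosing α. This is an illustration of the general framework only, not the SOLIT rule from the paper; the forward operator, grid, and the constant 4 in the balancing criterion are arbitrary assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic mildly ill-posed problem: T with polynomially decaying
# singular values, a smooth truth, and white noise of level sigma.
n = 200
U, _, Vt = np.linalg.svd(rng.standard_normal((n, n)))
s = np.arange(1, n + 1) ** -1.0              # singular values of T
T = U @ np.diag(s) @ Vt
f_true = Vt.T @ (np.arange(1, n + 1) ** -1.5)
sigma = 1e-3
Y = T @ f_true + sigma * rng.standard_normal(n)

def tikhonov(alpha):
    """Filter-based reconstruction f_alpha = q_alpha(T^*T) T^* Y,
    computed in the SVD basis with q_alpha(lam) = 1 / (lam + alpha)."""
    return Vt.T @ (s / (s**2 + alpha) * (U.T @ Y))

# Generic Lepskii balancing (illustration only): on an ascending grid,
# pick the largest alpha whose reconstruction stays within a multiple of
# the combined noise radii of all smaller-alpha reconstructions.
alphas = np.geomspace(1e-10, 1e-1, 40)
recons = [tikhonov(a) for a in alphas]
# crude stochastic-error bound: sigma * ||q_alpha(T^*T) T^*||_F
noise_rad = [sigma * np.linalg.norm(s / (s**2 + a)) for a in alphas]

chosen = 0
for j in range(len(alphas) - 1, -1, -1):
    if all(np.linalg.norm(recons[j] - recons[k]) <= 4 * (noise_rad[j] + noise_rad[k])
           for k in range(j)):
        chosen = j
        break

rel_err = np.linalg.norm(recons[chosen] - f_true) / np.linalg.norm(f_true)
print("chosen alpha:", alphas[chosen])
print("relative error:", rel_err)
```

The balancing loop scans the grid from the largest (smoothest) α downward and stops at the first α that is statistically indistinguishable from all rougher reconstructions; the paper's point is that a sharper, SOLIT-style tuning of this comparison avoids the logarithmic loss usually attributed to such rules.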


