
Sharp oracle inequalities for stationary points of nonconvex penalized M-estimators

by Andreas Elsener et al.
ETH Zurich

Many statistical estimation procedures lead to nonconvex optimization problems. Algorithms that solve these problems are often only guaranteed to output a stationary point of the optimization problem. Oracle inequalities are an important theoretical instrument to assess the statistical performance of an estimator, but previous oracle results have focused on the theoretical properties of the uncomputable global minimum or maximum. In the present work, a general framework used for convex optimization problems is extended to derive oracle inequalities for stationary points. A main new ingredient of these oracle inequalities is that they are sharp: they show closeness to the best approximation within the model plus a remainder term. We apply this framework to different estimation problems.
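To illustrate the setting of the abstract, the following sketch (not the paper's method; all names and the choice of penalty are illustrative assumptions) runs gradient descent on a penalized least-squares objective with a smooth nonconvex penalty. Such an algorithm is only guaranteed to return a point where the gradient of the objective (nearly) vanishes, i.e. a stationary point, which is exactly the kind of output the paper's oracle inequalities cover.

```python
import numpy as np

# Illustrative sketch: nonconvex penalized M-estimation.
# Objective: F(b) = (1/2n)||y - Xb||^2 + lam * sum_j b_j^2 / (1 + b_j^2).
# The penalty is smooth but nonconvex, so gradient descent is only
# guaranteed to reach a stationary point (grad F ~ 0), not a global minimum.
rng = np.random.default_rng(0)
n, p, lam = 100, 5, 0.1
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def grad(b):
    # Gradient of the penalized objective F above.
    resid = X @ b - y
    pen_grad = lam * 2 * b / (1 + b**2) ** 2
    return X.T @ resid / n + pen_grad

b = np.zeros(p)
step = 0.1  # below 2/L for the gradient's Lipschitz constant here
for _ in range(5000):
    b -= step * grad(b)

# A small gradient norm certifies (approximate) first-order stationarity.
print(np.linalg.norm(grad(b)))
```

The certified quantity is only stationarity of the returned point; the paper's contribution is that sharp oracle inequalities can nonetheless be proved for any such stationary point, without requiring it to be a global optimizer.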



