Optimal computational and statistical rates of convergence for sparse nonconvex learning problems

06/20/2013
by   Zhaoran Wang, et al.

We provide a theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall in this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization, and sparse elliptical random design regression. For these problems, it is intractable to calculate the global solution due to the nonconvex formulation. In this paper, we propose an approximate regularization path-following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution attained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods, which only attain geometric rates of convergence for one single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide sharp sample complexity analysis for all the approximate local solutions along the regularization path. In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to the use of the nonconvex penalty.
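The path-following idea in the abstract can be illustrated with a small sketch: solve a sequence of penalized problems along a geometrically decreasing grid of regularization parameters, warm-starting each stage from the previous solution. The sketch below is an illustrative stand-in, not the paper's exact algorithm: it uses proximal gradient descent on SCAD-penalized least squares (the SCAD penalty, the decay factor `kappa`, and the fixed inner iteration count are all assumptions made for the example).

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """Elementwise SCAD penalty value (a=3.7 is the conventional choice)."""
    t = np.abs(t)
    return np.where(
        t <= lam, lam * t,
        np.where(t <= a * lam,
                 (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1)),
                 lam ** 2 * (a + 1) / 2))

def scad_prox(z, lam, eta, a=3.7):
    """Elementwise prox of eta * SCAD: evaluate the stationary candidate in
    each of the three SCAD regions (clipped to its region) and keep the best."""
    sign, az = np.sign(z), np.abs(z)
    c1 = np.clip(az - eta * lam, 0.0, lam)                                  # soft-threshold region
    c2 = np.clip(((a - 1) * az - eta * a * lam) / (a - 1 - eta), lam, a * lam)  # transition region
    c3 = np.maximum(az, a * lam)                                            # unpenalized region
    cands = np.stack([c1, c2, c3])
    obj = 0.5 * (cands - az) ** 2 + eta * scad_penalty(cands, lam, a)
    best = cands[np.argmin(obj, axis=0), np.arange(z.size)]
    return sign * best

def path_following(X, y, lam_target, kappa=0.9, n_inner=200):
    """Warm-started proximal gradient along a geometric lambda path."""
    n, d = X.shape
    eta = n / np.linalg.norm(X, 2) ** 2        # 1 / Lipschitz constant of the loss gradient
    lam = np.max(np.abs(X.T @ y)) / n          # largest lambda: zero is a solution
    beta = np.zeros(d)
    while lam > lam_target:
        lam = max(lam * kappa, lam_target)     # shrink lambda geometrically
        for _ in range(n_inner):               # solve this stage from a warm start
            grad = X.T @ (X @ beta - y) / n
            beta = scad_prox(beta - eta * grad, lam, eta)
    return beta

# Usage on a small synthetic sparse regression problem.
rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = path_following(X, y, lam_target=0.05)
```

Because SCAD applies no shrinkage to coefficients above `a * lam`, the final estimate of the three large coefficients is essentially unbiased, which mirrors the oracle property discussed in the abstract.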


Related research

- 03/14/2012. A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem. We consider solving the ℓ_1-regularized least-squares (ℓ_1-LS) problem i...
- 07/09/2019. Nonconvex Regularized Robust Regression with Oracle Properties in Polynomial Time. This paper investigates tradeoffs among optimization errors, statistical...
- 12/17/2014. Support recovery without incoherence: A case for nonconvex regularization. We demonstrate that the primal-dual witness proof method may be used to ...
- 12/16/2021. Analysis of Generalized Bregman Surrogate Algorithms for Nonsmooth Nonconvex Statistical Learning. Modern statistical applications often involve minimizing an objective fu...
- 11/11/2021. Nonconvex flexible sparsity regularization: theory and monotone numerical schemes. Flexible sparsity regularization means stably approximating sparse solut...
- 05/21/2018. Effective Dimension of Exp-concave Optimization. We investigate the role of the effective (a.k.a. statistical) dimension ...
- 06/11/2021. A Unified Framework for Constructing Nonconvex Regularizations. Over the past decades, many individual nonconvex methods have been propo...
