Multi-Step Stochastic ADMM in High Dimensions: Applications to Sparse Optimization and Noisy Matrix Decomposition

02/20/2014
by Hanie Sedghi, et al.

We propose an efficient ADMM method with guarantees for high-dimensional problems. We provide explicit bounds for the sparse optimization problem and the noisy matrix decomposition problem. For sparse optimization, we establish that the modified ADMM method has an optimal convergence rate of O(s log d/T), where s is the sparsity level, d is the data dimension and T is the number of steps. This matches the minimax lower bound for sparse estimation. For matrix decomposition into sparse and low-rank components, we provide the first guarantees for any online method, and prove a convergence rate of Õ((s+r)β^2(p)/T) + O(1/p) for a p×p matrix, where s is the sparsity level, r is the rank and Θ(√p) ≤ β(p) ≤ Θ(p). Our guarantees match the minimax lower bound with respect to s, r and T. In addition, we match the minimax lower bound with respect to the matrix dimension p, i.e. β(p) = Θ(√p), for many important statistical models, including the independent noise model, the linear Bayesian network and the latent Gaussian graphical model under some conditions. Our ADMM method is based on epoch-based annealing and consists of inexpensive steps which involve projections onto simple norm balls. Experiments show that, for both sparse optimization and matrix decomposition problems, our algorithm outperforms state-of-the-art methods. In particular, we reach higher accuracy with the same time complexity.
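To give a concrete sense of the "inexpensive steps" the abstract refers to, the sketch below shows one standard way such a norm-ball projection can be computed: the sort-based Euclidean projection onto an ℓ1 ball. This is an illustrative assumption, not the authors' exact algorithm; the function name, radius parameter, and the choice of the sort-based routine are placeholders for whichever projection the method actually uses in a given epoch.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1-ball {x : ||x||_1 <= radius}.

    Sort-based routine, O(d log d) per call -- a sketch of the kind of
    cheap norm-ball projection performed inside each ADMM epoch.
    """
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= radius:
        return v.copy()                               # already inside the ball
    u = np.sort(np.abs(v))[::-1]                      # magnitudes, descending
    cssv = np.cumsum(u) - radius                      # shifted cumulative sums
    k = np.arange(1, v.size + 1)
    rho = k[u - cssv / k > 0][-1]                     # largest index with positive gap
    theta = cssv[rho - 1] / rho                       # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Example: projecting a dense vector onto a small l1-ball sparsifies it.
x = project_l1_ball(np.array([0.8, -0.3, 0.1, 2.0]), radius=1.0)
print(x, np.abs(x).sum())  # resulting l1-norm is 1.0
```

Because each iteration reduces to a sort plus a soft-threshold (and, for the low-rank component, an analogous projection onto a nuclear-norm ball), the per-step cost stays low even in high dimensions, which is what makes the online/stochastic setting practical.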


