An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification

08/11/2022
by   Runxue Bao, et al.

Sparsity regularized loss minimization problems play an important role in various fields, including machine learning, data mining, and modern statistics. The proximal gradient descent method and the coordinate descent method are among the most popular approaches to solving such minimization problems. Although existing methods can achieve implicit model identification, also known as support set identification, in a finite number of iterations, they still suffer from large computational costs and memory burdens in high-dimensional scenarios. The reason is that the support set identification in these methods is implicit and thus cannot exploit the low-complexity structure in practice; that is, they cannot discard the useless coefficients of inactive features to achieve algorithmic acceleration via dimension reduction. To address this challenge, we propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems, which reduces the number of block iterations by eliminating inactive coefficients during the optimization process, thereby achieving faster explicit model identification and improving algorithmic efficiency. Theoretically, we first prove that ADSGD achieves a linear convergence rate with lower overall computational complexity. More importantly, we prove that ADSGD achieves a linear rate of explicit model identification. Numerically, experimental results on benchmark datasets confirm the efficiency of our proposed method.
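To make the idea concrete, the sketch below (not the authors' ADSGD implementation) illustrates a doubly stochastic proximal gradient step for ℓ1-regularized least squares: each iteration samples both a mini-batch of examples and a block of coordinates, applies soft-thresholding, and periodically discards coordinates that remain at zero so that later iterations work in a reduced dimension. The function and parameter names (adsgd_sketch, lam, block_size, prune_every) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a doubly stochastic proximal gradient method with heuristic
# coordinate screening, assuming an l1-regularized least-squares objective.
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adsgd_sketch(X, y, lam=0.1, step=0.01, block_size=10, batch_size=32,
                 n_iters=2000, prune_every=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    active = np.arange(d)  # coordinates still considered active

    for it in range(1, n_iters + 1):
        # Double sampling: a mini-batch of examples and a block of active coordinates.
        rows = rng.choice(n, size=min(batch_size, n), replace=False)
        block = rng.choice(active, size=min(block_size, active.size), replace=False)

        # Stochastic gradient of the least-squares loss restricted to the block.
        residual = X[rows] @ w - y[rows]
        grad_block = X[np.ix_(rows, block)].T @ residual / rows.size

        # Proximal (soft-thresholding) update on the sampled block only.
        w[block] = soft_threshold(w[block] - step * grad_block, step * lam)

        # Explicit model identification (heuristic here): periodically drop
        # coordinates that are exactly zero, so future blocks are drawn from
        # a smaller set and per-iteration cost shrinks.
        if it % prune_every == 0:
            active = np.flatnonzero(w)
            if active.size == 0:
                break

    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 200))
    w_true = np.zeros(200)
    w_true[:5] = rng.standard_normal(5)
    y = X @ w_true + 0.01 * rng.standard_normal(500)
    w_hat = adsgd_sketch(X, y)
    print("nonzero coordinates recovered:", np.flatnonzero(w_hat))
```

The pruning step is what explicit model identification buys in practice: once inactive coefficients are removed, subsequent block updates operate on a smaller problem, which is the source of the claimed acceleration. The simple zero-check used here is only a stand-in for the paper's screening criterion.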


