Stochastic Primal-Dual Proximal ExtraGradient Descent for Compositely Regularized Optimization

08/20/2017
by   Tianyi Lin, et al.

We consider a wide range of regularized stochastic minimization problems with two regularization terms, one of which is composed with a linear function. This optimization model abstracts a number of important applications in artificial intelligence and machine learning, such as fused Lasso, fused logistic regression, and a class of graph-guided regularized minimization problems. The computational challenges of this model are twofold. On one hand, no closed-form solution is available for the proximal mapping associated with the composite regularization term or with the expected objective function. On the other hand, computing the full gradient of the expectation in the objective is prohibitively expensive when the number of input data samples is large. To address these issues, we propose a stochastic variant of extra-gradient-type methods, namely Stochastic Primal-Dual Proximal ExtraGradient descent (SPDPEG), and analyze its convergence properties for both convex and strongly convex objectives. For general convex objectives, the uniformly averaged iterates generated by SPDPEG converge in expectation at an O(1/√t) rate. For strongly convex objectives, the uniformly and non-uniformly averaged iterates generated by SPDPEG converge at O(log(t)/t) and O(1/t) rates, respectively. These rates match the best known convergence rates for first-order stochastic algorithms. Experiments on fused logistic regression and graph-guided regularized logistic regression problems show that the proposed algorithm performs efficiently and consistently outperforms competing algorithms.
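To make the setting concrete, the model is

    min_x  E_ξ[ f(x; ξ) ] + g(x) + h(Ax),

where f is a smooth loss given as an expectation over data, g and h are simple (proximable) regularizers, and A is a linear map; for graph-guided fused lasso, g(x) = λ1·||x||_1, h = λ2·||·||_1, and A is a graph difference operator. The problem admits the saddle-point reformulation min_x max_{||y||_∞ ≤ λ2} E_ξ[f(x; ξ)] + λ1·||x||_1 + ⟨Ax, y⟩. The following Python sketch runs a generic stochastic primal-dual extragradient (mirror-prox style) iteration on this saddle point for the logistic loss; it illustrates the algorithm family, not the paper's exact SPDPEG update, and the step sizes, sampling, and averaging scheme are placeholder choices.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_pd_extragradient(A_data, b, G, lam1=0.1, lam2=0.1,
                                tau=0.01, sigma=0.01, n_iters=1000,
                                batch=1, seed=None):
    # Illustrative sketch (not the paper's exact SPDPEG update) for
    #   min_x E[log(1 + exp(-b_i <a_i, x>))] + lam1*||x||_1 + lam2*||G x||_1,
    # where G plays the role of the linear map A above.
    rng = np.random.default_rng(seed)
    n, d = A_data.shape
    x = np.zeros(d)
    y = np.zeros(G.shape[0])          # dual variable, kept in ||y||_inf <= lam2
    x_avg = np.zeros(d)
    for t in range(n_iters):
        # Stochastic gradient of the logistic loss at the current point.
        idx = rng.integers(n, size=batch)
        z = b[idx] * (A_data[idx] @ x)
        grad = -(A_data[idx] * (b[idx] / (1.0 + np.exp(z)))[:, None]).mean(axis=0)
        # Extrapolation (leading) step: prox of lam1*||.||_1 for the primal,
        # projection onto the l_inf ball (prox of h*) for the dual.
        x_bar = soft_threshold(x - tau * (grad + G.T @ y), tau * lam1)
        y_bar = np.clip(y + sigma * (G @ x), -lam2, lam2)
        # Correction step using a fresh sample at the extrapolated point.
        idx2 = rng.integers(n, size=batch)
        z2 = b[idx2] * (A_data[idx2] @ x_bar)
        grad2 = -(A_data[idx2] * (b[idx2] / (1.0 + np.exp(z2)))[:, None]).mean(axis=0)
        x = soft_threshold(x - tau * (grad2 + G.T @ y_bar), tau * lam1)
        y = np.clip(y + sigma * (G @ x_bar), -lam2, lam2)
        x_avg += (x - x_avg) / (t + 1)  # uniform (running) average of iterates
    return x_avg

Returning the uniform average of the iterates mirrors the O(1/√t) guarantee stated above for general convex objectives; under strong convexity, a non-uniform (weighted) average is what improves the rate to O(1/t).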


