Inexact and Stochastic Generalized Conditional Gradient with Augmented Lagrangian and Proximal Step

05/11/2020
by Antonio Silveti-Falls, et al.

In this paper we propose and analyze inexact and stochastic versions of the CGALP algorithm developed in the authors' previous paper, which we denote ICGALP, that allow for errors in the computation of several important quantities. In particular, this allows one to compute some gradients, proximal terms, and/or linear minimization oracles in an inexact fashion that facilitates the practical application of the algorithm to computationally intensive settings, e.g., in high- (or possibly infinite-) dimensional Hilbert spaces commonly found in machine learning problems. The algorithm is able to solve composite minimization problems involving the sum of three convex, proper, lower-semicontinuous functions subject to an affine constraint of the form Ax=b for some bounded linear operator A. Only one of the functions in the objective is assumed to be differentiable; the other two are assumed to have an accessible prox operator and a linear minimization oracle, respectively. As main results, we show convergence of the Lagrangian to an optimum and asymptotic feasibility of the affine constraint, as well as weak convergence of the dual variable to a solution of the dual problem, all in an almost sure sense. Almost sure convergence rates, both pointwise and ergodic, are given for the Lagrangian values and the feasibility gap. Numerical experiments verifying the predicted rates of convergence are shown as well.
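To make the problem template concrete, the following is a minimal toy sketch of the general pattern the abstract describes: a conditional-gradient (Frank-Wolfe-type) direction computed on an augmented Lagrangian, an open-loop step size, and a dual ascent step to enforce an affine constraint. This is an illustrative sketch only, not the paper's CGALP/ICGALP iteration: the quadratic objective, the simplex constraint (whose indicator supplies an easy linear minimization oracle), the scalar affine constraint, and all step-size choices are assumptions made for this example, and the prox-friendly third term and the inexact/stochastic oracles are omitted.

```python
# Toy sketch (NOT the paper's exact algorithm): solve
#   min_x 0.5*||x - c||^2   s.t.   x in the unit simplex,  <a, x> = b,
# by taking conditional-gradient steps on the augmented Lagrangian
#   L_rho(x, mu) = f(x) + mu*(<a,x> - b) + (rho/2)*(<a,x> - b)^2
# and ascending in the scalar dual variable mu.

def solve(c, a, b, rho=10.0, iters=2000):
    n = len(c)
    x = [0.0] * n
    x[0] = 1.0          # start at a vertex of the simplex
    mu = 0.0            # dual multiplier for the scalar constraint <a,x> = b
    for k in range(iters):
        r = sum(ai * xi for ai, xi in zip(a, x)) - b   # constraint residual
        # Gradient of the augmented Lagrangian in x:
        #   grad f(x) + a*(mu + rho*r)
        grad = [(x[i] - c[i]) + a[i] * (mu + rho * r) for i in range(n)]
        # Linear minimization oracle over the simplex: the best vertex is the
        # coordinate with the smallest gradient entry.
        i_min = min(range(n), key=lambda i: grad[i])
        gamma = 2.0 / (k + 2)                          # open-loop step size
        # Convex combination keeps x in the simplex.
        x = [(1.0 - gamma) * xi for xi in x]
        x[i_min] += gamma
        # Dual ascent on the updated residual (diminishing dual step).
        r_new = sum(ai * xi for ai, xi in zip(a, x)) - b
        mu += gamma * rho * r_new
    return x, mu
```

Because every iterate is a convex combination of simplex vertices, feasibility of the set constraint is maintained exactly throughout, while the affine constraint is only satisfied asymptotically; this mirrors the "asymptotic feasibility" guarantee stated in the abstract.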


Related research

10/25/2022 - Faster Projection-Free Augmented Lagrangian Methods via Weak Proximal Oracle
This paper considers a convex composite optimization problem with affine...

10/02/2019 - Global exponential stability of primal-dual gradient flow dynamics based on the proximal augmented Lagrangian: A Lyapunov-based approach
For a class of nonsmooth composite optimization problems with linear equ...

11/20/2009 - Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for Sparsity Regularized Estimation
We analyze the convergence behaviour of a recently proposed algorithm fo...

01/29/2019 - Stochastic Conditional Gradient Method for Composite Convex Minimization
In this paper, we propose the first practical algorithm to minimize stoc...

04/09/2018 - Frank-Wolfe Splitting via Augmented Lagrangian Method
Minimizing a function over an intersection of convex sets is an importan...

01/23/2019 - A Fully Stochastic Primal-Dual Algorithm
A new stochastic primal-dual algorithm for solving a composite optimizat...

10/23/2020 - Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
We consider a stochastic version of the proximal point algorithm for opt...
