Generalized conditional gradient: analysis of convergence and applications

10/22/2015
by Alain Rakotomamonjy, et al.

The objective of this technical report is to provide additional results on the generalized conditional gradient method introduced by Bredies et al. [BLM05]. When the objective function is smooth, we derive a novel certificate of optimality and show that the algorithm converges at a linear rate. Applications of the algorithm are also discussed.
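The generalized conditional gradient scheme of [BLM05] minimizes a sum f + g of a smooth term and a (possibly nonsmooth) convex term by repeatedly solving a linearized subproblem and taking a convex-combination step. The sketch below is a rough illustration only, not the report's algorithm or notation: it applies such an iteration to a lasso-type problem 0.5*||Ax - b||^2 + lam*||x||_1 restricted to the box ||x||_inf <= R, a setting in which the linearized subproblem has a closed-form coordinatewise solution. The function name, the box constraint, and the 2/(k+2) step size are illustrative assumptions.

```python
import numpy as np

def generalized_conditional_gradient(A, b, lam, R, n_iters=200):
    """Illustrative generalized conditional gradient iteration for
       min_x 0.5*||A x - b||^2 + lam*||x||_1  subject to  ||x||_inf <= R.
    The linearized subproblem  min_s <grad f(x), s> + lam*||s||_1  over the box
    separates per coordinate and admits the closed-form solution used below."""
    n = A.shape[1]
    x = np.zeros(n)
    for k in range(n_iters):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth part f
        # coordinatewise minimizer of <grad, s> + lam*||s||_1 over [-R, R]^n
        s = np.where(np.abs(grad) > lam, -R * np.sign(grad), 0.0)
        gamma = 2.0 / (k + 2.0)                   # standard diminishing step size
        x = x + gamma * (s - x)                   # convex-combination update
    return x

# Example call on a random least-squares instance (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x_hat = generalized_conditional_gradient(A, b, lam=0.1, R=1.0)
```

A diminishing step size of 2/(k+2) is the usual default for conditional gradient methods; an exact line search over gamma could be used instead when f is quadratic, and the report's linear-rate result concerns the smooth setting rather than this particular step-size choice.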


