Safeguarded Learned Convex Optimization

03/04/2020
by Howard Heaton, et al.

Many applications require repeatedly solving a certain type of optimization problem, each time with new (but similar) data. Data-driven algorithms can "learn to optimize" (L2O) in far fewer iterations and at a per-iteration cost similar to that of general-purpose optimization algorithms. L2O algorithms are often derived from general-purpose algorithms, but with the inclusion of (possibly many) tunable parameters. Exceptional performance has been demonstrated when the parameters are optimized for a particular distribution of data. Unfortunately, it is impossible to ensure that all L2O algorithms always converge to a solution. However, we present a framework that uses L2O updates together with a safeguard to guarantee convergence for convex problems with proximal and/or gradient oracles. The safeguard is simple and computationally cheap to implement, and it is activated only when the current L2O updates would perform poorly or appear to diverge. This approach yields the numerical benefits of employing machine learning methods to create rapid L2O algorithms while still guaranteeing convergence. Our numerical examples demonstrate the efficacy of this approach for existing and new L2O schemes.
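To make the idea concrete, below is a minimal Python sketch of a safeguarded L2O loop for a composite convex problem min f(x) + g(x) with gradient and proximal oracles. The callables l2o_update, grad_f, and prox_g, the step size, and the residual-decrease test with factor alpha are all assumptions for illustration; the test shown (accept the learned update only if it sufficiently shrinks the proximal-gradient fixed-point residual, otherwise fall back to a standard proximal gradient step) is one plausible safeguard, not necessarily the paper's exact criterion.

```python
import numpy as np

def safeguarded_l2o(x0, l2o_update, grad_f, prox_g, step, max_iter=100, alpha=0.99):
    """Sketch of a safeguarded L2O iteration for min f(x) + g(x).

    l2o_update : hypothetical learned update, x -> candidate next iterate
    grad_f     : gradient oracle of f
    prox_g     : proximal oracle of g, prox_g(y, step)
    alpha      : required residual-decrease factor for accepting L2O steps
    """
    def residual(x):
        # Fixed-point residual of the proximal-gradient operator;
        # it vanishes exactly at solutions of the convex problem.
        return np.linalg.norm(x - prox_g(x - step * grad_f(x), step))

    x = x0
    for _ in range(max_iter):
        x_learned = l2o_update(x)                    # candidate from the learned scheme
        if residual(x_learned) <= alpha * residual(x):
            x = x_learned                            # safeguard passes: keep the L2O update
        else:
            x = prox_g(x - step * grad_f(x), step)   # fallback: plain proximal gradient step
    return x
```

Because the fallback is an ordinary proximal gradient step and the learned update is only accepted when it strictly decreases the residual, the iterates inherit the convergence behavior of the fallback method while still benefiting from fast learned updates whenever they behave well.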
