Safeguarded Learned Convex Optimization

03/04/2020
by Howard Heaton, et al.

Many applications require repeatedly solving a certain type of optimization problem, each time with new (but similar) data. Data-driven algorithms can "learn to optimize" (L2O), using far fewer iterations than general-purpose optimization algorithms at a similar cost per iteration. L2O algorithms are often derived from general-purpose algorithms, but with the inclusion of (possibly many) tunable parameters, and exceptional performance has been demonstrated when those parameters are optimized for a particular distribution of data. Unfortunately, there is no general guarantee that an L2O algorithm converges to a solution. To address this, we present a framework that pairs L2O updates with a safeguard to guarantee convergence for convex problems with proximal and/or gradient oracles. The safeguard is simple and computationally cheap to implement, and it is activated only when the current L2O update would perform poorly or appears to diverge. This approach retains the numerical benefits of using machine learning to create rapid L2O algorithms while still guaranteeing convergence. Our numerical examples demonstrate the efficacy of this approach for existing and new L2O schemes.
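To make the safeguarding idea concrete, here is a minimal sketch in Python/NumPy of one plausible instance, not the paper's exact algorithm: the fallback is a proximal-gradient step for a convex objective f + g given gradient and proximal oracles, and the safeguard keeps a learned candidate only when it shrinks the fixed-point residual of that step by a factor delta. The names safeguarded_l2o and naive_l2o, the specific residual test, and the constant delta are illustrative assumptions; a real trained L2O model would replace the naive update stub.

```python
import numpy as np


def safeguarded_l2o(x0, grad_f, prox_g, l2o_update, alpha,
                    max_iter=200, delta=0.99):
    """Run learned updates, falling back to proximal gradient when needed."""

    def T(z):
        # Proximal-gradient ("safe") operator for min_x f(x) + g(x).
        return prox_g(z - alpha * grad_f(z))

    def residual(z):
        # Fixed-point residual ||T(z) - z||; zero exactly at solutions.
        return np.linalg.norm(T(z) - z)

    x = x0
    for _ in range(max_iter):
        candidate = l2o_update(x)
        # Safeguard test: keep the learned update only if it makes
        # sufficient progress toward a fixed point of T; otherwise
        # take the provably convergent fallback step instead.
        if residual(candidate) <= delta * residual(x):
            x = candidate
        else:
            x = T(x)
    return x


# Toy usage: LASSO, with a deliberately naive stand-in for a trained model.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
alpha = 1.0 / np.linalg.norm(A, 2) ** 2                 # step size 1/L
grad_f = lambda x: A.T @ (A @ x - b)                    # f(x) = 0.5||Ax - b||^2
prox_g = lambda z: np.sign(z) * np.maximum(np.abs(z) - alpha * lam, 0.0)
naive_l2o = lambda x: x - 3.0 * alpha * grad_f(x)       # "learned" update stub
x_hat = safeguarded_l2o(np.zeros(50), grad_f, prox_g, naive_l2o, alpha)
```

Because every rejected candidate is replaced by a plain proximal-gradient step, the iteration inherits that method's convergence guarantee on convex problems, which is the essence of the safeguarding idea described in the abstract.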


Related research

02/09/2015 · Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions
Submodular function minimization is a fundamental optimization problem t...

08/29/2015 · Generalized Uniformly Optimal Methods for Nonlinear Programming
In this paper, we present a generic framework to extend existing uniform...

09/01/2015 · Adaptive Smoothing Algorithms for Nonsmooth Composite Convex Minimization
We propose an adaptive smoothing algorithm based on Nesterov's smoothing...

07/17/2019 · Dynamic optimization with side information
We present a data-driven framework for incorporating side information in...

03/01/2023 · Composite Optimization Algorithms for Sigmoid Networks
In this paper, we use composite optimization algorithms to solve sigmoid...

08/26/2021 · Non-Dissipative and Structure-Preserving Emulators via Spherical Optimization
Approximating a function with a finite series, e.g., involving polynomia...

06/23/2020 · Inexact Derivative-Free Optimization for Bilevel Learning
Variational regularization techniques are dominant in the field of mathe...
