Neural Conditional Gradients

03/12/2018
by Patrick Schramowski, et al.

The move from hand-designed to learned optimizers in machine learning has been quite successful for gradient-based and gradient-free optimizers. When facing a constrained problem, however, maintaining feasibility typically requires a projection step, which may be computationally expensive and non-differentiable. We show how the design of projection-free convex optimization algorithms can be cast as a learning problem based on Frank-Wolfe Networks: recurrent networks implementing the Frank-Wolfe algorithm, also known as conditional gradients. This allows them to learn to exploit structure when, e.g., optimizing over rank-1 matrices. Our LSTM-learned optimizers outperform hand-designed optimizers as well as learned but unconstrained ones. We demonstrate this for training support vector machines and softmax classifiers.
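
To make the underlying update concrete, the sketch below implements the classic, hand-designed Frank-Wolfe step in NumPy; the least-squares objective, nuclear-norm feasible set, and 2/(t+2) step size are illustrative assumptions, not the paper's learned optimizer. The point it shows is why the method is projection-free: the linear minimization oracle (LMO) over the nuclear-norm ball returns a rank-1 atom, and the convex combination of iterates stays feasible by construction.

```python
# Minimal sketch of vanilla Frank-Wolfe (conditional gradients).
# Assumptions for illustration only: quadratic objective, nuclear-norm
# ball as the feasible set, standard 2/(t+2) step size.
import numpy as np

def lmo_nuclear_ball(grad, radius=1.0):
    """LMO for {X : ||X||_* <= radius}: argmin_S <grad, S>.

    The minimizer is -radius * u v^T with (u, v) the top singular pair
    of the gradient -- a rank-1 atom, obtained without any projection.
    """
    u, _, vt = np.linalg.svd(grad, full_matrices=False)
    return -radius * np.outer(u[:, 0], vt[0, :])

def frank_wolfe(grad_fn, x0, radius=1.0, iters=200):
    """Vanilla Frank-Wolfe with the hand-designed 2/(t+2) step size."""
    x = x0
    for t in range(iters):
        s = lmo_nuclear_ball(grad_fn(x), radius)  # rank-1 search direction
        gamma = 2.0 / (t + 2.0)                   # step size schedule
        x = (1.0 - gamma) * x + gamma * s         # convex combo stays feasible
    return x

# Usage: recover a rank-1 matrix by least squares over the nuclear ball.
rng = np.random.default_rng(0)
target = np.outer(rng.normal(size=20), rng.normal(size=20))
target /= np.linalg.norm(target, ord="nuc")      # place target inside the ball
grad = lambda X: X - target                      # grad of 0.5 * ||X - target||_F^2
x_hat = frank_wolfe(grad, np.zeros_like(target))
print("residual:", np.linalg.norm(x_hat - target))
```

A learned variant in the spirit of the paper would replace the hand-designed step-size schedule (and possibly the direction choice) with an LSTM that proposes updates, while keeping the projection-free LMO structure.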

Related research

06/14/2016 · Learning to learn by gradient descent by gradient descent
The move from hand-designed features to learned features in machine lear...

10/26/2017 · Gradient Sparsification for Communication-Efficient Distributed Optimization
Modern large scale machine learning applications require stochastic opti...

10/14/2020 · Deep Neural Network Training with Frank-Wolfe
This paper studies the empirical efficacy and benefits of using projecti...

06/19/2019 · Locally Accelerated Conditional Gradients
Conditional gradient methods form a class of projection-free first-order...

06/19/2023 · Projection-Free Online Convex Optimization via Efficient Newton Iterations
This paper presents new projection-free algorithms for Online Convex Opt...

05/20/2018 · Projection-Free Algorithms in Statistical Estimation
Frank-Wolfe algorithm (FW) and its variants have gained a surge of inter...

10/21/2019 · Learning to Learn by Zeroth-Order Oracle
In the learning to learn (L2L) framework, we cast the design of optimiza...
