Learning to learn by gradient descent by gradient descent

06/14/2016
by Marcin Andrychowicz, et al.

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.
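
The idea can be made concrete with a small sketch. Below is a minimal PyTorch illustration (not the authors' code) of the core mechanism: a coordinate-wise LSTM that maps each gradient coordinate to a parameter update, meta-trained by backpropagating the optimizee's loss through an unrolled sequence of update steps. The quadratic optimizee, hyperparameters, and names such as `LSTMOptimizer` and `train_optimizer` are illustrative assumptions; the paper's full method additionally uses a two-layer LSTM, preprocessed gradients, and drops second derivatives during meta-training.

```python
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Coordinate-wise learned optimizer: one small LSTM shared across all parameter coordinates."""
    def __init__(self, hidden_size=20):
        super().__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTMCell(1, hidden_size)  # input: one gradient coordinate per batch row
        self.out = nn.Linear(hidden_size, 1)     # output: the proposed update for that coordinate

    def forward(self, grad, state):
        # grad: (n_params, 1); each coordinate is treated as an independent batch element
        h, c = self.lstm(grad, state)
        return self.out(h), (h, c)


def train_optimizer(meta_steps=100, unroll=20, n=10, hidden_size=20):
    """Meta-train the LSTM optimizer on random quadratic problems ||W theta - y||^2."""
    opt_net = LSTMOptimizer(hidden_size)
    meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)
    for _ in range(meta_steps):
        W, y = torch.randn(n, n), torch.randn(n)
        theta = torch.zeros(n, requires_grad=True)
        state = (torch.zeros(n, hidden_size), torch.zeros(n, hidden_size))
        meta_loss = 0.0
        for _ in range(unroll):
            loss = ((W @ theta - y) ** 2).sum()
            # keep the graph so the meta-loss can be backpropagated through the update rule
            grad, = torch.autograd.grad(loss, theta, create_graph=True)
            update, state = opt_net(grad.unsqueeze(1), state)
            theta = theta + update.squeeze(1)    # learned update replaces "-lr * grad"
            meta_loss = meta_loss + loss
        meta_opt.zero_grad()
        meta_loss.backward()
        meta_opt.step()
    return opt_net
```

Once meta-trained on a family of problems, the same `opt_net` can be applied to fresh problem instances in place of a hand-designed rule such as SGD or Adam, which is the sense in which the optimizer "learns to exploit structure in the problems of interest."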


Related research

Neural Conditional Gradients (03/12/2018)
The move from hand-designed to learned optimizers in machine learning ha...

Learning to be Global Optimizer (03/10/2020)
The advancement of artificial intelligence has cast a new light on the d...

Learning to Learn by Zeroth-Order Oracle (10/21/2019)
In the learning to learn (L2L) framework, we cast the design of optimiza...

Neural networks with differentiable structure (06/20/2016)
While gradient descent has proven highly successful in learning connecti...

Learning to Optimize Neural Nets (03/01/2017)
Learning to Optimize is a recently proposed framework for learning optim...

Learning to Learn without Gradient Descent by Gradient Descent (11/11/2016)
We learn recurrent neural network optimizers trained on simple synthetic...

Learning to Optimize Under Constraints with Unsupervised Deep Neural Networks (01/04/2021)
In this paper, we propose a machine learning (ML) method to learn how to...
