Symbolic Discovery of Optimization Algorithms

02/13/2023
by Xiangning Chen, et al.

We present a method to formulate algorithm discovery as program search, and apply it to discover optimization algorithms for deep neural network training. We leverage efficient search techniques to explore an infinite and sparse program space. To bridge the large generalization gap between proxy and target tasks, we also introduce program selection and simplification strategies. Our method discovers a simple and effective optimization algorithm, Lion (EvoLved Sign Momentum). It is more memory-efficient than Adam as it only keeps track of the momentum. Unlike adaptive optimizers, its update has the same magnitude for each parameter, computed through the sign operation. We compare Lion with widely used optimizers, such as Adam and Adafactor, for training a variety of models on different tasks. On image classification, Lion boosts the accuracy of ViT by up to 2% on ImageNet and saves up to 5x the pre-training compute on JFT. On vision-language contrastive learning, we achieve 88.3% zero-shot and 91.1% fine-tuning accuracy on ImageNet, surpassing the previous best results by 2% and 0.1%, respectively. On diffusion models, Lion outperforms Adam by achieving a better FID score and reducing the training compute by up to 2.3x. For autoregressive, masked language modeling, and fine-tuning tasks, Lion exhibits similar or better performance compared to Adam. Our analysis of Lion reveals that its performance gain grows with the training batch size. It also requires a smaller learning rate than Adam due to the larger norm of the update produced by the sign function. Additionally, we examine the limitations of Lion and identify scenarios where its improvements are small or not statistically significant. The implementation of Lion is publicly available.
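As a rough illustration of the sign-based, momentum-only update described above, the sketch below takes the update direction to be the sign of an interpolation between the current gradient and a single momentum buffer, so every parameter moves by the same magnitude and only one state vector is stored. This is a minimal sketch, not the official release: the function name sign_momentum_step and the hyperparameter values (lr, beta1, beta2, weight_decay) are illustrative assumptions; the exact Lion update rule and its defaults are given in the paper and its public implementation.

```python
# Minimal sketch (assumed interface) of a sign-based, momentum-only update
# of the kind described in the abstract: one state buffer per parameter,
# and an update whose elementwise magnitude is always 1.

import numpy as np

def sign_momentum_step(param, grad, momentum, lr=1e-4,
                       beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One update step; returns the new parameter and momentum arrays."""
    # Interpolate gradient and momentum, then take the sign as the update.
    update = np.sign(beta1 * momentum + (1.0 - beta1) * grad)
    # Decoupled weight decay, as commonly paired with such optimizers.
    new_param = param - lr * (update + weight_decay * param)
    # Momentum tracks an exponential moving average of the gradients.
    new_momentum = beta2 * momentum + (1.0 - beta2) * grad
    return new_param, new_momentum

# Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w = np.array([1.0, -2.0, 3.0])
m = np.zeros_like(w)
for _ in range(1000):
    w, m = sign_momentum_step(w, grad=w, momentum=m, lr=1e-2)
print(w)  # each coordinate ends up oscillating within ~lr of zero
```

Because the sign makes each coordinate of the update ±1, the update's Euclidean norm grows with the square root of the parameter count, which is consistent with the abstract's observation that Lion needs a smaller learning rate than Adam.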

Related research

07/27/2023 · The Marginal Value of Momentum for Small Learning Rate SGD
Momentum is known to accelerate the convergence of gradient descent in s...

01/24/2023 · Read the Signs: Towards Invariance to Gradient Descent's Hyperparameter Initialization
We propose ActiveLR, an optimization meta algorithm that localizes the l...

07/25/2019 · DEAM: Accumulated Momentum with Discriminative Weight for Stochastic Optimization
Optimization algorithms with momentum, e.g., Nesterov Accelerated Gradie...

06/22/2021 · Adaptive Learning Rate and Momentum for Training Deep Neural Networks
Recent progress on deep learning relies heavily on the quality and effic...

06/29/2020 · Gradient-only line searches to automatically determine learning rates for a variety of stochastic training algorithms
Gradient-only and probabilistic line searches have recently reintroduced...

07/26/2019 · BGADAM: Boosting based Genetic-Evolutionary ADAM for Convolutional Neural Network Optimization
Among various optimization algorithms, ADAM can achieve outstanding perf...

04/14/2020 · Stochastic batch size for adaptive regularization in deep network optimization
We propose a first-order stochastic optimization algorithm incorporating...

Please sign up or login with your details

Forgot password? Click here to reset