Memory Augmented Optimizers for Deep Learning

06/20/2021
by Paul-Aymeric McRae, et al.

Popular approaches for minimizing loss in data-driven learning often involve an abstraction or an explicit retention of the history of gradients for efficient parameter updates. The aggregated history of gradients nudges the parameter updates in the right direction even when the gradients at any given step are not informative. Although the history of gradients summarized in meta-parameters or explicitly stored in memory has been shown effective in theory and practice, the question of whether all or only a subset of the gradients in the history is sufficient for deciding the parameter updates remains unanswered. In this paper, we propose a framework of memory-augmented gradient descent optimizers that retain a limited view of their gradient history in their internal memory. Such optimizers scale well to large real-life datasets, and our experiments show that the memory-augmented extensions of standard optimizers enjoy accelerated convergence and improved performance on a majority of the computer vision and language tasks we considered. Additionally, we prove that the proposed class of optimizers with fixed-size memory converges under assumptions of strong convexity, regardless of which gradients are selected or how they are linearly combined to form the update step.
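The mechanism described in the abstract, keeping a fixed-size memory of recent gradients and forming each update as a linear combination of the stored entries, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch optimizer and not the authors' implementation: the class name MemorySGD, the buffer size memory_size, and the uniform averaging of stored gradients are all illustrative assumptions.

```python
# Minimal sketch of a memory-augmented SGD variant (illustrative only, not the
# paper's method): the optimizer stores the last `memory_size` gradients per
# parameter and steps along a linear combination (here, a uniform average) of
# the stored gradients.
from collections import deque

import torch
from torch.optim import Optimizer


class MemorySGD(Optimizer):
    def __init__(self, params, lr=1e-2, memory_size=5):
        defaults = dict(lr=lr, memory_size=memory_size)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                # Fixed-size memory of recent gradients; the oldest entry is
                # dropped automatically once the buffer is full.
                if "memory" not in state:
                    state["memory"] = deque(maxlen=group["memory_size"])
                state["memory"].append(p.grad.detach().clone())
                # Update direction: a uniform average of the stored gradients.
                # The convergence result quoted in the abstract is stated to be
                # agnostic to how the stored gradients are linearly combined.
                direction = torch.stack(list(state["memory"])).mean(dim=0)
                p.add_(direction, alpha=-group["lr"])
        return loss
```

Such a sketch drops into a standard training loop like any torch.optim optimizer, e.g. opt = MemorySGD(model.parameters(), lr=1e-2, memory_size=10); the uniform average used above is just one convenient choice of linear combination.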


Related research

06/08/2023  EMO: Episodic Memory Optimization for Few-Shot Meta-Learning
Few-shot meta-learning presents a challenge for gradient descent optimiz...

02/06/2023  Optimization using Parallel Gradient Evaluations on Multiple Parameters
We propose a first-order method for convex optimization, where instead o...

06/13/2023  Accelerated Convergence of Nesterov's Momentum for Deep Neural Networks under Partial Strong Convexity
Current state-of-the-art analyses on the convergence of gradient descent...

12/20/2022  Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers
Large pretrained language models have shown surprising In-Context Learni...

11/28/2022  AdaTask: A Task-aware Adaptive Learning Rate Approach to Multi-task Learning
Multi-task learning (MTL) models have demonstrated impressive results in...

08/01/2022  Dynamic Batch Adaptation
Current deep learning adaptive optimizer methods adjust the step magnitu...

05/25/2023  SketchOGD: Memory-Efficient Continual Learning
When machine learning models are trained continually on a sequence of ta...
