Improving information retention in large scale online continual learning

10/12/2022
by Zhipeng Cai et al.

Given a stream of data sampled from non-stationary distributions, online continual learning (OCL) aims to adapt efficiently to new data while retaining existing knowledge. The typical approach to information retention (the ability to retain previous knowledge) is to keep a fixed-size replay buffer and compute gradients on a mixture of new data and samples from the buffer. Surprisingly, recent work (Cai et al., 2021) suggests that information retention remains a problem in large-scale OCL even when the replay buffer is unlimited, i.e., when gradients are computed using all past data. This paper focuses on this peculiarity in order to understand and address information retention. To pinpoint the source of the problem, we show theoretically that, given a limited computation budget at each time step and even without a strict storage limit, naively applying SGD with a constant or constantly decreasing learning rate fails to optimize information retention in the long term. We propose a moving-average family of methods to improve optimization for non-stationary objectives. Specifically, we design an adaptive moving average (AMA) optimizer and a moving-average-based learning rate schedule (MALR). We demonstrate the effectiveness of AMA+MALR on large-scale benchmarks, including Continual Localization (CLOC), Google Landmarks, and ImageNet. Code will be released upon publication.
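To make the moving-average idea concrete, below is a minimal PyTorch sketch of a weight-averaging wrapper in the spirit of such methods: alongside the raw SGD iterates, it maintains an exponential moving average of the model weights and uses the averaged model for evaluation. The class name, the decay values, and the decay-adaptation rule are illustrative assumptions for this sketch, not the paper's actual AMA algorithm.

```python
import copy
import torch

class MovingAverageWrapper:
    """Sketch: exponential moving average (EMA) of model weights.

    Evaluating with the averaged weights, rather than the raw SGD
    iterates, smooths the noise introduced by a non-stationary stream.
    """

    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.model = model
        self.decay = decay
        # Separate copy that holds the averaged weights.
        self.avg_model = copy.deepcopy(model)
        for p in self.avg_model.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self):
        # avg <- decay * avg + (1 - decay) * current
        for p_avg, p in zip(self.avg_model.parameters(),
                            self.model.parameters()):
            p_avg.mul_(self.decay).add_(p, alpha=1.0 - self.decay)

    @torch.no_grad()
    def adapt_decay(self, avg_loss: float, online_loss: float):
        # Hypothetical adaptation rule: if the averaged model lags
        # behind the online model, shorten the averaging window so the
        # average tracks the shifting distribution more quickly.
        if avg_loss > online_loss:
            self.decay = max(0.9, self.decay - 0.01)
        else:
            self.decay = min(0.9999, self.decay + 0.001)
```

In an OCL loop, one would call `update()` after each SGD step on the incoming batch (mixed with replay data) and evaluate with `avg_model`; comparing the losses of the averaged and online models could similarly drive a moving-average-based learning rate schedule, which is the role MALR plays in the paper.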



Related research

05/26/2023: Summarizing Stream Data for Memory-Restricted Online Continual Learning
Replay-based methods have proved their effectiveness on online continual...

06/29/2023: Improving Online Continual Learning Performance and Stability with Temporal Ensembles
Neural networks are very effective when trained on large datasets for a...

05/18/2021: ACAE-REMIND for Online Continual Learning with Compressed Feature Replay
Online continual learning aims to learn from a non-IID stream of data fr...

12/22/2021: Continual learning of longitudinal health records
Continual learning denotes machine learning methods which can adapt to n...

07/15/2022: Improving Task-free Continual Learning by Distributionally Robust Memory Evolution
Task-free continual learning (CL) aims to learn a non-stationary data st...

10/06/2020: The Effectiveness of Memory Replay in Large Scale Continual Learning
We study continual learning in the large scale setting where tasks in th...

05/11/2021: TAG: Task-based Accumulated Gradients for Lifelong learning
When an agent encounters a continual stream of new tasks in the lifelong...
