Gradient-Matching Coresets for Rehearsal-Based Continual Learning

03/28/2022
by Lukas Balles et al.

The goal of continual learning (CL) is to efficiently update a machine learning model with new data without forgetting previously learned knowledge. Most widely used CL methods rely on a rehearsal memory of data points that are reused while training on new data. Curating this rehearsal memory so that it maintains a small, informative subset of all the data seen so far is crucial to the success of these methods. We devise a coreset selection method for rehearsal-based continual learning based on the idea of gradient matching: the gradients induced by the coreset should match, as closely as possible, those induced by the original training dataset. Inspired by neural tangent kernel theory, we perform this gradient matching across the model's initialization distribution, which allows us to extract a coreset without first training the model. We evaluate the method on a wide range of continual learning scenarios and demonstrate that it improves the performance of rehearsal-based CL methods compared to competing memory-management strategies such as reservoir sampling.
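To make the gradient-matching idea concrete, here is a minimal sketch, not the paper's actual algorithm: it uses a simple logistic-regression model, draws several random weight initializations (a crude stand-in for matching across the initialization distribution), and greedily selects points whose mean gradient best matches the full-data mean gradient. The function names, the greedy uniform-weight selection, and the model choice are all illustrative assumptions.

```python
import numpy as np

def per_example_grads(w, X, y):
    # Per-example gradients of the logistic loss w.r.t. weights w.
    # X: (n, d), y: (n,) in {0, 1}, w: (d,)  ->  returns (n, d).
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return (p - y)[:, None] * X

def gradient_matching_coreset(X, y, k, n_inits=5, seed=0):
    # Greedy coreset selection: pick k points whose average gradient,
    # stacked across several random initializations, is closest (in
    # Euclidean norm) to the full dataset's average gradient.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Stack per-example gradients from each random init: (n, n_inits * d).
    G = np.concatenate(
        [per_example_grads(rng.normal(size=d), X, y) for _ in range(n_inits)],
        axis=1,
    )
    target = G.mean(axis=0)  # full-data mean gradient across all inits
    selected = []
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i in range(n):
            if i in selected:
                continue
            err = np.linalg.norm(G[selected + [i]].mean(axis=0) - target)
            if err < best_err:
                best_i, best_err = i, err
        selected.append(best_i)
    return selected, best_err
```

The paper's method additionally has to operate under a fixed memory budget as tasks arrive and works with deep networks; this sketch only illustrates the core objective of matching coreset gradients to full-data gradients over random initializations.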

