Information-theoretic Online Memory Selection for Continual Learning

04/10/2022
by   Shengyang Sun, et al.

A challenging problem in task-free continual learning is the online selection of a representative replay memory from data streams. In this work, we investigate the online memory selection problem from an information-theoretic perspective. To gather the most information, we propose the surprise and the learnability criteria to pick informative points and to avoid outliers. We present a Bayesian model to compute the criteria efficiently by exploiting rank-one matrix structures. We demonstrate that these criteria encourage selecting informative points in a greedy algorithm for online memory selection. Furthermore, by identifying the importance of when the memory is updated, we introduce a stochastic information-theoretic reservoir sampler (InfoRS), which samples among selected points with high information. Compared to reservoir sampling, InfoRS demonstrates improved robustness against data imbalance. Finally, experiments on continual learning benchmarks demonstrate its efficiency and efficacy.
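The core idea of InfoRS, as described in the abstract, is to run reservoir sampling only over points deemed sufficiently informative. The following minimal sketch illustrates that filtering-plus-reservoir pattern; the function name `info_reservoir_sampler` and the generic `surprise_fn`/`threshold` interface are illustrative assumptions, not the paper's actual implementation (which computes surprise and learnability from a Bayesian model with rank-one updates).

```python
import random

def info_reservoir_sampler(stream, capacity, surprise_fn, threshold, seed=0):
    """Sketch of an information-filtered reservoir sampler (InfoRS-like).

    Points whose surprise falls below `threshold` are skipped; the
    remaining candidates are subsampled uniformly with standard
    reservoir sampling into a memory of size `capacity`.
    """
    rng = random.Random(seed)
    memory = []
    n_candidates = 0  # number of sufficiently surprising points seen so far
    for x in stream:
        if surprise_fn(x) < threshold:
            continue  # uninformative point: never enters the memory
        n_candidates += 1
        if len(memory) < capacity:
            memory.append(x)
        else:
            # each candidate replaces a memory slot with prob. capacity / n_candidates
            j = rng.randrange(n_candidates)
            if j < capacity:
                memory[j] = x
    return memory
```

With a toy surprise function such as `lambda x: x % 2` and `threshold=1`, only odd-valued points from the stream are eligible, and the memory holds a uniform subsample of them. The robustness to imbalance noted in the abstract comes from this gating: redundant, low-surprise points cannot flood the buffer the way they can in plain reservoir sampling.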


Related research

03/28/2022 · Gradient-Matching Coresets for Rehearsal-Based Continual Learning
The goal of continual learning (CL) is to efficiently update a machine l...

06/02/2021 · Online Coreset Selection for Rehearsal-based Continual Learning
A dataset is a shred of crucial evidence to describe a task. However, ea...

12/09/2021 · Gradient-matching coresets for continual learning
We devise a coreset selection method based on the idea of gradient match...

08/21/2021 · Principal Gradient Direction and Confidence Reservoir Sampling for Continual Learning
Task-free online continual learning aims to alleviate catastrophic forge...

08/04/2020 · Online Continual Learning under Extreme Memory Constraints
Continual Learning (CL) aims to develop agents emulating the human abili...

07/06/2021 · Prioritized training on points that are learnable, worth learning, and not yet learned
We introduce Goldilocks Selection, a technique for faster model training...
