Improving Task-free Continual Learning by Distributionally Robust Memory Evolution

07/15/2022
by   Zhenyi Wang, et al.

Task-free continual learning (CL) aims to learn from a non-stationary data stream, without explicit task definitions, while not forgetting previous knowledge. The widely adopted memory-replay approach can gradually become less effective on long data streams for two reasons: first, the model may memorize the stored examples and overfit the memory buffer; second, existing methods overlook the high uncertainty in the memory data distribution, since there is a large gap between the memory data distribution and the distribution of all previous data examples. To address these problems, we propose, for the first time, a principled memory evolution framework that dynamically evolves the memory data distribution, making the memory buffer gradually harder to memorize via distributionally robust optimization (DRO). We then derive a family of methods that evolve the memory buffer data in the continuous probability measure space via Wasserstein gradient flow (WGF). Because the proposed DRO is defined with respect to the worst-case evolved memory data distribution, it guarantees model performance and learns significantly more robust features than existing memory-replay-based methods. Extensive experiments on standard benchmarks demonstrate the effectiveness of the proposed methods in alleviating forgetting. As a by-product of the framework, our method is also more robust to adversarial examples than existing task-free CL methods.
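To make the core idea concrete, here is a minimal, illustrative PyTorch sketch of one way such memory evolution can be realized: a Langevin-style discretization of a gradient flow that perturbs buffered inputs toward higher loss (the DRO worst case, plus diffusion noise) before they are replayed. This is a sketch under stated assumptions, not the authors' implementation; the function name `evolve_memory` and all hyperparameters are illustrative.

```python
import torch
import torch.nn.functional as F

def evolve_memory(model, mem_x, mem_y, steps=5, step_size=0.01, noise_scale=0.001):
    """Evolve memory-buffer inputs so they become harder to memorize.

    Each step moves the stored inputs in the direction that increases
    the model's replay loss (a worst-case, DRO-style perturbation) and
    adds Gaussian noise, a simple Langevin-type discretization of a
    gradient flow on the memory data distribution. Names and values
    here are assumptions for illustration, not the paper's method.
    """
    mem_x = mem_x.clone().detach()
    for _ in range(steps):
        mem_x.requires_grad_(True)
        loss = F.cross_entropy(model(mem_x), mem_y)
        grad, = torch.autograd.grad(loss, mem_x)
        with torch.no_grad():
            # Gradient *ascent* on the loss: push the memory samples
            # toward the hardest nearby distribution, plus noise.
            mem_x = mem_x + step_size * grad \
                    + noise_scale * torch.randn_like(mem_x)
        mem_x = mem_x.detach()
    return mem_x
```

In a replay-based training loop, the evolved batch would then be replayed alongside the incoming stream batch, e.g. `loss = F.cross_entropy(model(stream_x), stream_y) + F.cross_entropy(model(evolve_memory(model, mem_x, mem_y)), mem_y)`, so the model is continually trained against memory data it cannot simply memorize.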


Related research

07/11/2022 · Learning an evolved mixture model for task-free continual learning
Recently, continual learning (CL) has gained significant interest becaus...

04/20/2023 · Regularizing Second-Order Influences for Continual Learning
Continual learning aims to learn on non-stationary data streams without ...

06/27/2020 · Gradient Based Memory Editing for Task-Free Continual Learning
Prior work on continual learning often operate in a "task-aware" manner,...

03/19/2022 · Practical Recommendations for Replay-based Continual Learning Methods
Continual Learning requires the model to learn from a stream of dynamic,...

08/21/2021 · Principal Gradient Direction and Confidence Reservoir Sampling for Continual Learning
Task-free online continual learning aims to alleviate catastrophic forge...

10/12/2022 · Improving information retention in large scale online continual learning
Given a stream of data sampled from non-stationary distributions, online...

10/12/2022 · Task-Free Continual Learning via Online Discrepancy Distance Learning
Learning from non-stationary data streams, also called Task-Free Continu...
