On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning

10/12/2022
by Lorenzo Bonicelli, et al.

Rehearsal approaches enjoy immense popularity with Continual Learning (CL) practitioners. These methods collect samples from previously encountered data distributions in a small memory buffer; subsequently, they repeatedly optimize on the latter to prevent catastrophic forgetting. This work draws attention to a hidden pitfall of this widespread practice: repeated optimization on a small pool of data inevitably leads to tight and unstable decision boundaries, which are a major hindrance to generalization. To address this issue, we propose Lipschitz-DrivEn Rehearsal (LiDER), a surrogate objective that induces smoothness in the backbone network by constraining its layer-wise Lipschitz constants w.r.t. replay examples. By means of extensive experiments, we show that applying LiDER delivers a stable performance gain to several state-of-the-art rehearsal CL methods across multiple datasets, both in the presence and absence of pre-training. Through additional ablative experiments, we highlight peculiar aspects of buffer overfitting in CL and better characterize the effect produced by LiDER. Code is available at https://github.com/aimagelab/LiDER
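The abstract describes constraining layer-wise Lipschitz constants as a surrogate objective added to the rehearsal loss. As a rough illustration of the general idea (not LiDER's actual objective, which estimates layer-wise constants from activations on replay examples), the snippet below sketches a classical layer-wise surrogate for a stack of linear layers: estimate each weight matrix's spectral norm (its Lipschitz constant as a linear map) by power iteration and penalize the sum. All function names here are illustrative assumptions.

```python
import numpy as np

def spectral_norm(W, n_iters=100, seed=0):
    """Estimate the largest singular value of W (its Lipschitz
    constant as a linear map) via power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v           # left-singular direction
        u /= np.linalg.norm(u)
        v = W.T @ u         # right-singular direction
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def lipschitz_penalty(weights):
    """Sum of squared layer-wise spectral norms. Added to the task
    loss with a small coefficient, this discourages the sharp,
    unstable decision boundaries that repeated optimization on a
    small buffer tends to produce."""
    return sum(spectral_norm(W) ** 2 for W in weights)
```

In a training loop, one would minimize `task_loss + lam * lipschitz_penalty(weights)` for some small coefficient `lam`; the paper's method instead measures smoothness with respect to the replay examples themselves, so this weight-only sketch is best read as background on Lipschitz regularization rather than a reimplementation.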


Related research

- 11/18/2021: GCR: Gradient Coreset Based Replay Buffer Selection For Continual Learning
  Continual learning (CL) aims to develop techniques by which a single mod...
- 04/20/2023: Regularizing Second-Order Influences for Continual Learning
  Continual learning aims to learn on non-stationary data streams without ...
- 05/26/2023: Summarizing Stream Data for Memory-Restricted Online Continual Learning
  Replay-based methods have proved their effectiveness on online continual...
- 06/14/2022: Learning towards Synchronous Network Memorizability and Generalizability for Continual Segmentation across Multiple Sites
  In clinical practice, a segmentation network is often required to contin...
- 10/04/2020: Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting
  The goal of continual learning (CL) is to learn a sequence of tasks with...
- 04/21/2023: SequeL: A Continual Learning Library in PyTorch and JAX
  Continual Learning is an important and challenging problem in machine le...
- 08/07/2019: Visualizing the PHATE of Neural Networks
  Understanding why and how certain neural networks outperform others is k...
