Exploring Example Influence in Continual Learning

09/25/2022
by Qing Sun, et al.

Continual Learning (CL) sequentially learns new tasks like human beings, with the goal of achieving better Stability (S, remembering past tasks) and Plasticity (P, adapting to new tasks). Because past training data are not available, it is valuable to explore how training examples differ in their influence on S and P, which may improve the learning pattern towards a better SP trade-off. Inspired by the Influence Function (IF), we first study example influence by adding a perturbation to the example weight and computing the resulting influence derivative. To avoid the storage and computation burden of the Hessian inverse in neural networks, we propose a simple yet effective MetaSP algorithm that simulates the two key steps in the computation of IF and obtains the S- and P-aware example influence. Moreover, we propose to fuse the two kinds of example influence by solving a dual-objective optimization problem, obtaining a fused influence towards SP Pareto optimality. The fused influence can be used to control the model update and to optimize rehearsal storage. Empirical results show that our algorithm significantly outperforms state-of-the-art methods on both task- and class-incremental benchmark CL datasets.
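The abstract only summarizes the procedure, so the following is a minimal PyTorch sketch of the general idea: perturb per-example weights, take one differentiable pseudo update, read off S- and P-aware influence as gradients with respect to the perturbation, and fuse the two with a min-norm combination. The tiny linear model, squared loss, single SGD step, and the closed-form two-objective fusion rule are all assumptions for illustration, not the authors' exact MetaSP algorithm.

```python
import torch

torch.manual_seed(0)

d, n_new, n_mem = 5, 8, 6
W = torch.randn(d, 1, requires_grad=True)                    # current model parameters (toy linear model)
X_new, y_new = torch.randn(n_new, d), torch.randn(n_new, 1)  # current-task mini-batch
X_mem, y_mem = torch.randn(n_mem, d), torch.randn(n_mem, 1)  # rehearsal (memory) examples

def loss(W, X, y):
    return ((X @ W - y) ** 2).mean()

# Perturb the weight of each current-task example (epsilon starts at zero),
# take one differentiable pseudo SGD step, and measure how the perturbation
# changes the plasticity loss (new data) and the stability loss (memory).
eps = torch.zeros(n_new, requires_grad=True)
per_example = ((X_new @ W - y_new) ** 2).squeeze(1)
train_loss = ((1.0 / n_new + eps) * per_example).sum()

lr = 0.1
grad_W, = torch.autograd.grad(train_loss, W, create_graph=True)
W_pseudo = W - lr * grad_W                 # differentiable pseudo update

loss_P = loss(W_pseudo, X_new, y_new)      # Plasticity: fit the new task
loss_S = loss(W_pseudo, X_mem, y_mem)      # Stability: preserve the memory

# Example influence = gradient of each objective w.r.t. the perturbation;
# negative entries mark examples whose up-weighting helps that objective.
infl_P, = torch.autograd.grad(loss_P, eps, retain_graph=True)
infl_S, = torch.autograd.grad(loss_S, eps)

# Fuse the two influences with the closed-form min-norm combination for two
# objectives, one standard way to aim at a Pareto-optimal trade-off.
g_s, g_p = infl_S, infl_P
alpha = torch.clamp(((g_p - g_s) @ g_p) / ((g_s - g_p) @ (g_s - g_p) + 1e-12),
                    0.0, 1.0)
fused = alpha * g_s + (1.0 - alpha) * g_p
print("fused per-example influence:", fused)
```

The fused influence could then, in the spirit of the abstract, re-weight examples in the actual model update or rank them for rehearsal storage.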

