DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning

04/30/2023
by Zifeng Wang, et al.

Rehearsal-based approaches are a mainstay of continual learning (CL). They mitigate catastrophic forgetting by maintaining a small, fixed-size buffer containing a subset of data from past tasks. While most rehearsal-based approaches study how to effectively exploit the knowledge in the buffered past data, little attention is paid to inter-task relationships and the critical task-specific and task-invariant knowledge they involve. By appropriately leveraging inter-task relationships, we propose DualHSIC, a novel CL method that boosts the performance of existing rehearsal-based methods in a simple yet effective way. DualHSIC consists of two complementary components built on the Hilbert-Schmidt independence criterion (HSIC): HSIC-Bottleneck for Rehearsal (HBR), which lessens inter-task interference, and HSIC Alignment (HA), which promotes task-invariant knowledge sharing. Extensive experiments show that DualHSIC can be seamlessly plugged into existing rehearsal-based methods for consistent performance improvements, and that it also outperforms recent state-of-the-art regularization-enhanced rehearsal methods. Source code will be released.
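For readers unfamiliar with HSIC, the sketch below shows a standard biased empirical HSIC estimator with RBF kernels, the kind of kernel-based dependence measure that components such as HBR and HA are built on. The kernel choice, the median-bandwidth heuristic, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a biased empirical HSIC estimator with RBF kernels.
# Illustrates the Hilbert-Schmidt independence criterion that DualHSIC builds on;
# the paper's actual HBR/HA losses are not reproduced here, and the kernel and
# bandwidth choices below are assumptions.
import torch


def rbf_kernel(x: torch.Tensor, sigma=None) -> torch.Tensor:
    """Pairwise RBF (Gaussian) kernel matrix over a batch of flattened features."""
    x = x.flatten(start_dim=1)                    # (n, d)
    sq_dists = torch.cdist(x, x, p=2.0) ** 2      # (n, n) squared Euclidean distances
    if sigma is None:
        # Median heuristic for the bandwidth (a common default, not from the paper).
        sigma = sq_dists.detach().median().clamp_min(1e-12).sqrt()
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))


def hsic(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2, with centering matrix H."""
    n = x.shape[0]
    k, l = rbf_kernel(x), rbf_kernel(y)
    h = torch.eye(n, device=x.device) - torch.full((n, n), 1.0 / n, device=x.device)
    return torch.trace(k @ h @ l @ h) / (n - 1) ** 2
```

In an HSIC-bottleneck-style objective, one typically suppresses the dependence between intermediate features and the raw inputs while retaining dependence with the labels; how DualHSIC applies such terms to the rehearsal buffer and across tasks is defined in the paper itself.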


