Task-Free Continual Learning via Online Discrepancy Distance Learning

10/12/2022
by Fei Ye, et al.

Learning from non-stationary data streams, also called Task-Free Continual Learning (TFCL), remains challenging due to the absence of explicit task information. Although several methods have recently been proposed for TFCL, they lack theoretical guarantees, and forgetting during TFCL had not previously been analysed theoretically. This paper develops a new theoretical analysis framework that derives generalization bounds based on the discrepancy distance between the visited samples and the entire information made available for training the model, giving new insights into the forgetting behaviour in classification tasks. Inspired by this theoretical model, we propose Online Discrepancy Distance Learning (ODDL), a new approach built on a dynamic component expansion mechanism for a mixture model. ODDL estimates the discrepancy between the probabilistic representation of the current memory buffer and the already accumulated knowledge, and uses it as an expansion signal to ensure a compact network architecture with optimal performance. We then propose a new sample selection approach that selectively stores the most relevant samples in the memory buffer according to this discrepancy-based measure, further improving performance. Experiments on several TFCL benchmarks demonstrate that the proposed approach achieves state-of-the-art performance.
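The expansion and sample-selection mechanisms described above can be illustrated with a minimal sketch. The paper's discrepancy distance is approximated here by a Gaussian-kernel MMD estimate purely for illustration, which is not the measure used in the paper; the class DynamicMixture, the function select_samples, and the expansion_threshold parameter are hypothetical names introduced for this example.

import numpy as np

def mmd(x, y, sigma=1.0):
    """Biased squared-MMD estimate between feature sets x and y (rows are samples).
    A stand-in proxy for the paper's discrepancy distance."""
    def k(a, b):
        d = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-d / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class DynamicMixture:
    """Grows a new component when the buffer's distribution drifts away
    from the stored knowledge (hypothetical sketch, not the paper's code)."""
    def __init__(self, expansion_threshold=0.1):
        self.components = []          # each entry: features the component was trained on
        self.threshold = expansion_threshold

    def maybe_expand(self, buffer_feats):
        if not self.components:
            self.components.append(buffer_feats)
            return True
        # Discrepancy between the current buffer and the most recent
        # component, used as a cheap proxy for the accumulated knowledge.
        score = mmd(buffer_feats, self.components[-1])
        if score > self.threshold:    # expansion signal: distribution shift detected
            self.components.append(buffer_feats)
            return True
        return False

def select_samples(buffer_feats, stored_feats, k):
    """Keep the k buffer samples least similar (mean kernel similarity) to the
    stored knowledge, a stand-in for discrepancy-based sample selection."""
    d = np.sum(buffer_feats**2, 1)[:, None] \
        + np.sum(stored_feats**2, 1)[None, :] - 2 * buffer_feats @ stored_feats.T
    sim = np.exp(-d / 2).mean(axis=1)
    return buffer_feats[np.argsort(sim)[:k]]   # least similar first

A quick check of the expansion signal: feeding the mixture two batches from the same distribution keeps it compact, while a shifted batch triggers growth.

rng = np.random.default_rng(0)
mix = DynamicMixture(expansion_threshold=0.1)
print(mix.maybe_expand(rng.normal(0.0, 1.0, (64, 2))))  # True: first component
print(mix.maybe_expand(rng.normal(0.0, 1.0, (64, 2))))  # False: same distribution
print(mix.maybe_expand(rng.normal(2.0, 1.0, (64, 2))))  # True: shifted stream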
