Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning

06/15/2022
by Tianlong Chen, et al.

Class-incremental learning (CIL) suffers from the notorious dilemma between learning newly added classes and preserving previously learned class knowledge. This catastrophic forgetting issue can be mitigated by storing historical data for replay, which, however, incurs memory overhead as well as imbalanced prediction updates. To address this dilemma, we propose to leverage "free" external unlabeled data querying in continual learning. We first present a CIL with Queried Unlabeled Data (CIL-QUD) scheme, in which we store only a handful of past training samples as anchors and use them to query relevant unlabeled examples each time. Along with new and past stored data, the queried unlabeled data are effectively utilized through learning-without-forgetting (LwF) regularizers and class-balanced training. Besides preserving model generalization over past and current tasks, we next study the problem of adversarial robustness for CIL-QUD. Inspired by the recent success of learning robust models with unlabeled data, we explore a new robustness-aware CIL setting, where the learned adversarial robustness has to resist forgetting and transfer as new tasks arrive continually. While existing options easily fail, we show that queried unlabeled data can continue to benefit, and we seamlessly extend CIL-QUD into its robustified version, RCIL-QUD. Extensive experiments demonstrate that CIL-QUD achieves substantial accuracy gains on CIFAR-10 and CIFAR-100 compared to previous state-of-the-art CIL approaches. Moreover, RCIL-QUD establishes the first strong milestone for robustness-aware CIL. Code is available at https://github.com/VITA-Group/CIL-QUD.
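
The anchor-based query step can be pictured with a short sketch. The following is a minimal illustration, assuming a PyTorch feature extractor and cosine-similarity retrieval; the function name query_unlabeled, the shapes, and the scoring rule are hypothetical, not taken from the paper's codebase.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def query_unlabeled(encoder, anchors, unlabeled, k=100):
        # anchors:   (A, C, H, W) stored past-class exemplars
        # unlabeled: (U, C, H, W) candidate pool of external images
        a_feat = F.normalize(encoder(anchors), dim=1)    # (A, D) unit-norm features
        u_feat = F.normalize(encoder(unlabeled), dim=1)  # (U, D) unit-norm features
        sim = u_feat @ a_feat.t()                        # (U, A) cosine similarities
        # Score each candidate by its best-matching anchor, then keep
        # the k most relevant images for replay.
        scores = sim.max(dim=1).values
        topk = scores.topk(min(k, scores.numel())).indices
        return unlabeled[topk]

A global top-k as above is one plausible design; retrieving top-k per anchor class instead would keep the queried pool class-balanced, which would dovetail with the class-balanced training the abstract mentions.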

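A core ingredient named in the abstract is the LwF regularizer applied to the queried unlabeled data. Below is a minimal sketch of such a distillation term, assuming PyTorch; it follows the standard LwF recipe (temperature-softened KL divergence between the frozen old model and the new model) and is not the paper's exact loss.

    import torch
    import torch.nn.functional as F

    def lwf_unlabeled_loss(new_model, old_model, unlabeled_batch, T=2.0):
        # The frozen old model provides soft targets on the queried
        # unlabeled images; matching them discourages forgetting.
        with torch.no_grad():
            old_logits = old_model(unlabeled_batch)
        new_logits = new_model(unlabeled_batch)
        # The new model's head also covers newly added classes; distill
        # only on the shared past-class logits, softened by temperature T.
        n_old = old_logits.size(1)
        p_old = F.softmax(old_logits / T, dim=1)
        log_p_new = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
        return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)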

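For the robustness-aware setting, one natural way unlabeled data can help is robust self-training: pseudo-label the queried images with the current model, then adversarially train on them. The sketch below assumes PyTorch, inputs in [0, 1], and standard L-infinity PGD; it illustrates that general recipe rather than RCIL-QUD's exact procedure, and all names and hyperparameters are illustrative.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Standard L-infinity PGD on inputs in [0, 1].
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
        return x_adv.detach()

    def robust_unlabeled_loss(model, unlabeled_batch):
        # Pseudo-label the queried images with the current model, then
        # train on their adversarial counterparts (robust self-training).
        with torch.no_grad():
            pseudo = model(unlabeled_batch).argmax(dim=1)
        x_adv = pgd_attack(model, unlabeled_batch, pseudo)
        return F.cross_entropy(model(x_adv), pseudo)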
