Learning to Imagine: Diversify Memory for Incremental Learning using Unlabeled Data

04/19/2022
by Yu-Ming Tang et al.

Deep neural networks (DNNs) suffer from catastrophic forgetting when trained incrementally, which greatly limits their applications. Although maintaining a handful of samples (called `exemplars`) per task alleviates forgetting to some extent, existing methods remain limited by the small exemplar budget: the exemplars are too few to carry enough task-specific knowledge, so forgetting persists. To overcome this problem, we propose to `imagine` diverse counterparts of the given exemplars by drawing on the abundant semantic-irrelevant information in unlabeled data. Specifically, we develop a learnable feature generator that diversifies exemplars by adaptively generating counterparts conditioned on semantic information from the exemplars and semantic-irrelevant information from unlabeled data. We introduce semantic contrastive learning to enforce that the generated samples are semantically consistent with their exemplars, and semantic-decoupling contrastive learning to encourage diversity among the generated samples. The diverse generated samples effectively prevent the DNN from forgetting when learning new tasks. Our method incurs no extra inference cost and outperforms state-of-the-art methods on two benchmarks, CIFAR-100 and ImageNet-Subset, by a clear margin.
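To make the recipe in the abstract concrete, here is a minimal PyTorch sketch of one plausible reading of it: an AdaIN-style generator that re-styles an exemplar feature with channel statistics from an unlabeled feature, an InfoNCE-style semantic contrastive loss that ties each generated feature to its source exemplar, and a decoupling term that pushes apart two counterparts imagined from the same exemplar. The names (`FeatureGenerator`, `semantic_contrastive_loss`, `decoupling_contrastive_loss`), the AdaIN mixing, and the exact loss forms are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureGenerator(nn.Module):
    """Hypothetical AdaIN-style generator: keeps the semantic content of an
    exemplar feature map but re-styles it with channel statistics drawn from
    an unlabeled feature map."""

    def __init__(self, channels: int):
        super().__init__()
        # small affine head mapping unlabeled statistics to per-channel
        # scale/shift parameters (an assumption about the generator's form)
        self.affine = nn.Linear(2 * channels, 2 * channels)

    def forward(self, exemplar_feat: torch.Tensor,
                unlabeled_feat: torch.Tensor) -> torch.Tensor:
        # both inputs: (B, C, H, W) feature maps from the backbone
        b, c = exemplar_feat.shape[:2]
        # instance-normalize the exemplar (keep semantics, drop its own style)
        mu = exemplar_feat.mean(dim=(2, 3), keepdim=True)
        sd = exemplar_feat.std(dim=(2, 3), keepdim=True) + 1e-6
        normed = (exemplar_feat - mu) / sd
        # channel mean/std of the unlabeled feature act as the new "style"
        stats = torch.cat([unlabeled_feat.mean(dim=(2, 3)),
                           unlabeled_feat.std(dim=(2, 3))], dim=1)
        scale, shift = self.affine(stats).chunk(2, dim=1)
        return normed * scale.view(b, c, 1, 1) + shift.view(b, c, 1, 1)


def semantic_contrastive_loss(gen: torch.Tensor, exemplar: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE over the batch: each generated feature should match its own
    source exemplar (diagonal positives) and repel the other exemplars."""
    g = F.normalize(gen.flatten(1), dim=1)       # (B, D)
    e = F.normalize(exemplar.flatten(1), dim=1)  # (B, D)
    logits = g @ e.t() / temperature             # (B, B) similarity matrix
    targets = torch.arange(g.size(0), device=g.device)
    return F.cross_entropy(logits, targets)


def decoupling_contrastive_loss(gen_a: torch.Tensor, gen_b: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Diversity term: two counterparts imagined from the same exemplar with
    different unlabeled inputs are penalized for being similar."""
    a = F.normalize(gen_a.flatten(1), dim=1)
    b = F.normalize(gen_b.flatten(1), dim=1)
    return (a * b).sum(dim=1).div(temperature).mean()


if __name__ == "__main__":
    # toy check with random tensors standing in for backbone features
    gen_net = FeatureGenerator(channels=64)
    exemplar = torch.randn(8, 64, 7, 7)
    unlab_a = torch.randn(8, 64, 7, 7)
    unlab_b = torch.randn(8, 64, 7, 7)
    fake_a = gen_net(exemplar, unlab_a)
    fake_b = gen_net(exemplar, unlab_b)
    loss = (semantic_contrastive_loss(fake_a, exemplar)
            + decoupling_contrastive_loss(fake_a, fake_b))
    loss.backward()
    print(f"total loss: {loss.item():.3f}")
```

In the full method the generated features would be mixed into replay batches when training on a new task; the training loop, backbone, and exemplar-selection details are omitted here.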

Related research

11/23/2022 · Semi-Supervised Lifelong Language Learning
Lifelong learning aims to accumulate knowledge and alleviate catastrophi...

03/29/2019 · Incremental Learning with Unlabeled Data in the Wild
Deep neural networks are known to suffer from catastrophic forgetting in...

06/15/2022 · Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning
Class-incremental learning (CIL) suffers from the notorious dilemma betw...

11/18/2022 · Delving into Transformer for Incremental Semantic Segmentation
Incremental semantic segmentation (ISS) is an emerging task where old mod...

11/10/2022 · Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations
Online continual learning (OCL) aims to enable model learning from a non...

02/14/2022 · Learn by Challenging Yourself: Contrastive Visual Representation Learning with Hard Sample Generation
Contrastive learning (CL), a self-supervised learning approach, can effe...

04/25/2023 · SAFE: Machine Unlearning With Shard Graphs
We present Synergy Aware Forgetting Ensemble (SAFE), a method to adapt l...
