Incremental Classifier Learning with Generative Adversarial Networks

02/02/2018
by Yue Wu, et al.

In this paper, we address the incremental classifier learning problem, which suffers from catastrophic forgetting. The main cause of catastrophic forgetting is that past data are not available during learning. Typical approaches keep some exemplars of the past classes and use distillation regularization to retain the classification capability on the past classes and to balance the past and new classes. However, these approaches have four main problems. First, the loss function is not efficient for classification. Second, there is an imbalance between the past and new classes. Third, the pre-selected exemplar set is usually small, and the exemplars may not be distinguishable from unseen new classes. Fourth, the exemplars may not be allowed to be kept for a long time due to privacy regulations. To address these problems, we propose (a) a new loss function that combines the cross-entropy loss and the distillation loss, (b) a simple way to estimate and remove the imbalance between the old and new classes, and (c) using Generative Adversarial Networks (GANs) to generate historical data and to select representative exemplars during generation. We believe that data generated by GANs raise far fewer privacy concerns than real images, because GANs do not directly copy any real image patches. We evaluate the proposed method on the CIFAR-100, Flower-102, and MS-Celeb-1M-Base datasets, and extensive experiments demonstrate the effectiveness of our method.
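To make point (a) concrete, the following PyTorch sketch shows one common way to combine a cross-entropy loss over all classes with a knowledge-distillation loss on the old-class outputs of the frozen previous model. This is a minimal, hypothetical illustration of the general technique, not the authors' exact formulation; the function name combined_loss, the mixing weight lam, and the temperature T are our assumptions.

import torch
import torch.nn.functional as F

def combined_loss(logits, labels, old_logits, n_old, lam=0.5, T=2.0):
    """Hypothetical sketch of a cross-entropy + distillation loss for
    incremental learning (not the paper's exact formulation).

    logits:     current model outputs, shape (B, n_old + n_new)
    labels:     ground-truth class indices, shape (B,)
    old_logits: outputs of the frozen previous model, shape (B, n_old)
    n_old:      number of classes seen in previous increments
    lam:        weight balancing classification vs. distillation
    T:          softmax temperature for distillation
    """
    # Standard cross-entropy over all (old + new) classes.
    ce = F.cross_entropy(logits, labels)

    # Distillation: match the softened old-class predictions of the
    # previous model (temperature-scaled, as in Hinton et al., 2015).
    log_p_new = F.log_softmax(logits[:, :n_old] / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1)
    kd = F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

    return (1.0 - lam) * ce + lam * kd

# Example usage with random tensors:
# B, n_old, n_new = 32, 50, 10
# logits = torch.randn(B, n_old + n_new)
# old_logits = torch.randn(B, n_old)
# labels = torch.randint(0, n_old + n_new, (B,))
# loss = combined_loss(logits, labels, old_logits, n_old)

The split into a classification term and a distillation term mirrors the structure described in the abstract; how the two terms are actually weighted and combined is specified in the full paper.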


Related research

07/05/2019 · Incremental Concept Learning via Online Generative Memory Recall
The ability to learn more and more concepts over time from incrementally...

03/23/2021 · Balanced Softmax Cross-Entropy for Incremental Learning
Deep neural networks are prone to catastrophic forgetting when increment...

12/15/2020 · Class-incremental Learning with Rectified Feature-Graph Preservation
In this paper, we address the problem of distillation-based class-increm...

07/09/2021 · Lifelong Twin Generative Adversarial Networks
In this paper, we propose a new continuously learning generative model, ...

07/08/2018 · Distillation Techniques for Pseudo-rehearsal Based Incremental Learning
The ability to learn from incrementally arriving data is essential for a...

10/16/2019 · Label-Conditioned Next-Frame Video Generation with Neural Flows
Recent state-of-the-art video generation systems employ Generative Adver...
