Maintaining Discrimination and Fairness in Class Incremental Learning

11/16/2019
by Bowen Zhao, et al.

Deep neural networks (DNNs) have been applied to class incremental learning, which aims to solve the common real-world problem of learning new classes continually. One drawback of standard DNNs is that they are prone to catastrophic forgetting. Knowledge distillation (KD) is a commonly used technique to alleviate this problem. In this paper, we demonstrate that KD can indeed help the model output more discriminative results within old classes. However, it cannot alleviate the model's tendency to classify objects into new classes, which hides and limits the positive effect of KD. We observe that an important factor causing catastrophic forgetting is that the weights in the last fully connected (FC) layer become highly biased in class incremental learning. Motivated by these observations, we propose a simple and effective solution to catastrophic forgetting. First, we utilize KD to maintain discrimination within old classes. Then, to further maintain fairness between old and new classes, we propose Weight Aligning (WA), which corrects the biased weights in the FC layer after the normal training process. Unlike previous work, WA requires neither extra parameters nor a validation set, as it uses only the information provided by the biased weights themselves. The proposed method is evaluated on ImageNet-1000, ImageNet-100, and CIFAR-100 under various settings. Experimental results show that it effectively alleviates catastrophic forgetting and significantly outperforms state-of-the-art methods.
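
The abstract describes Weight Aligning only at a high level. As a rough illustration of how a norm-based correction of the FC-layer weights could look, here is a minimal PyTorch sketch, assuming the classifier head is an nn.Linear whose first num_old_classes rows correspond to old classes; the helper name weight_align and its arguments are illustrative, not taken from the paper.

```python
import torch

@torch.no_grad()
def weight_align(fc_layer: torch.nn.Linear, num_old_classes: int) -> None:
    """Rescale the new-class weight vectors of the final FC layer so that
    their average norm matches that of the old-class weight vectors."""
    weights = fc_layer.weight                        # shape: (num_classes, feature_dim)
    old_norms = weights[:num_old_classes].norm(dim=1)
    new_norms = weights[num_old_classes:].norm(dim=1)
    # After incrementally training on new classes, the new-class weight
    # vectors tend to have larger norms, biasing predictions toward new
    # classes; a gamma below 1 shrinks the new-class logits accordingly.
    gamma = old_norms.mean() / new_norms.mean()
    weights[num_old_classes:] *= gamma
```

Applied once after the normal training phase of each incremental step, such a rescaling would rebalance the magnitudes of old- and new-class logits without extra parameters or held-out validation data, consistent with the abstract's claim that WA uses only the information carried by the biased weights themselves.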

Related research

03/19/2019  Class-incremental Learning via Deep Model Consolidation
Deep neural networks (DNNs) often suffer from "catastrophic forgetting" ...

08/29/2023  Rotation Augmented Distillation for Exemplar-Free Class Incremental Learning with Detailed Analysis
Class incremental learning (CIL) aims to recognize both the old and new ...

04/10/2022  FOSTER: Feature Boosting and Compression for Class-Incremental Learning
The ability to learn new concepts continually is necessary in this ever-...

06/30/2021  When Video Classification Meets Incremental Classes
With the rapid development of social media, tremendous videos with new c...

05/30/2019  Large Scale Incremental Learning
Modern machine learning suffers from catastrophic forgetting when learni...

07/31/2019  Overcoming Catastrophic Forgetting by Neuron-level Plasticity Control
To address the issue of catastrophic forgetting in neural networks, we p...

05/09/2023  SRIL: Selective Regularization for Class-Incremental Learning
Human intelligence gradually accepts new information and accumulates kno...
