Self-Training for Class-Incremental Semantic Segmentation

12/06/2020
by Lu Yu, et al.
We study incremental learning for semantic segmentation, where new classes are learned without access to the labeled data of previous tasks. When incrementally learning new classes, deep neural networks suffer from catastrophic forgetting of previously learned knowledge. To address this problem, we propose a self-training approach that leverages unlabeled data for rehearsal of previous knowledge. In addition, we propose conflict reduction to resolve conflicts between the pseudo labels generated by the old and new models. We show that maximizing self-entropy further improves results by smoothing overconfident predictions. The experiments demonstrate state-of-the-art results: a relative gain of up to 114% on Pascal-VOC 2012 and 8.5% on ADE20K compared to state-of-the-art methods.
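To make the two key ideas concrete, here is a minimal NumPy sketch of (a) merging pseudo labels from an old and a new model and (b) computing the self-entropy that the paper proposes to maximize. The conflict-reduction rule shown (keep the more confident of the two disagreeing predictions) is an illustrative assumption, not necessarily the paper's exact scheme, and all function names are hypothetical.

```python
import numpy as np

def merge_pseudo_labels(probs_old, probs_new):
    """Combine per-pixel pseudo labels from the old and new models.

    probs_old, probs_new: arrays of shape (num_pixels, num_classes) with
    per-class probabilities. Where the argmax labels disagree, this sketch
    keeps the more confident prediction (an assumed conflict-reduction rule).
    """
    lab_old = probs_old.argmax(axis=-1)
    lab_new = probs_new.argmax(axis=-1)
    conf_old = probs_old.max(axis=-1)
    conf_new = probs_new.max(axis=-1)
    # Agreeing pixels keep the shared label; conflicts go to the higher confidence.
    return np.where(lab_old == lab_new, lab_new,
                    np.where(conf_old >= conf_new, lab_old, lab_new))

def self_entropy(probs, eps=1e-12):
    """Mean per-pixel entropy of the predicted distributions.

    Adding this term (to be maximized) to the training objective penalizes
    overconfident, peaked predictions and smooths them toward uniform.
    """
    return float(-(probs * np.log(probs + eps)).sum(axis=-1).mean())
```

A uniform distribution has the highest self-entropy, so maximizing this term pushes overconfident predictions toward smoother ones.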
