Decoupling Representation and Classifier for Noisy Label Learning

11/16/2020
by Hui Zhang, et al.

Since convolutional neural networks (ConvNets) can easily memorize noisy labels, which are ubiquitous in visual classification tasks, training ConvNets robustly against them remains a great challenge. Various solutions, e.g., sample selection, label correction, and robust loss functions, have been proposed for this challenge, and most of them stick to end-to-end training of the representation (feature extractor) and classifier. In this paper, through a careful re-examination of the learning behaviors of the representation and classifier, we discover that the representation is much more fragile in the presence of noisy labels than the classifier. Motivated by this finding, we design a new method, i.e., REED, to learn from noisy labels robustly. The proposed method contains three stages: obtaining the representation by self-supervised learning without any labels, converting the noisy-label learning problem into a semi-supervised one via a classifier trained directly and reliably on the noisy labels, and jointly retraining both the representation and classifier in a semi-supervised manner. Extensive experiments are performed on both synthetic and real benchmark datasets. Results demonstrate that the proposed method outperforms state-of-the-art methods by a large margin, especially under high noise levels.
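The abstract outlines a three-stage pipeline. Below is a minimal PyTorch sketch of that pipeline, not the paper's implementation: the rotation-prediction pretext task, the small-loss rule for picking reliable samples, the pseudo-labeling step, and all hyper-parameters are assumptions made only to illustrate how the stages fit together.

```python
# Hypothetical sketch of a "self-supervised -> classifier-on-noisy-labels ->
# joint semi-supervised retraining" pipeline. Details are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):  # the representation (feature extractor)
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
    def forward(self, x):
        return self.net(x)

def stage1_self_supervised(encoder, images, epochs=1):
    """Stage 1: learn the representation without labels (rotation prediction, assumed)."""
    head = nn.Linear(64, 4)  # 4 rotation classes: 0/90/180/270 degrees
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), 1e-3)
    for _ in range(epochs):
        rot = torch.randint(0, 4, (images.size(0),))
        rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                               for img, k in zip(images, rot)])
        loss = F.cross_entropy(head(encoder(rotated)), rot)
        opt.zero_grad(); loss.backward(); opt.step()

def stage2_train_classifier(encoder, classifier, images, noisy_labels, epochs=1):
    """Stage 2: train only the classifier on the frozen representation, then
    keep the small-loss (likely clean) samples as the labelled set."""
    opt = torch.optim.Adam(classifier.parameters(), 1e-3)
    with torch.no_grad():
        feats = encoder(images)
    for _ in range(epochs):
        loss = F.cross_entropy(classifier(feats), noisy_labels)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        per_sample = F.cross_entropy(classifier(feats), noisy_labels, reduction="none")
    return per_sample < per_sample.median()  # assumed small-loss selection rule

def stage3_semi_supervised(encoder, classifier, images, noisy_labels, clean, epochs=1):
    """Stage 3: jointly retrain encoder + classifier; selected samples keep their
    labels, the rest are treated as unlabelled and pseudo-labelled."""
    params = list(encoder.parameters()) + list(classifier.parameters())
    opt = torch.optim.Adam(params, 1e-3)
    for _ in range(epochs):
        logits = classifier(encoder(images))
        sup = F.cross_entropy(logits[clean], noisy_labels[clean])
        pseudo = logits[~clean].softmax(1).argmax(1).detach()
        unsup = F.cross_entropy(logits[~clean], pseudo)
        loss = sup + 0.5 * unsup  # assumed weighting between the two terms
        opt.zero_grad(); loss.backward(); opt.step()

# Toy usage on random data, stages run in order
images = torch.randn(32, 3, 32, 32)
noisy_labels = torch.randint(0, 10, (32,))
encoder, classifier = Encoder(), nn.Linear(64, 10)
stage1_self_supervised(encoder, images)
clean = stage2_train_classifier(encoder, classifier, images, noisy_labels)
stage3_semi_supervised(encoder, classifier, images, noisy_labels, clean)
```

The key design point the abstract emphasizes is the decoupling: the representation is first learned without any (possibly noisy) labels, the noisy labels touch only the lightweight classifier, and only in the final stage are both components retrained together under a semi-supervised objective.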
