Self-paced Resistance Learning against Overfitting on Noisy Labels

05/07/2021
by Xiaoshuang Shi, et al.

Noisy labels, a mixture of correct and corrupted ones, are pervasive in practice. They can significantly degrade the performance of convolutional neural networks (CNNs), because CNNs easily overfit corrupted labels. To address this issue, inspired by the observation that deep neural networks tend to first memorize probably correct-label data and only later corrupt-label samples, we propose a novel yet simple self-paced resistance framework that resists corrupted labels without using any clean validation data. The proposed framework first exploits the memorization effect of CNNs to learn a curriculum, which contains confident samples and provides meaningful supervision for the remaining training samples. It then updates the model parameters using the selected confident samples and a proposed resistance loss; the resistance loss tends to smooth the parameter updates or drive the predictions toward an equal probability over each class, thereby resisting model overfitting on corrupted labels. Finally, we unify these two modules into a single loss function and optimize it with an alternating learning scheme. Extensive experiments demonstrate the significantly superior performance of the proposed framework over recent state-of-the-art methods on noisy-label data. Source code is available at https://github.com/xsshi2015/Self-paced-Resistance-Learning.
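To make the idea of the resistance loss concrete, the sketch below combines a standard cross-entropy term on the selected confident samples with a regularizer that pulls predictions toward a uniform distribution over classes ("equal probability over each class"). This is a minimal, hypothetical illustration in NumPy, not the authors' exact formulation; the function names and the weight `lam` are assumptions for the sake of the example.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def resistance_loss(logits, labels, lam=0.5):
    """Hypothetical sketch of a resistance-style loss.

    Cross-entropy on confident samples plus a term that is the
    cross-entropy between the prediction and the uniform distribution
    over classes, which discourages over-confident fitting of
    (possibly corrupted) labels.
    """
    p = softmax(logits)
    n, c = p.shape
    # supervised term on the (assumed confident) labels
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    # resistance term: cross-entropy with the uniform distribution u_k = 1/c,
    # i.e. -(1/c) * sum_k log p_k, averaged over the batch
    uniform_ce = -np.log(p + 1e-12).mean()
    return ce + lam * uniform_ce
```

For uniform logits the prediction is already uniform, so both terms reduce to log(C); as the model grows over-confident on a noisy label, the uniform term grows and pushes back against the fit.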


research
10/11/2022

Label Noise-Robust Learning using a Confidence-Based Sieving Strategy

In learning tasks with label noise, boosting model robustness against ov...
research
12/14/2017

MentorNet: Regularizing Very Deep Neural Networks on Corrupted Labels

Recent studies have discovered that deep networks are capable of memoriz...
research
09/29/2022

Effective Vision Transformer Training: A Data-Centric Perspective

Vision Transformers (ViTs) have shown promising performance compared wit...
research
04/14/2021

Joint Negative and Positive Learning for Noisy Labels

Training of Convolutional Neural Networks (CNNs) with data with noisy la...
research
06/19/2020

Cross-denoising Network against Corrupted Labels in Medical Image Segmentation with Domain Shift

Deep convolutional neural networks (DCNNs) have contributed many breakth...
research
06/13/2020

Generalization by Recognizing Confusion

A recently-proposed technique called self-adaptive training augments mod...
research
06/30/2022

ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State

To train robust deep neural networks (DNNs), we systematically study sev...
