How Does Disagreement Benefit Co-teaching?

01/14/2019
by   Xingrui Yu, et al.

Learning with noisy labels is one of the most important questions in weakly-supervised learning. Classical approaches focus on adding regularization or estimating the noise transition matrix. However, the former permanently introduces a regularization bias, while the latter is hard to estimate accurately. In this paper, following a novel path of training on small-loss samples, we propose a robust learning paradigm called Co-teaching+. This paradigm naturally bridges the "Update by Disagreement" strategy with Co-teaching, which trains two deep neural networks, and thus consists of a disagreement-update step and a cross-update step. In the disagreement-update step, the two networks first predict on all data and feed forward only the data on which their predictions disagree. Then, in the cross-update step, each network selects its small-loss data from this disagreement data, but back-propagates the small-loss data selected by its peer network and updates its own parameters. Empirical results on noisy versions of MNIST, CIFAR-10, and NEWS demonstrate that Co-teaching+ is far superior to state-of-the-art methods in the robustness of the trained deep models.
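The two steps described above can be sketched as a single per-batch selection routine. This is a minimal illustration, not the authors' implementation: the function name, the NumPy-based loss computation, and the fallback when a batch contains no disagreement are all assumptions made for the example.

```python
import numpy as np

def coteaching_plus_step(logits1, logits2, labels, forget_rate):
    """Sketch of one Co-teaching+ selection step (illustrative only).

    logits1, logits2: (N, C) predictions from the two networks.
    labels: (N,) noisy integer labels.
    forget_rate: fraction of large-loss (presumed noisy) samples to drop.
    Returns the index sets each network should be updated on:
    net1 is trained on net2's small-loss picks, and vice versa.
    """
    pred1 = logits1.argmax(axis=1)
    pred2 = logits2.argmax(axis=1)

    # Disagreement-update step: keep only samples the two networks disagree on.
    disagree = np.flatnonzero(pred1 != pred2)
    if disagree.size == 0:  # degenerate batch (assumption): fall back to all data
        disagree = np.arange(len(labels))

    # Per-sample cross-entropy loss (numerically stable log-softmax).
    def per_sample_loss(logits):
        z = logits - logits.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels]

    loss1 = per_sample_loss(logits1)[disagree]
    loss2 = per_sample_loss(logits2)[disagree]

    # Small-loss selection: keep the (1 - forget_rate) fraction with lowest loss.
    num_keep = max(1, int((1.0 - forget_rate) * disagree.size))
    keep1 = disagree[np.argsort(loss1)[:num_keep]]  # net1's small-loss picks
    keep2 = disagree[np.argsort(loss2)[:num_keep]]  # net2's small-loss picks

    # Cross-update step: each network is updated on its peer's selection.
    return keep2, keep1  # indices for updating net1, net2 respectively
```

The returned index sets would then be used to back-propagate each network on the samples chosen by its peer, which is what distinguishes the cross-update step from ordinary small-loss training.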


Related Research

03/05/2020 · Combating noisy labels by agreement: A joint training method with co-regularization
Deep Learning with noisy labels is a practically challenging problem in ...

04/18/2018 · Co-sampling: Training Robust Networks for Extremely Noisy Supervision
Training robust deep networks is challenging under noisy labels. Current...

03/23/2021 · Co-matching: Combating Noisy Labels by Augmentation Anchoring
Deep learning with noisy labels is challenging as deep neural networks h...

12/03/2022 · CrossSplit: Mitigating Label Noise Memorization through Data Splitting
We approach the problem of improving robustness of deep learning algorit...

06/10/2020 · Meta Transition Adaptation for Robust Deep Learning with Noisy Labels
To discover intrinsic inter-class transition probabilities underlying da...

05/21/2018 · Masking: A New Perspective of Noisy Supervision
It is important to learn classifiers under noisy labels due to their ubi...

06/27/2022 · Compressing Features for Learning with Noisy Labels
Supervised learning can be viewed as distilling relevant information fro...
