Compressing Features for Learning with Noisy Labels

06/27/2022
by Yingyi Chen, et al.

Supervised learning can be viewed as distilling relevant information from input data into feature representations. This process becomes difficult when the supervision is noisy, because the distilled information may no longer be relevant. In fact, recent research shows that networks can easily overfit all labels, including corrupted ones, and consequently generalize poorly to clean data. In this paper, we focus on the problem of learning with noisy labels and introduce a compression inductive bias into network architectures to alleviate this overfitting. More precisely, we revisit a classical regularizer, Dropout, and its variant Nested Dropout. Dropout can serve as a compression constraint through its feature-dropping mechanism, while Nested Dropout additionally learns feature representations that are ordered by importance. Moreover, models trained with compression regularization are combined with Co-teaching to further boost performance. Theoretically, we conduct a bias-variance decomposition of the objective function under compression regularization, for both a single model and Co-teaching. This decomposition yields three insights: (i) it shows that overfitting is indeed an issue for learning with noisy labels; (ii) through an information bottleneck formulation, it explains why the proposed feature compression helps combat label noise; (iii) it explains the performance boost obtained by incorporating compression regularization into Co-teaching. Experiments show that our simple approach achieves performance comparable to or better than state-of-the-art methods on benchmarks with real-world label noise, including Clothing1M and ANIMAL-10N. Our implementation is available at https://yingyichen-cyy.github.io/CompressFeatNoisyLabels/.
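To make the compression mechanism concrete, below is a minimal Nested Dropout sketch in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the class name NestedDropout, the geometric parameter p, and the example layer placement are all hypothetical. The idea follows the original Nested Dropout formulation: at each training step a truncation index is sampled from a geometric distribution and every feature dimension beyond it is zeroed, so leading dimensions survive more often and the encoder is pushed to pack the most label-relevant information into them.

```python
import torch
import torch.nn as nn

class NestedDropout(nn.Module):
    """Minimal Nested Dropout sketch (assumed implementation, not the paper's code).

    During training, a truncation index k is sampled per example from a geometric
    distribution and all feature dimensions beyond k are zeroed. Earlier dimensions
    are kept more often, encouraging an ordered, compressed representation.
    """

    def __init__(self, p: float = 0.01):
        super().__init__()
        self.p = p  # geometric parameter: larger p means more aggressive compression

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            # Simplification: keep all dimensions at evaluation time.
            return x
        d = x.size(-1)
        # One truncation index per example in the batch.
        k = torch.distributions.Geometric(probs=self.p).sample((x.size(0),)).long()
        k = torch.clamp(k, max=d - 1)
        # Keep dimensions 0..k, zero out the rest.
        idx = torch.arange(d, device=x.device).unsqueeze(0)          # shape (1, d)
        mask = (idx <= k.to(x.device).unsqueeze(1)).to(x.dtype)      # shape (batch, d)
        return x * mask

# Hypothetical placement: compress penultimate features before the classifier head.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
model = nn.Sequential(encoder, NestedDropout(p=0.01), nn.Linear(512, 10))
```

One plausible design, as sketched above, puts the layer on the penultimate features so that the cross-entropy loss (or a Co-teaching pair of networks exchanging small-loss samples) is computed on the compressed representation; the paper's actual architecture and hyperparameters may differ.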

Related research

Boosting Co-teaching with Compression Regularization for Label Noise (04/28/2021)
On Learning Contrastive Representations for Learning with Noisy Labels (03/03/2022)
Agreement or Disagreement in Noise-tolerant Mutual Learning? (03/29/2022)
Unleashing the Potential of Regularization Strategies in Learning with Noisy Labels (07/11/2023)
What happens when self-supervision meets Noisy Labels? (10/13/2019)
How Does Disagreement Benefit Co-teaching? (01/14/2019)
Learning Ordered Representations with Nested Dropout (02/05/2014)
