
ProSelfLC: Progressive Self Label Correction for Target Revising in Label Noise

05/07/2020
by   Xinshao Wang, et al.
Queen's University Belfast
Anyvision Group

In this work, we address robust deep learning under label noise (semi-supervised learning) from the perspective of target revising. We make three main contributions. First, we present a comprehensive mathematical study of existing target modification techniques, including Pseudo-Label [1], label smoothing [2], bootstrapping [3], knowledge distillation [4], confidence penalty [5], and joint optimisation [6]. Consequently, we reveal their relationships and drawbacks. Second, we propose ProSelfLC, a progressive and adaptive self label correction method, driven by learning time and predictive confidence. It addresses the disadvantages of existing algorithms and offers several practical merits: (1) it is end-to-end trainable; (2) given an example, ProSelfLC can revise a one-hot target by adding information about its similarity structure and correcting its semantic class; (3) no auxiliary annotations or extra learners are required. Our proposal is designed according to two well-established findings: deep neural networks learn simple meaningful patterns before fitting noisy patterns [7-9], and the entropy regularisation principle [10, 11]. Third, label smoothing, confidence penalty, and naive label correction perform on par with the state-of-the-art in our implementation. This probably indicates they were not benchmarked properly in prior work. Furthermore, our ProSelfLC outperforms them significantly.
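The abstract describes revising a one-hot target as a convex combination of the original label and the model's own prediction, where trust in the prediction grows with learning time and predictive confidence. The following is a minimal sketch of that idea, not the paper's exact formulation: the function name, the sigmoid ramp for global trust, and the sharpness parameter `B` are illustrative assumptions.

```python
import numpy as np

def proselflc_target(onehot, pred, t, total_t, B=16.0):
    """Revise a one-hot target toward the model's prediction (sketch).

    Trust in the prediction is the product of a global term (grows with
    training progress t / total_t) and a local term (the prediction's
    confidence). All names and the exact ramp shape are assumptions.
    """
    # Global trust: near 0 early in training (the network first learns
    # simple, meaningful patterns), ramping toward 1 later.
    g = 1.0 / (1.0 + np.exp(-B * (t / total_t - 0.5)))
    # Local trust: how confident the current prediction is.
    l = pred.max()
    eps = g * l  # overall trust score in [0, 1]
    # Convex combination: keep the one-hot target when trust is low;
    # move toward the prediction (which carries similarity structure,
    # and may flip the semantic class) as trust grows.
    # Note: label smoothing is the degenerate case where pred is the
    # uniform distribution and eps is a fixed constant.
    return (1.0 - eps) * onehot + eps * pred
```

Early in training the revised target stays close to the one-hot label; late in training, a confident prediction can dominate and correct a noisy label, which is the "progressive" behaviour the abstract refers to.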


Related research

- 06/30/2022, ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State. To train robust deep neural networks (DNNs), we systematically study sev...
- 03/05/2020, Does label smoothing mitigate label noise? Label smoothing is commonly used in training deep learning models, where...
- 01/23/2017, Regularizing Neural Networks by Penalizing Confident Output Distributions. We systematically explore regularizing neural networks by penalizing low...
- 03/06/2023, Fighting noise and imbalance in Action Unit detection problems. Action Unit (AU) detection aims at automatically characterizing facial ex...
- 06/15/2022, ALASCA: Rethinking Label Smoothing for Deep Learning Under Label Noise. As label noise, one of the most popular distribution shifts, severely de...
- 05/23/2023, Mitigating Label Noise through Data Ambiguation. Label noise poses an important challenge in machine learning, especially...
- 04/26/2021, An Exploration into why Output Regularization Mitigates Label Noise. Label noise presents a real challenge for supervised learning algorithms...