A robust approach for deep neural networks in presence of label noise: relabelling and filtering instances during training

09/08/2021
by Anabel Gómez-Ríos, et al.

Deep learning has outperformed other machine learning algorithms in a variety of tasks and, as a result, has become increasingly popular and widely used. However, like other machine learning algorithms, deep learning models, and convolutional neural networks (CNNs) in particular, perform worse when the data sets contain label noise. It is therefore important to develop algorithms that help deep networks train on noisy data and generalize to noise-free test sets. In this paper, we propose a robust training strategy against label noise, called RAFNI, that can be used with any CNN. The algorithm filters and relabels instances of the training set based on the predictions, and their associated probabilities, made by the backbone neural network during training, improving the generalization ability of the CNN on its own. RAFNI consists of three mechanisms: two that filter instances and one that relabels instances. In addition, it neither assumes that the noise rate is known nor needs to estimate it. We evaluated the algorithm on data sets of different sizes and characteristics, and compared it with state-of-the-art models on the CIFAR10 and CIFAR100 benchmarks under several types and rates of label noise; RAFNI achieved better results in most cases.
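The abstract describes the general idea: during training, instances for which the network is highly confident in a class other than their current label are relabelled, while instances the network remains very unconfident about under their own label are filtered out of the training set. The sketch below illustrates this filter-and-relabel idea in plain NumPy; the thresholds (`relabel_threshold`, `filter_threshold`), the function name, and the exact criteria are illustrative assumptions, not the mechanisms defined in the paper.

```python
import numpy as np

def filter_and_relabel(probs, labels, keep_mask,
                       relabel_threshold=0.95, filter_threshold=0.05):
    """Illustrative filter-and-relabel pass over one epoch's predictions.

    probs      : (n_samples, n_classes) softmax outputs of the backbone CNN
    labels     : (n_samples,) current (possibly noisy) integer labels
    keep_mask  : (n_samples,) boolean mask of instances still used for training
    The thresholds are hypothetical placeholders, not the paper's values.
    """
    probs = np.asarray(probs)
    labels = np.asarray(labels).copy()
    keep_mask = np.asarray(keep_mask).copy()

    pred_class = probs.argmax(axis=1)
    pred_prob = probs.max(axis=1)

    # Relabelling mechanism: if the network is very confident in a class
    # other than the current label, switch the label to the prediction.
    relabel = (pred_class != labels) & (pred_prob >= relabel_threshold)
    labels[relabel] = pred_class[relabel]

    # Filtering mechanism: if the probability assigned to the (possibly
    # updated) label stays very low, drop the instance from training.
    label_prob = probs[np.arange(len(labels)), labels]
    noisy = label_prob <= filter_threshold
    keep_mask[noisy & ~relabel] = False

    return labels, keep_mask
```

A call such as `labels, keep_mask = filter_and_relabel(epoch_probs, labels, keep_mask)` after each epoch would update the training set before the next pass; in RAFNI itself, filtering and relabelling happen during the training process of the backbone CNN.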


