Disturbing Target Values for Neural Network Regularization

10/11/2021
by   Yongho Kim, et al.

Diverse regularization techniques, such as L2 regularization, Dropout, and DisturbLabel (DL), have been developed to prevent overfitting. DL, a newcomer on the scene, regularizes the loss layer by flipping a small share of the target labels at random and training the neural network on this distorted data so that it does not learn the training data exactly. It has been observed that high-confidence labels during training cause overfitting, yet DL selects the labels to disturb at random, regardless of their confidence. To address this shortcoming of DL, we propose Directional DisturbLabel (DDL), a novel regularization technique that uses the class probabilities to infer confident labels and uses those labels to regularize the model. This active regularization exploits the model's behavior during training to regularize it in a more directed manner. For regression problems, we also propose DisturbValue (DV) and DisturbError (DE). DE disturbs only target values that meet a predefined confidence criterion, while DV injects noise into a randomly chosen portion of the target values, similarly to DL. In this paper, 6 and 8 datasets are used to validate the robustness of our methods in classification and regression tasks, respectively. Finally, we demonstrate that our methods are either comparable to or outperform DisturbLabel, L2 regularization, and Dropout. Moreover, combining our methods with either L2 regularization or Dropout achieves the best performance on more than half of the datasets.
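The two random-disturbance ideas the abstract describes can be sketched in a few lines. The sketch below is illustrative only: the hyperparameter names (`alpha` for the disturbed fraction, `sigma` for the noise scale) and the choice of Gaussian noise for DisturbValue are assumptions, not taken from the paper.

```python
import numpy as np

def disturb_label(labels, num_classes, alpha=0.1, rng=None):
    """DisturbLabel-style sketch: replace a random fraction `alpha`
    of class labels with uniformly sampled classes."""
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels).copy()
    idx = rng.choice(len(labels), size=int(alpha * len(labels)), replace=False)
    labels[idx] = rng.integers(0, num_classes, size=len(idx))
    return labels

def disturb_value(targets, alpha=0.1, sigma=0.05, rng=None):
    """DisturbValue-style sketch: add Gaussian noise to a random
    fraction `alpha` of regression targets."""
    rng = rng or np.random.default_rng()
    targets = np.asarray(targets, dtype=float).copy()
    idx = rng.choice(len(targets), size=int(alpha * len(targets)), replace=False)
    targets[idx] += rng.normal(0.0, sigma, size=len(idx))
    return targets
```

Both functions would be applied to each mini-batch's targets during training only; evaluation uses the clean labels and values.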


