Feature Weaken: Vicinal Data Augmentation for Classification

11/20/2022
by Songhao Jiang, et al.

Deep learning usually relies on large-scale training data to achieve strong performance, yet overfitting to the training data remains a persistent problem. Researchers have proposed various strategies, such as feature dropping and feature mixing, to improve generalization. Toward the same goal, we propose a novel training method, Feature Weaken, which can be regarded as a data augmentation method. Feature Weaken constructs a vicinal data distribution for model training by weakening the features of the original samples while preserving their cosine similarity. In particular, Feature Weaken changes the spatial distribution of samples, adjusts sample boundaries, and reduces the magnitude of the gradients in back-propagation. This not only improves the classification performance and generalization of the model, but also stabilizes training and accelerates convergence. We conduct extensive experiments on classical deep convolutional neural models with five common image classification datasets, and on the BERT model with four common text classification datasets. Compared with the baseline models and with generalization-improvement methods such as Dropout, Mixup, Cutout, and CutMix, Feature Weaken shows good compatibility and performance. We also conduct robustness experiments with adversarial samples; the results show that Feature Weaken effectively improves the robustness of the model.
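The abstract does not spell out the weakening operation, but one stated property partially pins it down: uniformly scaling a feature vector by a positive coefficient leaves its cosine similarity with the original unchanged. The PyTorch sketch below illustrates Feature Weaken under that assumption, shrinking intermediate features by a coefficient alpha in (0, 1) during training. The module name FeatureWeaken, the value alpha=0.5, and the placement before the classifier head are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class FeatureWeaken(nn.Module):
        """Hypothetical sketch: weaken features by a coefficient alpha in (0, 1).

        Scaling a feature vector x to alpha * x leaves cos(x, alpha * x) = 1,
        so the weakened features keep the same cosine similarity with the
        originals while their magnitude, and hence the sample's spatial
        position, changes.
        """

        def __init__(self, alpha: float = 0.5):
            super().__init__()
            assert 0.0 < alpha < 1.0, "alpha must lie in (0, 1)"
            self.alpha = alpha

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Weaken only during training; leave evaluation untouched.
            return self.alpha * x if self.training else x

    # Illustrative placement before the classification head (assumption).
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 256),
        nn.ReLU(),
        FeatureWeaken(alpha=0.5),
        nn.Linear(256, 10),
    )

Note that shrinking features by alpha also scales the gradients flowing back through that point by alpha, which is consistent with the abstract's remark that Feature Weaken reduces gradient values in back-propagation.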

Related research

04/19/2022 · Image Data Augmentation for Deep Learning: A Survey
Deep learning has achieved remarkable results in many computer vision ta...

11/16/2019 · Signed Input Regularization
Over-parameterized deep models usually over-fit to a given training dist...

09/12/2022 · DoubleMix: Simple Interpolation-Based Data Augmentation for Text Classification
This paper proposes a simple yet effective interpolation-based data augm...

05/30/2023 · ShuffleMix: Improving Representations via Channel-Wise Shuffle of Interpolated Hidden States
Mixup style data augmentation algorithms have been widely adopted in var...

05/31/2023 · Multi-Epoch Learning for Deep Click-Through Rate Prediction Models
The one-epoch overfitting phenomenon has been widely observed in industr...

06/20/2019 · Data Interpolating Prediction: Alternative Interpretation of Mixup
Data augmentation by mixing samples, such as Mixup, has widely been used...

03/15/2022 · Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness
Data modification, either via additional training datasets, data augment...
