GradAug: A New Regularization Method for Deep Neural Networks

11/28/2020
by Chen Chen, et al.

We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is to use randomly transformed training samples to regularize a set of sub-networks, which are sampled from the full network by varying its width, during training. The method thereby introduces self-guided disturbances into the raw gradients of the network and is thus termed Gradient Augmentation (GradAug). We demonstrate that GradAug helps the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to a variety of architectures and applications. GradAug improves ResNet-50 to 78.79% top-1 accuracy on ImageNet classification, a new state-of-the-art result. Combined with CutMix, it further boosts performance to 79.67%, outperforming an ensemble of advanced training tricks. Its generalization ability is evaluated on COCO object detection and instance segmentation, where GradAug significantly surpasses other state-of-the-art methods. GradAug is also robust to image distortions and FGSM adversarial attacks and is highly effective in low-data regimes. Code is available at https://github.com/taoyang1122/GradAug
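The training step implied by this description can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation (see the repository above for that): `model.apply_width` is a hypothetical interface for selecting a sub-network of a given relative width, as in slimmable networks, and `random_transform` stands in for whichever random transformation is applied to the sub-network inputs.

```python
import random
import torch.nn.functional as F
import torchvision.transforms as T

# Stand-in random augmentation; the abstract leaves the transformation
# choice open, so any label-preserving random transform would do here.
random_transform = T.Compose([T.RandomRotation(30),
                              T.RandomResizedCrop(224)])

def gradaug_step(model, images, targets, optimizer,
                 num_subnets=3, min_width=0.8):
    """One GradAug training step (sketch).

    `model.apply_width(w)` is an assumed interface for selecting a
    sub-network of relative width `w` from the full network.
    """
    optimizer.zero_grad()

    # The full-width network is trained on the original samples.
    model.apply_width(1.0)
    F.cross_entropy(model(images), targets).backward()

    # Each sampled sub-network is regularized with a randomly
    # transformed view of the same batch; its gradients accumulate
    # on the shared weights, acting as a self-guided disturbance
    # on the raw gradients of the full network.
    for _ in range(num_subnets):
        model.apply_width(random.uniform(min_width, 1.0))
        F.cross_entropy(model(random_transform(images)),
                        targets).backward()

    model.apply_width(1.0)
    optimizer.step()
```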
