
Stochastic Batch Augmentation with An Effective Distilled Dynamic Soft Label Regularizer

by Qian Li, et al.
Xi'an Jiaotong University · NetEase, Inc. · The Hong Kong University of Science and Technology

Data augmentation has been used intensively in training deep neural networks to improve generalization, whether in the original space (e.g., image space) or in representation space. Despite its success, the connection between the synthesized data and the original data is largely ignored during training: the fact that the synthesized samples are distributed in the vicinity of the original sample is not exploited, so the network's behavior is not optimized for it. Yet that behavior is crucial for generalization, and even for robustness in the adversarial setting, which matters for the safety of deep learning systems. In this work, we propose a framework called Stochastic Batch Augmentation (SBA) to address these problems. SBA stochastically decides, under the control of a batch scheduler, whether to augment at each iteration, and introduces a "distilled" dynamic soft-label regularizer that incorporates similarity within the vicinity distribution with respect to the raw samples. The proposed regularizer provides direct supervision via the KL-divergence between the output softmax distributions of the original and virtual data. Our experiments on CIFAR-10, CIFAR-100, and ImageNet show that SBA improves the generalization of neural networks and speeds up the convergence of network training.
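The core of the regularizer described above can be sketched in a few lines: the softmax output of an original sample serves as a "distilled" dynamic soft label for its augmented (virtual) neighbor, and the penalty is the KL-divergence between the two distributions. The following is a minimal NumPy sketch under stated assumptions; the function names, the Gaussian perturbation used to stand in for the augmentation, and the per-iteration coin flip standing in for the batch scheduler are all illustrative, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q), summed over classes, averaged over the batch.
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def sba_regularizer(orig_logits, aug_logits):
    # The softmax of the original sample acts as the dynamic soft label
    # supervising the network's output on the virtual (augmented) sample.
    return kl_divergence(softmax(orig_logits), softmax(aug_logits))

rng = np.random.default_rng(0)
orig = rng.normal(size=(4, 10))                # logits for 4 original samples, 10 classes
aug = orig + 0.1 * rng.normal(size=(4, 10))    # logits for perturbed (virtual) neighbors

# Hypothetical scheduler: augment (and apply the regularizer) with probability p.
p_augment = 0.5
reg = sba_regularizer(orig, aug) if rng.random() < p_augment else 0.0
```

In training, this regularizer would be added to the usual cross-entropy loss on the iterations where the scheduler elects to augment; on the remaining iterations the batch is trained unmodified.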



