Achieving Generalizable Robustness of Deep Neural Networks by Stability Training

06/03/2019
by Jan Laermann, et al.

We study the recently introduced stability training as a general-purpose method for increasing the robustness of deep neural networks against input perturbations. In particular, we explore its use as an alternative to data augmentation and validate its performance against a range of distortion types and transformations, including adversarial examples. In our ImageNet-scale image classification experiments, stability training performs on a par with or even outperforms data augmentation for specific transformations, while consistently offering improved robustness against a broader range of distortion strengths and types unseen during training, considerably smaller hyperparameter dependence, and fewer potentially negative side effects than data augmentation.
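Stability training, as introduced in prior work, augments the standard task loss with a penalty on the divergence between the model's outputs on a clean input and a perturbed copy of it. The sketch below is a minimal illustration of that combined objective in pure Python; the function names, the use of KL divergence over softmax outputs, and the weighting parameter `alpha` are illustrative assumptions, not the authors' exact implementation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) with a small epsilon to avoid log(0).
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def stability_objective(task_loss, clean_logits, perturbed_logits, alpha=0.01):
    """Combined stability-training objective (illustrative sketch):
    task loss on the clean input plus an alpha-weighted divergence
    between output distributions on clean and perturbed copies."""
    p = softmax(clean_logits)
    q = softmax(perturbed_logits)
    return task_loss + alpha * kl_divergence(p, q)

# When clean and perturbed outputs agree, the penalty vanishes and
# the objective reduces to the plain task loss.
print(stability_objective(1.0, [1.0, 2.0, 3.0], [1.0, 2.0, 3.0], alpha=0.5))
```

In contrast to data augmentation, which trains directly on perturbed inputs, this objective keeps the task loss on the clean input and only asks the network to produce consistent outputs under perturbation.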
