
Achieving Generalizable Robustness of Deep Neural Networks by Stability Training

by Jan Laermann et al.

We study the recently introduced stability training as a general-purpose method for increasing the robustness of deep neural networks against input perturbations. In particular, we explore its use as an alternative to data augmentation and validate its performance against a number of distortion types and transformations, including adversarial examples. In our ImageNet-scale image classification experiments, stability training performs on par with, or even outperforms, data augmentation for specific transformations, while consistently offering improved robustness against a broader range of distortion strengths and types unseen during training, a considerably smaller hyperparameter dependence, and fewer potentially negative side effects than data augmentation.
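The core idea of stability training (introduced by Zheng et al., 2016) is to add a regularization term that penalizes divergence between the network's outputs on a clean input and on a perturbed copy of it. A minimal NumPy sketch of such a loss, using a toy linear classifier, Gaussian input noise, and a KL-divergence stability term (the function name, weighting `alpha`, and noise scale `sigma` are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def stability_loss(W, x, y, alpha=0.1, sigma=0.04, rng=None):
    """Cross-entropy on the clean input plus alpha times the KL divergence
    between predictions on clean and noise-perturbed copies of the input.
    All names and default values here are illustrative."""
    rng = rng or np.random.default_rng(0)
    p_clean = softmax(x @ W)                       # predictions on clean input
    x_noisy = x + rng.normal(0.0, sigma, x.shape)  # perturbed copy of the input
    p_noisy = softmax(x_noisy @ W)
    # Standard task loss on the clean input.
    task = -np.log(p_clean[np.arange(len(y)), y] + 1e-12).mean()
    # Stability term: KL(p_clean || p_noisy), non-negative by construction.
    stab = np.sum(
        p_clean * np.log((p_clean + 1e-12) / (p_noisy + 1e-12)), axis=-1
    ).mean()
    return task + alpha * stab
```

Because the stability term is computed between two model outputs rather than against labels, the perturbed copy needs no ground-truth annotation, which is what makes the method usable with arbitrary unlabeled perturbations.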
