Feature-level augmentation to improve robustness of deep neural networks to affine transformations

02/10/2022
by Adrian Sandru et al.

Recent studies revealed that convolutional neural networks do not generalize well to small image transformations, e.g. rotations by a few degrees or translations of a few pixels. To improve the robustness to such transformations, we propose to introduce data augmentation at intermediate layers of the neural architecture, in addition to the common data augmentation applied on the input images. By introducing small perturbations to activation maps (features) at various levels, we develop the capacity of the neural network to cope with such transformations. We conduct experiments on three image classification benchmarks (Tiny ImageNet, Caltech-256 and Food-101), considering two different convolutional architectures (ResNet-18 and DenseNet-121). When compared with two state-of-the-art stabilization methods, the empirical results show that our approach consistently attains the best trade-off between accuracy and mean flip rate.
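The core idea, augmenting intermediate activation maps rather than only input images, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a hypothetical helper `augment_feature_map` that applies a small random spatial shift to a feature tensor of shape (channels, height, width), mimicking a translation perturbation at a hidden layer.

```python
import numpy as np

def augment_feature_map(feats, max_shift=1, rng=None):
    """Apply a small random spatial shift to an intermediate activation map.

    feats: array of shape (C, H, W) -- activations from a hidden layer.
    max_shift: maximum translation, in feature-map cells (kept small, since
               a one-cell shift at a deep layer corresponds to several input pixels).
    """
    rng = rng or np.random.default_rng()
    # Sample a small random translation for this forward pass.
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    # np.roll shifts the map circularly; zero-padded shifts are an alternative.
    return np.roll(feats, shift=(int(dy), int(dx)), axis=(1, 2))

# Example: perturb a toy 2-channel, 4x4 activation map during "training".
feats = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
out = augment_feature_map(feats, max_shift=1, rng=np.random.default_rng(0))
print(out.shape)  # (2, 4, 4)
```

In a real network such a perturbation would be applied stochastically at training time only (like dropout), between selected convolutional blocks, while the input-level augmentation pipeline stays in place.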

Related research

06/03/2019
Achieving Generalizable Robustness of Deep Neural Networks by Stability Training
We study the recently introduced stability training as a general-purpose...

02/19/2020
Modelling response to trypophobia trigger using intermediate layers of ImageNet networks
In this paper, we approach the problem of detecting trypophobia triggers...

12/02/2020
A Self-Supervised Feature Map Augmentation (FMA) Loss and Combined Augmentations Finetuning to Efficiently Improve the Robustness of CNNs
Deep neural networks are often not robust to semantically-irrelevant cha...

01/14/2019
Data Augmentation with Manifold Exploring Geometric Transformations for Increased Performance and Robustness
In this paper we propose a novel augmentation technique that improves no...

04/06/2023
Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
Deep networks have achieved impressive results on a range of well-curate...

12/27/2021
PRIME: A Few Primitives Can Boost Robustness to Common Corruptions
Despite their impressive performance on image classification tasks, deep...

11/12/2019
Trainable Spectrally Initializable Matrix Transformations in Convolutional Neural Networks
In this work, we investigate the application of trainable and spectrally...
