PRIME: A Few Primitives Can Boost Robustness to Common Corruptions

12/27/2021
by Apostolos Modas, et al.

Despite their impressive performance on image classification tasks, deep networks struggle to generalize to many common corruptions of their data. To address this vulnerability, prior works have mostly focused on increasing the complexity of their training pipelines, combining multiple methods in the name of diversity. In this work, we take a step back and follow a principled approach to achieving robustness to common corruptions. We propose PRIME, a general data augmentation scheme that consists of simple families of max-entropy image transformations. We show that PRIME outperforms the prior art in corruption robustness, while its simplicity and plug-and-play nature allow it to be combined with other methods to further boost their robustness. Furthermore, we analyze PRIME to shed light on the importance of the mixing strategy in synthesizing corrupted images, and to reveal the robustness-accuracy trade-offs arising in the context of common corruptions. Finally, we show that the computational efficiency of our method allows it to be easily used in both on-line and off-line data augmentation schemes.
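At a high level, an augmentation scheme of this kind draws random instances from a few families of image transformations and mixes the resulting views with the clean image via convex weights. The snippet below is a minimal NumPy sketch of such a mixing strategy; the two primitive transformations and the Dirichlet mixing weights are illustrative assumptions for exposition, not the paper's exact max-entropy primitives.

```python
import numpy as np

def random_color_jitter(img, rng):
    # Hypothetical stand-in for a color primitive: random per-channel
    # affine perturbation, clipped back to the valid [0, 1] range.
    scale = rng.uniform(0.7, 1.3, size=(1, 1, img.shape[2]))
    shift = rng.uniform(-0.1, 0.1, size=(1, 1, img.shape[2]))
    return np.clip(img * scale + shift, 0.0, 1.0)

def random_noise(img, rng):
    # Hypothetical stand-in for a spectral/filtering primitive:
    # additive Gaussian noise, clipped to [0, 1].
    return np.clip(img + rng.normal(0.0, 0.05, size=img.shape), 0.0, 1.0)

def mixed_augment(img, primitives, rng, width=3):
    # Sample convex mixing weights over `width` randomly transformed
    # views plus the clean image, then blend them pixel-wise.
    weights = rng.dirichlet(np.ones(width + 1))
    mixed = weights[0] * img
    for w in weights[1:]:
        transform = primitives[rng.integers(len(primitives))]
        mixed = mixed + w * transform(img, rng)
    return mixed
```

Because every view stays in [0, 1] and the weights sum to one, the mixed image remains a valid image, which is what lets such a scheme be dropped into an existing training pipeline (on-line) or used to pre-generate an augmented dataset (off-line).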


Related research

- A Fourier Perspective on Model Robustness in Computer Vision (06/21/2019)
- Observations on K-image Expansion of Image-Mixing Augmentation for Classification (10/08/2021)
- Feature-level augmentation to improve robustness of deep neural networks to affine transformations (02/10/2022)
- Data Boost: Text Data Augmentation Through Reinforcement Learning Guided Conditional Generation (12/05/2020)
- Data augmentation with mixtures of max-entropy transformations for filling-level classification (03/08/2022)
- Deeper Insights into ViTs Robustness towards Common Corruptions (04/26/2022)
