A Fourier Perspective on Model Robustness in Computer Vision

06/21/2019
by   Dong Yin, et al.

Achieving robustness to distributional shift is a longstanding and challenging goal of computer vision. Data augmentation is a commonly used approach for improving robustness; however, robustness gains are typically not uniform across corruption types. Indeed, improving performance in the presence of random noise often comes at the cost of reduced performance on other corruptions, such as contrast change. Understanding when and why these trade-offs occur is a crucial step towards mitigating them. Towards this end, we investigate recently observed trade-offs caused by Gaussian data augmentation and adversarial training. We find that both methods improve robustness to corruptions that are concentrated in the high frequency domain while reducing robustness to corruptions that are concentrated in the low frequency domain. This suggests that one way to mitigate these trade-offs via data augmentation is to use a more diverse set of augmentations. Towards this end, we observe that AutoAugment, a recently proposed data augmentation policy optimized for clean accuracy, achieves state-of-the-art robustness on the CIFAR-10-C and ImageNet-C benchmarks.
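The abstract's central distinction is between corruptions whose Fourier energy sits at high frequencies (e.g. Gaussian noise) versus low frequencies (e.g. contrast change). A minimal sketch of that frequency analysis, assuming a simple radial cutoff in the 2D spectrum (the cutoff radius and toy images here are illustrative choices, not the paper's exact protocol):

```python
import numpy as np

rng = np.random.default_rng(0)

def frequency_energy_split(delta, cutoff=8):
    """Split the Fourier energy of a corruption `delta` (H x W array,
    corrupted image minus clean image) into the fraction lying within
    `cutoff` frequency bins of DC (low) and the fraction outside (high)."""
    spectrum = np.fft.fftshift(np.fft.fft2(delta))  # DC moved to the centre
    energy = np.abs(spectrum) ** 2
    h, w = delta.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h // 2, xx - w // 2)  # distance from DC
    total = energy.sum()
    low = energy[radius <= cutoff].sum() / total
    high = energy[radius > cutoff].sum() / total
    return low, high

# Gaussian noise spreads energy uniformly over the spectrum, and most
# frequency bins are high-frequency, so its energy is high-frequency heavy.
noise = rng.normal(0.0, 0.1, size=(32, 32))
low_n, high_n = frequency_energy_split(noise)

# A contrast change rescales the image, so its delta is proportional to the
# image itself; a smooth (low-frequency) toy image stands in for a natural one.
yy, xx = np.mgrid[0:32, 0:32] / 32.0
smooth_image = np.sin(2 * np.pi * 2 * xx) + np.cos(2 * np.pi * 3 * yy)
contrast_delta = 0.2 * smooth_image
low_c, high_c = frequency_energy_split(contrast_delta)
```

Under this split, the noise delta puts most of its energy outside the low-frequency disc while the contrast delta concentrates inside it, matching the abstract's characterization of the two corruption families.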


