Using Videos to Evaluate Image Model Robustness

04/22/2019 · by Keren Gu, et al.

Human visual systems are robust to a wide range of image transformations that are challenging for artificial networks. We present the first study of image model robustness to the minute transformations found across video frames, which we term "natural robustness". Compared to previous studies of adversarial examples and synthetic distortions, natural robustness captures a more diverse set of common image transformations that occur in the natural environment. Our study across a dozen model architectures shows that more accurate models are more robust to natural transformations, and that robustness to synthetic color distortions is a good proxy for natural robustness. In examining brittleness in videos, we find that the majority of the brittleness found in videos (99.9%) lies outside the typical definition of adversarial examples. Finally, we investigate training techniques to reduce brittleness and find that no single technique systematically improves natural robustness across the twelve tested architectures.
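The abstract's core measurement, model stability under the small frame-to-frame changes in video, can be sketched as a prediction-consistency score: the fraction of consecutive-frame pairs on which a classifier's top-1 prediction does not flip. This is a minimal illustration, not the paper's exact protocol; the `toy_model` and the noise-perturbed "clip" below are invented stand-ins for a real image model and real video frames.

```python
import numpy as np

def natural_robustness(model, frames):
    """Fraction of consecutive-frame pairs on which the model's
    top-1 prediction is unchanged (1.0 = perfectly stable)."""
    preds = [int(np.argmax(model(f))) for f in frames]
    flips = sum(a != b for a, b in zip(preds, preds[1:]))
    return 1.0 - flips / max(len(preds) - 1, 1)

def toy_model(frame):
    """Stand-in classifier: scores two classes by mean brightness."""
    m = float(frame.mean())
    return np.array([1.0 - m, m])

rng = np.random.default_rng(0)
base = rng.random((8, 8))
# Simulate a short clip: tiny per-frame perturbations of one image,
# mimicking the minute transformations between adjacent video frames.
frames = [np.clip(base + 0.01 * rng.standard_normal(base.shape), 0.0, 1.0)
          for _ in range(10)]

score = natural_robustness(toy_model, frames)
print(score)
```

A score near 1.0 means the model's prediction rarely flips between adjacent frames; the paper's finding is that such flips in real video mostly fall outside the standard adversarial-example definition.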


