Improving the robustness of ImageNet classifiers using elements of human visual cognition

06/20/2019
by A. Emin Orhan, et al.

We investigate, at the ImageNet scale, the robustness properties of image recognition models equipped with two features inspired by human vision: an explicit episodic memory and a shape bias. Consistent with previous work, we show that an explicit episodic memory improves the robustness of image recognition models against small-norm adversarial perturbations under some threat models. It does not, however, improve robustness against more natural, and typically larger, perturbations; learning more robust features during training appears to be necessary for robustness in this second sense. We show that features derived from a model encouraged to learn global, shape-based representations (Geirhos et al., 2019) not only improve robustness against natural perturbations, but, when used in conjunction with an episodic memory, also provide additional robustness against adversarial perturbations. Finally, we address three important design choices for the episodic memory: memory size, the dimensionality of the memories, and the retrieval method. We show that to make the episodic memory more compact, it is preferable to reduce the number of memories by clustering them rather than by reducing their dimensionality.

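At a high level, a cache-style episodic memory stores feature embeddings of training images together with their labels and classifies a test image by similarity-weighted retrieval over those stored memories; compaction then replaces each class's memories with cluster centroids rather than shrinking the feature dimension. The sketch below is a minimal illustration of this idea only: the `EpisodicMemory` class, its temperature parameter, and the k-means compaction step are illustrative assumptions, not the authors' exact implementation or hyperparameters.

```python
# Minimal sketch of an episodic-memory (cache) classifier over pre-extracted
# features (e.g. from a ResNet-50 or a shape-biased backbone). Illustrative only.
import numpy as np
from sklearn.cluster import KMeans


class EpisodicMemory:
    def __init__(self, keys, labels, num_classes, temperature=0.05):
        # keys: (N, d) float array of stored feature vectors, one per training image
        # labels: (N,) int array of class labels for the stored features
        self.keys = keys / np.linalg.norm(keys, axis=1, keepdims=True)
        self.labels = labels
        self.num_classes = num_classes
        self.temperature = temperature  # assumed retrieval sharpness, not the paper's value

    def predict(self, query):
        # query: (d,) feature vector of a test image from the same backbone
        q = query / np.linalg.norm(query)
        sims = self.keys @ q                        # cosine similarity to every memory
        weights = np.exp(sims / self.temperature)   # similarity-weighted retrieval
        scores = np.zeros(self.num_classes)
        np.add.at(scores, self.labels, weights)     # accumulate evidence per class
        return int(scores.argmax())

    def compact_by_clustering(self, memories_per_class):
        # Shrink the memory by reducing the *number* of memories (clustering each
        # class's stored features into centroids) rather than their dimensionality.
        new_keys, new_labels = [], []
        for c in range(self.num_classes):
            class_keys = self.keys[self.labels == c]
            if len(class_keys) == 0:
                continue
            k = min(memories_per_class, len(class_keys))
            km = KMeans(n_clusters=k, n_init=10).fit(class_keys)
            new_keys.append(km.cluster_centers_)
            new_labels.append(np.full(k, c))
        self.keys = np.concatenate(new_keys)
        self.keys /= np.linalg.norm(self.keys, axis=1, keepdims=True)
        self.labels = np.concatenate(new_labels)
```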