Training on Foveated Images Improves Robustness to Adversarial Attacks

08/01/2023
by Muhammad A. Shah, et al.

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks – subtle, perceptually indistinguishable perturbations of the input that change the model's response. In the context of vision, we hypothesize that an important contributor to the robustness of human visual perception is our constant exposure to low-fidelity visual stimuli in peripheral vision. To investigate this hypothesis, we develop an image transform that simulates the loss of fidelity in peripheral vision by blurring the image and reducing its color saturation based on the distance from a given fixation point. We show that, compared to DNNs trained on the original images, DNNs trained on the transformed images are substantially more robust to adversarial attacks, as well as to other, non-adversarial corruptions, achieving up to 25% higher accuracy on perturbed data.

