Backpropagating Linearly Improves Transferability of Adversarial Examples

12/07/2020
by Yiwen Guo, et al.

The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community. In this paper, we study the transferability of such examples, which lays the foundation of many black-box attacks on DNNs. We revisit a not-so-new but definitely noteworthy hypothesis of Goodfellow et al. and show that transferability can be enhanced by improving the linearity of DNNs in an appropriate manner. We introduce linear backpropagation (LinBP), a method that performs backpropagation in a more linear fashion and can be readily combined with off-the-shelf attacks that exploit gradients. More specifically, it computes the forward pass as normal but backpropagates the loss as if some nonlinear activations had not been encountered in the forward pass. Experimental results demonstrate that this simple yet effective method clearly outperforms current state-of-the-art methods in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs.
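To make the core idea concrete, below is a minimal PyTorch sketch of the mechanism described in the abstract: an activation that behaves as a standard ReLU in the forward pass but passes gradients through untouched in the backward pass, as if the nonlinearity were not there. This is an illustrative reconstruction, not the authors' released code; the names `LinBPReLU` and `fgsm_with_linbp`, and the arguments `model`, `x`, `y`, and `eps`, are hypothetical placeholders showing how such a unit could plug into an off-the-shelf gradient-based attack (one-step FGSM here).

import torch
import torch.nn as nn


class LinBPReLU(torch.autograd.Function):
    """ReLU in the forward pass, identity in the backward pass.

    A minimal sketch of the LinBP idea: the forward computation is
    unchanged, but the gradient is backpropagated as if this
    nonlinear activation had not been encountered.
    """

    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)   # standard ReLU forward

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output      # skip the ReLU mask: treat d(out)/d(in) as 1


def fgsm_with_linbp(model, x, y, eps):
    """One-step FGSM on a surrogate `model` in which selected ReLUs
    have been replaced by LinBPReLU.apply (placeholder names).
    Assumes `x` holds images in [0, 1] and `model` returns logits."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

In practice, one would substitute `LinBPReLU.apply` for the ReLUs in some layers of the surrogate network and feed the resulting, more linear gradients to any gradient-based attack; which layers to linearize in this way is a design choice studied in the paper.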

research · 04/23/2023 · StyLess: Boosting the Transferability of Adversarial Examples
Adversarial attacks can mislead deep neural networks (DNNs) by adding im...

research · 02/10/2023 · Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples
The transferability of adversarial examples across deep neural networks ...

research · 02/14/2020 · Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
Skip connections are an essential component of current state-of-the-art ...

research · 05/26/2022 · Transferable Adversarial Attack based on Integrated Gradients
The vulnerability of deep neural networks to adversarial examples has dr...

research · 12/21/2021 · A Theoretical View of Linear Backpropagation and Its Convergence
Backpropagation is widely used for calculating gradients in deep neural ...

research · 06/16/2019 · Defending Against Adversarial Attacks Using Random Forests
As deep neural networks (DNNs) have become increasingly important and po...

research · 02/20/2021 · Going Far Boosts Attack Transferability, but Do Not Do It
Deep Neural Networks (DNNs) could be easily fooled by Adversarial Exampl...
