Dynamics-aware Adversarial Attack of Adaptive Neural Networks

10/15/2022
by An Tao, et al.

In this paper, we investigate the dynamics-aware adversarial attack problem for adaptive neural networks. Most existing adversarial attack algorithms are designed under a basic assumption: the network architecture is fixed throughout the attack process. However, this assumption does not hold for many recently proposed adaptive neural networks, which adaptively deactivate unnecessary execution units based on the input to improve computational efficiency. This leads to a serious issue of the lagged gradient: the attack learned at the current step becomes ineffective once the architecture changes afterward. To address this issue, we propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient. More specifically, we reformulate the gradients to be aware of the potential dynamic changes of network architectures, so that the learned attack better "leads" the next step than dynamics-unaware methods when the network architecture changes dynamically. Extensive experiments on representative types of adaptive neural networks, for both 2D images and 3D point clouds, show that our LGM achieves impressive adversarial attack performance compared with dynamics-unaware attack methods.
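The lagged-gradient issue is easy to reproduce on a toy gated model. The sketch below is plain PyTorch; the `AdaptiveNet` model, its thresholded gating rule, and the softened-gate variant are illustrative assumptions, not the paper's actual LGM reformulation. It shows how a one-step gradient attack computed under a hard input-dependent gate can flip the routing decision, so the gradient used for the step no longer matches the architecture that evaluates the perturbed input; differentiating through a softened gate is one generic way to make the gradient anticipate that change.

```python
# Minimal sketch of the lagged-gradient problem on an input-adaptive network.
# The gating signal, thresholds, and softened gate are illustrative
# assumptions, NOT the paper's actual LGM formulation.
import torch
import torch.nn as nn

class AdaptiveNet(nn.Module):
    """Toy adaptive network: routes each input through a cheap or a costly branch."""
    def __init__(self):
        super().__init__()
        self.cheap = nn.Linear(8, 2)
        self.costly = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x, hard=True):
        score = x.abs().mean(dim=1, keepdim=True)  # hypothetical gating signal
        if hard:
            # Hard gate: the architecture depends discontinuously on the input.
            # The gradient flows only through the currently active branch, and
            # the routing decision itself receives no gradient at all.
            return torch.where(score > 0.5, self.costly(x), self.cheap(x))
        # Softened gate: both branches contribute, so the gradient also "sees"
        # how a perturbation could flip the routing decision.
        g = torch.sigmoid((score - 0.5) / 0.1)
        return g * self.costly(x) + (1 - g) * self.cheap(x)

torch.manual_seed(0)
net, x = AdaptiveNet(), torch.randn(4, 8)
y = torch.zeros(4, dtype=torch.long)
loss_fn = nn.CrossEntropyLoss()

def attack_step(x, hard):
    """One FGSM-style ascent step on the (hard or softened) network."""
    x_adv = x.clone().requires_grad_(True)
    loss = loss_fn(net(x_adv, hard=hard), y)
    loss.backward()
    return (x_adv + 0.1 * x_adv.grad.sign()).detach()

# Lagged gradient: the step was computed under one architecture, but the
# perturbed input may activate a different branch at evaluation time.
for hard in (True, False):
    x_new = attack_step(x, hard)
    before = x.abs().mean(1) > 0.5
    after = x_new.abs().mean(1) > 0.5
    print(f"hard={hard}: routing flipped for {(before != after).sum().item()} samples")
```

When the routing flips, the hard-gate gradient was computed against a branch that no longer processes the perturbed input, which is exactly why a dynamics-aware reformulation of the gradient is needed.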


Related research

02/19/2021 · Effective and Efficient Vote Attack on Capsule Networks
Standard Convolutional Neural Networks (CNNs) can be easily fooled by im...

10/08/2018 · Combinatorial Attacks on Binarized Neural Networks
Binarized Neural Networks (BNNs) have recently attracted significant int...

03/12/2023 · Adaptive Local Adversarial Attacks on 3D Point Clouds for Augmented Reality
As the key technology of augmented reality (AR), 3D recognition and trac...

06/20/2023 · Analysis of the Benefits and Efficacy of the Addition of Variants and Reality Paths to the Blackboard Architecture
While the Blackboard Architecture has been in use since the 1980s, it ha...

04/01/2023 · GradMDM: Adversarial Attack on Dynamic Networks
Dynamic neural networks can greatly reduce computation redundancy withou...

03/10/2019 · Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints
As neural networks continue their reach into nearly every aspect of soft...

01/01/2020 · Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient
Recently, backpropagation through time inspired learning algorithms are ...
