Sparse Adversarial Video Attacks with Spatial Transformations

11/10/2021
by Ronghui Mu et al.

In recent years, a significant amount of research effort has concentrated on adversarial attacks on images, while adversarial video attacks have seldom been explored. We propose an adversarial attack strategy on videos, called DeepSAVA. Our model combines additive perturbation and spatial transformation in a unified optimisation framework, where the structural similarity index (SSIM) is adopted as the measure of adversarial distance. We design a novel and effective optimisation scheme that alternates between Bayesian optimisation, to identify the most influential frame in a video, and stochastic gradient descent (SGD) based optimisation, to produce both additive and spatially transformed perturbations. Doing so enables DeepSAVA to perform a very sparse attack on videos that maintains human imperceptibility while still achieving state-of-the-art performance in terms of both attack success rate and adversarial transferability. Our extensive experiments on various types of deep neural networks and video datasets confirm the superiority of DeepSAVA.
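To make the alternating scheme concrete, the following is a minimal PyTorch sketch under several simplifying assumptions that are not taken from the paper: the victim classifier is assumed to accept a video tensor of shape (1, T, C, H, W); the Bayesian-optimisation frame search is replaced by a greedy gradient-norm score; SSIM is computed globally per frame rather than with sliding windows; and the spatial transformation is a single affine warp via affine_grid/grid_sample. All function and variable names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified, non-windowed SSIM between two frames scaled to [0, 1].
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )


def apply_perturbation(frame, delta, theta):
    # Warp the frame with affine parameters `theta`, then add `delta`.
    grid = F.affine_grid(theta.unsqueeze(0), frame.unsqueeze(0).shape,
                         align_corners=False)
    warped = F.grid_sample(frame.unsqueeze(0), grid, align_corners=False)[0]
    return (warped + delta).clamp(0, 1)


def select_frame(model, video, label):
    # Greedy stand-in for the Bayesian-optimisation frame search:
    # choose the frame whose input gradient has the largest norm.
    video = video.clone().requires_grad_(True)
    F.cross_entropy(model(video), label).backward()
    per_frame_grad_norm = video.grad[0].flatten(1).norm(dim=1)  # shape (T,)
    return per_frame_grad_norm.argmax().item()


def attack_frame(model, video, label, t, steps=50, lr=0.01, lam=1.0):
    # SGD-based optimisation of an additive perturbation plus an affine
    # spatial transform on frame t; (1 - SSIM) serves as the distance penalty.
    frame = video[0, t]                                     # (C, H, W)
    delta = torch.zeros_like(frame, requires_grad=True)     # additive part
    theta = torch.tensor([[1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]], requires_grad=True)  # spatial part
    opt = torch.optim.SGD([delta, theta], lr=lr)
    for _ in range(steps):
        adv_video = video.clone()
        adv_video[0, t] = apply_perturbation(frame, delta, theta)
        # Maximise the classification loss while keeping SSIM to the clean frame high.
        loss = (-F.cross_entropy(model(adv_video), label)
                + lam * (1 - global_ssim(adv_video[0, t], frame)))
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        adv_video = video.clone()
        adv_video[0, t] = apply_perturbation(frame, delta, theta)
    return adv_video


# Example usage with a hypothetical video classifier taking (1, T, C, H, W) input:
# t = select_frame(model, video, label)
# adv_video = attack_frame(model, video, label, t)
```

In this sketch the weight lam trades off imperceptibility (high SSIM to the clean frame) against attack strength; attacking only the single selected frame is what keeps the perturbation sparse across the video.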

