Sparse Black-box Video Attack with Reinforcement Learning

01/11/2020
by Huanqian Yan, et al.

Adversarial attacks on video recognition models have been explored recently. However, most existing works treat each video frame equally and ignore the temporal interactions among frames. To overcome this drawback, a few methods first select key frames and then perform attacks based on them. Unfortunately, their selection strategy is independent of the attack step, which limits the resulting performance. In this paper, we aim to attack video recognition models in the black-box setting. The difference is that we regard frame selection as closely coupled with the attack itself: the key frames should be adjusted according to the feedback from the threat model being attacked. Based on this idea, we formulate black-box video attacks within the Reinforcement Learning (RL) framework. Specifically, the environment in RL is the threat model, and the agent plays the roles of frame selection and video attacking simultaneously. By continuously querying the threat model and receiving its predicted probabilities as the reward, the agent adjusts its frame selection strategy and performs attacks (actions). Step by step, the optimal key frames are selected and the smallest adversarial perturbations are achieved. We conduct a series of experiments with two mainstream video recognition models, C3D and LRCN, on the public UCF-101 and HMDB-51 datasets. The results demonstrate that the proposed method significantly reduces the perturbation of adversarial examples, and that attacking a sparse set of video frames is more effective than attacking every frame.
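The query-and-feedback loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the `threat_model` classifier is a hypothetical stand-in for a black-box model such as C3D or LRCN, the perturbation is plain random noise rather than a learned attack, and the agent is a simple REINFORCE-style policy over per-frame selection probabilities with a sparsity penalty in the reward.

```python
import numpy as np

rng = np.random.default_rng(0)

T, H, W, C = 16, 8, 8, 3           # tiny toy video: 16 frames of 8x8 RGB
NUM_CLASSES = 5
TRUE_LABEL = 2

def threat_model(video):
    """Hypothetical black-box classifier: returns class probabilities.
    A real attack would query the actual threat model (e.g. C3D/LRCN)."""
    logits = np.array([video[..., c % C].mean() * (c + 1)
                       for c in range(NUM_CLASSES)])
    logits[TRUE_LABEL] += 1.0      # bias toward the true label
    e = np.exp(logits - logits.max())
    return e / e.sum()

video = rng.random((T, H, W, C))
eps = 0.3                           # per-frame perturbation budget (assumed)
select_logits = np.zeros(T)         # agent: per-frame selection policy
baseline = 0.0                      # running reward baseline for REINFORCE

for step in range(200):
    p_select = 1.0 / (1.0 + np.exp(-select_logits))
    mask = rng.random(T) < p_select            # action: which frames to perturb
    noise = eps * rng.standard_normal(video.shape)
    adv = video + noise * mask[:, None, None, None]

    probs = threat_model(adv)                  # query the black box
    # reward: push down the true-class probability, penalize dense selection
    reward = (1.0 - probs[TRUE_LABEL]) - 0.01 * mask.sum()

    # REINFORCE-style update of the frame-selection policy
    advantage = reward - baseline
    baseline = 0.9 * baseline + 0.1 * reward
    select_logits += 0.1 * (mask.astype(float) - p_select) * advantage
```

Over many queries the selection probabilities drift toward the frames whose perturbation most reduces the true-class probability, which is the coupling between frame selection and attacking that the paper argues for.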


