Patternless Adversarial Attacks on Video Recognition Networks

02/12/2020
by Itay Naeh, et al.

Deep neural networks for video classification, like image classification networks, are vulnerable to adversarial manipulation. The main difference between image and video classifiers is that the latter typically exploit temporal information in the video, either explicitly in the form of optical flow or implicitly through differences between adjacent frames. In this work we present a manipulation scheme for fooling video classifiers by introducing a spatially patternless temporal perturbation that is practically unnoticeable to human observers and undetectable by leading image adversarial-pattern detection algorithms. After demonstrating the manipulation of action classification on single videos, we generalize the procedure to produce adversarial patterns with temporal invariance that generalize across different classes, for both targeted and untargeted attacks.
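The abstract does not spell out the optimization procedure, but the core idea of a spatially patternless perturbation can be sketched as a per-frame, spatially uniform offset optimized with a PGD-style loop. The sketch below is illustrative only: the video classifier "model", the (T, C, H, W) clip layout, and all hyperparameters are assumptions for illustration, not the paper's actual method.

import torch
import torch.nn.functional as F

def patternless_attack(model, clip, label, eps=0.02, steps=50, lr=0.005):
    # Untargeted attack on a single clip of shape (T, C, H, W), values in [0, 1].
    # The perturbation has shape (T, C, 1, 1): one offset per frame and channel,
    # broadcast over height and width, so it carries no spatial pattern.
    T, C, H, W = clip.shape
    delta = torch.zeros(T, C, 1, 1, requires_grad=True)
    for _ in range(steps):
        adv = (clip + delta).clamp(0.0, 1.0)
        logits = model(adv.unsqueeze(0))  # assumes model takes (B, T, C, H, W)
        loss = F.cross_entropy(logits, label.view(1))
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()  # ascend the loss to leave the true class
            delta.clamp_(-eps, eps)          # keep each per-frame offset imperceptible
            delta.grad.zero_()
    return (clip + delta.detach()).clamp(0.0, 1.0)

For a targeted attack one would instead descend on the cross-entropy with the target label; to obtain the class-generalizing pattern described in the abstract, the same delta would presumably be optimized over many clips, though the abstract gives no such details.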

Related research

09/11/2019 · Identifying and Resisting Adversarial Videos Using Temporal Consistency
Video classification is a challenging task in computer vision. Although ...

11/28/2018 · Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers
The success of deep learning research has catapulted deep models into pr...

10/22/2019 · Attacking Optical Flow
Deep neural nets achieve state-of-the-art performance on the problem of ...

09/13/2021 · PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos
Extensive research has demonstrated that deep neural networks (DNNs) are...

02/11/2021 · Frame Difference-Based Temporal Loss for Video Stylization
Neural style transfer models have been used to stylize an ordinary video...

08/14/2017 · Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API
Due to the growth of video data on the Internet, automatic video analysis ha...

09/09/2022 · Robust-by-Design Classification via Unitary-Gradient Neural Networks
The use of neural networks in safety-critical systems requires safe and ...
