
Patternless Adversarial Attacks on Video Recognition Networks

by   Itay Naeh, et al.

Deep neural networks for video classification, like image classification networks, are vulnerable to adversarial manipulation. The main difference between image and video classifiers is that the latter typically exploit temporal information, either explicitly in the form of optical flow or implicitly through differences between adjacent frames. In this work we present a manipulation scheme for fooling video classifiers by introducing a spatially patternless temporal perturbation that is practically unnoticeable to human observers and undetectable by leading image adversarial-pattern detection algorithms. After demonstrating the manipulation of action classification on single videos, we generalize the procedure to produce adversarial perturbations that are temporally invariant and transfer across different classes, for both targeted and untargeted attacks.
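To make the idea of a spatially patternless temporal perturbation concrete, here is a minimal NumPy sketch: one RGB offset per frame, shared by every pixel of that frame, optimized to change a classifier's score. The classifier below is a toy stand-in (a score based on mean inter-frame difference), and the finite-difference sign-ascent optimizer is illustrative only; it is not the paper's method or a real video network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": T frames of H x W x 3 pixels in [0, 1].
T, H, W = 8, 16, 16
video = rng.random((T, H, W, 3))

def toy_classifier_score(v):
    # Hypothetical stand-in for a video network: a scalar score driven
    # by the mean squared inter-frame difference (a crude temporal feature).
    return float(np.mean(np.diff(v, axis=0) ** 2))

# Patternless perturbation: one RGB offset per frame, broadcast to all
# pixels of that frame -- no spatial structure, only temporal flicker.
delta = np.zeros((T, 1, 1, 3))

eps = 0.03   # per-channel amplitude bound, keeps the flicker subtle
lr = 0.01
h = 1e-4     # finite-difference step
for step in range(100):
    base = toy_classifier_score(np.clip(video + delta, 0, 1))
    grad = np.zeros_like(delta)
    for t in range(T):
        for c in range(3):
            d = delta.copy()
            d[t, 0, 0, c] += h
            grad[t, 0, 0, c] = (
                toy_classifier_score(np.clip(video + d, 0, 1)) - base
            ) / h
    # Untargeted step: push the temporal-difference score upward,
    # clipping the offsets to stay within the amplitude bound.
    delta = np.clip(delta + lr * np.sign(grad), -eps, eps)

adv = np.clip(video + delta, 0, 1)
```

Because `delta` has shape `(T, 1, 1, 3)`, each frame is shifted uniformly, so the perturbation carries no spatial pattern for image-based detectors to pick up; only its variation over time affects the temporal features the classifier uses.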


Related research:

- Identifying and Resisting Adversarial Videos Using Temporal Consistency
- Adversarial Attacks for Optical Flow-Based Action Recognition Classifiers
- Attacking Optical Flow
- PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos
- Frame Difference-Based Temporal Loss for Video Stylization
- Attacking Automatic Video Analysis Algorithms: A Case Study of Google Cloud Video Intelligence API
- Robust-by-Design Classification via Unitary-Gradient Neural Networks