Adversarial Perturbations Against Real-Time Video Classification Systems

07/02/2018
by Shasha Li, et al.

Recent research has demonstrated the brittleness of machine learning systems to adversarial perturbations. However, these studies have been mostly limited to perturbations on images and, more generally, to classification tasks that do not involve temporally varying inputs. In this paper we ask: "Are adversarial perturbations possible in real-time video classification systems, and if so, what properties must they satisfy?" Such systems are used in surveillance, smart vehicles, and smart elderly care, so misclassification could be particularly harmful (e.g., a mishap at an elderly care facility may be missed). We show that accounting for temporal structure is key to generating adversarial examples in such systems. We exploit recent advances in generative adversarial network (GAN) architectures to account for temporal correlations and generate adversarial samples that can cause misclassification rates of over 80%. The samples also leave other activities largely unaffected, making them extremely stealthy. Finally, we surprisingly find that in many scenarios the same perturbation can be applied to every frame in a video clip, which makes it relatively easy for the adversary to achieve misclassification.
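The last finding — that a single perturbation can be reused across all frames of a clip — can be illustrated with a minimal sketch. The function below is a hypothetical illustration (not the paper's GAN-based method): it takes a single-frame perturbation `delta`, bounds it by an L-infinity budget `epsilon`, and broadcasts it over the temporal axis of a video tensor.

```python
import numpy as np

def apply_universal_perturbation(video, delta, epsilon=8 / 255.0):
    """Add the same bounded perturbation `delta` to every frame of `video`.

    video:   array of shape (T, H, W, C), pixel values in [0, 1]
    delta:   array of shape (H, W, C), a single-frame perturbation
    epsilon: L-infinity bound on the perturbation magnitude
    """
    # Clip the perturbation to the allowed budget, then broadcast it
    # across the temporal (first) axis so every frame gets the same delta.
    delta = np.clip(delta, -epsilon, epsilon)
    return np.clip(video + delta[None, ...], 0.0, 1.0)

# Example: a 16-frame clip of 32x32 RGB frames.
rng = np.random.default_rng(0)
clip = rng.random((16, 32, 32, 3))
delta = rng.uniform(-1.0, 1.0, (32, 32, 3))
adv_clip = apply_universal_perturbation(clip, delta)
```

In the paper's threat model, `delta` would be optimized (via a generator network) so that the classifier mislabels the target activity; here it is random noise purely to show the frame-wise broadcast and the perturbation budget.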


research · 12/09/2017
NAG: Network for Adversary Generation
Adversarial perturbations can pose a serious threat for deploying machin...

research · 05/09/2017
Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN
We propose a novel technique to make neural network robust to adversaria...

research · 06/21/2023
Universal adversarial perturbations for multiple classification tasks with quantum classifiers
Quantum adversarial machine learning is an emerging field that studies t...

research · 03/07/2018
Sparse Adversarial Perturbations for Videos
Although adversarial samples of deep neural networks (DNNs) have been in...

research · 08/27/2018
Targeted Nonlinear Adversarial Perturbations in Images and Videos
We introduce a method for learning adversarial perturbations targeted to...

research · 06/19/2018
Built-in Vulnerabilities to Imperceptible Adversarial Perturbations
Designing models that are robust to small adversarial perturbations of t...

research · 11/30/2022
Towards Interpreting Vulnerability of Multi-Instance Learning via Customized and Universal Adversarial Perturbations
Multi-instance learning (MIL) is a great paradigm for dealing with compl...
