Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations

10/05/2021
by   Shasha Li, et al.

Compared to image classification models, black-box adversarial attacks against video classification models have been largely understudied. This is likely because, with video, the temporal dimension poses significant additional challenges for gradient estimation. Query-efficient black-box attacks rely on effectively estimated gradients to maximize the probability of misclassifying the target video. In this work, we demonstrate that such effective gradients can be searched for by parameterizing the temporal structure of the search space with geometric transformations. Specifically, we design a novel iterative algorithm, Geometric TRAnsformed Perturbations (GEO-TRAP), for attacking video classification models. GEO-TRAP employs standard geometric transformation operations to reduce the search for effective gradients to a search over the small group of parameters that defines these operations. This group of parameters describes the geometric progression of gradients, resulting in a reduced and structured search space. Our algorithm inherently leads to successful perturbations with surprisingly few queries. For example, adversarial examples generated by GEO-TRAP achieve better attack success rates with 73.55% fewer queries compared to the state-of-the-art method for video adversarial attacks on the widely used Jester dataset. Overall, our algorithm exposes vulnerabilities of diverse video classification models and achieves new state-of-the-art results under black-box settings on two large datasets.
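The core idea of searching for gradients in a low-dimensional space of geometric transformation parameters, rather than over every pixel of every frame, can be sketched as follows. This is a simplified illustration under loose assumptions, not the paper's actual method: it restricts the geometric transform to integer translations (the paper uses richer warps), builds each frame's perturbation as a geometric progression of a base perturbation, and estimates the gradient with a two-sided finite difference over just the two translation parameters. The names `translate`, `video_perturbation`, `estimate_gradient`, and `loss_fn` are hypothetical.

```python
import numpy as np

def translate(frame_pert, dx, dy):
    # Toy "geometric transform": an integer pixel shift of the perturbation.
    return np.roll(np.roll(frame_pert, int(dy), axis=0), int(dx), axis=1)

def video_perturbation(base_pert, params, num_frames):
    # The perturbation for frame t is the base perturbation transformed t
    # times, i.e. a geometric progression of the transform across time.
    dx, dy = params
    return np.stack(
        [translate(base_pert, t * dx, t * dy) for t in range(num_frames)]
    )

def estimate_gradient(loss_fn, base_pert, params, num_frames, delta=1.0):
    # Finite-difference gradient over only the transform parameters.
    # A per-pixel search would need O(frames * H * W) queries; here each
    # iteration queries the black-box model 2 * len(params) times.
    grad = np.zeros_like(params, dtype=float)
    for i in range(len(params)):
        p_plus = params.copy();  p_plus[i]  += delta
        p_minus = params.copy(); p_minus[i] -= delta
        l_plus = loss_fn(video_perturbation(base_pert, p_plus, num_frames))
        l_minus = loss_fn(video_perturbation(base_pert, p_minus, num_frames))
        grad[i] = (l_plus - l_minus) / (2 * delta)
    return grad
```

In an attack loop, `loss_fn` would query the black-box video classifier with the perturbed clip and return an adversarial loss; the estimated two-dimensional gradient then updates `params`, keeping the query budget independent of the video's pixel count.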


