Analysis and Extensions of Adversarial Training for Video Classification

06/16/2022
by Kaleab Alemayehu Kinfu, et al.

Adversarial training (AT) is a simple yet effective defense against adversarial attacks on image classification systems, based on augmenting the training set with attacks that maximize the loss. However, the effectiveness of AT as a defense for video classification has not been thoroughly studied. Our first contribution is to show that generating optimal attacks for video requires carefully tuning the attack parameters, especially the step size. Notably, we show that the optimal step size varies linearly with the attack budget. Our second contribution is to show that using a smaller (sub-optimal) attack budget at training time leads to more robust performance at test time. Based on these findings, we propose three defenses against attacks with variable attack budgets. The first one, Adaptive AT, is a technique where the attack budget is drawn from a distribution that is adapted as training iterations proceed. The second, Curriculum AT, is a technique where the attack budget is increased as training iterations proceed. The third, Generative AT, further couples AT with a denoising generative adversarial network to boost robust performance. Experiments on the UCF101 dataset demonstrate that the proposed methods improve adversarial robustness against multiple attack types.
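To make the ideas concrete, below is a minimal sketch of PGD-based adversarial training for a video classifier with a curriculum on the attack budget, assuming a generic PyTorch model that takes clips of shape (batch, channels, frames, height, width) with pixel values in [0, 1]. The linear step-size rule (alpha proportional to eps), the epsilon schedule, and the helper names (pgd_attack, curriculum_eps, train_epoch) are illustrative assumptions, not the paper's exact settings.

import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps, steps=5):
    """L_inf PGD attack on a batch of video clips; the step size is tied
    linearly to the budget eps (assumed rule: alpha = eps / 4)."""
    alpha = eps / 4.0
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Gradient ascent step followed by projection onto the L_inf ball
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        # Keep the perturbed clip inside the valid pixel range [0, 1]
        delta = (x + delta).clamp(0, 1) - x
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()


def curriculum_eps(epoch, num_epochs, eps_max=8 / 255):
    """Curriculum AT: grow the attack budget linearly as training proceeds."""
    return eps_max * min(1.0, (epoch + 1) / num_epochs)


def train_epoch(model, loader, optimizer, epoch, num_epochs, device="cuda"):
    """One epoch of adversarial training with the curriculum budget."""
    model.train()
    eps = curriculum_eps(epoch, num_epochs)
    for clips, labels in loader:
        clips, labels = clips.to(device), labels.to(device)
        x_adv = pgd_attack(model, clips, labels, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), labels)
        loss.backward()
        optimizer.step()

Adaptive AT would differ only in how eps is chosen per batch, e.g. sampling it from a distribution whose parameters are updated over training instead of using the deterministic schedule above.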
