Can't Fool Me: Adversarially Robust Transformer for Video Understanding

10/26/2021
by Divya Choudhary, et al.

Deep neural networks have been shown to perform poorly on adversarial examples. To address this, several techniques have been proposed to increase the robustness of models on image classification tasks. For video understanding tasks, however, developing adversarially robust models is still largely unexplored. In this paper, we aim to bridge this gap. We first show that simple extensions of image-based adversarially robust models only slightly improve worst-case performance. We then propose a temporal attention regularization scheme for the Transformer that improves the robustness of its attention modules to adversarial examples. Using the large-scale YouTube-8M video dataset, we show that the final model (A-ART) achieves close to non-adversarial performance on its adversarial example set: it reaches 91% on adversarial examples, whereas the baseline Transformer and the simple adversarial extensions fall well short (72.9% for the baseline Transformer), a significant improvement in robustness over the state of the art.
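The abstract describes two ingredients: adversarial examples crafted against the video model, and a regularizer that keeps the Transformer's temporal attention stable under those perturbations. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea; the single-layer encoder, the FGSM-style perturbation of frame features, the KL-based attention-consistency penalty, and all names and hyperparameters (TemporalEncoder, fgsm_perturb, eps, lam) are assumptions for illustration, not the paper's A-ART implementation.

# Hypothetical sketch: adversarial training of a temporal Transformer layer
# plus an attention-consistency regularizer. Not the paper's actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalEncoder(nn.Module):
    """One self-attention layer over frame features; returns logits and attention weights."""

    def __init__(self, dim=128, heads=4, num_classes=10):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                          # x: (batch, frames, dim)
        out, attn_weights = self.attn(x, x, x, need_weights=True)
        out = self.norm(x + out)
        logits = self.head(out.mean(dim=1))        # average-pool over time
        return logits, attn_weights                # attn_weights: (batch, frames, frames)


def fgsm_perturb(model, feats, labels, eps=0.01):
    """FGSM-style perturbation of the input frame-level features."""
    feats = feats.clone().detach().requires_grad_(True)
    logits, _ = model(feats)
    loss = F.cross_entropy(logits, labels)
    grad, = torch.autograd.grad(loss, feats)
    return (feats + eps * grad.sign()).detach()


def robust_loss(model, feats, labels, lam=1.0):
    """Task loss on clean and adversarial inputs, plus a KL penalty that keeps
    the attention distribution on adversarial inputs close to the clean one."""
    adv_feats = fgsm_perturb(model, feats, labels)
    clean_logits, clean_attn = model(feats)
    adv_logits, adv_attn = model(adv_feats)
    task = F.cross_entropy(clean_logits, labels) + F.cross_entropy(adv_logits, labels)
    attn_reg = F.kl_div(adv_attn.clamp_min(1e-8).log(), clean_attn, reduction="batchmean")
    return task + lam * attn_reg


if __name__ == "__main__":
    model = TemporalEncoder()
    feats = torch.randn(2, 30, 128)                # 2 clips, 30 frames, 128-d features
    labels = torch.randint(0, 10, (2,))
    loss = robust_loss(model, feats, labels)
    loss.backward()
    print(float(loss))

The design choice illustrated here is that robustness is encouraged not only through the adversarial task loss but also by directly constraining the attention weights, which is the spirit of the temporal attention regularization the abstract describes.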


Related research:

Enhancing Transformer for Video Understanding Using Gated Multi-Level Attention and Temporal Adversarial Training (03/18/2021)
Knowledge Enhanced Attention for Robust Natural Language Inference (08/31/2019)
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients (05/23/2018)
Mischief: A Simple Black-Box Attack Against Transformer Architectures (10/16/2020)
Robust Invisible Video Watermarking with Attention (09/03/2019)
Using Videos to Evaluate Image Model Robustness (04/22/2019)
Clustering Effect of (Linearized) Adversarial Robust Models (11/25/2021)
