Meta Adversarial Training

01/27/2021
by Jan-Hendrik Metzen, et al.

Recently demonstrated physical-world adversarial attacks have exposed vulnerabilities in perception systems that pose severe risks for safety-critical applications such as autonomous driving. These attacks place adversarial artifacts in the physical world that indirectly cause the addition of universal perturbations to inputs of a model that can fool it in a variety of contexts. Adversarial training is the most effective defense against image-dependent adversarial attacks. However, tailoring adversarial training to universal perturbations is computationally expensive since the optimal universal perturbations depend on the model weights which change during training. We propose meta adversarial training (MAT), a novel combination of adversarial training with meta-learning, which overcomes this challenge by meta-learning universal perturbations along with model training. MAT requires little extra computation while continuously adapting a large set of perturbations to the current model. We present results for universal patch and universal perturbation attacks on image classification and traffic-light detection. MAT considerably increases robustness against universal patch attacks compared to prior work.
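The abstract's core idea — maintaining a pool of universal perturbations that is continually re-adapted to the current model weights during adversarial training, instead of re-optimizing them from scratch — can be illustrated with a minimal sketch. The following is not the authors' MAT implementation; it is a hedged toy version using a linear logistic-regression model on synthetic data, with one signed-gradient inner step per iteration standing in for the meta-learning update (all names, hyperparameters, and the pool size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image dataset: a linearly separable 2-class problem.
d, n = 20, 512
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def grads(w, X, y):
    """Gradients of the mean logistic loss w.r.t. the weights and w.r.t.
    a single universal (shared) input perturbation delta."""
    p = sigmoid(X @ w)
    err = p - y
    g_w = X.T @ err / len(y)      # d(loss)/d(w)
    g_delta = err.mean() * w      # d(loss)/d(delta); delta is shared by all inputs
    return g_w, g_delta

eps, lr, pert_lr = 0.5, 0.5, 0.5
pool = [np.zeros(d) for _ in range(8)]   # pool of meta-learned universal perturbations
w = np.zeros(d)

for step in range(300):
    i = rng.integers(len(pool))
    delta = pool[i]
    # Inner (meta) step: adapt the sampled universal perturbation to the
    # *current* weights with one signed-gradient ascent step, then project
    # it back onto the eps-ball.
    _, g_delta = grads(w, X + delta, y)
    delta = np.clip(delta + pert_lr * np.sign(g_delta), -eps, eps)
    pool[i] = delta                      # persist the adapted perturbation
    # Outer step: ordinary adversarial training on the perturbed batch.
    g_w, _ = grads(w, X + delta, y)
    w -= lr * g_w

clean_acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

Because each perturbation in the pool is updated in place and reused across training steps, it only needs a cheap refresh per iteration rather than a full inner optimization — this is the "little extra computation" property the abstract claims, shown here under toy assumptions.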


