Learning to Generate Noise for Robustness against Multiple Perturbations

06/22/2020
by Divyam Madaan, et al.

Adversarial learning has emerged as one of the most successful techniques for defending models against adversarial perturbations. However, the majority of existing defense methods are tailored to a single category of adversarial perturbation (e.g., the ℓ_∞-attack). This makes them ineffective in safety-critical applications, where an attacker can adopt diverse adversaries to deceive the system. To tackle the challenge of robustness against multiple perturbation types, we propose a novel meta-learning framework that explicitly learns to generate noise that improves the model's robustness against multiple types of attacks. Specifically, we propose a Meta Noise Generator (MNG) that outputs optimal noise to stochastically perturb a given sample, such that it helps lower the error on diverse adversarial perturbations. Since training on multiple perturbations simultaneously incurs a significant computational overhead, we instead train MNG while randomly sampling a single attack at each epoch, which adds negligible overhead over standard adversarial training. We validate the robustness of our framework on various datasets and against a wide variety of unseen perturbations, demonstrating that it significantly outperforms the relevant baselines across multiple perturbations with only marginal computational cost compared to multi-perturbation training.
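
To make the training procedure concrete, below is a minimal PyTorch-style sketch of the randomized-attack training loop described above, under stated assumptions. It is illustrative only: `NoiseGenerator`, `pgd_attack`, the attack set, and all hyperparameters are hypothetical stand-ins rather than the authors' released code, and the bi-level meta-update of MNG is flattened into a joint gradient step for brevity; see the paper for the actual algorithm.

```python
import random

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical attack set; the paper considers multiple perturbation types.
ATTACKS = ["linf", "l2"]


def pgd_attack(model, x, y, norm, eps=8 / 255, alpha=2 / 255, steps=10):
    """Crude PGD under the sampled norm (illustrative helper, not the paper's)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            if norm == "linf":
                delta += alpha * grad.sign()
                delta.clamp_(-eps, eps)
            else:  # simplistic l2 step; a real attack would project more carefully
                g = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-12)
                delta += alpha * g
                norms = delta.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
                delta *= (eps / (norms + 1e-12)).clamp(max=1.0)
    return (x + delta).detach().clamp(0, 1)


class NoiseGenerator(nn.Module):
    """Tiny stand-in for MNG: maps a sample to bounded, input-dependent noise."""

    def __init__(self, channels=3, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.eps * self.net(x)  # Tanh keeps the noise within [-eps, eps]


def train_epoch(model, generator, loader, opt_model, opt_gen):
    # Sampling one attack per epoch keeps the cost close to standard
    # adversarial training, instead of scaling with the number of attacks.
    attack = random.choice(ATTACKS)
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, attack)  # adversarial view
        x_gen = (x + generator(x)).clamp(0, 1)   # MNG-perturbed view
        loss = (F.cross_entropy(model(x_gen), y)
                + F.cross_entropy(model(x_adv), y))
        opt_model.zero_grad()
        opt_gen.zero_grad()
        loss.backward()
        opt_model.step()
        opt_gen.step()
```

In this simplified sketch the generator is updated with the same classification loss as the model; in the paper the generator is instead trained via a meta-learning objective so that its noise explicitly lowers the error on the sampled adversarial perturbations.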

