Diffusion Models for Adversarial Purification

05/16/2022
by Weili Nie et al.

Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model. These methods make no assumptions about the form of the attack or the classification model, and thus can defend pre-existing classifiers against unseen threats. However, their performance currently falls behind that of adversarial training methods. In this work, we propose DiffPure, which uses diffusion models for adversarial purification: given an adversarial example, we first diffuse it with a small amount of noise following a forward diffusion process, and then recover the clean image through a reverse generative process. To evaluate our method against strong adaptive attacks in an efficient and scalable way, we propose using the adjoint method to compute full gradients of the reverse generative process. Extensive experiments on three image datasets (CIFAR-10, ImageNet, and CelebA-HQ) with three classifier architectures (ResNet, WideResNet, and ViT) demonstrate that our method achieves state-of-the-art results, outperforming current adversarial training and adversarial purification methods, often by a large margin. Project page: https://diffpure.github.io.
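The purification idea described above can be sketched with the standard closed-form forward-diffusion step from DDPMs. This is a toy illustration, not the authors' implementation: the beta schedule, the timestep `t_star`, and the random adversarial input are all assumed stand-ins, and the pretrained reverse (denoising) model is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule as in standard DDPMs (assumed values, not the paper's).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

def diffuse(x0, t, rng):
    """Forward diffusion in closed form: sample x_t ~ q(x_t | x_0), i.e.
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

# Purification picks a small timestep t* so the injected Gaussian noise is
# just enough to wash out the adversarial perturbation while preserving the
# image semantics.
x_adv = rng.standard_normal((3, 32, 32))  # stand-in for an adversarial image
t_star = 100                              # hypothetical small diffusion step
x_t = diffuse(x_adv, t_star, rng)
# A pretrained diffusion model would then run the reverse generative process
# from x_t back to t = 0 to produce the purified image (omitted here).
```

The key design choice the abstract highlights is that `t_star` stays small: more noise purifies more aggressively but destroys the content the classifier needs.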

