Guided Diffusion Model for Adversarial Purification from Random Noise

06/22/2022
by Quanlin Wu, et al.

In this paper, we propose a novel guided diffusion purification approach that provides a strong defense against adversarial attacks. Our model achieves 89.62% robust accuracy under PGD-L_inf attack (eps = 8/255) on the CIFAR-10 dataset. We first explore the essential correlations between unguided diffusion models and randomized smoothing, enabling us to apply diffusion models to certified robustness. The empirical results show that our models outperform randomized smoothing by 5% ...
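The purification mechanism described in the abstract follows the standard diffusion recipe: the (possibly adversarial) input is partially diffused with Gaussian noise, then mapped back toward the data manifold by the learned reverse process. The sketch below is not the authors' implementation, just a minimal DDPM-style illustration; `denoiser` stands in for a trained noise-prediction network eps_theta(x_t, t), and `t_star` (the number of forward steps) is a hypothetical parameter.

```python
import torch

def purify(x_adv, denoiser, betas, t_star):
    """Forward-diffuse x_adv for t_star steps, then reverse-denoise to x_0.

    A minimal sketch of diffusion purification (not the paper's code).
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    # Forward process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * noise
    a_bar = alpha_bars[t_star]
    x_t = a_bar.sqrt() * x_adv + (1 - a_bar).sqrt() * torch.randn_like(x_adv)

    # Reverse process: standard DDPM ancestral sampling with sigma_t^2 = beta_t
    for t in range(t_star, -1, -1):
        eps = denoiser(x_t, torch.tensor([t]))
        coef = betas[t] / (1 - alpha_bars[t]).sqrt()
        mean = (x_t - coef * eps) / alphas[t].sqrt()
        if t > 0:
            x_t = mean + betas[t].sqrt() * torch.randn_like(x_t)
        else:
            x_t = mean
    return x_t

if __name__ == "__main__":
    betas = torch.linspace(1e-4, 0.02, 1000)
    dummy = lambda x, t: torch.zeros_like(x)  # stand-in for a trained eps-net
    x0 = purify(torch.rand(1, 3, 32, 32), dummy, betas, t_star=100)
```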

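The certified-robustness connection rests on randomized smoothing (Cohen et al., 2019): a classifier smoothed with Gaussian noise of standard deviation sigma is certifiably robust within an L2 radius R = sigma * Phi^{-1}(p_A), where p_A lower-bounds the probability that the base classifier returns the top class under that noise. A minimal sketch of the radius computation (a hypothetical helper, not from the paper):

```python
from statistics import NormalDist

def certified_radius(p_a_lower: float, sigma: float) -> float:
    """Cohen et al. (2019) certified L2 radius: R = sigma * Phi^{-1}(p_A)."""
    return sigma * NormalDist().inv_cdf(p_a_lower)

# e.g. sigma = 0.25 and p_A >= 0.99 gives R ~= 0.58 in L2 norm
print(certified_radius(0.99, 0.25))
```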
Related research

- Language Guided Adversarial Purification (09/19/2023). Adversarial purification using generative models demonstrates strong adv...
- Towards Better Certified Segmentation via Diffusion Models (06/16/2023). The robustness of image segmentation has been an important research topi...
- Guided Diffusion Model for Adversarial Purification (05/30/2022). With wider application of deep neural networks (DNNs) in various algorit...
- DiffSmooth: Certifiably Robust Learning via Diffusion Models and Local Smoothing (08/28/2023). Diffusion models have been leveraged to perform adversarial purification...
- Label Smoothing and Adversarial Robustness (09/17/2020). Recent studies indicate that current adversarial attack methods are flaw...
- Incremental Randomized Smoothing Certification (05/31/2023). Randomized smoothing-based certification is an effective approach for ob...
- Certified Robustness via Randomized Smoothing over Multiplicative Parameters (06/28/2021). We propose a novel approach of randomized smoothing over multiplicative ...