DensePure: Understanding Diffusion Models towards Adversarial Robustness

11/01/2022
by   Chaowei Xiao, et al.

Diffusion models have recently been employed to improve certified robustness through the process of denoising. However, a theoretical understanding of why diffusion models are able to improve certified robustness is still lacking, preventing further improvement. In this study, we close this gap by analyzing the fundamental properties of diffusion models and establishing the conditions under which they can enhance certified robustness. This deeper understanding allows us to propose a new method, DensePure, designed to improve the certified robustness of a pretrained model (i.e., classifier). Given an (adversarial) input, DensePure consists of multiple runs of denoising via the reverse process of the diffusion model (with different random seeds) to obtain multiple reversed samples, which are then passed through the classifier, followed by majority voting over the inferred labels to make the final prediction. This design of using multiple runs of denoising is informed by our theoretical analysis of the conditional distribution of the reversed sample. Specifically, when the data density of a clean sample is high, its conditional density under the reverse process of a diffusion model is also high; thus, sampling from the latter conditional distribution can purify the adversarial example and return the corresponding clean sample with high probability. By using the highest-density point in the conditional distribution as the reversed sample, we identify the robust region of a given instance under the diffusion model's reverse process. We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works. In practice, DensePure can approximate the label of the high-density region in the conditional distribution so that it can enhance certified robustness.
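The prediction procedure described above (multiple denoising runs with different seeds, classification of each reversed sample, then majority voting) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `densepure_predict`, `toy_denoise`, and `toy_classify` are hypothetical names, and the toy denoiser and classifier stand in for the diffusion model's reverse process and the pretrained classifier.

```python
import random
from collections import Counter

def densepure_predict(x, denoise, classify, n_runs=10, seed=0):
    """Sketch of a DensePure-style prediction: denoise the input several
    times with different random seeds, classify each reversed sample,
    and return the majority-vote label."""
    labels = []
    for i in range(n_runs):
        rng = random.Random(seed + i)       # a fresh seed for each run
        reversed_sample = denoise(x, rng)   # reverse diffusion (stubbed below)
        labels.append(classify(reversed_sample))
    # Majority vote over the inferred labels.
    return Counter(labels).most_common(1)[0][0]

# Toy stand-ins (assumptions, for illustration only): the "denoiser"
# jitters a scalar input, and the "classifier" thresholds it at zero.
def toy_denoise(x, rng):
    return x + rng.gauss(0, 0.1)

def toy_classify(x):
    return 1 if x > 0 else 0

print(densepure_predict(0.3, toy_denoise, toy_classify, n_runs=11))  # prints 1
```

An odd number of runs avoids ties in the vote; in practice, the number of runs trades off inference cost against the stability of the majority label.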


