On the Robustness of Latent Diffusion Models

06/14/2023
by Jianping Zhang, et al.

Latent diffusion models achieve state-of-the-art performance on a variety of generative tasks, such as image synthesis and image editing. However, their robustness is not well studied. Previous works focus only on adversarial attacks against the encoder or the output image under white-box settings, ignoring the denoising process. In this paper, we therefore analyze the robustness of latent diffusion models more thoroughly. We first study how the components inside latent diffusion models influence their white-box robustness. Beyond white-box scenarios, we evaluate the black-box robustness of latent diffusion models via transfer attacks, considering both prompt-transfer and model-transfer settings as well as possible defense mechanisms. All of these explorations require a comprehensive benchmark dataset, which is missing in the literature. To facilitate research on the robustness of latent diffusion models, we propose two automatic dataset construction pipelines for two kinds of image editing models and release the whole dataset. Our code and dataset are available at <https://github.com/jpzhang1810/LDM-Robustness>.
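To make the white-box setting concrete, the sketch below illustrates one common flavor of encoder attack the abstract alludes to: a PGD-style perturbation that pushes the VAE latent of an image away from its clean latent under an L-infinity budget. This is not the paper's code; the model checkpoint, step count, and epsilon are illustrative assumptions using the Hugging Face diffusers API.

```python
# Hedged sketch of a white-box PGD attack on the VAE encoder of a latent
# diffusion model (Stable Diffusion v1.5 used for illustration only).
import torch
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
).to(device).eval()
vae.requires_grad_(False)  # only the perturbation is optimized

def encode(x):
    # Deterministic latent: use the posterior mean instead of sampling.
    return vae.encode(x).latent_dist.mean

def pgd_encoder_attack(x, eps=8 / 255, alpha=1 / 255, steps=40):
    """Push the latent of x + delta away from the latent of x (L_inf budget eps)."""
    with torch.no_grad():
        z_clean = encode(x)
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        z_adv = encode((x + delta).clamp(-1, 1))
        loss = torch.nn.functional.mse_loss(z_adv, z_clean)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent on the latent gap
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (x + delta.detach()).clamp(-1, 1)

# Usage (hypothetical): x is a batch of images scaled to [-1, 1],
# e.g. shape (N, 3, 512, 512); x_adv = pgd_encoder_attack(x.to(device))
```

Transfer attacks in the black-box setting follow the same recipe, except the perturbation is crafted on a surrogate model (or with one prompt) and then evaluated on a different model or prompt.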


