Mist: Towards Improved Adversarial Examples for Diffusion Models

05/22/2023
by Chumeng Liang, et al.

Diffusion Models (DMs) have achieved great success in artificial-intelligence-generated content, especially in artwork creation, yet they raise new concerns about intellectual property and copyright. For example, infringers can profit by using DMs to imitate human-created paintings without authorization. Recent research suggests that various adversarial examples for diffusion models can be effective tools against such copyright infringement. However, current adversarial examples show weak transferability across different painting-imitation methods and limited robustness under straightforward adversarial defenses, for example, noise purification. Surprisingly, we find that the transferability of adversarial examples can be significantly enhanced by exploiting a fused and modified adversarial loss term under consistent parameters. In this work, we comprehensively evaluate the cross-method transferability of adversarial examples. Our experiments show that the proposed method generates more transferable adversarial examples with even stronger robustness against simple adversarial defenses.
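The abstract's core idea is crafting an adversarial perturbation by maximizing a *fused* loss, a weighted sum of two attack objectives, under a fixed perturbation budget. As a rough illustration only (the paper's actual losses come from a diffusion model's denoising error and an encoder's latent distance; the toy quadratic surrogates, the weights `w_tex`/`w_sem`, and all function names below are hypothetical stand-ins, not the authors' implementation), here is a minimal L-infinity PGD-style sketch that ascends a fused loss:

```python
import numpy as np

# Hypothetical toy surrogates, for illustration only:
# the "textural" term stands in for a diffusion denoising error,
# the "semantic" term for a distance in an encoder's latent space.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))      # stand-in "encoder" weights
z_target = rng.standard_normal(8)    # stand-in target latent

def textural_loss_grad(x):
    # gradient of 0.5 * ||x - t||^2 (t: a fixed target pattern)
    t = np.full_like(x, 0.5)
    return x - t

def semantic_loss_grad(x):
    # gradient of 0.5 * ||W x - z_target||^2
    return W.T @ (W @ x - z_target)

def pgd_fused(x0, eps=0.1, alpha=0.02, steps=40, w_tex=1.0, w_sem=1.0):
    """Sign-gradient ascent on the fused loss w_tex*L_tex + w_sem*L_sem,
    projected back into the eps-ball around the clean input x0."""
    x = x0.copy()
    for _ in range(steps):
        g = w_tex * textural_loss_grad(x) + w_sem * semantic_loss_grad(x)
        x = x + alpha * np.sign(g)          # ascend the fused loss
        x = np.clip(x, x0 - eps, x0 + eps)  # enforce the L_inf budget
    return x

x0 = rng.standard_normal(8)
x_adv = pgd_fused(x0)
print("max perturbation:", np.max(np.abs(x_adv - x0)))  # bounded by eps
```

The key design point the abstract hints at is that combining the two objectives in one gradient step (rather than attacking each separately) is what improves cross-method transferability; the weighting between the terms is a tunable parameter.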


