Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems

03/03/2020
by Nataniel Ruiz, et al.

Face manipulation systems built on deep learning have become increasingly powerful and accessible. Given images of a person's face, such systems can generate new images of that same person under different expressions and poses; some can also modify targeted attributes such as hair color or age. Manipulated images and videos of this kind have been coined DeepFakes. To prevent a malicious user from generating modified images of a person without their consent, we tackle the new problem of generating adversarial attacks against image translation systems that disrupt the resulting output image. We call this problem disrupting deepfakes. We adapt traditional adversarial attacks to our scenario. Most image translation architectures are generative models conditioned on an attribute (e.g. put a smile on this person's face). We present class-transferable adversarial attacks that generalize across conditioning classes, meaning the attacker does not need knowledge of the conditioning vector. In gray-box scenarios, blurring can mount a successful defense against disruption; we present a spread-spectrum adversarial attack that evades such blur defenses.
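The disruption idea above can be sketched with a small, hedged example: a PGD-style iterative attack that searches for a bounded perturbation maximizing the distortion of the translation network's output. The linear map `G` below is a toy stand-in I introduce for illustration; the paper's attacks target real conditional generators, and the step sizes and budget here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image-translation network. A real disruption
# attack would backpropagate through the actual generator instead.
W = rng.standard_normal((8, 8))

def G(x):
    return W @ x

def disrupt_pgd(x, eps=0.05, step=0.01, iters=40):
    """Find a perturbation eta with ||eta||_inf <= eps that maximizes
    ||G(x + eta) - G(x)||^2, i.e. disrupts the translated output."""
    y_clean = G(x)
    # Random start inside the budget (at eta = 0 the gradient vanishes).
    eta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(iters):
        diff = G(x + eta) - y_clean
        grad = W.T @ diff               # analytic gradient of 0.5*||diff||^2
        eta = np.clip(eta + step * np.sign(grad), -eps, eps)
    return eta

x = rng.standard_normal(8)
eta = disrupt_pgd(x)
distortion = np.linalg.norm(G(x + eta) - G(x))
```

Because the objective is maximized rather than minimized, the same machinery as a classification PGD attack applies, only the loss is the output distortion of the generator.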


