Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation

11/07/2022
by Zijie Lou, et al.

Visually realistic GAN-generated facial images raise serious concerns about potential misuse. Many effective forensic algorithms have been developed in recent years to detect such synthetic images, so it is important to assess the vulnerability of these forensic detectors to adversarial attacks. In this paper, we propose a new black-box attack against GAN-generated image detectors. A novel contrastive learning strategy is adopted to train an encoder-decoder based anti-forensic model under a contrastive loss function, with GAN images and their simulated real counterparts constructed as positive and negative samples, respectively. Leveraging the trained attack model, an imperceptible contrastive perturbation can be applied to an input synthetic image to partially remove the GAN fingerprint, so that existing GAN-generated image detectors are expected to be deceived. Extensive experimental results verify that the proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs, while the attacked images retain high visual quality. The source code will be available at https://github.com/ZXMMD/BAttGAND.
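The abstract describes the attack pipeline at a high level: an encoder-decoder network generates an imperceptible perturbation and is trained with a contrastive loss that uses GAN images and simulated real counterparts as the two sample types. The sketch below illustrates that general idea in PyTorch; it is not the authors' released code. The names PerturbationGenerator, contrastive_loss, feature_net, and simulate_real, the perturbation bound eps, and the InfoNCE-style loss formulation are all illustrative assumptions.

```python
# Minimal sketch of a contrastive-perturbation attack model (hypothetical names,
# not the paper's implementation). An encoder-decoder predicts a bounded residual
# perturbation for a GAN image; a contrastive loss pulls the attacked image's
# features toward a "simulated real" counterpart (positive) and away from the
# original GAN image (negative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Toy encoder-decoder that outputs a bounded, imperceptible perturbation."""
    def __init__(self, channels=3, width=32, eps=4.0 / 255.0):
        super().__init__()
        self.eps = eps  # assumed perturbation budget
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, width, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, width * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        delta = self.decoder(self.encoder(x)) * self.eps  # keep perturbation small
        return torch.clamp(x + delta, 0.0, 1.0)


def contrastive_loss(anchor, positive, negative, temperature=0.1):
    """InfoNCE-style loss: the attacked image (anchor) should resemble the
    simulated real counterpart (positive) more than the original GAN image."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(negative, dim=1)
    pos = torch.exp((a * p).sum(dim=1) / temperature)
    neg = torch.exp((a * n).sum(dim=1) / temperature)
    return -torch.log(pos / (pos + neg)).mean()


def train_step(generator, feature_net, gan_batch, simulate_real, optimizer):
    """One training step. `feature_net` stands in for a fixed feature extractor
    returning (B, D) embeddings; `simulate_real` stands in for the operation
    that builds the simulated real counterpart of each GAN image."""
    attacked = generator(gan_batch)
    with torch.no_grad():
        pos_feat = feature_net(simulate_real(gan_batch))
        neg_feat = feature_net(gan_batch)
    loss = contrastive_loss(feature_net(attacked), pos_feat, neg_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the perturbation generator is the only trainable component; the feature extractor is frozen so that the contrastive objective pushes attacked images toward the "real" region of its embedding space, which is the intuition the abstract gives for removing the GAN fingerprint.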


Related research

04/25/2021 - Making GAN-Generated Images Difficult To Spot: A New Attack Against Synthetic Image Detectors
Visually realistic GAN-generated images have recently emerged as an impo...

12/26/2020 - Sparse Adversarial Attack to Object Detection
Adversarial examples have gained tons of attention in recent years. Many...

09/03/2023 - Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection
Malicious use of deepfakes leads to serious public concerns and reduces ...

09/04/2023 - Memory augment is All You Need for image restoration
Image restoration is a low-level vision task, most CNN methods are desig...

08/01/2021 - An Effective and Robust Detector for Logo Detection
In recent years, intellectual property (IP), which represents literary, ...

09/25/2021 - Two Souls in an Adversarial Image: Towards Universal Adversarial Example Detection using Multi-view Inconsistency
In the evasion attacks against deep neural networks (DNN), the attacker ...

02/07/2022 - FrePGAN: Robust Deepfake Detection Using Frequency-level Perturbations
Various deepfake detectors have been proposed, but challenges still exis...
