Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack

03/22/2022
by Chi Liu, et al.

DeepFakes are raising significant social concerns. Although various DeepFake detectors have been developed as forensic countermeasures, these detectors are still vulnerable to attacks. Recently, a few attacks, principally adversarial attacks, have succeeded in cloaking DeepFake images to evade detection. However, these attacks typically have detector-specific designs that require prior knowledge about the detector, leading to poor transferability. Moreover, these attacks only consider simple security scenarios; less is known about how effective they are in high-level scenarios where either the detector or the attacker's knowledge varies. In this paper, we address these challenges by presenting a novel, detector-agnostic trace removal attack for DeepFake anti-forensics. Instead of investigating the detector side, our attack looks into the original DeepFake creation pipeline, attempting to remove all detectable natural DeepFake traces and thereby render the fake images more "authentic". To implement this attack, we first perform a DeepFake trace discovery, identifying three discernible traces. We then propose a trace removal network (TR-Net) based on an adversarial learning framework involving one generator and multiple discriminators. Each discriminator is responsible for one individual trace representation, avoiding cross-trace interference. The discriminators are arranged in parallel, which prompts the generator to remove the various traces simultaneously. To evaluate attack efficacy, we crafted heterogeneous security scenarios in which the detectors were embedded with different levels of defense and the attacker's background knowledge of the data varied. The experimental results show that the proposed attack can significantly compromise the detection accuracy of six state-of-the-art DeepFake detectors while causing only a negligible loss in visual quality to the original DeepFake samples.
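The abstract describes TR-Net only at a high level. As a rough illustration of the one-generator, multi-discriminator arrangement it mentions, the PyTorch sketch below trains a single generator against three parallel discriminators, each judging a different trace representation of the image. The specific representations chosen here (raw pixels, FFT log-magnitude, and a high-pass noise residual), the network shapes, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a one-generator / multi-discriminator adversarial setup,
# loosely following the TR-Net idea from the abstract. All representations
# and hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Image-to-image network that outputs a 'trace-removed' fake image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # images in [-1, 1]
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic applied to one trace representation."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Three assumed trace representations, one per discriminator.
def spatial_trace(img):   # raw pixel domain
    return img

def spectral_trace(img):  # log-magnitude of the per-channel 2D FFT
    return torch.log1p(torch.fft.fft2(img).abs())

def noise_trace(img):     # high-pass residual: image minus a local average
    return img - F.avg_pool2d(img, 3, stride=1, padding=1)

traces = [spatial_trace, spectral_trace, noise_trace]
G = Generator()
Ds = nn.ModuleList([Discriminator() for _ in traces])
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(Ds.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real, fake):
    # --- discriminators: real trace vs. trace of the cleaned fake ---
    cleaned = G(fake).detach()
    loss_d = 0.0
    for D, t in zip(Ds, traces):
        pr, pf = D(t(real)), D(t(cleaned))
        loss_d = loss_d + bce(pr, torch.ones_like(pr)) \
                        + bce(pf, torch.zeros_like(pf))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator: fool all discriminators at once, stay close to input ---
    cleaned = G(fake)
    loss_g = F.l1_loss(cleaned, fake)  # fidelity term to limit quality loss
    for D, t in zip(Ds, traces):
        pf = D(t(cleaned))
        loss_g = loss_g + bce(pf, torch.ones_like(pf))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Keeping the discriminators in parallel, each restricted to a single representation, mirrors the abstract's stated rationale: a per-trace critic avoids cross-trace interference, while summing all adversarial terms in the generator loss pushes it to suppress every trace simultaneously.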


