Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors

04/19/2022
by   Nyee Thoang Lim, et al.

Deepfakes utilise Artificial Intelligence (AI) techniques to create synthetic media where the likeness of one person is replaced with another. There are growing concerns that deepfakes can be maliciously used to create misleading and harmful digital content. As deepfakes become more common, there is a dire need for deepfake detection technology to help spot deepfake media. Present deepfake detection models are able to achieve outstanding accuracy (>90%). However, most of them are limited to the within-dataset scenario, where the same dataset is used for training and testing. Most models do not generalise well enough in the cross-dataset scenario, where models are tested on unseen datasets from another source. Furthermore, state-of-the-art deepfake detection models rely on neural network-based classification models that are known to be vulnerable to adversarial attacks. Motivated by the need for a robust deepfake detection model, this study adapts metamorphic testing (MT) principles to help identify potential factors that could influence the robustness of the examined model, while overcoming the test oracle problem in this domain. Metamorphic testing is specifically chosen as the testing technique because it suits testing learning-based systems with probabilistic outcomes from largely black-box components, over potentially large input domains. We performed our evaluations on MesoInception-4 and TwoStreamNet, two state-of-the-art deepfake detection models. This study identified makeup application as an adversarial attack that could fool deepfake detectors. Our experimental results demonstrate that both the MesoInception-4 and TwoStreamNet models degrade in performance by up to 30% when the input data is perturbed with makeup.
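To make the metamorphic-testing idea concrete, below is a minimal Python sketch of one such test, assuming a detector exposed as a `predict` function that maps an image to P(fake). The `apply_makeup` transform here is a deliberately crude colour-overlay stand-in for real makeup application, and the names `predict`, `apply_makeup`, and `metamorphic_test` are illustrative assumptions, not the paper's implementation. The metamorphic relation is: a label-preserving makeup perturbation should not materially change the detector's output.

```python
import numpy as np

def apply_makeup(image: np.ndarray, region: tuple, tint=(180, 60, 60), alpha=0.35) -> np.ndarray:
    """Blend a lipstick-like tint into a rectangular facial region.

    A crude stand-in for real makeup application; `region` is
    (top, bottom, left, right) pixel bounds of e.g. the lip area.
    """
    top, bottom, left, right = region
    out = image.astype(np.float32).copy()
    out[top:bottom, left:right] = (
        (1 - alpha) * out[top:bottom, left:right]
        + alpha * np.array(tint, dtype=np.float32)
    )
    return out.astype(np.uint8)

def metamorphic_test(predict, image: np.ndarray, region: tuple, tolerance=0.1):
    """Check the metamorphic relation: makeup should not shift the
    detector's P(fake) score by more than `tolerance`.

    `predict` maps an HxWx3 uint8 image to a float in [0, 1].
    Returns (violated, score_before, score_after).
    """
    before = predict(image)
    after = predict(apply_makeup(image, region))
    return abs(before - after) > tolerance, before, after

if __name__ == "__main__":
    # Toy stand-in detector: mean red-channel intensity as P(fake).
    predict = lambda img: float(img[..., 0].mean()) / 255.0
    face = np.full((256, 256, 3), 128, dtype=np.uint8)  # placeholder face image
    violated, b, a = metamorphic_test(predict, face, region=(180, 210, 90, 166))
    print(f"violation={violated}  before={b:.3f}  after={a:.3f}")
```

Because the relation only compares the model's outputs before and after a semantics-preserving transformation, no ground-truth label is needed for the perturbed input, which is how MT sidesteps the test oracle problem the abstract mentions.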

