Deepfakes with an adversarial twist.
This work uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors. We created adversarial perturbations using the Fast Gradient Sign Method and the Carlini and Wagner L2 norm attack in both blackbox and whitebox settings. Detectors achieved over 95% accuracy on unperturbed deepfakes, but less than 27% accuracy on perturbed deepfakes. To defend against these perturbations, we explore two improvements to deepfake detectors: (i) Lipschitz regularization, and (ii) Deep Image Prior (DIP). Lipschitz regularization constrains the gradient of the detector with respect to the input in order to increase robustness to input perturbations. The DIP defense removes perturbations using generative convolutional neural networks in an unsupervised manner. Regularization improved the detection of perturbed deepfakes on average, including a 10% accuracy boost in the blackbox case. The DIP defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector, while retaining 98% accuracy in other cases on a 100-image subsample.
This work enhances deepfakes with adversarial perturbations to fool common deepfake detectors. Deepfakes
replace a “source” individual in an image or video with a “target” individual’s likeness using deep learning [df_review]. Adversarial perturbations are modifications made to an image in order to fool a classifier. An adversary can choose these perturbations to be small so that the difference between the perturbed and original images is visually imperceptible. Figure 1 shows a deepfake generated from source and target images as well as its adversarially perturbed version. A deepfake detector correctly classifies the original as fake, but fails to detect the perturbed deepfake, which looks almost identical. In our results, detectors achieved over 95% accuracy on unperturbed deepfakes, but less than 27% accuracy on perturbed deepfakes.
Deepfakes have been used for many malicious applications. In 2019, an app called DeepNude was released which could take an image of a fully-clothed woman and generate an image with her clothes removed [deepnude]. Furthermore, Facebook found over 500 accounts spreading pro-President Trump, anti-Chinese government messages using deepfake profile pictures [cnn_profle]. These harmful uses of deepfakes violate individuals’ identity and can also propagate misinformation, especially on social media. Ahead of the 2020 U.S. election, Twitter stated plans to label deepfakes [twitter] while Facebook announced it would remove deepfakes altogether [facebook]. But adversarial perturbations can compromise the performance of deepfake detection methods used on these platforms.
To defend against these perturbations, we explore two improvements to deepfake detectors: (i) Lipschitz regularization, and (ii) Deep Image Prior. Lipschitz regularization, introduced in [regularization], constrains the gradient of the detector with respect to the input data. We use Deep Image Prior (DIP), originally an image restoration technique [dip], to remove perturbations by iteratively optimizing a generative convolutional neural network in an unsupervised manner. To our knowledge, this is the first application of DIP for removing adversarial perturbations. Overall, the contributions of this work aim to highlight the vulnerability of deepfake detectors to adversarial attacks, as well as present methods to improve robustness.
The scope of this work is deepfake images of celebrity faces. Our dataset consists of 10,000 images: 5,000 real and 5,000 fake. The 5,000 real images were randomly sampled from the CelebA dataset [celeba]. Fig. 2 includes examples of real and fake images from our dataset.
Most deepfake creation methods use a generative adversarial network (GAN) to replace the face of a “source” individual with that of a “target” individual [df_review]. The generator in these methods consists of an encoder-decoder based network. First, the methods train a common encoder but different decoders for each face. Then, the source image is passed through the common encoder and the target’s decoder to create a deepfake. A shortcoming of these methods is that training the encoder and decoder networks requires many images of both the source and target individuals. Creating a dataset of deepfakes using these methods is difficult: We would require numerous images for each individual and would also have to train separate decoders for each target.
Instead, we created the 5,000 fake images in our dataset using an existing implementation called the “Few-Shot Face Translation GAN” [fewshot_ftg]. This implementation takes inspiration from Few-Shot Unsupervised Image-to-Image Translation (FUNIT) [funit] and Spatially-Adaptive Denormalization (SPADE) [spade]. FUNIT transforms an image from a source domain to look like an image from a target domain. Moreover, it does so using only a single source image and a small set of target images (or even a single one). It achieves this by simultaneously learning to translate between images sampled from numerous source and target domains during training; this allows FUNIT to generalize to unseen source and target domains at test time [funit]. SPADE is a normalization layer that conditions normalization on an input image segmentation map in order to preserve semantic information [spade]. The Few-Shot Face Translation GAN adds SPADE units to the FUNIT generator, allowing us to create a deepfake using only a single source and target image.
Common deepfake detection methods use convolutional neural networks (CNNs) to classify images as “real” or “fake” [df_review]. Prior work has shown that the VGG [vgg] and ResNet [resnet] CNN architectures achieve high accuracy for detecting deepfakes from a variety of creation methods [df_review]. We tested the VGG16 and ResNet18 architectures on our dataset. The original architectures for both these models have thousand-dimensional output vectors. Instead, we replaced the last layer of these architectures to output two-dimensional softmax vectors corresponding to the real and fake classes. We chose a softmax vector over a sigmoid scalar to make the models compatible with the Carlini and Wagner L2 norm attack discussed in section III-B.
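As a rough illustration, the following PyTorch sketch shows how the final layers could be swapped for two-class outputs (a minimal sketch assuming torchvision's VGG16 and ResNet18 definitions; the function name is ours):

```python
import torch.nn as nn
from torchvision import models

def make_detector(arch="resnet18"):
    """Build a two-class (real vs. fake) detector from a standard backbone."""
    if arch == "resnet18":
        model = models.resnet18(pretrained=True)
        model.fc = nn.Linear(model.fc.in_features, 2)   # replace the 1000-way head
    else:  # "vgg16"
        model = models.vgg16(pretrained=True)
        model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)
    return model  # train with nn.CrossEntropyLoss (softmax over the two logits)
```

During training, the two logits pass through a softmax via the cross-entropy loss, which keeps the models compatible with logit-based attacks such as CW-L2.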
The models achieved test accuracies of 99.7% and 93.2% as well as Area Under the Receiver Operating Characteristic (AUROC) curve values of 99.9% and 97.9%, respectively. These results are based on a 75%-25% train-test split after 5 epochs of training with a batch size of 16. Table I contains additional performance metrics for these deepfake detectors.
Deep neural networks and many other pattern recognition models are vulnerable to adversarial examples – input data that has been perturbed to make the model misclassify the input [carlini2017adversarial]. The adversary can further craft these adversarial perturbations to have small magnitude so that the adversarial examples are difficult to distinguish from the original unperturbed input data. We tested the effect of adversarial perturbations on deepfake detectors using the following two attacks: the Fast Gradient Sign Method (FGSM) [fgsm] and the Carlini and Wagner L2 norm attack (CW-L2) [cw]. We chose FGSM to try a popular, efficient attack and CW-L2 to try a slow but stronger attack. This work only considers perturbations of fake images: An adversary’s goal is to manipulate deepfakes so they are classified as real, not vice versa.
Let x be the vector of pixel values of an input image to a model and y be the corresponding true target class value. Let J(θ, x, y) be the training loss function (e.g. categorical cross-entropy loss for a softmax classifier), where θ represents the parameters of the model. FGSM exploits the gradient of the loss with respect to the input, ∇_x J(θ, x, y), to generate the adversarial example x′:

x′ = x + ε · sign(∇_x J(θ, x, y))    (1)

Here, ε is a hyperparameter that controls the magnitude of the perturbation per pixel. By keeping ε small, we can limit the magnitude of the perturbations and thus minimize visual distortions in the adversarial examples. In practice, the pixel values of the adversarial examples are further clipped to a range of floating point values between 0 and 1. We used an ε value of 0.02 to generate our FGSM adversarial examples. This value was chosen after evaluating the attack effectiveness and visual distortions for several values in the range [0.01, 0.10].
To see why this attack is effective in causing a misclassification, we examine the linear approximation of the loss using its Taylor series expansion:

J(θ, x′, y) ≈ J(θ, x, y) + (x′ − x)ᵀ ∇_x J(θ, x, y)    (2)

Using the sign function of the gradient ensures that the dot product in the second term of (2) is positive. Thus, FGSM chooses the perturbation that causes the maximum increase in the value of the linearized loss function, subject to the per-pixel perturbation control parameter ε.
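To make the procedure concrete, here is a hedged PyTorch sketch of FGSM against a two-class detector (the loss, sign step, and [0, 1] clipping follow the description above; the function and variable names are ours):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.02):
    """Generate an FGSM adversarial example; pixel values are assumed to lie in [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)      # J(theta, x, y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()  # step in the sign of the input gradient
    return x_adv.clamp(0.0, 1.0).detach()        # clip back to the valid pixel range
```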
The CW-L2 attack simultaneously minimizes two objectives. Let x′ be a perturbed image. The first objective is to minimize the L2 norm of the perturbation:

‖x′ − x‖₂²    (3)
The second objective tries to make the perturbation cause a misclassification. Let Z(x′) represent the pre-softmax vector output (or logits) of a multi-class neural network classifier. The second objective is as follows:

f(x′) = max( Z(x′)_t − max_{i≠t} Z(x′)_i , −κ )    (4)

Here, i and t index into Z(x′), with t being the index of the true target class. By minimizing f, we try to maximize the difference between the logit of an incorrect class and the logit of the true class. Since the predicted class corresponds to the maximum logit, minimizing f effectively tries to cause a misclassification. κ is a parameter that defines a threshold by which the logit corresponding to the incorrect predicted class should exceed the logit of the true target class.
The attack also performs a change of variable from x′ to w, which keeps the pixel values of x′ in the valid range [0, 1] during optimization:

x′ = ½ (tanh(w) + 1)    (5)

The two objectives are then combined and minimized over w using iterative gradient descent:

minimize_w  ‖x′ − x‖₂² + c · f(x′)    (6)

Here, c is positive and controls the relative strength of the two objectives. In practice, c is chosen using a modified binary search which finds the smallest value of c in a provided range such that f(x′) is less than 0. This search, along with the iterative gradient descent optimization, makes the attack very slow. However, this attack breaks many previously proposed defenses against adversarial examples [carlini2017adversarial]. For further details about the attack, we refer the reader to the CW-L2 paper [cw] and the implementation we used [cw_implementation].
For all adversarial examples generated using this method, we used 5 binary search steps to choose c. We performed a maximum of 1,000 optimization iterations with a fixed learning rate. We used 200 for the value of κ. The value of κ was chosen by trying out several candidate values, and the search range for c was chosen by initially performing attacks over a wide range and then narrowing it down to the values of c most commonly chosen by the search steps. The value of κ and the range of c were evaluated objectively based on the decrease in accuracy of the classifier under attack and subjectively based on the amount of visible distortions in the perturbed images. We left all other parameters at the defaults recommended by the implementation [cw_implementation].
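For illustration, a simplified sketch of the inner CW-L2 optimization for a single value of c is shown below (the binary search over c, early stopping, and other refinements of the implementation we used are omitted; the Adam learning rate shown is only a placeholder):

```python
import torch

def cw_l2_attack(model, x, y_true, c=1.0, kappa=200.0, steps=1000, lr=0.01):
    """Minimize ||x' - x||_2^2 + c * f(x') over w, with x' = 0.5 * (tanh(w) + 1)."""
    # Initialize w so that x' starts at the original image (atanh of the rescaled pixels).
    w = torch.atanh(x.clamp(1e-6, 1 - 1e-6) * 2 - 1).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)              # change of variable, eq. (5)
        logits = model(x_adv)
        true_logit = logits.gather(1, y_true.view(-1, 1)).squeeze(1)
        other_logit = logits.masked_fill(
            torch.nn.functional.one_hot(y_true, logits.size(1)).bool(), float("-inf")
        ).max(dim=1).values
        f = torch.clamp(true_logit - other_logit, min=-kappa)   # eq. (4)
        loss = ((x_adv - x) ** 2).flatten(1).sum(1) + c * f     # eq. (6)
        optimizer.zero_grad()
        loss.sum().backward()
        optimizer.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```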
Table I: Performance of deepfake detectors on unperturbed test data.

| Model | Accuracy | AUROC | Precision (Fake) | Recall (Fake) | Precision (Real) | Recall (Real) |
|---|---|---|---|---|---|---|
| ResNet (Regularized, λ=5) | 95.0% | 99.5% | 91.5% | 99.3% | 99.2% | 90.8% |
| ResNet (Regularized, λ=50) | 94.1% | 98.6% | 96.6% | 91.4% | 91.8% | 96.8% |
| ResNet (Regularized, λ=500) | 87.5% | 92.3% | 85.3% | 90.6% | 89.9% | 84.4% |
| ResNet (Regularized, λ=5000) | 96.2% | 99.1% | 98.0% | 94.2% | 94.5% | 98.1% |
| ResNet (Average Regularized) | 93.2% | 97.4% | 92.9% | 93.9% | 93.9% | 92.5% |
| ResNet (Data Augmented) | 98.7% | 99.9% | 98.5% | 98.9% | 98.9% | 98.5% |

Note: Unperturbed results listed for a test dataset containing 1,250 images each of the real and fake classes.
Table II: Detector accuracy on fake images under adversarial attack.

| Model | Unperturbed | FGSM (Blackbox) | FGSM (Whitebox) | CW-L2 (Blackbox) | CW-L2 (Whitebox) |
|---|---|---|---|---|---|
| ResNet (Regularized, λ=5) | 99.3% | 42.2% | 26.5% | 14.5% | 0.0% |
| ResNet (Regularized, λ=50) | 91.4% | 17.8% | 12.7% | 6.2% | 0.0% |
| ResNet (Regularized, λ=500) | 90.6% | 53.2% | 9.0% | 19.8% | 0.0% |
| ResNet (Regularized, λ=5000) | 94.2% | 12.8% | 1.9% | 16.7% | 2.2% |
| ResNet (Average Regularized) | 93.9% | 31.5% | 12.5% | 14.3% | 0.5% |
| ResNet (Data Augmented) | 98.9% | 17.0% | 2.2% | 3.8% | 0.1% |

Note: Adversarial attacks conducted using only the fake images. The best regularized accuracy in each perturbed setting is 53.2% (λ=500) for blackbox FGSM, 26.5% (λ=5) for whitebox FGSM, 19.8% (λ=500) for blackbox CW-L2, and 2.2% (λ=5000) for whitebox CW-L2.
Adversarial attacks on machine learning models fall into two types depending on the amount of information available to the adversary about the model under attack:
Whitebox Attack: The adversary has complete access to the model under attack, including the model architecture and parameters. It may be unlikely for an adversary to have access to model parameters in many scenarios. However, machine learning solutions such as deepfake detectors often use existing, publicly known and accessible architectures for transfer learning purposes [df_review].
Blackbox Attack: The adversary has limited or almost no information about the model under attack. Previous research [szegedy2013intriguing, fgsm, tramer2017space, papernot2016transferability] has shown that adversarial examples created using whitebox attacks on one model also degrade the performance of other models trained for the same task. Furthermore, these attacks do not even have to be in the same family of classifiers. For example, adversarial examples created using a neural network also work on support vector machines and decision tree classifiers [papernot2016transferability]. This transferability of adversarial examples is what makes blackbox attacks possible. In this work, we perform blackbox attacks on the VGG model by creating whitebox examples for the ResNet model. Similarly, blackbox examples for the ResNet model are generated by creating whitebox examples for the VGG model.
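A hedged sketch of how such a transfer (blackbox) evaluation could be run, reusing the fgsm_attack helper sketched earlier and assuming a loader that yields only fake images:

```python
def blackbox_eval(source_model, target_model, loader, epsilon=0.02):
    """Craft whitebox examples on source_model, then measure target_model accuracy on them."""
    correct, total = 0, 0
    for x, y in loader:                                    # fake images and their labels
        x_adv = fgsm_attack(source_model, x, y, epsilon)   # whitebox attack on the source
        preds = target_model(x_adv).argmax(dim=1)          # transferred to the target model
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total
```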
Adversarial attacks significantly reduced the performance of both the VGG and ResNet deepfake detection models. We compare results on datasets of unperturbed and perturbed fake images created using the test set. The datasets exclude real images since they were not perturbed. Fig. 3 and Table II show the adversarial attack results.
For unperturbed fake images, VGG achieved an accuracy of 99.7% and ResNet achieved 95.2%. In the blackbox FGSM case, the accuracy decreased to 8.9% for VGG and 20.8% for ResNet. Blackbox CW-L2 reduced the accuracy of VGG to 26.6% and ResNet to 4.6%. In the whitebox FGSM case, the accuracy dropped to 0.0% for VGG and 7.5% for ResNet. Whitebox CW-L2 lowered the accuracy of both VGG and ResNet to 0.0%. As in section II-B, these results are based on a 75%-25% train-test split.
Whitebox attacks were more effective than blackbox attacks. This is expected because whitebox attacks have complete access to the model under attack, whereas blackbox attacks do not. Whitebox attacks reduced model accuracies on fake images to 0% in all cases except ResNet with FGSM. Still, blackbox attacks resulted in less than 27% accuracy on perturbed fake images. Furthermore, CW-L2 was more effective than FGSM in all cases except the blackbox attack on VGG. We suspect the CW-L2 perturbations overfit to the ResNet model in this case.
Lipschitz regularization, introduced in [regularization], constrains the gradient of the detector with respect to the input data. We achieve this by training the model using an augmented loss function involving the norms of the logit gradients:

J̃(θ, x, y) = J(θ, x, y) + λ Σ_{i=1}^{K} Σ_{j=1}^{d} ( ∂Z_i(x) / ∂x_j )²    (7)

Here, we use J̃ to represent the augmented loss function and J to represent the training loss function before augmentation. Z_i(x) represents the pre-softmax scalar output (or logit) corresponding to class i for a multi-class neural network classifier. K is the total number of target classes and d is the dimensionality of the input vector. As before, x is the input vector, y is the corresponding true target class value, and θ represents the model parameters. λ controls the strength of the regularization term in the augmented loss function.
Linearizing the (non-augmented) loss function provides some intuition into why this regularization can help:

J(θ, x′, y) ≈ J(θ, x, y) + Σ_{i=1}^{K} (∂J/∂Z_i) (∇_x Z_i(x))ᵀ (x′ − x)    (8)

As shown above, the linear approximation can be written in terms of the gradients of the detector logits with respect to the input. We then expect that minimizing the norms of these gradients will desensitize the loss to small perturbations, allowing the network to retain performance on inputs with adversarial perturbations. In the extreme case, if the norms of these gradients are zero, then the loss for the original unperturbed image equals the (linearized) loss for the adversarially perturbed image.
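A hedged PyTorch sketch of one training step with such a logit-gradient penalty follows (the double-backpropagation pattern is standard; the exact penalty form and λ schedule used in our experiments may differ):

```python
import torch
import torch.nn.functional as F

def regularized_step(model, optimizer, x, y, lam=5.0):
    """One training step: cross-entropy loss plus a squared logit-gradient penalty."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Sum of squared input-gradients of the logits (one backward pass per class).
    penalty = 0.0
    for i in range(logits.size(1)):
        grad_i, = torch.autograd.grad(logits[:, i].sum(), x, create_graph=True)
        penalty = penalty + (grad_i ** 2).sum() / x.size(0)
    loss = ce + lam * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```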
Lipschitz regularization improved the detection of adversarially perturbed deepfakes by ResNet models on average. We do not report regularization results for VGG given computational constraints and the slow nature of the CW-L2 attack (around 2 minutes per image). We trained models with the following values for the regularization strength λ: 5, 50, 500, and 5000. Table II shows the results for all λ values; Fig. 4 and our discussion below focus on the average results and on the λ values that achieved the best results. Regularization did not affect the accuracy on the unperturbed test data: We observed 93.2% accuracy for both unregularized and regularized models on average (Table I).
In the blackbox case, unregularized models obtained an accuracy of 20.8% for FGSM and 4.6% for CW-L2 on perturbed fake images. Regularized models improved detection of perturbed images to 31.5% for FGSM and 14.3% for CW-L2 on average. In the best case, regularized models achieved an accuracy of 53.2% for FGSM and 19.8% for CW-L2 on perturbed images.
Similarly, regularized models also performed better than unregularized models in the whitebox case. Unregularized models obtained an accuracy of 7.5% for FGSM and 0.0% for CW-L2 on perturbed fake images, as reported above. On average, regularized models improved detection of perturbed images to 12.5% for FGSM and 0.5% for CW-L2. In the best case, regularized models achieved an accuracy of 26.5% for FGSM and 2.2% for CW-L2 on perturbed images. Overall, although regularization improved robustness to adversarial perturbations, the performance remains insufficient for real-world applications.
Another approach to defending against adversarial attacks is to pre-process the input to remove perturbations before feeding it to the classifier. We do this using an unsupervised technique called Deep Image Prior (DIP), which was originally introduced in [dip] for image restoration purposes such as image denoising, inpainting, and super resolution.
This section summarizes the key ideas from the original DIP paper [dip]. Let x₀ be a corrupted image (e.g., a noisy image) and x be the ground truth uncorrupted image. Recovering x from x₀ can be formulated as the following optimization problem:

min_x  E(x; x₀) + R(x)    (9)

Here, E(x; x₀) represents a domain-dependent “distance” or dissimilarity between x₀ and x. R(x) is a regularization term that represents prior knowledge about ground truth images. The prior knowledge from regularization is critical since recovering x from x₀ is generally an ill-posed problem.
We can replace x in (9) with a surjective function x = g(θ) and optimize over θ instead:

min_θ  E(g(θ); x₀) + R(g(θ))    (10)

The DIP technique uses a generative CNN, f_θ(z), with parameters θ and a fixed random seed z in place of g(θ). Through experimentation, [dip] shows that the architecture of a convolutional neural network itself encodes a prior that favors natural images over corrupted ones. This allows us to obtain a good reconstruction even if we ignore the regularization term R, leading to the following optimization problem:

θ* = argmin_θ  E(f_θ(z); x₀),    x* = f_{θ*}(z)    (11)

In practice, if the network is optimized for too long, it learns to generate the corruptions. However, due to the prior that the CNN architecture encodes, the network learns to generate “natural” features before it learns to generate the corruptions. In other words, a good image reconstruction tends to exist somewhere along the optimization trajectory.
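A minimal sketch of this optimization loop (using MSE as the dissimilarity E and a generic generator network; the helper name and snapshot logic are ours):

```python
import torch

def dip_reconstruct(net, z, corrupted, iters=10000, lr=0.01, snapshot_every=1000):
    """Fit a randomly initialized generator net to a corrupted image; return snapshots."""
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    snapshots = {}
    for it in range(1, iters + 1):
        out = net(z)                                   # generated image f_theta(z)
        loss = torch.nn.functional.mse_loss(out, corrupted)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if it % snapshot_every == 0:
            snapshots[it] = out.detach().clone()       # keep intermediate reconstructions
    return snapshots
```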
[Tables III and IV: performance of the DIP defense (columns: Classifier Threshold, Attack, Unperturbed, Blackbox: Perturbed, Whitebox: Perturbed). Note: DIP results based on 100 images subsampled according to section V-C.]
We can use the image restoration framework described above to remove adversarial perturbations from adversarial examples. We simply replace the corrupted image x₀ with the adversarial example x′ in (11). We chose Mean Squared Error (MSE), calculated pixel-wise over the images, as our dissimilarity metric E. This metric was chosen since it was effective in [dip] for various applications including image denoising, super resolution, and JPEG compression artifact removal. Thus, we modify the DIP optimization in (11) to remove adversarial perturbations as follows:

θ* = argmin_θ  MSE(f_θ(z), x′),    x* = f_{θ*}(z)    (12)

We propose the following deepfake defense using (12). Given that an unperturbed image tends to occur somewhere along the DIP optimization trajectory, we feed the generated image at an intermediate iteration into an existing classifier. The classifier output for the generated image at that intermediate iteration is then used to make the final classification for the image. Throughout section V, the “classification of the DIP defense” refers to the final classification made using this process, and “classifier” refers to the CNN model used to obtain that classification.
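Using the dip_reconstruct sketch above, the defense could look roughly like the following (the intermediate iteration of 6,000 and the probability thresholds follow section V; treating class index 1 as “real” and the shape of the generator input are assumptions):

```python
import torch

def dip_defense(classifier, generator, x_input, eval_iter=6000, threshold=0.25):
    """Classify the DIP reconstruction of x_input at an intermediate iteration."""
    z = torch.randn_like(x_input)                      # fixed random seed image
    snaps = dip_reconstruct(generator, z, x_input, iters=eval_iter,
                            snapshot_every=eval_iter)  # only need iteration eval_iter
    x_rec = snaps[eval_iter]
    with torch.no_grad():
        p_real = torch.softmax(classifier(x_rec), dim=1)[:, 1]  # assumed: class 1 = real
    return (p_real > threshold).long()                 # 1 = real, 0 = fake
```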
We used only the ResNet model for DIP due to the computational constraints described in section V-C. We also trained the classifier for an additional 10 epochs on the training dataset. For these 10 epochs, the training dataset was augmented so that approximately 40% of it contained blurry images. This was done because the reconstructed DIP images without the perturbations tended to be slightly less sharp than the original images in the training and test sets. We created the blurry images by preprocessing training images using a Gaussian blur kernel with standard deviations selected uniformly from the range [3.0, 5.0]. Table I lists the performance metrics of the ResNet model trained on the augmented dataset. Table II reports the adversarial attack results for this model, which are similar to the results for the model without data augmentation.
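A hedged sketch of this augmentation using torchvision transforms (the 40% application probability and the [3.0, 5.0] sigma range follow the description above; the kernel size is an assumption):

```python
from torchvision import transforms

# Apply a Gaussian blur to roughly 40% of training images; sigma drawn from [3.0, 5.0].
blur_augmentation = transforms.Compose([
    transforms.RandomApply(
        [transforms.GaussianBlur(kernel_size=15, sigma=(3.0, 5.0))], p=0.4
    ),
    transforms.ToTensor(),
])
```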
Fig. 5 shows a perturbed (FGSM) fake image being reconstructed using the DIP framework. This image was chosen such that the ResNet model classifies it as real. We observe that as the number of DIP optimization iterations increases, the images gain more detail. But as the image sharpness increases, the generated images also tend to include adversarial perturbations: The generated image at iteration 9,000 is slightly noisier than the ones at iterations 3,000 and 6,000.
Fig. 6(a) shows the classifier output for the perturbed (FGSM) fake image in Fig. 5 along the DIP optimization path. We ignore the predictions for the first 500 iterations, where the generative CNN is still learning how to produce a natural-looking image. We observe that, after this, the classifier output remains flat and close to 0 (fake) until around iteration 5,000. Following this, the classifier output increases as the generated image begins to include the perturbations, until it flattens out at 1 (real). Fig. 6(b) shows a similar pattern for a perturbed (CW-L2) fake image. In contrast, for an unperturbed fake image (Fig. 6(c)), the graph flattens out at a fake prediction and never reaches a real prediction. For a real unperturbed image (Fig. 6(d)), the graph reaches a real prediction much earlier than in the perturbed fake cases. In each case, the classifier predicts the correct class at iteration 6,000.
We performed the DIP optimization for 10,000 iterations on a total of 100 images based on the test set. Iteration 6,000 was used to obtain the classification of the DIP defense after evaluating iterations in the range of 2,500 to 7,500. We used a U-Net architecture [unet] for the generative CNN, f_θ(z), in the DIP optimization described in (12). This architecture was used because it was shown to be effective in [dip] for both denoising and removing JPEG compression artifacts from images. For the exact architecture details, we refer to our code repository linked at the end of this paper. The optimization process is slow and took approximately 30 minutes for each image using an NVIDIA Tesla K80 GPU on Google Colab. For this reason, we chose only 100 images for the experiments.
We randomly sampled 10 images each from the following 2 categories for both perturbed FGSM and CW-L2 images in blackbox and whitebox settings (80 images total):
Perturbed Fake-Wrong: A perturbed fake image that the classifier predicts as real.
Perturbed Fake-Correct: A perturbed fake image that the classifier predicts as fake.
In addition, we sampled 10 images each from the following 2 categories for unperturbed images (20 images total):
Unperturbed Fake-Correct: An unperturbed fake image that the classifier predicts as fake.
Unperturbed Real-Correct: An unperturbed real image that the classifier predicts as real.
All images were sampled such that the ResNet model (trained on the augmented dataset) obtained a correct prediction on the unperturbed versions of the images.
We report DIP results using classification thresholds of 0.5 and 0.25. Table III reports the overall performance of the DIP defense across all 100 images while Table IV includes the accuracy of the DIP defense for each category of images. The tables also include a baseline performance from the classifier without the DIP defense.
The DIP defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector (Perturbed Fake-Wrong), while retaining 98% accuracy in the other cases (Real-Correct and Fake-Correct) with a 0.25 threshold. Overall, on the 100 image subsample, the defense obtained 97.0% accuracy and 99.2% AUROC for both classification thresholds. Varying the threshold reveals the tradeoff between incorrectly predicting real images as fake (false positives) and fake images as real (false negatives). For deepfake detection, false positives are generally less of a problem than false negatives.
Using a threshold of 0.5 yielded 100% recall for both unperturbed and perturbed fake images. But this threshold resulted in only 70% recall for real images. On the other hand, using a threshold of 0.25 improved the recall for real images to 90%, but also reduced recall for fake images to 97.8%. Specifically, the accuracy in the whitebox perturbed (FGSM) fake-wrong category decreased from 100% to 80%.
Our results demonstrate that adversarial perturbations can enhance deepfakes, making them significantly more difficult to detect. Lipschitz regularization made the CNNs more robust to adversarial perturbations in general. However, the performance boost from regularization alone may not be enough for practical use in deepfake detection. This was especially true in the whitebox CW-L2 setting, where even the regularized model only classified 2.2% of the perturbed fake images correctly. The DIP defense shows more promising results. It achieved a recall of 97.8% for perturbed and unperturbed fake images using a classification threshold of 0.25 (Table III). Furthermore, the DIP defense retained at least 90.0% of the classifier’s performance on real images using the same threshold value.
While the DIP defense showed success for deepfake detection on the 100 images tested, we emphasize that additional experiments would be required to demonstrate success against adversarial attacks in other domains. For example, deepfake classifiers only need to be robust to adversarial perturbations for one class of images (the fake class), while in other domains, robustness to adversarial attacks on more than one class may be important. Another limitation of the DIP defense is the time it takes to process a single image. As described in section V-C, each image took a little under 30 minutes to process on an NVIDIA Tesla K80 GPU. This may limit the practicality of the defense in situations where there are resource constraints or where many images need to be processed in real time. Future work involves finding more efficient methods for improving deepfake detector robustness to adversarial perturbations.
The authors sincerely thank Professor Bart Kosko for his feedback and guidance throughout this work. We also acknowledge the USC Center for AI in Society’s Student Branch for providing us with computing resources.
Code and additional architecture details are available at: https://github.com/ApGa/adversarial_deepfakes.