It is common for people to share personal photos on social networks. Recent developments in image manipulation techniques based on Generative Models (GMs) have raised serious concerns over the authenticity of these images. As such techniques are easily accessible [44, 21, 7, 31, 61, 8, 27], shared images are at a greater risk of misuse after manipulation. Fake image generation can be categorized into two types: entire image generation and partial image manipulation [46, 48]. While the former generates entirely new images by feeding a noise code to the GM, the latter partially manipulates a real image. Since the latter alters the semantics of real images, it is generally considered the greater risk, and thus partial image manipulation detection is the focus of this work.
Detecting such manipulation is an important step toward alleviating societal concerns over the authenticity of shared images. Prior works combat manipulated media by leveraging properties that are prone to being manipulated, including mouth movement, steganalysis features, and attention mechanisms [11, 23]. However, these methods often overfit to the image manipulation method and the dataset used in training, and suffer when tested on data with a different distribution.
| Method | Year | Detection | Purpose | Manipulation | Generalizable | Add | Recover | Template | # of templates | Img. ind. template |
|---|---|---|---|---|---|---|---|---|---|---|
| Cozzolino et al. | - | Passive | Img. man. det. | Entire/Partial | ✔ | ✗ | ✗ | - | - | - |
| Nataraj et al. | - | Passive | Img. man. det. | Entire/Partial | ✔ | ✗ | ✗ | - | - | - |
| Rossler et al. | - | Passive | Img. man. det. | Entire/Partial | ✗ | ✗ | ✗ | - | - | - |
| Zhang et al. | - | Passive | Img. man. det. | Partial | ✔ | ✗ | ✗ | - | - | - |
| Wang et al. | - | Passive | Img. man. det. | Entire/Partial | ✔ | ✗ | ✗ | - | - | - |
| Wu et al. | - | Passive | Img. man. det. | Entire/Partial | ✗ | ✗ | ✗ | - | - | - |
| Qian et al. | - | Passive | Img. man. det. | Entire/Partial | ✗ | ✗ | ✗ | - | - | - |
| Dang et al. | - | Passive | Img. man. det. | Partial | ✗ | ✗ | ✗ | - | - | - |
| Masi et al. | - | Passive | Img. man. det. | Partial | ✗ | ✗ | ✗ | - | - | - |
| Nirkin et al. | - | Passive | Img. man. det. | Partial | ✗ | ✗ | ✗ | - | - | - |
| Asnani et al. | - | Passive | Img. man. det. | Entire/Partial | ✔ | ✗ | ✗ | - | - | - |
| Segalis et al. | - | Proactive | Deepfake disruption | Partial | ✗ | ✔ | ✗ | Adversarial attack | - | ✔ |
| Ruiz et al. | - | Proactive | Deepfake disruption | Partial | ✗ | ✔ | ✗ | Adversarial attack | - | ✔ |
| Yeh et al. | - | Proactive | Deepfake disruption | Partial | ✗ | ✔ | ✗ | Adversarial attack | - | ✔ |
| Wang et al. | - | Proactive | Deepfake tagging | Partial | ✗ | ✔ | ✔ | Fixed template | - | ✗ |
| Ours | - | Proactive | Img. man. det. | Partial | ✔ | ✔ | ✔ | Unsupervised learning | - | ✔ |
All the aforementioned methods adopt a passive scheme, since the input image, whether real or manipulated, is accepted as is for detection. Alternatively, a proactive scheme has been proposed for a few computer vision tasks, which involves adding signals to the original image. For example, prior works add a predefined template to real images that either disrupts the output of the GM [40, 54, 41] or tags images to real identities. This template is either a one-hot encoding or an adversarial perturbation [40, 54, 41].
Motivated by improving the generalization of manipulation detection, as well as by the proactive schemes for other tasks, this paper proposes a proactive scheme for image manipulation detection, which works as follows. When an image is captured, our algorithm adds an imperceptible signal (termed a template) to it, serving as an encryption. If this encrypted image is shared and manipulated through a GM, our algorithm accurately distinguishes between the encrypted image and its manipulated version by recovering the added template. Ideally, this encryption process could be incorporated into camera hardware to protect all images right after capture. Our approach differs from related proactive works [40, 54, 41, 46] in its purpose (detection vs. other tasks), template learning (learnable vs. predefined), the number of templates, and its generalization ability.
Our key enabling technique is to learn a template set, which is a non-trivial task. First, there is no ground truth template for supervision. Second, recovering the template from manipulated images is challenging. Third, using one template can be risky as the attackers may reverse engineer the template. Lastly, image editing operations such as blurring or compression could be applied to encrypted images, diminishing the efficacy of the added template.
To overcome these challenges, we propose a template estimation framework to learn a set of orthogonal templates. We perform image manipulation detection based on recovering the template from encrypted real and manipulated images. Unlike prior works, we use unsupervised learning to estimate this template set under certain constraints. We define different loss functions that incorporate properties including small magnitude, high-frequency content, orthogonality, and classification ability as constraints for learning the template set. We show that our framework achieves superior manipulation detection performance compared to State-of-The-Art (SoTA) methods [46, 59, 10, 28]. We also propose a novel evaluation protocol with different GMs, where we train on images manipulated by one GM and test on unseen GMs. In summary, the contributions of this paper include:
- We propose a novel proactive scheme for image manipulation detection.
- We propose to learn a set of templates with desired properties, achieving higher performance than a single-template approach.
- Our method substantially outperforms prior works on image manipulation detection and generalizes better to different GMs, showing an improvement in average precision averaged across GMs.
2 Related Works
Passive deepfake detection. Most deepfake detection methods are passive. Wang et al. perform binary detection by exploring frequency-domain patterns in images. Zhang et al. propose to extract the median and high frequencies to detect the upsampling artifacts of GANs. Asnani et al. propose to estimate, via certain desired properties, the fingerprints of generative models that produce fake images. Others use autoencoders, hand-crafted features, face-context discrepancies, mouth and face motion, steganalysis features, XceptionNet, the frequency domain, and attention mechanisms. These passive deepfake detection methods suffer from poor generalization. We propose a novel proactive scheme for manipulation detection, aiming to improve generalization.
Proactive schemes. Recently, some proactive methods have been proposed that add an adversarial noise onto the real image. Ruiz et al. perform deepfake disruption by applying adversarial attacks to image translation networks. Yeh et al. disrupt deepfakes into low-quality images by performing adversarial attacks on real images. Segalis et al. disrupt face-swapping manipulations by adding small perturbations. Wang et al. propose a method to tag images by embedding messages and recovering them after manipulation, using a one-hot encoding message instead of adversarial perturbations. Compared with these works, our method focuses on image manipulation detection rather than deepfake disruption or tagging. Our method learns a set of templates and recovers the added template for image manipulation detection, and it generalizes better to unseen GMs than prior works. Tab. 1 summarizes the comparison with prior works.
Watermarking and cryptography methods.
Digital watermarking methods have evolved from classic image transformation techniques to deep learning techniques. Prior works have explored embedding watermarks through pixel values in the spatial domain. Others [18, 20, 52] use frequency domains, embedding watermarks in transformation coefficients obtained via SVD, the discrete wavelet transform (DWT), the discrete cosine transform (DCT), and the discrete Fourier transform (DFT). Recently, deep learning techniques proposed by Zhu et al., Baluja et al., and Tancik et al. use an encoder-decoder architecture to embed watermarks into an image. All of these methods aim to either hide sensitive information or protect the ownership of digital images. While our algorithm shares the high-level idea of image encryption, we develop a novel framework for an entirely different purpose, i.e., proactive image manipulation detection.
3 Proposed Approach
3.1 Problem Formulation
We only consider GMs that perform partial image manipulation, i.e., that take a real image as input. Let X_R be a set of real images which, when given as input to a GM G, produces a set of manipulated images G(X_R). Conventionally, passive image manipulation detection methods perform binary classification on X_R vs. G(X_R). Denoting the set of real and manipulated images as X = X_R ∪ G(X_R), the objective function for passive detection is formulated as follows:

min_θ Σ_{x ∈ X} L_CE(F_θ(x), y),     (1)

where y is the class label and F refers to the classification network with parameters θ.
In contrast, for our proactive detection scheme, we apply a transformation T to each real image in X_R to formulate a set of encrypted real images X_T = {T(x) | x ∈ X_R}. We perform image encryption by adding a learnable template to the image, which acts as a defender's signature. Further, the set of encrypted real images is given as input to the GM, which produces a set of manipulated images G(X_T). We propose to learn a set of templates rather than a single one to increase security, as it is difficult to reverse engineer all templates. Thus, for a real image x ∈ X_R, we define T via a set of n orthogonal templates S = {S_1, ..., S_n} as follows:

T(x) = x + S_i,  where S_i is randomly selected from S.     (2)

After applying the transformation T, the objective function defined in Eqn. 1 can be re-written as:

min_θ Σ_{x ∈ X_T ∪ G(X_T)} L_CE(F_θ(x), y).     (3)

The goal is to find T for which corresponding images in X_R and X_T have no significant visual difference and, more importantly, for which any modification of T(x) by a GM improves the performance of image manipulation detection.
3.2 Proposed Framework
As shown in Fig. 2, our framework consists of two stages: image encryption and recovery of template. The first stage is used for selection and addition of templates, while the second stage involves the recovery of templates from images in and . Both stages are trained in an end-to-end manner with GM parameters fixed. For inference, each stage is applied separately. The first stage is a mandatory step to encrypt the real images while the second stage would only be used when image manipulation detection is needed.
3.2.1 Image Encryption
We initialize a set of templates S as shown in Fig. 2, which is optimized during training under certain constraints. As formulated in Eqn. 2, we randomly select a template from our template set and add it to every real image. Our objective is to estimate an optimal template set from which any template is capable of protecting the real images.
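The selection-and-addition step can be sketched in a few lines of numpy. This is a toy illustration, not our implementation: the template resolution, set size, initialization scale, and clipping range are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy template set: n small-magnitude templates at the image resolution.
H, W, n = 16, 16, 3
templates = 0.01 * rng.standard_normal((n, H, W))

def encrypt(image, templates, rng):
    """Randomly select one template from the set and add it to the image."""
    idx = int(rng.integers(len(templates)))
    return np.clip(image + templates[idx], 0.0, 1.0), idx

image = rng.random((H, W))                 # stand-in for a real image in [0, 1]
encrypted, idx = encrypt(image, templates, rng)
```

Because the template magnitude is constrained, the encrypted image stays visually close to the original.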
Although we constrain the magnitude of the templates using the magnitude loss, the added template still degrades the quality of the real image. Therefore, when adding the template to real images, we control the strength of the added template using a hyperparameter m. We re-define T as follows:

T(x) = x + m · S_i.     (4)

We perform an ablation study varying m in Sec. 4.3, and choose the value of m that performs best.
3.2.2 Recovery of Templates
To perform image manipulation detection as shown in Fig. 2, we attempt to recover the added template from images in X_T and G(X_T) using an encoder E with parameters φ. For any real image x, we define the template recovered from the encrypted real image as S_R and the one recovered from the manipulated image as S_G. As the template selection from the template set is random, the encoder receives more training pairs to learn how to recover any template from an image, which contributes positively to the robustness of the recovery process. We visualize our trained template set S and the recovered templates in Fig. 3.
The main intuition of our framework design is that S_R should be much more similar to the added template than S_G is. Thus, to perform image manipulation detection, we calculate the cosine similarity between the recovered template and all learned templates in the set S, rather than merely using a classification objective. For every image, we select the maximum cosine similarity across all templates as the final score. Therefore, we update the logit scores in Eqn. 3 with cosine similarity scores as shown below:

score(x) = max_j cos(E_φ(x), S_j),     (5)

where E_φ(x) is the template recovered from image x.
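The max-cosine-similarity scoring rule can be illustrated with a toy stand-in for the encoder output; the `cos_sim` helper, the noise levels, and the random templates below are our assumptions for illustration only.

```python
import numpy as np

def cos_sim(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_similarity(recovered, templates):
    """Final score: maximum cosine similarity across the template set."""
    return max(cos_sim(recovered, t) for t in templates)

rng = np.random.default_rng(1)
templates = [rng.standard_normal((8, 8)) for _ in range(3)]

# From an encrypted real image, recovery should stay close to the added
# template; from a manipulated image, it should resemble none of them.
rec_real = templates[0] + 0.05 * rng.standard_normal((8, 8))
rec_fake = rng.standard_normal((8, 8))

score_real = max_similarity(rec_real, templates)
score_fake = max_similarity(rec_fake, templates)
```

Thresholding this score separates encrypted real images (high score) from manipulated ones (low score).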
3.2.3 Unsupervised Training of Template Set
Since there is no ground truth template for supervision, we define various constraints to guide the learning process. Let S_i be the template selected from set S to be added onto a real image. We formulate five loss functions as shown below.
Magnitude loss. The real image and the encrypted image should be as similar as possible visually, as the user does not want the image quality to deteriorate after template addition. Therefore, we propose the first constraint to regularize the magnitude of the template:

L_m = ||S_i||_2.     (6)
Recovery loss. We use an encoder network to recover the added template. Ideally, the encoder output, i.e., the template S_R recovered from the encrypted real image, should be the same as the originally added template S_i. Thus, we propose to maximize the cosine similarity between these two templates:

L_r = 1 − cos(S_i, S_R).     (7)
Content independent template loss. Our main aim is to learn a set of universal templates which can be used for detecting manipulated images from unseen GMs. These templates, despite being trained on one dataset, can be applied to images from a different domain. Therefore, we encourage the high-frequency information in the template to be data independent, and propose a constraint to minimize the low-frequency information:

L_c = ||L(F(S_i))||_2,     (8)

where L is the low-pass filter that keeps the region in the center of the 2D Fourier spectrum while assigning the high-frequency region to zero, and F is the Fourier transform.
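A minimal sketch of how such a low-frequency energy term can be computed with a centered 2D FFT; the box-mask radius and the use of the Frobenius norm are assumptions on our part, not our exact filter.

```python
import numpy as np

def low_freq_energy(template, radius=4):
    """Energy of the template's low-frequency Fourier coefficients."""
    spec = np.fft.fftshift(np.fft.fft2(template))      # move DC to the center
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    # Box low-pass: keep only the central (low-frequency) region.
    low = spec[cy - radius:cy + radius, cx - radius:cx + radius]
    return float(np.linalg.norm(low))  # the loss would push this toward zero

rng = np.random.default_rng(2)
t = rng.standard_normal((32, 32))
total = float(np.linalg.norm(np.fft.fft2(t)))  # energy of the full spectrum
low = low_freq_energy(t)
```

Minimizing this quantity drives the template's energy into the high-frequency band, which is less tied to image content.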
Separation loss. We want the template S_G recovered from manipulated images to be different from all the templates in set S. Thus, we optimize S_G to be orthogonal to all the templates in the set. Therefore, we take the template for which the cosine similarity with S_G is maximum, and minimize that cosine similarity:

L_s = max_j n(cos(S_G, S_j)),     (9)

where n is the normalizing function. Since this loss minimizes the normalized cosine similarity, we normalize the templates before the similarity calculation.
Pair-wise set distribution loss. A template set ensures that, even if the attacker somehow gains access to some of the templates, it remains difficult to reverse engineer the other templates. Therefore, we propose a constraint that minimizes the inter-template cosine similarity to promote the diversity of the templates in S:

L_p = Σ_{i ≠ j} n(cos(S_i, S_j)).     (10)
The overall loss function for template estimation is thus:

L = λ_1 L_m + λ_2 L_r + λ_3 L_c + λ_4 L_s + λ_5 L_p,     (11)

where λ_1, ..., λ_5 are the loss weights for each term.
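To make the five constraints concrete, here is a self-contained toy sketch of the overall objective on random tensors. The weight values, the stand-ins for the encoder outputs (`rec_real`, `rec_fake`), the exact form of each term, and the normalization n(a) = (a + 1)/2 are our assumptions rather than the exact formulation.

```python
import numpy as np

def cos(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def n(a):
    return (a + 1.0) / 2.0   # map cosine similarity from [-1, 1] into [0, 1]

def template_loss(S, S_added, rec_real, rec_fake, lam=(1.0,) * 5, radius=2):
    L_m = float(np.linalg.norm(S_added))               # magnitude
    L_r = 1.0 - cos(rec_real, S_added)                 # recovery
    spec = np.fft.fftshift(np.fft.fft2(S_added))       # content independence
    c = spec.shape[0] // 2
    L_c = float(np.linalg.norm(spec[c - radius:c + radius,
                                    c - radius:c + radius]))
    L_s = max(n(cos(rec_fake, t)) for t in S)          # separation
    L_p = max(n(cos(S[i], S[j]))                       # pair-wise set distribution
              for i in range(len(S)) for j in range(i + 1, len(S)))
    terms = (L_m, L_r, L_c, L_s, L_p)
    return sum(w * t for w, t in zip(lam, terms)), terms

rng = np.random.default_rng(3)
S = [rng.standard_normal((8, 8)) for _ in range(3)]
rec_real = S[0] + 0.1 * rng.standard_normal((8, 8))    # good recovery
rec_fake = rng.standard_normal((8, 8))                 # unrelated recovery
total, terms = template_loss(S, S[0], rec_real, rec_fake)
```

In training, this scalar would be backpropagated into both the template set and the encoder; here it only demonstrates how the five terms compose.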
4 Experiments

Experimental setup and dataset. We follow the experimental setting of Wang et al. and compare with four baselines. For training, Wang et al. use images whose manipulated versions are generated by ProGAN. However, as our method requires a GM that performs partial manipulation, we instead train with STGAN, since ProGAN synthesizes entire images. We use images from CelebA-HQ as the real images and pass them through STGAN to obtain manipulated images for training. For testing, we pass real images through unseen GMs such as StarGAN, GauGAN, and CycleGAN. The real images for the testing GMs are chosen from the respective datasets they are trained on, i.e., CelebA-HQ for StarGAN, Facades for CycleGAN, and COCO for GauGAN.
[Tab. 2: Average precision (%) on the test GMs for each method, training GM, and template set; rows include training on STGAN + AutoGAN.]
[Tab. 3: TDR (%) at low FAR (0.5%) on the test GMs for each method and training GM.]
To further evaluate the generalization ability of our approach, we use additional unseen GMs that have diverse network architectures and loss functions and are trained on different datasets. We manipulate real images with each of these GMs to obtain the corresponding manipulated images. The real images are chosen from the dataset that the respective GM is trained on. The list of GMs and their training datasets is provided in the supplementary.
[Tab. 4: Average precision (%) of each method on the additional test GMs.]
Implementation details. Our framework is trained end-to-end via the Adam optimizer. The loss weights λ_1, ..., λ_5 are set to ensure similar magnitudes of all loss terms at the beginning of training. If not specified, we use a fixed template set size and a fixed low-pass filter size in the content independent template loss. All experiments are conducted using one NVIDIA Tesla GPU.
4.2 Image Manipulation Detection Results
As shown in Tab. 2, when our training GM is STGAN, we outperform the baselines by a large margin on GauGAN-based test data, while the performance on StarGAN-based test data remains the same. When training on STGAN, our method achieves lower performance on CycleGAN. We hypothesize that this is because AutoGAN and CycleGAN share the same model architecture. To validate this, we change our training GM to AutoGAN and observe an improvement when testing on CycleGAN. However, the performance drops on the other two GMs because the amount of training data for AutoGAN is smaller than for STGAN. Increasing the number of templates can improve the performance when training on STGAN and testing on CycleGAN, but degrades it for the others; the degradation is larger when training on AutoGAN. This suggests that it is challenging to find a larger template set on a smaller training set. Finally, using both STGAN and AutoGAN training data achieves the best performance.
TDR at low FAR. We also evaluate using TDR at low FAR in Tab. 3. This is more indicative of performance in real-world applications, where the number of real images is exponentially larger than that of manipulated images. For comparison, we evaluate the baseline's pretrained model on our test set. Our method performs consistently better for all three GMs, demonstrating the superiority of our approach.
Generalization ability. To test generalization, we perform extensive evaluations across a large set of GMs. We compare the performance of our method with the baseline by evaluating its pretrained model on a test set of different GMs. As shown in Tab. 4, our framework performs well on almost all the GMs compared to the baseline. This further demonstrates the generalization ability of our framework in the real world, where an image can be manipulated by any unknown GM. Compared to the baseline, our framework achieves a notable improvement in average precision averaged across all GMs.
Since the deepfake disruption baseline aims to disrupt the GM's output, it only provides distortion results for the manipulated image. To enable binary classification, we take its adversarial real and disrupted fake images to train a classifier with a similar network architecture to our encoder. Tab. 5 shows that the baseline works perfectly when the testing GM is the same as the training GM; yet when the testing GM is unseen, its performance drops substantially. Our method performs much better, showing high generalizability.
Comparison with steganography works. Our method aligns with the high-level idea of digital steganography methods [5, 42, 52, 63, 4], which hide an image inside another image. We compare our approach to a recent deep learning-based steganography method, Baluja et al., using its publicly available code, hiding and retrieving the template with the provided pre-trained model. As shown in Tab. 6, our approach achieves far better average precision for each test GM. This validates the effectiveness of template learning and shows that digital steganography methods are less generalizable across unknown GMs than our approach.
Comparison with benign adversarial attacks. Adversarial attacks optimize a perturbation to change the predicted class of an image, so learning the template with our framework resembles a benign use of adversarial attacks. We conduct an ablation study comparing our method with common attacks such as benign PGD and FGSM: we remove the losses in Eqs. 6, 8, and 10 responsible for learning the template and replace them with an adversarial noise constraint. As shown in Tab. 6, our approach achieves better average precision for each test GM than both adversarial attacks. We observe that adversarial noise performs similarly to passive schemes, offering poor generalization to unknown GMs. This shows the importance of our proposed constraints for learning a universal template set.
[Table: Average precision (%) on the test GMs for each method and scheme type.]
Data augmentation. We apply various data augmentation schemes to evaluate the robustness of our method. We adopt some of the image editing techniques from Wang et al., including (1) Gaussian blurring, (2) JPEG compression, and (3, 4) combined blurring and JPEG compression applied with two different probabilities. In addition, we add resizing, cropping, and Gaussian noise. The implementation details of these techniques are in the supplementary. These techniques are applied after the addition of our template to the real images.
We evaluate three scenarios in which augmentation is applied: in training, in testing, or in both training and testing. As shown in Tab. 7, for the augmentation techniques adopted from prior work, we outperform the baseline on almost all techniques. We observe a significant improvement when blurring and JPEG compression are applied jointly, and a smaller improvement when they are applied separately.
As for when data augmentation is applied, the scenario with augmentation only at test time performs the worst, because the augmentation applied in testing has not been seen during training. Applying augmentation in both training and testing performs better than the other scenarios in most cases. There is a much larger performance drop when blurring and JPEG compression are applied together than separately, and cropping performs the worst among the editing operations.
4.3 Ablation Studies
Template set size. We study the effect of the template set size. As shown in Fig. 4, the average precision increases as the set size expands and then saturates. In the meantime, the average cosine similarity between templates within the set increases consistently, as it gets harder to find many mutually orthogonal templates. We also test our framework's run-time for different set sizes on a Tesla GPU, and find that the per-image run-time of our manipulation detection grows with the set size. Thus, although increasing the set size enhances accuracy and security, there is a trade-off with the detection speed, which is an important factor too. For comparison, we also test the pretrained model of the passive baseline, whose per-image run-time is higher; our framework is much faster even with a larger set size, owing to the shallow network in our proactive scheme compared to the deeper network in the passive scheme.
Template strength. We use a hyperparameter m to control the strength of the added template. We ablate m and show the results in Fig. 5. Intuitively, the lower the strength of the added template, the lower the detection performance, since it becomes harder for the encoder to recover the original template. Our results support this intuition: for all three GMs, the precision increases as we enlarge the template strength, and converges beyond a certain strength. We also show the PSNR between the encrypted real image and the original real image, which, as expected, decreases as we enlarge the strength. We choose a strength that trades off detection precision against visual quality.
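The PSNR side of this trade-off can be reproduced in a few lines; the image, the template, and the tested strengths below are toy values chosen for illustration.

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio between two images in [0, peak]."""
    mse = float(np.mean((a - b) ** 2))
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(4)
image = rng.random((32, 32))
template = 0.01 * rng.standard_normal((32, 32))

# Larger strength m -> larger residual -> lower PSNR of the encrypted image.
strengths = (0.5, 1.0, 2.0)
psnrs = [psnr(image, np.clip(image + m * template, 0.0, 1.0)) for m in strengths]
```

The monotone drop in PSNR as m grows mirrors the quality cost that the ablation weighs against detection precision.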
[Tab. 8: Average precision (%) on the test GMs when removing each loss, including the pair-wise set distribution loss, the content independent template loss, the three template losses together (fixed template), and the recovery and separation losses (removing the encoder).]
Loss functions. Our training process is guided by an objective function with five losses (Eqn. 11). To demonstrate the necessity of each loss, we ablate by removing each loss and compare with our full model. As shown in Tab. 8, removing any one of the losses results in performance degradation. Specifically, removing the pair-wise set distribution loss, recovery loss or separation loss causes a larger drop.
To better understand the importance of the data-driven template set, we fix the template set during training, i.e., removing the three losses directly operating on the template and only considering recovery and separation losses for training. We observe a significant performance drop, which shows that the learnable template is indeed crucial for effective image manipulation detection.
Finally, we remove the encoder from our framework and use a classification network with a similar number of layers. Instead of recovering templates, the classification network is directly trained to perform binary image manipulation detection via a cross-entropy loss. The performance drops significantly. This observation aligns with previous works [47, 10, 59] stating that CNNs trained on images from one GM generalize poorly to unseen GMs. The performance drops for all three GMs, but CycleGAN and GauGAN are affected the most, as their datasets are different. In our proposed approach, when recovering the template, the encoder ignores the low-frequency information of the images, which is data dependent. Thus, being more independent of the data (i.e., image content), our encoder achieves higher generalizability.
Template selection. Given a real image, we randomly select a template from the learned template set to add to the image. Thus, every image has an equal chance of receiving any template from the set, resulting in many possible combinations over the entire test set. This raises the question of finding the worst and best combination of templates for all images in the test set. To answer this, we experiment with a larger template set, as a large size may offer higher variation in performance. For each image in X_T and G(X_T), we calculate the cosine similarity between the added template and the recovered template. For the worst/best case of every image, we select the template with the minimum/maximum difference between the real and manipulated image cosine similarities. As shown in Tab. 9, GauGAN gives much more variation in performance compared to CycleGAN and StarGAN, suggesting that template selection could matter for image manipulation detection. This raises the idea of training a network to select the best template for a specific image, using the best case described above as a pseudo ground truth to supervise the network. However, in our experiments the performance difference among templates is nearly zero, and the network's selection does not improve performance compared with selecting the template randomly, as shown in Tab. 9. Therefore, we cannot obtain a useful pseudo ground truth to train another network for template selection.
Another option for template selection is to select the same template for every test image, which is equivalent to using a single template and compromises the security of our method. Nevertheless, we test this option to see the performance variation of biasing one template for all images. The performance variation is larger than with our random selection scheme. This shows that each template contributes similarly to image manipulation detection.
[Tab. 9: Average precision (%) on the test GMs for different template selection schemes, including biasing one template.]
5 Conclusion

In this paper, we propose a proactive scheme for image manipulation detection. The main objective is to estimate a set of templates which, when added to real images, improves the performance of image manipulation detection. This template set is estimated under certain constraints, and any of its templates can be added to an image right after it is captured by any camera. Our framework achieves better image manipulation detection performance on different unseen GMs compared to prior works. We also show results on a diverse set of additional GMs to demonstrate the generalizability of our proposed method.
Limitations. First, although our work aims to protect real images in a proactive manner and can detect whether an image has been manipulated, it cannot perform general deepfake detection on entirely synthesized images. Second, we have tried our best to collect a diverse set of GMs to validate the generalization of our approach; however, many other GMs do not have open-sourced code and cannot be evaluated in our framework. Lastly, how to supervise the training of a network for template selection remains an open question.
Potential societal impact. We propose a proactive scheme which uses encrypted real images and their manipulated versions to perform manipulation detection. While this offers more generalizable detection, the encrypted real images might be used for training GMs in the future, which could make manipulated images more robust against our framework and thus warrants more research.
-  Darius Afchar, Vincent Nozick, Junichi Yamagishi, and Isao Echizen. MesoNet: a compact facial video forgery detection network. In WIFS, 2018.
-  Eirikur Agustsson and Radu Timofte. NTIRE 2017 challenge on single image super-resolution: Dataset and study. In CVPRW, 2017.
-  Vishal Asnani, Xi Yin, Tal Hassner, and Xiaoming Liu. Reverse engineering of generative models: Inferring model hyperparameters from generated images. arXiv preprint arXiv:2106.07873, 2021.
-  Shumeet Baluja. Hiding images in plain sight: Deep steganography. In NeurIPS, 2017.
-  Abdullah Bamatraf, Rosziati Ibrahim, and Mohd Najib B Mohd Salleh. Digital watermarking algorithm using LSB. In ICCAIE, 2010.
-  Holger Caesar, Jasper Uijlings, and Vittorio Ferrari. Coco-stuff: Thing and stuff classes in context. In CVPR, 2018.
-  Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018.
-  Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. StarGAN v2: Diverse image synthesis for multiple domains. In CVPR, 2020.
-  François Chollet. Xception: Deep learning with depthwise separable convolutions. In CVPR, 2017.
-  Davide Cozzolino, Justus Thies, Andreas Rössler, Christian Riess, Matthias Nießner, and Luisa Verdoliva. Forensictransfer: Weakly-supervised domain adaptation for forgery detection. arXiv preprint arXiv:1812.02510, 2018.
-  Hao Dang, Feng Liu, Joel Stehouwer, Xiaoming Liu, and Anil K Jain. On the detection of digital face manipulation. In CVPR, 2020.
-  Debayan Deb, Xiaoming Liu, and Anil Jain. Unified detection of digital and physical face attacks. arXiv preprint arXiv:2104.02156, 2021.
-  Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014.
-  Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
-  Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In ECCV, 2018.
-  Zhi-Jing Huang, Shan Cheng, Li-Hua Gong, and Nan-Run Zhou. Nonlinear optical multi-image encryption scheme with two-dimensional linear canonical transform. Optics and Lasers in Engineering, 124:105821, 2020.
-  Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
-  Mei Jiansheng, Li Sukang, and Tan Xiaomei. A digital watermarking algorithm based on DCT and DWT. In WISA, 2009.
-  Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In ICLR, 2018.
-  Mohammad Ibrahim Khan, Md Maklachur Rahman, and Md Iqbal Hasan Sarker. Digital watermarking for image authentication based on combined DCT, DWT and SVD transformation. International Journal of Computer Science Issues, 10:223, 2013.
-  Ming Liu, Yukang Ding, Min Xia, Xiao Liu, Errui Ding, Wangmeng Zuo, and Shilei Wen. STGAN: A unified selective transfer network for arbitrary image attribute editing. In CVPR, 2019.
-  Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In NeurIPS, 2017.
-  Xiaohong Liu, Yaojie Liu, Jun Chen, and Xiaoming Liu. PSCC-Net: Progressive spatio-channel correlation network for image manipulation detection and localization. arXiv preprint arXiv:2103.10596, 2021.
-  Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
-  Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
-  Iacopo Masi, Aditya Killekar, Royston Marian Mascarenhas, Shenoy Pratik Gurudatt, and Wael AbdAlmageed. Two-branch recurrent network for isolating deepfakes in videos. In ECCV, 2020.
-  Safa C. Medin, Bernhard Egger, Anoop Cherian, Ye Wang, Joshua B. Tenenbaum, Xiaoming Liu, and Tim K. Marks. MOST-GAN: 3D morphable StyleGAN for disentangled face image manipulation. In AAAI, 2022.
-  Lakshmanan Nataraj, Tajuddin Manhar Mohammed, BS Manjunath, Shivkumar Chandrasekaran, Arjuna Flenner, Jawadul H Bappy, and Amit K Roy-Chowdhury. Detecting GAN generated fake images using co-occurrence matrices. Electronic Imaging, 2019:532–1, 2019.
-  Yuval Nirkin, Lior Wolf, Yosi Keller, and Tal Hassner. Deepfake detection based on discrepancies between faces and their context. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP:1–1, 2021.
-  Ori Nizan and Ayellet Tal. Breaking the cycle - colleagues are all you need. In CVPR, 2020.
-  Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. GauGAN: semantic image synthesis with spatially adaptive normalization. In ACM, 2019.
-  Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
-  Stanislav Pidhorskyi, Donald A Adjeroh, and Gianfranco Doretto. Adversarial latent autoencoders. In CVPR, 2020.
-  Albert Pumarola, Antonio Agudo, Aleix M Martinez, Alberto Sanfeliu, and Francesc Moreno-Noguer. GANimation: One-shot anatomically consistent facial animation. International Journal of Computer Vision, 128:698–713, 2020.
-  Yuyang Qian, Guojun Yin, Lu Sheng, Zixuan Chen, and Jing Shao. Thinking in frequency: Face forgery detection by mining frequency-aware clues. In ECCV, 2020.
-  Zhang Qiu-yu, Jitian Han, and Yutong Ye. Multi-image encryption algorithm based on image hash, bit-plane decomposition and dynamic DNA coding. IET Image Processing, 15:885–896, 2020.
-  Weize Quan, Kai Wang, Dong-Ming Yan, and Xiaopeng Zhang. Distinguishing between natural and computer-generated images using convolutional neural networks. IEEE Transactions on Information Forensics and Security, 13:2772–2787, 2018.
-  Stephan R. Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In ECCV, 2016.
-  Andreas Rossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. Faceforensics++: Learning to detect manipulated facial images. In CVPR, 2019.
-  Nataniel Ruiz, Sarah Adel Bargal, and Stan Sclaroff. Disrupting deepfakes: Adversarial attacks against conditional image translation networks and facial manipulation systems. In ECCV, 2020.
-  Eran Segalis and Eran Galili. OGAN: Disrupting deepfakes with an adversarial attack that survives training. arXiv preprint arXiv:2006.12247, 2020.
-  Amit Kumar Singh, Nomit Sharma, Mayank Dave, and Anand Mohan. A novel technique for digital image watermarking in spatial domain. In PDGC, 2012.
-  Matthew Tancik, Ben Mildenhall, and Ren Ng. StegaStamp: Invisible hyperlinks in physical photographs. In CVPR, 2020.
-  Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning GAN for pose-invariant face recognition. In CVPR, 2017.
-  Radim Tyleček and Radim Šára. Spatial pattern templates for recognition of objects with regular structure. In GCPR, 2013.
-  Run Wang, Felix Juefei-Xu, Meng Luo, Yang Liu, and Lina Wang. FakeTagger: Robust safeguards against deepfake dissemination via provenance tracking. In ACM MM, 2021.
-  Run Wang, Felix Juefei-Xu, Lei Ma, Xiaofei Xie, Yihao Huang, Jian Wang, and Yang Liu. FakeSpotter: A simple yet robust baseline for spotting ai-synthesized fake faces. In IJCAI, 2020.
-  Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A Efros. CNN-generated images are surprisingly easy to spot… for now. In CVPR, 2020.
-  Xiaogang Wang and Xiaoou Tang. Face photo-sketch synthesis and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31:1955–1967, 2008.
-  Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In CVPR, 2021.
-  Xi Wu, Zhen Xie, YuTao Gao, and Yu Xiao. SSTNET: Detecting manipulated faces through spatial, steganalysis and temporal features. In ICASSP, 2020.
-  Erkan Yavuz and Ziya Telatar. Improved SVD-DWT based digital image watermarking against watermark ambiguity. In SAC, 2007.
-  Huo-Sheng Ye, Nan-Run Zhou, and Li-Hua Gong. Multi-image compression-encryption scheme based on quaternion discrete fractional hartley transform and improved pixel adaptive diffusion. Signal Processing, 175:107652, 2020.
-  Chin-Yuan Yeh, Hsi-Wen Chen, Shang-Lun Tsai, and Sheng-De Wang. Disrupting image-translation-based deepfake algorithms with adversarial attacks. In WACVW, 2020.
-  Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. DualGAN: Unsupervised dual learning for image-to-image translation. In CVPR, 2017.
-  A. Yu and K. Grauman. Fine-grained visual comparisons with local learning. In CVPR, 2014.
-  A. Yu and K. Grauman. Semantic jitter: Dense supervision for visual comparisons via synthetic images. In ICCV, 2017.
-  Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
-  Xu Zhang, Svebor Karaman, and Shih-Fu Chang. Detecting and simulating artifacts in GAN fake images. In WIFS, 2019.
-  Jiren Zhu, Russell Kaplan, Justin Johnson, and Li Fei-Fei. HiDDeN: Hiding data with deep networks. In ECCV, 2018.
-  Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
-  Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In NeurIPS, 2017.
-  Peihao Zhu, Rameen Abdal, Yipeng Qin, and Peter Wonka. SEAN: Image synthesis with semantic region-adaptive normalization. In CVPR, 2020.
Proactive Image Manipulation Detection
– Supplementary material –
1 Cross Encoder-Template Set Evaluation
Our framework encrypts a real image using a template from the template set. This encryption aids image manipulation detection when the image is manipulated by an unseen GM. The framework is divided into two stages, namely image encryption and template recovery, where each stage operates independently during inference. We therefore provide an ablation studying the performance with different encoder and template-set pairings, i.e., we evaluate the recovery ability of an encoder using a template set trained with a different initialization seed. The results are shown in Tab. 1. We observe that even when the template set and the encoder are initialized with different seeds, the performance of our framework does not vary much. This demonstrates the stability of our framework even when the two stages are trained from different initialization seeds.
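The two-stage pipeline can be sketched numerically. The function names, the toy image, and the idealized "perfect" recovery below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encrypt(image, template, m=0.1):
    """Stage 1 (sketch): embed a template into a real image.
    `m` controls the template strength; names here are illustrative."""
    return np.clip(image + m * template, 0.0, 1.0)

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy 8x8 "image" and a unit-norm template standing in for a learned template.
image = rng.uniform(0.2, 0.8, (8, 8))
template = rng.standard_normal((8, 8))
template /= np.linalg.norm(template)

encrypted = encrypt(image, template, m=0.1)
# Stage 2 (sketch): an ideal encoder would recover the added residual exactly,
# so the recovered template matches the added one for an unmanipulated image.
recovered = encrypted - image
print(cosine_similarity(recovered, template))  # close to 1
```

A manipulated image would destroy this residual, lowering the cosine similarity, which is the signal used for detection.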
2 Template Strength
We provide an ablation for the hyperparameter m, used to control the strength of the added template in Sec. . We observe that performance improves as the template strength increases. However, this comes at a trade-off with PSNR, which declines as the template strength increases. This is also illustrated in Fig. 1, which shows images with different strengths of the added template. The images become noisier as the template strength increases. This is undesirable, as our added template should not introduce much distortion into the encrypted real image. Therefore, for our experiments, we select as the strength of the added template.
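The strength/PSNR trade-off can be illustrated with a toy computation; the image, template, and m values below are placeholders, not the paper's settings:

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio between two images in [0, peak]."""
    mse = np.mean((x - y) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
image = rng.uniform(0, 1, (64, 64))
template = rng.standard_normal((64, 64))
template /= np.linalg.norm(template)  # unit-norm template

# Larger template strength m means a larger residual, hence lower PSNR.
vals = []
for m in (0.05, 0.1, 0.2):
    encrypted = np.clip(image + m * template, 0, 1)
    vals.append(psnr(image, encrypted))
    print(f"m={m}: PSNR={vals[-1]:.2f} dB")
```

Since the added residual scales linearly with m, the MSE grows quadratically and the PSNR drops by about 6 dB per doubling of m, matching the trend described above.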
3 Implementation Details
Image editing techniques. We use various image editing techniques in Sec. . All techniques are applied after adding our template. We provide the implementation details for each technique below:
Blur: We apply Gaussian blur to the image with 50% probability, with the blur parameter sampled from .
JPEG: We JPEG-compress the image with 50% probability using the Python Imaging Library (PIL), with quality sampled from .
Blur JPEG (p): The image is possibly blurred and JPEG-compressed, each with probability p.
Resizing: We perform training using a portion of the CelebA-HQ images at one resolution and the rest at a different resolution.
Crop: With 50% probability, we randomly crop each side of the image by a number of pixels sampled from . The images are then resized to a fixed resolution.
Gaussian noise: We add zero-mean, unit-variance Gaussian noise to the images with 50% probability.
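A minimal sketch of such an editing pipeline, with placeholder sampling ranges and noise scale since the exact values are not reproduced above:

```python
import io
import random

import numpy as np
from PIL import Image, ImageFilter

def random_edit(img: Image.Image, p: float = 0.5, seed: int = 0) -> Image.Image:
    """Apply blur, JPEG compression, and additive noise, each with probability p.
    Sampling ranges below are illustrative placeholders."""
    rng = random.Random(seed)
    if rng.random() < p:  # Gaussian blur with a random radius
        img = img.filter(ImageFilter.GaussianBlur(radius=rng.uniform(0, 3)))
    if rng.random() < p:  # JPEG compression round-trip via PIL
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=rng.randint(30, 95))
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    if rng.random() < p:  # additive Gaussian noise in pixel space
        arr = np.asarray(img, dtype=np.float32)
        noise = np.random.default_rng(seed).normal(0.0, 1.0, arr.shape)
        arr = np.clip(arr + noise * 255 * 0.05, 0, 255)
        img = Image.fromarray(arr.astype(np.uint8))
    return img

out = random_edit(Image.new("RGB", (32, 32), (128, 128, 128)))
print(out.size)
```

Cropping and resizing are omitted for brevity; they would follow the same coin-flip pattern using `img.crop` and `img.resize`.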
[Tab. 1: average precision (%) on the test GM under different initialization seeds of the encoder and template set.]
|GM||Dataset|
|STGAN||CelebA-HQ|
|StarGAN||CelebA-HQ|
|CycleGAN||Facades|
|GauGAN||COCO|
|UNIT||GTACity|
|MUNIT||EdgesShoes [56, 57]|
|StarGAN2||CelebA-HQ|
|BicycleGAN||Facades|
|CONT_Encoder||Paris Street-View|
|SEAN||CelebA-HQ|
|ALAE||CelebA-HQ|
|Pix2Pix||Facades|
|DualGAN||Sketch-Photo|
|CouncilGAN||CelebA|
|ESRGAN||CelebA|
|GANimation||CelebA|
Network architecture. Fig. 2 shows the network architectures used in the different experiments evaluating our framework. For our framework, the encoder consists of stem convolution layers.
In the ablation experiments for Table , we use a classification network with a similar number of layers as our encoder. This is done to show the importance of recovering templates with the encoder. The classification network has convolution blocks followed by three fully connected layers, with ReLU activations in between. The network outputs logits used for image manipulation detection.
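A PyTorch sketch of such a baseline classifier; the channel widths, number of blocks, and logit dimension are assumptions, since the exact values are not given above:

```python
import torch
import torch.nn as nn

class BaselineClassifier(nn.Module):
    """Conv blocks followed by three fully connected layers with ReLU
    in between, as described above. Widths are illustrative only."""
    def __init__(self, in_ch: int = 3, num_logits: int = 2):
        super().__init__()
        chans = [in_ch, 32, 64, 128]
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.ReLU(inplace=True)]
        self.features = nn.Sequential(*blocks, nn.AdaptiveAvgPool2d(1))
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 32), nn.ReLU(inplace=True),
            nn.Linear(32, num_logits))  # logits for manipulation detection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

logits = BaselineClassifier()(torch.randn(2, 3, 128, 128))
print(logits.shape)
```

The adaptive pooling keeps the fully connected head independent of input resolution, which is convenient when comparing against the encoder at different image sizes.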
4 List of GMs
We use a variety of GMs to test the generalization ability of our framework. These GMs have varied network architectures, and many are trained on different datasets. We summarize all GMs in Tab. 2. We also visualize different real image samples used in evaluating the performance of all these GMs in Figs. 3-18. We show the added and recovered templates in the "gist_rainbow" colormap for better visualization, and indicate the cosine similarity between the recovered and added templates. As shown in Fig. 3 for training with STGAN, encrypted real images have higher cosine similarity than their manipulated counterparts. However, during testing on unseen GMs, the gap between the two cosine similarities decreases, as shown in Figs. 4-18.
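Given per-image cosine similarities, detection performance can be scored as average precision. The similarity values below are toy numbers, not results from the paper:

```python
import numpy as np

def average_precision(labels, scores):
    """AP = mean of the precision values at each positive, ranked by score."""
    order = np.argsort(-scores)          # sort by descending score
    ranked = labels[order]
    tp = np.cumsum(ranked)               # true positives at each rank
    precision = tp / np.arange(1, len(ranked) + 1)
    return float((precision * ranked).sum() / ranked.sum())

# Toy cosine similarities: encrypted real images recover the template well,
# manipulated images do not (illustrative values only).
sim_real = np.array([0.95, 0.92, 0.97, 0.90])
sim_fake = np.array([0.40, 0.55, 0.35, 0.60])
labels = np.r_[np.ones(4), np.zeros(4)]  # 1 = real (encrypted), 0 = manipulated
scores = np.r_[sim_real, sim_fake]
print(average_precision(labels, scores))  # 1.0 for perfectly separated scores
```

When testing on unseen GMs, the two similarity distributions move closer together, so some fakes outrank some reals and the AP drops below 1.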
5 Dataset License Information
We use diverse datasets for our experiments, including face and non-face datasets. For face datasets, we use the existing CelebA  and CelebA-HQ  datasets. The CelebA dataset contains images entirely from the internet and has no associated IRB approval. The authors mention that the dataset is available for non-commercial research purposes only, which we strictly adhere to. We only use the dataset internally for our work, and primarily for evaluation. CelebA-HQ consists of images collected from the internet. Although there is no associated IRB approval, the authors assert in the dataset agreement that the dataset is only to be used for non-commercial research purposes, which we strictly adhere to.
We also use some non-face datasets in our experiments. The Facades  dataset was collected at the Center for Machine Perception and is provided under an Attribution-ShareAlike license. Edges2Shoes [56, 57] is a large shoe dataset consisting of images collected from https://www.zappos.com. The authors mention that this dataset is for academic, non-commercial use only. The GTA2City  dataset consists of a large number of densely labelled frames extracted from computer games. The authors mention that the data is for research and educational use only. The sketch-photo  dataset refers to the CUHK face sketch FERET database. The authors assert in the dataset agreement that the dataset is only to be used for non-commercial research purposes, which we strictly adhere to. The Paris street-view  dataset contains images collected using Google Street View and is to be used for non-commercial research purposes.