The recent emergence of artificial intelligence (AI) powered media manipulations has widespread societal implications for journalism and democracy, national security, and art. AI models have the potential to scale misinformation to unprecedented levels by creating various forms of synthetic media. For example, AI systems can synthesize realistic video portraits of an individual with full control of facial expressions including eye and lip movement [5, 6, 7, 8, 9], can clone a speaker’s voice with a few training samples and generate new natural-sounding audio of something the speaker never previously said, can synthesize visually indicated sound effects, can generate high-quality, relevant text based on an initial prompt, can produce photorealistic images of a variety of objects from text inputs [13, 14, 15], and can generate photorealistic videos of people expressing emotions from only a single image [16, 17]. The technologies for producing entirely machine-generated, fake media online are rapidly outpacing the ability to manually detect and respond to such media.
Media manipulation and misinformation are topics of considerable interest within the computational and social sciences [18, 19, 20, 21], partially because of their historical significance. One particular kind of media manipulation has a Latin name, damnatio memoriae, which refers to the erasure of an individual from official accounts, often in service of dominant political agendas. The earliest known instances of damnatio memoriae were discovered in ancient Egyptian artifacts, and similar patterns of removal have appeared since [22, 23]. Figure SI8 presents iconic examples of damnatio memoriae throughout modern history. Historically, visual and audio manipulations required both skilled experts and a significant investment of time and resources. Today, an AI model can produce photorealistic manipulations nearly instantaneously, which magnifies the potential scale of misinformation. This growing capability calls for understanding individuals’ abilities to differentiate between real and fake content.
To interrogate these questions directly, we engineer an AI system for photorealistic image manipulation and host the model and its outputs online as an experiment to study participants’ abilities to differentiate between unmodified and manipulated images. Our AI system consists of an end-to-end neural network architecture that can plausibly disappear objects from images. For example, consider an image of a boat sailing on the ocean. The AI model detects the boat, removes the boat, and replaces the boat’s pixels with pixels that approximate what the ocean might have looked like without the boat present. Figure 1 presents four examples of participant-submitted images and their transformations. We host this AI model and its image outputs on a custom-designed website called Deep Angel. Since Deep Angel launched in August 2018, over 110,000 individuals have visited the website and interacted with the model and its outputs. Within the Deep Angel platform, we embedded a randomized experiment to examine how repeated exposure to machine-manipulated images affects individuals’ ability to accurately identify manipulated imagery.
In the “Detect Fakes” feature on Deep Angel, individuals are presented with two images and asked a single question: “Which image has something removed by Deep Angel?” See Figure 7 in the Supplementary Information for a screenshot of this interaction. One image has an object removed by our AI model. The other image is an unaltered image from the 2014 MS-COCO dataset. After a participant answers the question by selecting an image, the manipulated image is revealed to the participant and the participant is offered the option to try again on a new pair of images.
Most participants interacted with “Detect Fakes” multiple times; the interquartile range of the number of guesses per participant is from 3 to 18 with a median of 8. Each interaction followed the same randomization with replacement, which ensured that the images displayed did not depend on what the individual had previously seen.
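This draw-with-replacement scheme can be sketched in a few lines. The pool sizes match the two image samples described later in the paper, but the file names and pairing logic below are illustrative stand-ins, not the production code:

```python
import random

# Hypothetical image pools: manipulated submissions and unaltered MS-COCO images.
manipulated_pool = [f"manipulated_{i}.jpg" for i in range(440)]
control_pool = [f"coco_{i}.jpg" for i in range(5008)]

def draw_dyad(rng=random):
    """Draw one image pair uniformly at random, with replacement.

    Because each draw is independent of the participant's history,
    the images displayed never depend on what was previously seen.
    """
    pair = [rng.choice(manipulated_pool), rng.choice(control_pool)]
    rng.shuffle(pair)  # randomize left/right placement on screen
    return pair

# Each call is an independent draw; repeated exposure never changes the odds.
left, right = draw_dyad()
```

Because the draw is memoryless, the position at which any particular image appears in a participant's sequence is itself random, which is what the experimental design below relies on.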
From August 2018 to May 2019, 242,216 guesses were submitted from 16,542 unique IP addresses with a mean identification accuracy of 86%. Deep Angel did not require participant sign-in, so we study participant behavior under the assumption that each IP address represents a single individual. 7,576 participants submitted at least 10 guesses. Each image appears as the first image an average of 35 times and the tenth image an average of 15 times. In the sample of participants who saw at least ten images, the mean percentage correct classification is 78% on the first image seen and 88% on the tenth image seen. The majority of manipulated images were identified correctly more than 90% of the time. Figure 2a shows the distribution of identification accuracy over images, and Figure 2b shows the distribution of image positions seen over participants.
By plotting participant identification accuracy against the order in which participants see images, Figure 3a reveals a logarithmic relationship between accuracy and overall exposure to manipulated images. Accuracy rises markedly over the first ten images, after which it plateaus at around 88%.
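The shape of this relationship can be illustrated with a simple linear-in-log-position fit. The accuracy numbers below are invented stand-ins that merely echo the reported 78%-to-88% rise, not the study's data:

```python
import numpy as np

# Illustrative accuracy-by-position values in the spirit of Figure 3a
# (stand-in numbers, not the study's measurements).
position = np.arange(1, 11)
accuracy = np.array([0.78, 0.81, 0.83, 0.845, 0.855,
                     0.865, 0.87, 0.875, 0.878, 0.88])

# Fit accuracy = a + b * log(position): linear in log(position).
b, a = np.polyfit(np.log(position), accuracy, 1)

# The fitted curve rises steeply at first and then flattens,
# matching the observed plateau after roughly ten images.
predicted = a + b * np.log(position)
```

A positive slope `b` with a good fit is what a logarithmic learning curve looks like in this parameterization.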
We randomly select the “Detect Fakes” images from two samples of images. One sample contains 440 images manipulated by Deep Angel that participants submitted to be shared publicly. The other pool contains 5,008 images from the MS-COCO dataset. Such randomization at the image dyad level is equivalent to randomization of the image position, the order in which images appear to the participant. Based on the randomized image position, we can causally evaluate the effect of image position on rating accuracy. We test the causal effects with the following linear probability models:

y_ij = β_0 + β_1 log(p_ij) + X_ij γ + μ_j + ν_i + ε_ij    (1)

y_ij = β_0 + Σ_{k=2}^{10} β_k 1[p_ij = k] + β_11 1[p_ij > 10] + X_ij γ + μ_j + ν_i + ε_ij    (2)

where y_ij is the binary accuracy (correct or incorrect guess) of participant i on manipulated image j, X_ij represents a matrix of covariates, p_ij represents the order in which manipulated image j appears to participant i, μ_j represents the manipulated image fixed effects, ν_i represents the participant fixed effects, and ε_ij represents the error term. The first model fits a logarithmic transformation of image position to accuracy; the second fits a separate coefficient to each image position.
With 242,216 observations, we run an ordinary least squares regression with user and image fixed effects on the likelihood of guessing the manipulated image correctly. The results of these regressions are presented in Tables 1 and 2 in the Appendix. Each column in Tables 1 and 2 adds an incremental filter to offer a series of robustness checks. The first column shows all observations. The second column drops all users who submitted fewer than 10 guesses and removes all control images where nothing was removed. The third column drops all observations where a user has already seen a particular image. The fourth column drops all images qualitatively judged as below very high quality.
Across all four robustness checks, with and without fixed effects, our models show a positive and statistically significant relationship between image position and identification accuracy. In the linear-log model, a one unit increase in the logarithm of image position is associated with a 3 percentage point increase in accuracy. In the model that estimates Equation 2, we find a 1 percentage point average marginal treatment effect of image position on accuracy. In other words, users improve their ability to guess by 1 percentage point for each of the first 10 guesses. Figure 3 shows these results graphically.
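As a rough sketch of the linear-log specification, the snippet below simulates guesses with user and image effects and recovers the log-position coefficient by ordinary least squares with dummy-variable fixed effects; all sizes and coefficients are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_images, n_obs = 30, 20, 2000

# Simulate guesses whose success probability improves with log(position),
# plus user and image effects (every coefficient here is invented).
users = rng.integers(0, n_users, n_obs)
images = rng.integers(0, n_images, n_obs)
position = rng.integers(1, 11, n_obs)
user_fe = rng.normal(0, 0.05, n_users)
image_fe = rng.normal(0, 0.05, n_images)
p_correct = np.clip(0.75 + 0.04 * np.log(position)
                    + user_fe[users] + image_fe[images], 0, 1)
correct = rng.random(n_obs) < p_correct

# Design matrix: intercept, log(position), user dummies, image dummies
# (one dummy dropped per group to avoid collinearity with the intercept).
X = np.column_stack([
    np.ones(n_obs),
    np.log(position),
    (users[:, None] == np.arange(1, n_users)).astype(float),
    (images[:, None] == np.arange(1, n_images)).astype(float),
])
beta, *_ = np.linalg.lstsq(X, correct.astype(float), rcond=None)
log_position_effect = beta[1]  # analogue of the linear-log coefficient
```

With fixed effects expressed as dummies, least squares recovers the log-position slope net of who is guessing and which image is shown, which is the logic of the specification above.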
Participants’ overall and marginal accuracy by image order, with error bars showing a 95% confidence interval for each image position: (a) overall accuracy for all users with no fixed effects; (b) marginal accuracy (relative to the first image position) for all users who saw at least 10 images, controlling for user and image fixed effects and clustering errors at the image level. In (b), the 11th position includes all image positions beyond the 10th.
We find little evidence of heterogeneous effects of the manipulation quality on the learning rate. Retrospectively, we rated each image’s manipulation as high or low quality based on whether large and noticeable artifacts were created by the image manipulation. While participants are better at identifying low-quality manipulations than high-quality manipulations, we find statistically significant differences in the learning rates in only 3 of 10 image positions. These results are displayed in Figure 4a and indicate that the main effect is not simply driven by participants becoming proficient at guessing low-quality images in our data.
We do not find lasting heterogeneous effects on the learning rate based on participants’ initial accuracy. In Figure 4b, we compare subsequent learning rates of participants who correctly identified a manipulation on their first attempt to participants who failed on their first attempt and succeeded on their second. In this comparison, the omitted position for each learning curve represents perfect accuracy, which makes the marginal effects of subsequent image positions negative relative to these omitted image positions. On the first 3 of 4 image positions in this comparison, which correspond to the 3rd through 6th image positions, we find that initially successful participants perform statistically better than initially unsuccessful participants. However, this heterogeneous effect does not persist in subsequent image positions. In an additional test for heterogeneous treatment effects based on the number of images beyond the first ten that participants saw, we do not find statistically significant differences in accuracy rates.
The statistically significant improvement in accurately identifying manipulations suggests that, within the context of Deep Angel, exposure to media manipulation and feedback on what has been manipulated can successfully prepare individuals to detect faked media. When trained for an average of 1 minute and 14 seconds across ten images, participants improved their ability to detect manipulations by ten percentage points. As users are exposed to image manipulations on Deep Angel, they quickly learn to spot the vast majority of the manipulations.
While AI models can improve clinical diagnoses [25, 26, 27] and enable autonomous driving, they also have the potential to scale censorship, amplify polarization, and spread both fake news and manipulated media. We present results from a large-scale randomized experiment showing that the combination of exposure to manipulated media and feedback on what media has been manipulated improves individuals’ ability to detect media manipulations. Direct interaction with cutting-edge technologies for content creation might enable more discerning media consumption across society. In practice, the news media has exposed high-profile AI-manipulated media, including fake videos of the Speaker of the House of Representatives, Nancy Pelosi, and the CEO of Facebook, Mark Zuckerberg, which serves as feedback to everyone on what manipulations look like [31, 32]. Our results build on recent research suggesting that human intuition can be a reliable source of information about adversarial perturbations to images, and on recent research providing evidence that familiarizing people with how fake news is produced may confer cognitive immunity when people are later exposed to misinformation.
The generalizability of our results is limited to the images produced by our AI model, and a promising avenue for future research could expand the domains and models studied. Likewise, future research could explore to what degree individuals’ ability to adaptively detect manipulated media comes from learning-by-doing, direct feedback, and awareness that anything is manipulated at all.
Our results suggest a need to re-examine the precautionary principle that is commonly applied to content generation technologies. In 2018, Google published BigGAN, which can generate realistic-appearing objects in images; while they hosted the generator for anyone to explore, they explicitly withheld the discriminator for their model. Similarly, OpenAI restricted access to their GPT-2 model, which can generate plausible long-form stories given an initial text prompt, by only providing a pared-down model of GPT-2 trained with fewer parameters. If exposure to manipulated content can inoculate people against future manipulations, then censoring dissemination of AI research on content generation may prove harmful to society by leaving it unprepared for a future of ubiquitous AI-mediated content.
We engineered a Target Object Removal pipeline to remove objects in images and replace those objects with a plausible background. We combine a convolutional neural network (CNN) trained to detect objects with a generative adversarial network (GAN) trained to inpaint missing pixels in an image [35, 36, 37, 38]. Specifically, we generate object masks with a CNN based on a RoIAlign bilinear interpolation on nearby points in the feature map. We crop the object masks from the image and apply a generative inpainting architecture to fill in the object masks [39, 40]. The generative inpainting architecture is based on dilated CNNs with an adversarial loss function, which allows the architecture to learn semantic information from large-scale datasets and generate missing content that makes contextual sense in the masked portion of the image.
Target Object Removal Pipeline
Our end-to-end targeted object removal pipeline consists of three interfacing neural networks:
Object Mask Generator (G): This network creates a segmentation mask given an input image and a target class. In our experiments, we initialize G from a semantic segmentation network trained on the 2014 MS-COCO dataset following the Mask R-CNN algorithm. The network generates masks for all object classes present in an image, and we select only the masks matching the input class. This network was trained on 60 object classes.
Generative Inpainter (I): This network takes the masked image produced with G and fills the masked region with generated content intended to be contextually plausible, following the generative inpainting architecture described above [39, 40].
Local Discriminator (D): The final discriminator network takes in the inpainted image and determines the validity of the image. Following the training of a GAN discriminator, D is trained simultaneously with I, where the training data are images from the MIT Places 2 dataset and the same images with randomly assigned holes, following [41, 40].
For every input image and class label pair, we first generate an object mask using G; this mask is paired with the image and passed to the inpainting network I, which produces the generated image. The inpainter is trained from the loss of the discriminator D, following the typical GAN pipeline. An illustration of our neural network architecture is provided in Figure 5.
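The data flow of the pipeline can be sketched end-to-end. The snippet below is a deliberately naive stand-in: the mask generator returns a fixed box rather than running Mask R-CNN, and the "inpainter" fills the hole with the mean of the surrounding background rather than a trained GAN, but the mask-then-fill structure mirrors the pipeline described above:

```python
import numpy as np

def generate_mask(image, box):
    """Stand-in for the object mask generator G: returns a boolean
    mask covering a hypothetical detected bounding box."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    y0, y1, x0, x1 = box
    mask[y0:y1, x0:x1] = True
    return mask

def inpaint(image, mask):
    """Stand-in for the generative inpainter I: replaces masked pixels
    with the mean color of the unmasked background. A trained GAN would
    instead synthesize semantically plausible content here."""
    filled = image.copy()
    background_mean = image[~mask].mean(axis=0)
    filled[mask] = background_mean
    return filled

def remove_object(image, box):
    """Target Object Removal data flow: mask, then fill."""
    mask = generate_mask(image, box)
    return inpaint(image, mask)

# A toy "ocean" image with a bright "boat" patch removed.
ocean = np.full((8, 8, 3), 0.2)
ocean[2:4, 3:5] = 1.0  # the boat
result = remove_object(ocean, (2, 4, 3, 5))
```

On this toy input the filled region matches the uniform background exactly; the point of the GAN inpainter is to achieve the same plausibility on natural, non-uniform backgrounds.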
We designed an interactive website called Deep Angel to make the Target Object Removal pipeline publicly available. (We retained the Cyberlaw Clinic from Harvard Law School and the Berkman Klein Center for Internet & Society to advise and support us throughout the Deep Angel experiment.) The API for the Target Object Removal pipeline is served by a single Nvidia GeForce GTX Titan X. In addition to the “Detect Fakes” user interaction, Deep Angel has a user interaction called “Erase with AI,” where people can apply the Target Object Removal pipeline to their own images. See Figure 6 for a screenshot of this user interface.
In “Erase with AI,” people first select a category of object that they seek to remove and then they either upload an image or select an Instagram account from which to upload the three most recent images. After the user submits his or her selections, Deep Angel returns both the original image and a transformation of the original image with the selected objects removed.
Users uploaded 18,152 unique images from mobile phones and computers. In addition, users directed the crawling of 12,580 unique images from Instagram. The most frequently selected objects for removal are displayed in Table SI3. The overwhelming majority of images uploaded and Instagram accounts selected were unique; 88% of the usernames entered for targeted Instagram crawls were unique.
We can surface the most plausible object removal manipulations by examining the images with the lowest guessing accuracy. Ultimately, plausible manipulations are relatively rare and image dependent.
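Surfacing the most plausible manipulations amounts to ranking images by their per-image identification accuracy, lowest first. A minimal sketch on a hypothetical guess log:

```python
from collections import defaultdict

# Hypothetical guess log: (image_id, guessed_correctly) pairs.
guesses = [
    ("boat", True), ("boat", True), ("boat", False),
    ("bird", False), ("bird", False), ("bird", True),
    ("dog", True), ("dog", True), ("dog", True),
]

tallies = defaultdict(lambda: [0, 0])  # image_id -> [correct, total]
for image_id, correct in guesses:
    tallies[image_id][0] += int(correct)
    tallies[image_id][1] += 1

# Rank images by identification accuracy, ascending: the hardest-to-spot
# (most plausible) manipulations come first.
ranked = sorted(tallies, key=lambda k: tallies[k][0] / tallies[k][1])
```

Here the "bird" image, identified correctly only a third of the time, would surface first as the most plausible manipulation.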
The Target Object Removal model can produce plausible content, but it is not perfect. Plausible manipulations are confined to specific domains: objects are only plausibly removed when they occupy a small portion of the image and the background is natural and uncluttered by other objects. Likewise, the model often generates model-specific artifacts that humans can learn to detect.
Acknowledgments: We thank Abhimanyu Dubey, Mohit Tiwari, and David McKenzie for their helpful comments and feedback.
Author contributions: M.G. implemented the methods, M.G., Z.E., N.O. analyzed data and wrote the paper. All authors conceived the original idea, designed the research, and provided critical feedback on the analysis and manuscript.
-  R. Chesney and D. K. Citron, “Deep fakes: A looming challenge for privacy, democracy, and national security,” 2018.
-  G. Allen and T. Chan, Artificial intelligence and national security. Belfer Center for Science and International Affairs Cambridge, MA, 2017.
-  A. Hertzmann, “Can computers create art?,” in Arts, vol. 7, p. 18, Multidisciplinary Digital Publishing Institute, 2018.
-  D. M. Lazer, M. A. Baum, Y. Benkler, A. J. Berinsky, K. M. Greenhill, F. Menczer, M. J. Metzger, B. Nyhan, G. Pennycook, D. Rothschild, et al., “The science of fake news,” Science, vol. 359, no. 6380, pp. 1094–1096, 2018.
-  J. Thies, M. Zollhofer, M. Stamminger, C. Theobalt, and M. Nießner, “Face2face: Real-time face capture and reenactment of rgb videos,” in , pp. 2387–2395, 2016.
-  S. Suwajanakorn, S. M. Seitz, and I. Kemelmacher-Shlizerman, “Synthesizing obama: learning lip sync from audio,” ACM Transactions on Graphics (TOG), vol. 36, no. 4, p. 95, 2017.
-  H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Pérez, C. Richardt, M. Zollhöfer, and C. Theobalt, “Deep video portraits,” arXiv preprint arXiv:1805.11714, 2018.
-  S. Saito, L. Wei, L. Hu, K. Nagano, and H. Li, “Photorealistic facial texture inference using deep neural networks,” CoRR, vol. abs/1612.00523, 2016.
-  P. Garrido, L. Valgaerts, H. Sarmadi, I. Steiner, K. Varanasi, P. Perez, and C. Theobalt, “Vdub: Modifying face video of actors for plausible visual alignment to a dubbed audio track,” in Computer Graphics Forum, vol. 34, pp. 193–204, Wiley Online Library, 2015.
-  S. O. Arik, J. Chen, K. Peng, W. Ping, and Y. Zhou, “Neural voice cloning with a few samples,” arXiv preprint arXiv:1802.06006, 2018.
-  A. Owens, P. Isola, J. H. McDermott, A. Torralba, E. H. Adelson, and W. T. Freeman, “Visually indicated sounds,” CoRR, vol. abs/1512.08512, 2015.
-  A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised multitask learners,” tech. rep., OpenAI, 2019.
-  A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune, “Plug & play generative networks: Conditional iterative generation of images in latent space,” CoRR, vol. abs/1612.00005, 2016.
-  A. Brock, J. Donahue, and K. Simonyan, “Large scale gan training for high fidelity natural image synthesis,” arXiv preprint arXiv:1809.11096, 2018.
-  T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” arXiv preprint arXiv:1812.04948, 2018.
-  H. Averbuch-Elor, D. Cohen-Or, J. Kopf, and M. F. Cohen, “Bringing portraits to life,” ACM Transactions on Graphics (TOG), vol. 36, no. 6, p. 196, 2017.
-  E. Zakharov, A. Shysheya, E. Burkov, and V. Lempitsky, “Few-shot adversarial learning of realistic neural talking head models,” 2019.
-  S. Vosoughi, D. Roy, and S. Aral, “The spread of true and false news online,” Science, vol. 359, no. 6380, pp. 1146–1151, 2018.
-  Y. Benkler, R. Faris, and H. Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford University Press, 2018.
-  N. A. Cooke, “Posttruth, truthiness, and alternative facts: Information behavior and critical information consumption for a new age,” The Library Quarterly, vol. 87, no. 3, pp. 211–221, 2017.
-  A. Marwick and R. Lewis, “Media manipulation and disinformation online,” New York: Data & Society Research Institute, 2017.
-  E. R. Varner, Monumenta Graeca et Romana: Mutilation and transformation: damnatio memoriae and Roman imperial portraiture, vol. 10. Brill, 2004.
-  D. Freedberg, The power of images: Studies in the history and theory of response. University of Chicago Press Chicago, 1989.
-  T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in European conference on computer vision, pp. 740–755, Springer, 2014.
-  A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun, “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, p. 115, 2017.
-  R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster, “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning,” Nature Biomedical Engineering, vol. 2, no. 3, p. 158, 2018.
-  T. Kooi, G. Litjens, B. Van Ginneken, A. Gubern-Mérida, C. I. Sánchez, R. Mann, A. den Heeten, and N. Karssemeijer, “Large scale deep learning for computer aided detection of mammographic lesions,” Medical image analysis, vol. 35, pp. 303–312, 2017.
-  C. Chen, A. Seff, A. Kornhauser, and J. Xiao, “Deepdriving: Learning affordance for direct perception in autonomous driving,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 2722–2730, 2015.
-  M. E. Roberts, Censored: distraction and diversion inside China’s Great Firewall. Princeton University Press, 2018.
-  E. Bakshy, S. Messing, and L. A. Adamic, “Exposure to ideologically diverse news and opinion on facebook,” Science, vol. 348, no. 6239, pp. 1130–1132, 2015.
-  S. Mervosh, “Distorted videos of nancy pelosi spread on facebook and twitter, helped by trump.” https://www.nytimes.com/2019/05/24/us/politics/pelosi-doctored-video.html, May 2019. Accessed: 2019-06-20.
-  C. Metz, “Distorted videos of nancy pelosi spread on facebook and twitter, helped by trump.” https://www.nytimes.com/2019/06/11/technology/fake-zuckerberg-video-facebook.html, June 2019. Accessed: 2019-06-20.
-  Z. Zhou and C. Firestone, “Humans can decipher adversarial images,” Nature communications, vol. 10, no. 1, p. 1334, 2019.
-  J. Roozenbeek and S. van der Linden, “Fake news game confers psychological resistance against online misinformation,” Palgrave Communications, vol. 5, 2019.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” pp. 2672–2680, 2014.
-  T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of gans for improved quality, stability, and variation,” arXiv preprint arXiv:1710.10196, 2017.
-  K. He, G. Gkioxari, P. Dollár, and R. B. Girshick, “Mask R-CNN,” CoRR, vol. abs/1703.06870, 2017.
-  Y. LeCun, Y. Bengio, and G. E. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
-  S. Iizuka, E. Simo-Serra, and H. Ishikawa, “Globally and Locally Consistent Image Completion,” ACM Transactions on Graphics (Proc. of SIGGRAPH 2017), vol. 36, no. 4, 2017.
-  J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Generative image inpainting with contextual attention,” arXiv preprint arXiv:1801.07892, 2018.
-  B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba, “Places: A 10 million image database for scene recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
-  A. M. Elgammal, B. Liu, M. Elhoseiny, and M. Mazzone, “CAN: creative adversarial networks, generating "art" by learning about styles and deviating from style norms,” CoRR, vol. abs/1706.07068, 2017.
-  S. Carter and M. Nielsen, “Using artificial intelligence to augment human intelligence,” Distill, vol. 2, no. 12, p. e9, 2017.
-  P. Isola, J. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” CoRR, vol. abs/1611.07004, 2016.
-  T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro, “High-resolution image synthesis and semantic manipulation with conditional gans,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, p. 5, 2018.
-  D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022, 2017.
-  J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision, pp. 694–711, Springer, 2016.
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
Appendix I: Regression Tables
| | (1) | (2) | (3) | (4) |
| More than 10 | 0.1106*** | 0.1215*** | 0.1197*** | 0.0985*** |
| Mean accuracy on first image | 0.73 | 0.78 | 0.78 | 0.74 |
| Mean accuracy on tenth image | 0.88 | 0.88 | 0.88 | 0.83 |
Appendix II: Supplementary Information
Table SI3: Instagram Directed Crawls
Appendix III: Unanchored Object Conjuring
While posing a risk to information online, these generative AI systems can also offer new possibilities for creative expression. For example, the Creative Adversarial Network learns artistic styles and generates new art by deviating from the styles’ norms. Likewise, interactive GANs (iGANs) can augment human creativity for artistic and design applications.
Thus, if objects can be plausibly removed from images, then it is reasonable to imagine that objects can be plausibly generated in images where they never existed. As an extension to the Deep Angel pipeline, we approached adding objects to images using image-to-image translation with conditional adversarial networks. Since these neural networks learn a mapping from an input image to an output image, we can train an image-to-image model using the manipulated images as inputs and the original submissions as outputs. While the model does not restore objects as they were, it produces resemblances of the missing objects. Images produced by Deep Angel and AI Spirits (the reversal of Deep Angel) are on display at the online art gallery hosted by the 2018 NeurIPS Workshop on Machine Learning for Creativity and Design. As large-scale, paired datasets of creative content (such as the one presented here) become increasingly common and neural network architectures for content generation become more powerful, automated object insertion into existing media will become a rich area for future work.
With image-to-image translation, a latent representation of the structure of an image can be efficiently expressed in and generated for different contexts [43, 44, 45]. This latent structure is encoded in information like edges, shape, size, texture, and color that are anchored across contexts. By applying image-to-image translation to the results of the Target Object Removal pipeline, we force the model to learn both the structural representation for removed objects and their contextual location. We call this process unanchored object conjuring.
For the unanchored object conjuring extension, the global generator component (G1) [46] in the top right of Figure 10 is first trained on downsampled images; the local enhancer component (G2) is then concatenated to G1, and the two are jointly trained on full-resolution images. We follow the original pix2pixHD loss function, which takes the form

min_G ( max_{D_k} Σ_k L_GAN(G, D_k) + λ_FM Σ_k L_FM(G, D_k) + λ_VGG L_VGG(G) )

where L_GAN is the adversarial loss, L_FM is the feature matching loss pix2pixHD uses to stabilize training, and L_VGG is the perceptual loss based on VGG features [48, 49]. We train the model using the Adam solver for 200 epochs. The learning rate is fixed for the first half of training (epochs 0 to 100) and then linearly decays to 0 for the second half (epochs 101 to 200). All weights were initialized by sampling from a Gaussian distribution. We used a PyTorch implementation with a batch size of 4 on an Nvidia GeForce GTX Titan X with 8 cores.
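The learning-rate schedule described above (held constant for the first half of training, then decaying linearly to zero) can be written as a small function; the base rate is left as a parameter since its value is not given here:

```python
def learning_rate(epoch, base_lr, total_epochs=200):
    """Piecewise schedule: hold base_lr for the first half of training,
    then decay linearly to 0 over the second half. base_lr is a
    placeholder parameter; the rate used in the paper is not stated here."""
    half = total_epochs // 2
    if epoch <= half:
        return base_lr
    # Linear decay from base_lr at epoch `half` down to 0 at `total_epochs`.
    return base_lr * (total_epochs - epoch) / (total_epochs - half)

# With base_lr = 1.0: constant through epoch 100, half gone by 150, 0 at 200.
rates = [learning_rate(e, 1.0) for e in (0, 100, 150, 200)]
```

Freezing the rate early and decaying late is a common GAN training heuristic: full-size steps while the generator is still far from the data distribution, then progressively smaller steps to stabilize the adversarial game.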
We filtered all images uploaded to Deep Angel to the 5,634 images where people were selected to be removed. We manually filtered these images to the 1,000 best manipulations based on qualitative judgments. Then, we resized and cropped the images to a uniform resolution. We trained on these images following the pix2pixHD image-to-image translation architecture, which yields improved photorealism due to its coarse-to-fine generators, multi-scale discrimination, and improved adversarial loss. Figure 10 shows the architecture for this extended pipeline.
This unanchored object conjuring technique can be used to create a new class of art that combines existing photographs with GAN-style imagery. In addition, the reconstructions provide a technique for interpreting the model and the underlying dataset by revealing where removed objects systematically appear.