Human detection of machine manipulated media

07/06/2019 ∙ by Matthew Groh, et al. ∙ MIT

Recent advances in neural networks for content generation enable artificial intelligence (AI) models to generate high-quality media manipulations. Here we report on a randomized experiment designed to study the effect of exposure to media manipulations on over 15,000 individuals' ability to discern machine-manipulated media. We engineer a neural network to plausibly and automatically remove objects from images, and we deploy this neural network online with a randomized experiment where participants can guess which image out of a pair of images has been manipulated. The system provides participants feedback on the accuracy of each guess. In the experiment, we randomize the order in which images are presented, allowing causal identification of the learning curve surrounding participants' ability to detect fake content. We find sizable and robust evidence that individuals learn to detect fake content through exposure to manipulated media when provided iterative feedback on their detection attempts. Over a succession of only ten images, participants increase their rating accuracy by over ten percentage points. Our study provides initial evidence that human ability to detect fake, machine-generated content may increase alongside the prevalence of such media online.


Introduction

The recent emergence of artificial intelligence (AI)-powered media manipulations has widespread societal implications for journalism and democracy [1], national security [2], and art [3]. AI models have the potential to scale misinformation to unprecedented levels by creating various forms of synthetic media [4]. For example, AI systems can synthesize realistic video portraits of an individual with full control of facial expressions, including eye and lip movement [5, 6, 7, 8, 9], can clone a speaker’s voice with only a few training samples and generate new natural-sounding audio of something the speaker never previously said [10], can synthesize visually indicated sound effects [11], can generate high-quality, relevant text based on an initial prompt [12], can produce photorealistic images of a variety of objects from text inputs [13, 14, 15], and can generate photorealistic videos of people expressing emotions from only a single image [16, 17]. The technologies for producing entirely machine-generated, fake media online are rapidly outpacing the ability to manually detect and respond to such media.

Media manipulation and misinformation are topics of considerable interest within the computational and social sciences [18, 19, 20, 21], partially because of their historical significance. For a particular kind of media manipulation, there’s a Latin term, damnatio memoriae, which refers to the erasure of an individual from official accounts, often in service of dominant political agendas. The earliest known instances of damnatio memoriae were discovered in ancient Egyptian artifacts and similar patterns of removal have appeared since [22, 23]. Figure SI8 presents iconic examples of damnatio memoriae throughout modern history. Historically, visual and audio manipulations required both skilled experts and a significant investment of time and resources. Today, an AI model can produce photorealistic manipulations nearly instantaneously, which magnifies the potential scale of misinformation. This growing capability calls for understanding individuals’ abilities to differentiate between real and fake content.

To interrogate these questions directly, we engineer an AI system for photorealistic image manipulation and host the model and its outputs online as an experiment to study participants’ abilities to differentiate between unmodified and manipulated images. Our AI system consists of an end-to-end neural network architecture that can make objects plausibly disappear from images. For example, consider an image of a boat sailing on the ocean. The AI model detects the boat, removes the boat, and replaces the boat’s pixels with pixels that approximate what the ocean might have looked like without the boat present. Figure 1 presents four examples of participant-submitted images and their transformations. We host this AI model and its image outputs on a custom-designed website called Deep Angel. Since Deep Angel launched in August 2018, over 110,000 individuals have visited the website and interacted with the model and its outputs. Within the Deep Angel platform, we embedded a randomized experiment to examine how repeated exposure to machine-manipulated images affects individuals’ ability to accurately identify manipulated imagery.

Figure 1: Examples of original images on the top row and manipulated images on the bottom row.

Experimental Design

User Interface

In the “Detect Fakes” feature on Deep Angel, individuals are presented with two images and asked a single question: “Which image has something removed by Deep Angel?” See Figure 7 in the Supplementary Information for a screenshot of this interaction. One image has an object removed by our AI model. The other image is an unaltered image from the 2014 MS-COCO data [24]. After a participant answers the question by selecting an image, the manipulated image is revealed to the participant and the participant is offered the option to try again on a new pair of images.

Usage

Most participants interacted with “Detect Fakes” multiple times; the interquartile range of the number of guesses per participant is from 3 to 18 with a median of 8. Each interaction followed the same randomization with replacement, which ensured that the images displayed did not depend on what the individual had previously seen.
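As an illustrative sketch of this randomization-with-replacement scheme, each trial can be drawn independently of a participant's history. The filenames and the left/right shuffle below are assumptions; the pool sizes follow the Randomization section.

```python
# Sketch of the trial randomization; not the production Deep Angel code.
import random

manipulated_pool = [f"manipulated_{i}.jpg" for i in range(440)]   # participant-shared manipulations
control_pool = [f"coco_{i}.jpg" for i in range(5008)]             # unaltered MS-COCO images

def next_trial():
    """Sample a fresh image pair with replacement, independent of prior trials."""
    pair = [random.choice(manipulated_pool), random.choice(control_pool)]
    random.shuffle(pair)                                          # randomize display order
    return pair
```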

From August 2018 to May 2019, 242,216 guesses were submitted from 16,542 unique IP addresses with a mean identification accuracy of 86%. Deep Angel did not require participant sign-in, so we study participant behavior under the assumption that each IP address represents a single individual. 7,576 participants submitted at least 10 guesses. Each image appears as the first image an average of 35 times and the tenth image an average of 15 times. In the sample of participants who saw at least ten images, the mean percentage correct classification is 78% on the first image seen and 88% on the tenth image seen. The majority of manipulated images were identified correctly more than 90% of the time. Figure 2a shows the distribution of identification accuracy over images, and Figure 2b shows the distribution of image positions seen over participants.

By plotting participant identification accuracy against the order in which participants see images, Figure 3a reveals a logarithmic relationship between accuracy and overall exposure to manipulated images. Accuracy increases fairly linearly over the first ten images, after which it plateaus at around 88%.
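As a minimal sketch of how such a learning curve can be computed from guess-level data, assuming a hypothetical log with one row per guess and columns correct (0/1), position (the order in which the pair was shown), and user_id:

```python
# Sketch only: the file and column names are assumptions, not the study's actual schema.
import pandas as pd

df = pd.read_csv("detect_fakes_guesses.csv")   # hypothetical guess-level log

curve = (
    df[df["position"] <= 10]                   # restrict to the first ten images seen
      .groupby("position")["correct"]
      .mean()                                  # mean identification accuracy per position
)
print(curve)                                   # accuracy by image position (cf. Figure 3a)
```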

Figure 2: (a) Histogram of mean identification accuracies by participants per image. (b) Bar chart plotting the number of individuals over image position.

Randomization

We randomly select the “Detect Fakes” images from two samples of images. One sample contains 440 images manipulated by Deep Angel that participants submitted to be shared publicly. The other pool of images contains 5,008 images from the MS-COCO dataset [24]. Such randomization at the image dyad level is equivalent to randomization of the image position, the order in which images appear to the participant. Based on the randomized image position, we can causally evaluate the effect of image position on rating accuracy. We test the causal effects with the following linear probability models:

Y_{i,m} = β log(T_{i,m}) + X_{i,m} δ + γ_m + α_i + ε_{i,m}   (1)

and

Y_{i,m} = Σ_t β_t 1[T_{i,m} = t] + X_{i,m} δ + γ_m + α_i + ε_{i,m}   (2)

where Y_{i,m} is the binary accuracy (correct or incorrect guess) of participant i on manipulated image m, X_{i,m} represents a matrix of covariates, T_{i,m} represents the order in which manipulated image m appears to participant i, γ_m represents the manipulated image fixed effects, α_i represents the participant fixed effects, and ε_{i,m} represents the error term. The first model fits a logarithmic transformation of T_{i,m} to Y_{i,m}. The second model estimates treatment effects separately for each image position. Both models use Huber-White (robust) standard errors, and errors are clustered at the image level.
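For concreteness, Equation (1) could be estimated along the following lines; the file and column names are hypothetical, and a dedicated fixed-effects estimator would absorb the thousands of user and image dummies more efficiently than explicit categorical terms:

```python
# Sketch of the linear-log specification (Equation 1) with two-way fixed effects
# and image-clustered standard errors. File and column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("detect_fakes_guesses.csv")   # hypothetical guess-level log

model = smf.ols(
    "correct ~ np.log(position) + C(user_id) + C(image_id)",   # user and image fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["image_id"]})  # cluster errors by image

print(model.params["np.log(position)"])        # learning-curve coefficient (cf. Table 1)
# Equation (2) swaps np.log(position) for categorical position dummies, e.g. C(position).
```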

Results

With 242,216 observations, we run an ordinary least squares regression with user and image fixed effects on the likelihood of guessing the manipulated image correctly. The results of these regressions are presented in Tables 1 and 2 in the Appendix. Each column in Tables 1 and 2 adds an incremental filter to offer a series of robustness checks. The first column shows all observations. The second column drops all users who submitted fewer than 10 guesses and removes all control images where nothing was removed. The third column drops all observations where a user has already seen a particular image. The fourth column drops all images qualitatively judged as below very high quality.

Across all four robustness checks, with and without fixed effects, our models show a positive and statistically significant relationship between T_{i,m} and Y_{i,m}. In the linear-log model, a one unit increase in log(T_{i,m}) is associated with a 3 percentage point increase in Y_{i,m}. In the model that estimates Equation 2, we find a 1 percentage point average marginal treatment effect of image position on Y_{i,m}. In other words, users improve their ability to guess by 1 percentage point for each of the first 10 guesses. Figure 3 shows these results graphically.

Figure 3: Participants’ overall and marginal accuracy by image order with error bars showing a 95% confidence interval for each image position: (a) overall accuracy for all users with no fixed effects; (b) marginal accuracy (relative to the first image position) for all users who saw at least 10 images, controlling for user and image fixed effects and clustering errors at the image level. In (b), the 11th position includes all image positions beyond the 10th.

Figure 4: Heterogeneous effects on accuracy by participants who saw at least 10 images from (a) independent manipulation rating and (b) position of first correct guess, while controlling for user and image fixed effects. In (b), the omitted position for each learning curve represents perfect accuracy. The marginal effects of subsequent image positions are negative relative to these omitted image positions. The error bars represent the 95% confidence interval for each image position and errors are clustered by images.

We find little evidence of heterogeneous effects of manipulation quality on the learning rate. Retrospectively, we rated each image’s manipulation as high or low quality based on whether the manipulation created large and noticeable artifacts. While participants are better at identifying low-quality manipulations than high-quality manipulations, we find statistically significant differences in the learning rates in only 3 of 10 image positions. These results are displayed in Figure 4a and indicate that the main effect is not simply driven by participants becoming proficient at guessing low-quality images in our data.

We do not find lasting heterogeneous effects on the learning rate based on participants’ initial accuracy. In Figure 4b, we compare subsequent learning rates of participants who correctly identified a manipulation on their first attempt to those of participants who failed on their first attempt and succeeded on their second. In this comparison, the omitted position for each learning curve represents perfect accuracy, which makes the marginal effects of subsequent image positions negative relative to these omitted positions. In 3 of the first 4 image positions in this comparison, which correspond to the 3rd through 6th image positions, we find that initially successful participants perform statistically better than initially unsuccessful participants. However, this heterogeneous effect does not persist in subsequent image positions. In an additional test for heterogeneous treatment effects based on the number of images beyond the first ten that participants saw, we do not find statistically significant differences in accuracy rates.

The statistically significant improvement in accurately identifying manipulations suggests that, within the context of Deep Angel, exposure to media manipulation and feedback on what has been manipulated can successfully prepare individuals to detect faked media. With an average of only 1 minute and 14 seconds of exposure across ten images, participants improved their ability to detect manipulations by ten percentage points. As users are exposed to image manipulations on Deep Angel, they quickly learn to spot the vast majority of the manipulations.

Discussion

While AI models can improve clinical diagnoses [25, 26, 27] and bring about autonomous driving [28], they also have the potential to scale censorship [29], amplify polarization [30], and spread both fake news [18] and manipulated media. We present results from a large-scale randomized experiment that show the combination of exposure to manipulated media and feedback on what media has been manipulated improves individuals’ ability to detect media manipulations. Direct interaction with cutting-edge technologies for content creation might enable more discerning media consumption across society. In practice, the news media has exposed high-profile AI-manipulated media, including fake videos of the Speaker of the House of Representatives, Nancy Pelosi, and the CEO of Facebook, Mark Zuckerberg, which serves as feedback to everyone on what manipulations look like [31, 32]. Our results build on recent research suggesting that human intuition can be a reliable source of information about adversarial perturbations to images [33] and that familiarizing people with how fake news is produced may confer cognitive immunity when they are later exposed to misinformation [34].

The generalizability of our results is limited to the images produced by our AI model, and a promising avenue for future research could expand the domains and models studied. Likewise, future research could explore to what degree individuals’ ability to adaptively detect manipulated media comes from learning-by-doing, direct feedback, and awareness that anything is manipulated at all.

Our results suggest a need to re-examine the precautionary principle that is commonly applied to content generation technologies. In 2018, Google published BigGAN, which can generate realistic-appearing objects in images, but while they hosted the generator for anyone to explore, they explicitly withheld the discriminator for their model [14]. Similarly, OpenAI restricted access to their GPT-2 model, which can generate plausible long-form stories given an initial text prompt, by only providing a pared-down version of GPT-2 trained with fewer parameters [12]. If exposure to manipulated content can vaccinate people against future manipulations, then censoring dissemination of AI research on content generation may prove harmful to society by leaving it unprepared for a future of ubiquitous AI-mediated content.

Methods

We engineered a Target Object Removal pipeline to remove objects from images and replace those objects with a plausible background. We combine a convolutional neural network (CNN) trained to detect objects with a generative adversarial network (GAN) trained to inpaint missing pixels in an image [35, 36, 37, 38]. Specifically, we generate object masks with a CNN based on RoIAlign bilinear interpolation on nearby points in the feature map [37]. We crop the object masks from the image and apply a generative inpainting architecture to fill in the object masks [39, 40]. The generative inpainting architecture is based on dilated CNNs with an adversarial loss function, which allows the architecture to learn semantic information from large-scale datasets and generate missing content that makes contextual sense in the masked portion of the image [40].

Target Object Removal Pipeline

Our end-to-end targeted object removal pipeline consists of three interfacing neural networks:

  • Object Mask Generator (G): This network creates a segmentation mask m = G(x, c) given an input image x and a target class c. In our experiments, we initialize G from a semantic segmentation network trained on the 2014 MS-COCO dataset following the Mask R-CNN algorithm [37]. The network generates masks for all object classes present in an image, and we select only the masks that match the input class c. This network was trained on 60 object classes.

  • Generative Inpainter (I): This network creates an inpainted version I(x, m) of the input image x given the object mask m. I is initialized following the DeepFill algorithm trained on the MIT Places 2 dataset [40, 41].

  • Local Discriminator (D): The final discriminator network takes in the inpainted image and determines the validity of the image. Following the training of a GAN discriminator, D is trained simultaneously with I, where the real examples are images from the MIT Places 2 dataset and the fake examples are the same images with randomly assigned holes, following [41, 40].

For every input image and class label pair (x, c), we first generate an object mask m = G(x, c), which is paired with the image and input to the inpainting network I to produce the generated image I(x, m). The inpainter is trained from the loss of the discriminator D, following the typical GAN pipeline. An illustration of our neural network architecture is provided in Figure 5.
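The sketch below illustrates this flow with off-the-shelf components; it is not the authors' implementation. torchvision's pre-trained Mask R-CNN stands in for G, and inpaint is a placeholder for a DeepFill-style generative inpainter I.

```python
# Simplified sketch of targeted object removal: detect masks, then fill them in.
import torch
import torchvision

# COCO label index for "person" is 1 in torchvision's Mask R-CNN label map.
mask_model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

def remove_objects(image, target_class=1, score_thresh=0.7, mask_thresh=0.5):
    """Return the image with pixels of the target class replaced by inpainted content."""
    with torch.no_grad():
        pred = mask_model([image])[0]            # image: FloatTensor [3, H, W] in [0, 1]

    # Union of masks for confident detections of the target class (G's role)
    keep = (pred["labels"] == target_class) & (pred["scores"] > score_thresh)
    if keep.sum() == 0:
        return image
    mask = (pred["masks"][keep, 0] > mask_thresh).any(dim=0)     # [H, W] boolean mask

    # I's role: fill the masked region with plausible background (placeholder call)
    return inpaint(image, mask)

def inpaint(image, mask):
    # Placeholder: a real system would use a trained generative inpainter (e.g. DeepFill).
    # Here we simply zero out the masked pixels to keep the sketch self-contained.
    return image * (~mask).float().unsqueeze(0)
```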

Live Deployment

We designed an interactive website called Deep Angel to make the Target Object Removal pipeline publicly available. (We retained the Cyberlaw Clinic from the Harvard Law School and Berkman Klein Center for Internet & Society to advise and support us throughout the Deep Angel experiment.) The API for the Target Object Removal pipeline is served by a single Nvidia Geforce GTX Titan X. In addition to the “Detect Fakes” user interaction, Deep Angel has a user interaction called “Erase with AI,” where people can apply the Target Object Removal pipeline to their own images. See Figure 6 for a screenshot of this user interface.

In “Erase with AI,” people first select a category of object that they seek to remove and then they either upload an image or select an Instagram account from which to upload the three most recent images. After the user submits his or her selections, Deep Angel returns both the original image and a transformation of the original image with the selected objects removed.

Users uploaded 18,152 unique images from mobile phones and computers. In addition, users directed the crawling of 12,580 unique images from Instagram. The most frequently selected objects for removal are displayed in Table SI3. The overwhelming majority of images uploaded and Instagram accounts selected were unique; 88% of the usernames entered for targeted Instagram crawls were unique.

We can surface the most plausible object removal manipulations by examining the images with the lowest guessing accuracy. Ultimately, plausible manipulations are relatively rare and image-dependent.

The Target Object Removal model can produce plausible content, but it is not perfect. Plausible manipulations are confined to specific domains: objects are only plausibly removed when they occupy a small portion of the image and the background is natural and uncluttered by other objects. Likewise, the model often generates model-specific artifacts that humans can learn to detect.

Acknowledgments: We thank Abhimanyu Dubey, Mohit Tiwari, and David McKenzie for their helpful comments and feedback.

Author contributions: M.G. implemented the methods, M.G., Z.E., N.O. analyzed data and wrote the paper. All authors conceived the original idea, designed the research, and provided critical feedback on the analysis and manuscript.

References

Appendix I: Regression Tables

(1) (2) (3) (4)
Log(Image Position) 0.0261*** 0.0259*** 0.0259*** 0.0255***
(0.0012) (0.0012) (0.0013) (0.0029)

Observations 242216 192665 172434 55692
Mean Accuracy on 1st Image 0.73 0.78 0.78 0.74
Mean Accuracy on 10th Image 0.88 0.88 0.88 0.83
R-squared 0.29 0.19 0.20 0.26
Table 1: Ordinary least squares regression with user and image fixed effects evaluating the effect of image position on users’ accuracy in identifying manipulated images. Standard errors in parentheses. *, **, and *** indicate statistical significance at the 90, 95, and 99 percent confidence levels, respectively. All columns include user and image fixed effects. Column (1) includes all images; (2) drops all users who submitted fewer than 10 guesses and removes all control images where nothing was removed; (3) drops all observations where a user has already seen a particular image; (4) keeps only the images qualitatively judged as very high quality.
(1) (2) (3) (4)
2nd 0.0507*** 0.0569*** 0.0571*** 0.0378***
(0.0042) (0.0059) (0.0060) (0.0131)
3rd 0.0672*** 0.0744*** 0.0746*** 0.0454***
(0.0048) (0.0060) (0.0059) (0.0123)
4th 0.0775*** 0.0888*** 0.0885*** 0.0686***
(0.0050) (0.0058) (0.0058) (0.0121)
5th 0.0859*** 0.0978*** 0.0967*** 0.0749***
(0.0052) (0.0062) (0.0064) (0.0129)
6th 0.0817*** 0.0962*** 0.0963*** 0.0613***
(0.0057) (0.0064) (0.0064) (0.0130)
7th 0.0900*** 0.1032*** 0.1039*** 0.0741***
(0.0056) (0.0064) (0.0065) (0.0134)
8th 0.1019*** 0.1120*** 0.1106*** 0.0904***
(0.0055) (0.0065) (0.0065) (0.0137)
9th 0.1028*** 0.1136*** 0.1134*** 0.0959***
(0.0055) (0.0063) (0.0063) (0.0142)
10th 0.1030*** 0.1135*** 0.1123*** 0.1014***
(0.0056) (0.0062) (0.0064) (0.0135)
More than 10 0.1106*** 0.1215*** 0.1197*** 0.0985***
(0.0051) (0.0059) (0.0059) (0.0122)
Observations 242216 192665 172434 55692
Mean Accuracy on 1st Image 0.73 0.78 0.78 0.74
Mean Accuracy on 10th Image 0.88 0.88 0.88 0.83
R-squared 0.29 0.20 0.20 0.26
Table 2: Ordinary least squares regression with user and image fixed effects evaluating the effect of image position on users’ accuracy in identifying manipulated images. Standard errors in parentheses. *, **, and *** indicate statistical significance at the 90, 95, and 99 percent confidence levels, respectively. All columns include user and image fixed effects. Column (1) includes all images; (2) drops all users who submitted fewer than 10 guesses and removes all control images where nothing was removed; (3) drops all observations where a user has already seen a particular image; (4) keeps only the images qualitatively judged as very high quality.

Appendix II: Supplementary Information

Figure 5: End-to-end pipeline for targeted object removal following [37, 40]
Figure 6: “Erase with AI” User Interfaces
Figure 7: “Detect Fakes” User Interfaces
Image Uploads
Object Count Order
Person 13450 1
Car 1229 6
Dog 1086 2
Cat 1082 3
Elephant 185 4
Bicycle 158 7
Bird 139 22
Tie 120 31
Airplane 106 13
Stop Sign 99 8
Instagram Directed Crawls
Object Count Order
Person 6944 1
Cat 725 2
Dog 493 3
Elephant 170 4
Car 162 6
Bicycle 71 7
Sheep 52 5
Stop Sign 31 8
Airplane 29 13
Skateboard 25 10
Table 3: Top 10 Target Object Removal Selections for Uploaded Images and Targeted Instagram Crawls on Deep Angel. Each selection of an Instagram username initiated a targeted crawl of Instagram for the three most recently uploaded images of the selected user.
Figure 8: Photographic manipulation has long been a tool of authoritarian governments. On the left, Joseph Stalin stands next to Nikolai Yezhov, whom Stalin later ordered to be executed and erased from the photograph. In the middle, Mao Zedong stands beside the “Gang of Four,” who were arrested a month after Mao’s death and subsequently erased. On the right, Benito Mussolini strikes a heroic pose on a horse while his trainer holds the horse steady.
Figure 9: Heat map of the world showing how many users came from each country. 23% of users are from the United States of America, 9% from France, 9% from the United Kingdom, 6% from Germany, 4% from Spain, 3% from China, 3% from Canada, 2% from Brazil, 2% from Australia, and 2% from Finland.

Appendix III: Unanchored Object Conjuring

While posing a risk to information online, these generative AI systems also offer new possibilities for creative expression. For example, the Creative Adversarial Network learns artistic styles and generates new art by deviating from the norms of those styles [42]. Likewise, interactive GANs (iGANs) can augment human creativity for artistic and design applications [43].

Thus, if objects can be plausibly removed from images, then it is reasonable to imagine that objects can be plausibly generated in images in which they never existed. As an extension to the Deep Angel pipeline, we approached adding objects to images using image-to-image translation with conditional adversarial networks [44]. Since these neural networks learn a mapping from an input to an output image, we can train an image-to-image model using the manipulated images as inputs and the original submissions as outputs. While the model does not restore objects as they were, it produces resemblances of the missing objects. Images produced by Deep Angel and AI Spirits (the reversal of Deep Angel) are on display at the online art gallery hosted by the 2018 NeurIPS Workshop on Machine Learning for Creativity and Design. As large-scale, paired datasets of creative content (such as the one presented here) become increasingly common, and neural network architectures for content generation become more powerful, automated object insertion into existing media will become a rich area for future work.

Model

With image-to-image translation, a latent representation of the structure of an image can be efficiently expressed in and generated for different contexts [43, 44, 45]. This latent structure is encoded in information such as edges, shape, size, texture, and color that are anchored across contexts. By applying image-to-image translation to the results of the Target Object Removal pipeline, we force the model to learn both the structural representation of removed objects and their contextual location. We call this process unanchored object conjuring.

For the unanchored object conjuring extension, the global component (G1, a Convolution-InstanceNorm-ReLU layer with 32 filters and stride 1 [46]) in the top right of Figure 10 is first trained on downsampled images; the local component (G2) is then concatenated to G1, and the two are jointly trained on full resolution images. We follow the original pix2pixHD loss function, which takes the form

L = L_GAN + λ_FM L_FM + λ_VGG L_VGG

where L_GAN is the adversarial loss [47], L_FM is the feature matching loss pix2pixHD uses to stabilize training, and L_VGG is the perceptual loss based on VGG features [48, 49]. We train the model using the Adam solver for 200 epochs [50]. The learning rate is fixed for the first half of training (epochs 0 to 100) and then linearly decays to 0 for the second half (epochs 101 to 200). All weights were initialized by sampling from a Gaussian distribution [45]. We used a PyTorch implementation with a batch size of 4 on an Nvidia Geforce GTX Titan X with 8 cores.
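As an illustration of how these terms compose in a pix2pixHD-style generator objective, consider the following sketch. Here generator, discriminators (each assumed to return its intermediate feature maps, with the last element being the prediction), and vgg_features are hypothetical stand-ins, and the λ weights are assumed defaults rather than values reported in this paper.

```python
# Minimal sketch of a pix2pixHD-style generator loss: L_GAN + λ_FM L_FM + λ_VGG L_VGG.
import torch
import torch.nn.functional as F

LAMBDA_FM = 10.0   # assumed feature-matching weight
LAMBDA_VGG = 10.0  # assumed perceptual-loss weight

def generator_loss(generator, discriminators, vgg_features, masked_img, real_img):
    """One generator objective combining adversarial, feature-matching, and VGG terms."""
    fake_img = generator(masked_img)

    adv_loss, fm_loss = 0.0, 0.0
    for D in discriminators:                     # multi-scale discriminators
        real_feats = D(real_img)                 # list of intermediate feature maps
        fake_feats = D(fake_img)
        # Least-squares adversarial loss on the discriminator's final output
        adv_loss = adv_loss + F.mse_loss(fake_feats[-1], torch.ones_like(fake_feats[-1]))
        # Feature matching: match intermediate discriminator activations
        for rf, ff in zip(real_feats[:-1], fake_feats[:-1]):
            fm_loss = fm_loss + F.l1_loss(ff, rf.detach())

    # Perceptual loss on VGG features of generated vs. real images
    vgg_loss = F.l1_loss(vgg_features(fake_img), vgg_features(real_img).detach())

    return adv_loss + LAMBDA_FM * fm_loss + LAMBDA_VGG * vgg_loss
```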

Data

We filtered all images uploaded to Deep Angel to the 5,634 images in which people were selected to be removed. We manually filtered these images to the 1,000 best manipulations based on qualitative judgments. Then, we resized and cropped the images to a fixed resolution. We trained on these images following the pix2pixHD image-to-image translation architecture, which yields improved photorealism due to its coarse-to-fine generators, multi-scale discrimination, and improved adversarial loss [45]. Figure 10 shows the architecture for this extended pipeline.

This unanchored object conjuring technique can be used to create a new class of art that combines existing photographs with GAN-style imagery. In addition, the reconstructions provide a technique for interpreting the model and the underlying dataset by revealing where removed objects systematically appear.

Figure 10: The top row displays 4 input images and the bottom row displays the modeled output based on the unanchored object conjuring pipeline. The images on the left are considered reconstructions because they are part of the paired training sample, and the images on the right are considered creations because they are not part of the training dataset.