Not My Deepfake: Towards Plausible Deniability for Machine-Generated Media

08/20/2020
by   Baiwu Zhang, et al.

Progress in generative modelling, especially generative adversarial networks, has made it possible to efficiently synthesize and alter media at scale. Malicious individuals now rely on these machine-generated media, or deepfakes, to manipulate social discourse. In order to ensure media authenticity, existing research is focused on deepfake detection. Yet, the very nature of frameworks used for generative modeling suggests that progress towards detecting deepfakes will enable more realistic deepfake generation. Therefore, it comes as no surprise that developers of generative models are under the scrutiny of stakeholders dealing with misinformation campaigns. As such, there is a clear need to develop tools that ensure the transparent use of generative modeling, while minimizing the harm caused by malicious applications. We propose a framework to provide developers of generative models with plausible deniability. We introduce two techniques to provide evidence that a model developer did not produce the media they are being accused of creating. The first optimizes over the source of entropy of each generative model to probabilistically attribute a deepfake to one of the models. The second involves cryptography to maintain a tamper-proof and publicly-broadcasted record of all legitimate uses of the model. We evaluate our approaches on the seminal example of face synthesis, demonstrating that our first approach achieves 97.62% attribution accuracy, and is less sensitive to perturbations and adversarial examples. In cases where a machine learning approach is unable to provide plausible deniability, we find that involving cryptography as done in our second approach is required. We also discuss the ethical implications of our work, and highlight that a more meaningful legislative framework is required for a more transparent and ethical use of generative modeling.
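The first technique, attribution by optimizing over each model's source of entropy, can be illustrated with a toy sketch: invert a candidate generator to find the latent code that best reconstructs the sample, and attribute the sample to the generator with the smallest reconstruction residual. The linear "generators" and the names `G_A`, `inversion_residual` below are illustrative stand-ins, not the paper's actual models or optimization procedure.

```python
import numpy as np

# Toy stand-ins for two generators: each maps a latent vector z to an
# "image" via a fixed random linear map. Real deepfake models are deep
# networks; linear maps keep the inversion exact and the demo tiny.
rng = np.random.default_rng(0)
G_A = rng.standard_normal((16, 4))   # generator A: R^4 -> R^16
G_B = rng.standard_normal((16, 4))   # generator B: R^4 -> R^16

def inversion_residual(G, x):
    """Optimize over the latent (the model's source of entropy) and
    return the reconstruction error: a small residual is evidence that
    G could have produced x, a large one supports deniability."""
    z_hat, *_ = np.linalg.lstsq(G, x, rcond=None)  # closed-form inversion
    return float(np.linalg.norm(G @ z_hat - x))

# A sample genuinely produced by generator A.
x = G_A @ rng.standard_normal(4)

res_A = inversion_residual(G_A, x)
res_B = inversion_residual(G_B, x)
attributed_to = "A" if res_A < res_B else "B"
print(attributed_to, res_A, res_B)
```

With a nonlinear generator the inversion would be a gradient-based search over latents rather than a least-squares solve, but the attribution rule, comparing reconstruction residuals across candidate models, is the same.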
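The second technique's core idea, a tamper-proof, publicly-broadcasted record of legitimate model uses, can be sketched as a hash chain in which each entry commits to its predecessor, so any retroactive edit is detectable. This is a minimal illustration of the tamper-evidence property only; the field names and the helpers `append_record`/`verify` are hypothetical, not the paper's protocol.

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record whose hash commits to the previous entry,
    making retroactive edits to the log detectable."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"prev": prev, "record": record,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash and link; False means the log was tampered with."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"prev": entry["prev"],
                              "record": entry["record"]}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"model": "face-gan-v1", "media_hash": "ab12", "ts": 1})
append_record(chain, {"model": "face-gan-v1", "media_hash": "cd34", "ts": 2})
intact = verify(chain)                       # True for the honest log
chain[0]["record"]["media_hash"] = "ee99"    # tamper with history
tampered_detected = not verify(chain)        # True: the edit is caught
print(intact, tampered_detected)
```

A real deployment would additionally need public broadcast (e.g. a distributed ledger) so the developer cannot rewrite the whole chain themselves; the hash chain alone only makes tampering evident to anyone holding an earlier copy.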


research
07/21/2021

Generative Models for Security: Attacks, Defenses, and Opportunities

Generative models learn the distribution of data from a sample dataset a...
research
06/07/2020

Steganography GAN: Cracking Steganography with Cycle Generative Adversarial Networks

For as long as humans have participated in the act of communication, con...
research
02/22/2017

Adversarial examples for generative models

We explore methods of producing adversarial examples on deep generative ...
research
08/31/2023

Detecting Out-of-Context Image-Caption Pairs in News: A Counter-Intuitive Method

The growth of misinformation and re-contextualized media in social media...
research
01/12/2018

Comparative Study on Generative Adversarial Networks

In recent years, there have been tremendous advancements in the field of...
research
04/03/2023

Coincidental Generation

Generative A.I. models have emerged as versatile tools across diverse in...
research
06/26/2023

Mono-to-stereo through parametric stereo generation

Generating a stereophonic presentation from a monophonic audio signal is...
