Amplifying The Uncanny

02/17/2020
by Terence Broad et al.

Deep neural networks have become remarkably good at producing realistic deepfakes: images of people that are (to the untrained eye) indistinguishable from real images. These are produced by algorithms that learn to distinguish between real and fake images and are optimised to generate samples that the system deems realistic. This paper, and the resulting series of artworks Being Foiled, explores the aesthetic outcome of inverting this process: optimising the system to generate images that it sees as fake, maximising the unlikelihood of the data and, in turn, amplifying the uncanny nature of these machine hallucinations.
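The inversion described in the abstract amounts to flipping the sign of the generator's objective: instead of ascending the discriminator's log-realness score, the generator descends it. The toy sketch below is my own one-dimensional illustration of that idea, not the paper's code; the frozen "discriminator", the single-parameter generator, and all constants are assumptions for demonstration only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discriminator(x):
    # Frozen toy critic (assumed): larger x is judged "more real".
    return sigmoid(4.0 * x)

theta = 0.0  # toy generator's single parameter, which is also its output sample
lr = 0.25
for _ in range(50):
    # Analytic gradient of log D(theta) for this toy critic:
    # d/dtheta log sigmoid(4*theta) = 4 * (1 - sigmoid(4*theta))
    grad_log_realness = 4.0 * (1.0 - discriminator(theta))
    # A standard GAN generator step would ASCEND log-realness;
    # the inverted objective DESCENDS it, pushing samples toward
    # whatever the critic considers fake.
    theta -= lr * grad_log_realness

print(discriminator(theta))  # realness score driven toward 0
```

With the sign flipped, gradient descent drives the sample into the region the critic scores as fake, which is the mechanism behind the "maximising the unlikelihood" framing above.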


