Amplifying The Uncanny

02/17/2020
by Terence Broad, et al.

Deep neural networks have become remarkably good at producing realistic deepfakes: images of people that are, to the untrained eye, indistinguishable from real photographs. These are produced by algorithms that learn to distinguish between real and fake images and are optimised to generate samples that the system deems realistic. This paper, and the resulting series of artworks Being Foiled, explore the aesthetic outcome of inverting this process and instead optimising the system to generate images that it sees as fake. By maximising the unlikelihood of the data, we amplify the uncanny nature of these machine hallucinations.
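The inversion described above amounts to flipping the sign of the usual adversarial objective for the generator. The sketch below is a minimal, hypothetical illustration in PyTorch (not the authors' code): it assumes a pretrained generator G and a frozen discriminator D, and fine-tunes G so that D classifies its samples as fake rather than real.

```python
import torch
import torch.nn.functional as F

def inverted_generator_loss(discriminator, fake_images):
    # Standard (non-saturating) GAN training rewards the generator when
    # discriminator(fake) is scored as 'real'. Here the target is flipped:
    # the generator is rewarded when its output is scored as 'fake'.
    logits = discriminator(fake_images)
    return F.binary_cross_entropy_with_logits(logits, torch.zeros_like(logits))

def finetune_step(G, D, optimizer, batch_size=16, latent_dim=512, device="cpu"):
    # Hypothetical fine-tuning step: only G is updated, D stays frozen.
    z = torch.randn(batch_size, latent_dim, device=device)
    loss = inverted_generator_loss(D, G(z))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating this step drives the generator away from the learned data distribution, which is one plausible reading of "maximising the unlikelihood of the data"; the exact training schedule used for Being Foiled is described in the full paper.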


