Out-domain examples for generative models

03/07/2019
by Dario Pasquini, et al.

Deep generative models are increasingly used in a wide variety of applications. However, the generative process is not fully predictable and at times produces unexpected outputs, which we refer to as out-domain examples. In this paper we show that an attacker can force a pre-trained generator to reproduce an arbitrary out-domain example when fed a suitable adversarial input. The main assumption is that these outputs lie in an unexplored region of the generator's codomain and therefore have a very low probability of being generated naturally. Moreover, we show that the adversarial input can be shaped so as to be statistically indistinguishable from the set of genuine inputs. The core of the attack is an efficient search for such inputs in the generator's latent space.
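The idea in the abstract can be illustrated with a minimal sketch: search the latent space by gradient descent for a code z whose image G(z) matches an out-domain target, while a small penalty keeps z close to the N(0, I) prior so it resembles a genuine input. Everything below is hypothetical, a toy NumPy stand-in for the generator and a hand-derived gradient; the paper targets real deep generators and its actual procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "pre-trained generator": a fixed random
# linear map followed by tanh (hypothetical, for illustration only).
W = rng.standard_normal((16, 8)) * 0.5

def G(z):
    return np.tanh(W @ z)

def find_adversarial_latent(x_target, steps=4000, lr=0.02, lam=1e-3):
    """Gradient search for z with G(z) close to x_target.

    The lam * ||z||^2 penalty nudges z toward the standard-normal
    prior, a crude proxy for "statistically indistinguishable from
    genuine inputs"."""
    z = rng.standard_normal(8) * 0.1
    for _ in range(steps):
        g = np.tanh(W @ z)
        r = g - x_target
        # Gradient of ||tanh(Wz) - x||^2 + lam*||z||^2 w.r.t. z:
        # 2 W^T ((1 - tanh^2) * r) + 2*lam*z  (tanh' = 1 - tanh^2)
        grad = 2.0 * W.T @ ((1.0 - g**2) * r) + 2.0 * lam * z
        z -= lr * grad
    return z

# Out-domain target: a vector the toy generator is unlikely
# to emit under the prior.
x_out = np.sign(rng.standard_normal(16)) * 0.9
z_adv = find_adversarial_latent(x_out)
err = np.mean((G(z_adv) - x_out) ** 2)
```

The search drives the reconstruction error well below that of the zero latent code (for which the toy G outputs all zeros); tightening lam trades reconstruction quality against how closely z_adv matches the prior.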


Related research

- Latent Space Oddity: on the Curvature of Deep Generative Models (10/31/2017)
- Transflow Learning: Repurposing Flow Models Without Retraining (11/29/2019)
- Latent Geometry and Memorization in Generative Models (05/25/2017)
- Securing Deep Generative Models with Universal Adversarial Signature (05/25/2023)
- Generative Models as a Data Source for Multiview Representation Learning (06/09/2021)
- Image Generation from Small Datasets via Batch Statistics Adaptation (04/03/2019)
