Low Distortion Block-Resampling with Spatially Stochastic Networks

06/09/2020
by Sarah Jane Hong, et al.

We formalize and attack the problem of generating new images from old ones that are as diverse as possible while remaining globally consistent, allowing unrestricted change only in certain parts of the image. This encompasses a typical situation in generative modelling, where we are happy with parts of the generated data but would like to resample others ("I like this generated castle overall, but this tower looks unrealistic, I would like a new one"). To attack this problem we build on the best conditional and unconditional generative models and introduce a new network architecture, training procedure, and algorithm for resampling parts of the image as desired.
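To make the block-resampling idea concrete, the sketch below shows one plausible reading of it: a generator driven by a spatially arranged latent code, where new noise is drawn only inside a chosen region while the latent outside that region is kept fixed. This is a minimal illustration, not the authors' architecture or algorithm; the `generate` function, the latent shape, and the mask layout are all hypothetical.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): block-resampling a
# spatial latent code. Assumes a hypothetical generator `generate(z)` that
# maps a spatial noise tensor z of shape (H, W, C) to an image, so that
# changing z only inside a region changes the image mainly in that region.

rng = np.random.default_rng(0)

def resample_block(z, mask, rng):
    """Redraw the latent only where mask == 1; keep it fixed elsewhere."""
    z_new = rng.standard_normal(z.shape)              # fresh noise everywhere
    return np.where(mask[..., None] == 1, z_new, z)   # keep old z outside the mask

# Example: resample the top-right quarter of an 8x8 spatial latent.
H, W, C = 8, 8, 64
z = rng.standard_normal((H, W, C))
mask = np.zeros((H, W), dtype=int)
mask[: H // 2, W // 2 :] = 1

z_resampled = resample_block(z, mask, rng)
# image_old = generate(z)            # hypothetical generator call
# image_new = generate(z_resampled)  # should differ mainly inside the masked block
```

Under this reading, keeping the latent fixed outside the mask is what preserves global consistency, while redrawing it inside the mask is what yields diverse alternatives for the selected region.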


