A Simple Generative Network

06/17/2021
by Daniel N. Nissani, et al.

Generative neural networks are able to mimic intricate probability distributions such as those of handwritten text, natural images, etc. Since their inception, several models have been proposed. The most successful of these rely on relatively complex architectures and schemes based on adversarial training (GAN), auto-encoding (VAE), and maximum mean discrepancy (MMD). Surprisingly, a very simple architecture (a single feed-forward neural network) in conjunction with an obvious optimization goal (Kullback-Leibler divergence) was apparently overlooked. This paper demonstrates that such a model (denoted SGN for its simplicity) is able to generate samples that are visually and quantitatively competitive with the aforementioned state-of-the-art methods.
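To make the described setup concrete, below is a minimal sketch of the idea: a single feed-forward generator trained by directly minimizing a sample-based estimate of the KL divergence between the generated distribution and the data distribution. The k-nearest-neighbor divergence estimator (following Wang et al., 2009), the network sizes, and the toy Gaussian target are illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn

def knn_kl_estimate(x, y, k=3):
    """Nonparametric k-NN estimate of KL(p_x || p_y) from samples x ~ p_x, y ~ p_y.
    Illustrative estimator (Wang et al., 2009); the paper's estimator may differ."""
    n, d = x.shape
    m = y.shape[0]
    # Distance from each x_i to its k-th nearest neighbor within x
    # (index k skips the zero distance of x_i to itself)
    rho = torch.cdist(x, x).topk(k + 1, largest=False).values[:, k]
    # Distance from each x_i to its k-th nearest neighbor within y
    nu = torch.cdist(x, y).topk(k, largest=False).values[:, k - 1]
    eps = 1e-12
    return d * (torch.log(nu + eps) - torch.log(rho + eps)).mean() \
        + torch.log(torch.tensor(m / (n - 1.0)))

# A single feed-forward generator: latent noise -> data space (sizes are assumptions)
latent_dim, data_dim = 8, 2
gen = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, data_dim))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

# Toy target: a correlated 2-D Gaussian standing in for the real data
real = torch.randn(512, data_dim) @ torch.tensor([[1.0, 0.6], [0.0, 0.8]])

for step in range(2000):
    z = torch.randn(512, latent_dim)
    fake = gen(z)
    loss = knn_kl_estimate(fake, real)  # KL(generated || data) as the training goal
    opt.zero_grad()
    loss.backward()
    opt.step()

Note that the estimate decomposes into a cross term (generated-to-real distances) and an entropy term (generated-to-generated distances), so minimizing it pulls samples toward the data while discouraging them from collapsing onto each other.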
