
A Simple Generative Network

06/17/2021
by Daniel N. Nissani, et al.

Generative neural networks are able to mimic intricate probability distributions such as those of handwritten text, natural images, etc. Since their inception, several models have been proposed. The most successful of these were based on relatively complex adversarial (GAN), auto-encoding (VAE), and maximum mean discrepancy (MMD) architectures and schemes. Surprisingly, a very simple architecture (a single feed-forward neural network) in conjunction with an obvious optimization goal (the Kullback-Leibler divergence) was apparently overlooked. This paper demonstrates that such a model (denoted SGN for its simplicity) is able to generate samples that are visually and quantitatively competitive with the aforementioned state-of-the-art methods.
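
The abstract gives only the high-level recipe (one feed-forward network, a KL-divergence objective) and not the training details, so the following is a minimal, hypothetical PyTorch sketch of that recipe, not the paper's actual algorithm. Because the KL divergence between an implicit model and the data must be estimated from samples, the sketch assumes a k-nearest-neighbour KL estimator (Wang, Kulkarni & Verdu, 2009) stands in for whatever estimator the paper uses; the Generator class, the knn_kl_estimate function, and all layer sizes and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical single feed-forward generator: maps Gaussian noise z to a sample x.
# Architecture and sizes are illustrative; the page does not specify them.
class Generator(nn.Module):
    def __init__(self, z_dim=32, h_dim=128, x_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim),
        )

    def forward(self, z):
        return self.net(z)

def knn_kl_estimate(p, q, k=1, eps=1e-12):
    """Sample-based estimate of KL(p || q) via k-nearest-neighbour distances
    (Wang, Kulkarni & Verdu, 2009). Differentiable with respect to the
    samples through the pairwise distances."""
    n, d = p.shape
    m = q.shape[0]
    # rho_i: distance from p_i to its k-th nearest neighbour within p (self excluded)
    dpp = torch.cdist(p, p) + 1e9 * torch.eye(n, device=p.device)
    rho = dpp.topk(k, largest=False).values[:, -1]
    # nu_i: distance from p_i to its k-th nearest neighbour in q
    nu = torch.cdist(p, q).topk(k, largest=False).values[:, -1]
    return (d * (torch.log(nu + eps) - torch.log(rho + eps)).mean()
            + torch.log(torch.tensor(m / (n - 1.0))))

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

def train_step(data_batch, z_dim=32):
    # Minimise the estimated KL(data || model); gradients reach the
    # generator through the data-to-fake nearest-neighbour distances.
    z = torch.randn(data_batch.shape[0], z_dim)
    fake = gen(z)
    loss = knn_kl_estimate(data_batch, fake)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Note that minimising KL(data || model) in this argument order is the mode-covering direction; the reverse order would be mode-seeking. Which direction, and which estimator, SGN actually uses is not stated on this page.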

Related research

VideoGPT: Video Generation using VQ-VAE and Transformers (04/20/2021)
We present VideoGPT: a conceptually simple architecture for scaling like...

Parametrization and Generation of Geological Models with Generative Adversarial Networks (08/05/2017)
One of the main challenges in the parametrization of geological models i...

Tessellated Wasserstein Auto-Encoders (05/20/2020)
Non-adversarial generative models such as variational auto-encoder (VAE)...

Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy (11/14/2016)
We propose a method to optimize the representation and distinguishabilit...

Nonparametric Topic Modeling with Neural Inference (06/18/2018)
This work focuses on combining nonparametric topic models with Auto-Enco...

A deep learning based surrogate model for stochastic simulators (10/24/2021)
We propose a deep learning-based surrogate model for stochastic simulato...

Learning to Maintain Natural Image Statistics (03/13/2018)
Maintaining natural image statistics is a crucial factor in restoration ...