Algorithms that get old: the case of generative algorithms

02/07/2022
by Gabriel Turinici, et al.

Generative AI networks, such as Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs), produce new objects each time they are asked to do so. This behavior, however, is unlike that of human artists, who change their style as time goes by and seldom return to their starting point. We investigate a situation in which a VAE is asked to sample from a probability measure described by some empirical data set. Building on recent work on Radon-Sobolev statistical distances, we propose a numerical paradigm, to be used in conjunction with a generative algorithm, that satisfies the two following requirements: the objects created do not repeat, and they evolve to fill the entire target probability measure.
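To make the two requirements concrete, here is a minimal sketch, not the paper's actual algorithm: latent codes for a generative model are drawn one at a time, and a candidate is kept only if it is sufficiently far from every previously accepted code. Over many draws the accepted codes cannot repeat and progressively spread over the latent distribution. The function name, the distance threshold, and the rejection rule are all illustrative assumptions.

```python
import itertools
import numpy as np

def sample_without_repetition(n_samples, dim=2, min_dist=0.5, seed=None):
    """Hypothetical illustration: draw standard-normal latent codes,
    accepting only those at least `min_dist` away (Euclidean norm)
    from every previously accepted code."""
    rng = np.random.default_rng(seed)
    accepted = []
    while len(accepted) < n_samples:
        z = rng.standard_normal(dim)
        # Reject near-duplicates so successive samples never repeat
        # and are pushed toward unexplored regions of the measure.
        if all(np.linalg.norm(z - p) >= min_dist for p in accepted):
            accepted.append(z)
    return np.array(accepted)

codes = sample_without_repetition(20, dim=2, min_dist=0.5, seed=0)
# Every pair of accepted codes is separated by at least `min_dist`.
```

In a real pipeline the accepted codes would be fed to the generator's decoder; the paper's contribution replaces this naive rejection rule with a principled criterion based on Radon-Sobolev distances to the target measure.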
