Creative divergent synthesis with generative models

Machine learning approaches now achieve impressive generation capabilities across numerous domains such as images, audio, and video. However, most training and evaluation frameworks revolve around strictly modelling the original data distribution rather than extrapolating from it. This precludes such models from diverging from the training distribution and, hence, from exhibiting creative traits. In this paper, we propose several perspectives on how this challenging goal could be achieved, and provide preliminary results on our novel training objective called Bounded Adversarial Divergence (BAD).
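The abstract does not define the BAD objective, but the general idea of a *bounded* divergence can be illustrated with a toy penalty: reward a generator for diverging from the data distribution by at least some margin, while penalizing divergence beyond an upper bound. This is a minimal sketch under that assumption; the function name, the band limits, and the linear penalty shape are all hypothetical and not taken from the paper.

```python
def bounded_divergence_penalty(d, lower=0.5, upper=2.0):
    """Toy penalty keeping a divergence estimate d inside [lower, upper].

    Hypothetical illustration only: the paper's actual BAD objective is
    not specified in this abstract. The sketch captures the generic idea
    of encouraging samples to diverge from the data distribution by a
    bounded, rather than unbounded or zero, amount.
    """
    # Zero penalty inside the band; linear penalty when d leaves it.
    return max(0.0, lower - d) + max(0.0, d - upper)

# A divergence inside the band incurs no penalty...
assert bounded_divergence_penalty(1.0) == 0.0
# ...while too little or too much divergence is penalized.
assert bounded_divergence_penalty(0.1) > 0.0
assert bounded_divergence_penalty(3.0) > 0.0
```

In a real training loop, `d` would be a differentiable divergence estimate (e.g. from an adversarial critic), and the penalty would be added to the generator loss.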

