Compositional GAN: Learning Conditional Image Composition

by Samaneh Azadi et al.

Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism, but they are generally modeled to sample from a single latent source, ignoring the explicit spatial interactions between the multiple entities that could be present in a scene. Capturing such complex interactions between different objects in the world, including their relative scaling, spatial layout, occlusion, and viewpoint transformation, is a challenging problem. In this work, we propose to model object composition in a GAN framework as a self-consistent composition-decomposition network. Our model is conditioned on object images drawn from their marginal distributions and generates a realistic image from their joint distribution by explicitly learning the possible interactions. We evaluate our model through qualitative experiments and user studies in both scenarios: when paired examples of the individual object images and the joint scenes are given during training, and when only unpaired examples are available. Our results reveal that the learned model captures potential interactions between the two input object domains and produces new instances of composed scenes at test time in a plausible fashion.
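The self-consistency idea described above, that decomposing a composed scene should recover the original object inputs, can be sketched as a pair of functions trained with a cycle-style reconstruction loss. The sketch below is a minimal illustration in plain NumPy, not the authors' implementation; the linear "networks", toy dimensions, and weight scales are all hypothetical stand-ins for the paper's convolutional composer and decomposer.

```python
import numpy as np

rng = np.random.default_rng(0)

def compose(x1, x2, W_c):
    # Composer: maps a pair of (flattened) object images to one composed scene.
    return np.tanh(W_c @ np.concatenate([x1, x2]))

def decompose(y, W_d):
    # Decomposer: splits a composed scene back into estimates of the two objects.
    out = np.tanh(W_d @ y)
    return out[: out.size // 2], out[out.size // 2 :]

def self_consistency_loss(x1, x2, W_c, W_d):
    # Cycle constraint: decomposing the composed scene should recover the inputs.
    # In the full model this term would be combined with adversarial losses.
    y = compose(x1, x2, W_c)
    x1_hat, x2_hat = decompose(y, W_d)
    return float(np.mean((x1 - x1_hat) ** 2) + np.mean((x2 - x2_hat) ** 2))

d = 16                                          # toy "image" dimensionality
W_c = rng.normal(scale=0.1, size=(d, 2 * d))    # composer weights (hypothetical)
W_d = rng.normal(scale=0.1, size=(2 * d, d))    # decomposer weights (hypothetical)
x1 = rng.normal(size=d)                         # object sampled from first marginal
x2 = rng.normal(size=d)                         # object sampled from second marginal

loss = self_consistency_loss(x1, x2, W_c, W_d)
print(f"self-consistency loss: {loss:.4f}")
```

Minimizing this reconstruction term alongside the usual adversarial objectives is what makes the composition-decomposition pair "self-consistent": the decomposer acts as a check that the composer preserved both objects while arranging them.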



Related Research
Relationship-Aware Spatial Perception Fusion for Realistic Scene Layout Generation

The significant progress on Generative Adversarial Networks (GANs) has ...

Composition and decomposition of GANs

In this work, we propose a composition/decomposition framework for adver...

Semantic Palette: Guiding Scene Generation with Class Proportions

Despite the recent progress of generative adversarial networks (GANs) at...

A Layer-Based Sequential Framework for Scene Generation with GANs

The visual world we sense, interpret and interact everyday is a complex ...

Towards Audio to Scene Image Synthesis using Generative Adversarial Network

Humans can imagine a scene from a sound. We want machines to do so by us...

Semantic Hierarchy Emerges in Deep Generative Representations for Scene Synthesis

Despite the success of Generative Adversarial Networks (GANs) in image s...

Compositional Transformers for Scene Generation

We introduce the GANformer2 model, an iterative object-oriented transfor...

Code Repositories


Compositional GAN: Learning Image-Conditional Binary Composition at IJCV 2020

