Related papers

- Self-labeled Conditional GANs
  This paper introduces a novel and fully unsupervised framework for condi...
- Variational Conditional GAN for Fine-grained Controllable Image Generation
  In this paper, we propose a novel variational generator framework for co...
- RGBD-GAN: Unsupervised 3D Representation Learning From Natural Image Datasets via RGBD Image Synthesis
  Understanding three-dimensional (3D) geometries from two-dimensional (2D...
- CcGAN: Continuous Conditional Generative Adversarial Networks for Image Generation
  This work proposes the continuous conditional generative adversarial net...
- Conditional Image Generation with PixelCNN Decoders
  This work explores conditional image generation with a new image density...
- JGAN: A Joint Formulation of GAN for Synthesizing Images and Labels
  Image generation with explicit condition or label generally works better...
- CircleGAN: Generative Adversarial Learning across Spherical Circles
  We present a novel discriminator for GANs that improves realness and div...
Diverse Image Generation via Self-Conditioned GANs
We introduce a simple but effective unsupervised method for generating realistic and diverse images. We train a class-conditional GAN without manually annotated class labels; instead, the model is conditioned on labels derived automatically by clustering in the discriminator's feature space. The clustering step discovers diverse modes and explicitly requires the generator to cover them. On standard mode-collapse benchmarks, our method outperforms several competing approaches. It also scales to large datasets such as ImageNet and Places365, improving both image diversity and standard quality metrics over previous methods.
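The core idea above, deriving conditioning labels by clustering in a feature space, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the plain k-means routine, and the deterministic farthest-point initialization are all assumptions, and the toy array stands in for real discriminator features.

```python
import numpy as np

def cluster_pseudo_labels(features, k, n_iter=50):
    """Derive pseudo-labels by k-means clustering in feature space.

    Illustrative stand-in for the paper's clustering step: `features`
    would be discriminator features of real images (here just an
    (N, D) array). Farthest-point initialization is used only to keep
    this sketch deterministic; the actual method may differ.
    """
    # Farthest-point init: start from the first sample, then repeatedly
    # add the point farthest from all chosen centers.
    centers = [features[0]]
    for _ in range(k - 1):
        d = np.min(
            ((features[:, None, :] - np.stack(centers)[None, :, :]) ** 2).sum(-1),
            axis=1,
        )
        centers.append(features[d.argmax()])
    centers = np.stack(centers)

    for _ in range(n_iter):
        # Assign each sample to its nearest center.
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster is empty.
        for j in range(k):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Toy "discriminator features": two well-separated modes.
rng = np.random.default_rng(0)
feats = np.concatenate([
    rng.normal(0.0, 0.1, (50, 8)),   # mode A
    rng.normal(5.0, 0.1, (50, 8)),   # mode B
])
labels = cluster_pseudo_labels(feats, k=2)
# Each mode receives its own pseudo-label; a class-conditional generator
# trained on these labels is explicitly pushed to cover both modes.
```

In the full method these labels replace manual class annotations when training the conditional GAN, which is what forces the generator to cover the discovered modes rather than collapsing onto one.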