Related research:

- Diversity-Sensitive Conditional Generative Adversarial Networks
- Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis
- Diverse Conditional Image Generation by Stochastic Regression with Latent Drop-Out Codes
- Comparison of Generative Adversarial Networks Architectures Which Reduce Mode Collapse
- Conditional Generative Modeling via Learning the Latent Space
- Modal Uncertainty Estimation via Discrete Latent Representation
- Pluralistic Image Completion
How to train your conditional GAN: An approach using geometrically structured latent manifolds
Conditional generative modeling typically requires capturing one-to-many mappings between the inputs and outputs. However, vanilla conditional GANs (cGANs) tend to ignore variations in the latent seed, which results in mode collapse. As a solution, recent works have moved towards comparatively expensive models for generating diverse outputs in a conditional setting. In this paper, we argue that the limited diversity of the vanilla cGAN is not due to a lack of capacity, but rather the result of suboptimal training schemes. We tackle this problem from a geometrical perspective and propose a novel training mechanism that increases both the diversity and the visual quality of the vanilla cGAN. The proposed solution does not demand architectural modifications and paves the way for more efficient architectures that target conditional generation in multi-modal spaces. We validate the efficacy of our model on a diverse set of tasks and show that the proposed solution is generic and effective across multiple datasets.
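The abstract does not spell out the paper's geometric training mechanism, so the sketch below is only a rough, non-authoritative illustration of the kind of diversity-promoting term used in this line of work (for example, the Mode Seeking GAN and Diversity-Sensitive cGAN papers listed above), not the authors' method. The `generator(condition, z)` call signature, `z_dim`, and the `lambda_ms` weight are illustrative assumptions.

```python
# Minimal sketch (PyTorch) of a mode-seeking diversity penalty for a cGAN generator,
# in the spirit of MSGAN / DSGAN. Assumed, not the method proposed in this paper.
import torch


def mode_seeking_penalty(generator, condition, z_dim, lambda_ms=1.0, eps=1e-5):
    """Penalty that is small when two latent codes map to visibly different images.

    Two latent codes are sampled for the same condition; encouraging a large
    ratio of image-space distance to latent-space distance discourages the
    generator from collapsing all latent codes onto a single output mode.
    """
    batch = condition.size(0)
    z1 = torch.randn(batch, z_dim, device=condition.device)
    z2 = torch.randn(batch, z_dim, device=condition.device)

    fake1 = generator(condition, z1)  # assumed signature: generator(condition, z)
    fake2 = generator(condition, z2)

    # Mean absolute distance in image space vs. latent space, per sample.
    d_img = torch.mean(torch.abs(fake1 - fake2), dim=[1, 2, 3])
    d_z = torch.mean(torch.abs(z1 - z2), dim=1)

    # Maximizing d_img / d_z is equivalent to minimizing its reciprocal.
    return lambda_ms * torch.mean(d_z / (d_img + eps))
```

In a hypothetical training loop, this term would simply be added to the usual adversarial generator loss, e.g. `g_loss = adv_loss + mode_seeking_penalty(G, labels, z_dim=128)`, so that the generator is rewarded for mapping distinct latent seeds to distinct outputs.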