Conditional Generative Modeling via Learning the Latent Space
Although deep learning has achieved impressive results on several machine learning tasks, most models are deterministic at inference, limiting their application to single-modal settings. We propose a novel general-purpose framework for conditional generation in multimodal spaces that uses latent variables to model generalizable learning patterns while minimizing a family of regression cost functions. At inference, the latent variables are optimized to find optimal solutions corresponding to multiple output modes. Compared to existing generative solutions in multimodal spaces, our approach demonstrates faster and more stable convergence, and can learn better representations for downstream tasks. Importantly, it provides a simple, generic model that can beat highly engineered pipelines tailored with domain expertise on a variety of tasks, while generating diverse outputs. Our code will be released.
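The abstract does not spell out the training objective or the inference-time optimization, so the following PyTorch sketch is only one plausible reading of the idea: a generator conditioned on both the input and a latent code, trained with a "best-of-many" regression loss (a stand-in for the paper's "family of regression cost functions"), plus an inference loop that refines several randomly initialized latents by gradient descent on a learned plausibility score. The scorer network and all names here (ConditionalGenerator, train_step, infer_modes) are assumptions of this sketch, not taken from the paper.

```python
# Minimal sketch, not the authors' implementation; all names are hypothetical.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Maps a condition x and a latent code z to a prediction y."""
    def __init__(self, x_dim, z_dim, y_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, y_dim),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1))

def train_step(gen, opt, x, y, z_dim, n_samples=10):
    """'Best-of-many' regression: only the latent sample whose output lands
    closest to the target receives gradient, so different z values can
    specialize to different output modes."""
    z = torch.randn(n_samples, x.size(0), z_dim)
    preds = torch.stack([gen(x, z_i) for z_i in z])        # (S, B, y_dim)
    losses = ((preds - y.unsqueeze(0)) ** 2).mean(dim=-1)  # (S, B)
    loss = losses.min(dim=0).values.mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def infer_modes(gen, scorer, x, z_dim, n_init=8, steps=200, lr=0.05):
    """At inference there is no target, so each of n_init random latents is
    refined by gradient descent on a learned score; distinct starting points
    can converge to distinct output modes."""
    for p in list(gen.parameters()) + list(scorer.parameters()):
        p.requires_grad_(False)                 # only z is optimized here
    xr = x.repeat(n_init, 1)                    # (n_init * B, x_dim)
    z = torch.randn(n_init * x.size(0), z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        y_hat = gen(xr, z)
        score = scorer(torch.cat([xr, y_hat], dim=-1)).mean()  # lower = better
        opt.zero_grad(); score.backward(); opt.step()
    # One candidate solution per latent initialization: (n_init, B, y_dim).
    return gen(xr, z.detach()).view(n_init, x.size(0), -1)
```

Because each restart descends independently, the n_init candidates can settle in different basins of the learned landscape, which is how a deterministic regressor can still return several distinct outputs for one condition.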