Generative Models of Visually Grounded Imagination

05/30/2017
by Ramakrishna Vedantam et al.

It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before. We call the ability to create images of novel semantic concepts visually grounded imagination. In this paper, we show how we can modify variational auto-encoders to perform this task. Our method uses a novel training objective, and a novel product-of-experts inference network, which can handle partially specified (abstract) concepts in a principled and efficient way. We also propose a set of easy-to-compute evaluation metrics that capture our intuitive notions of what it means to have good visual imagination, namely correctness, coverage, and compositionality (the 3 C's). Finally, we perform a detailed comparison of our method with two existing joint image-attribute VAE methods (the JMVAE method of Suzuki et al. and the BiVCCA method of Wang et al.) by applying them to two datasets: the MNIST-with-attributes dataset (which we introduce here) and the CelebA dataset.
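The product-of-experts inference network mentioned in the abstract has a convenient closed form when each expert is a diagonal Gaussian: the fused posterior's precision is the sum of the experts' precisions, and its mean is the precision-weighted average of their means. The sketch below is a minimal NumPy illustration of that formula, not the paper's code; the function name `poe_gaussian` and the convention of always including the prior as a "universal expert" are assumptions for illustration. A partially specified concept simply omits the experts for the missing modalities.

```python
import numpy as np

def poe_gaussian(mus, logvars):
    """Fuse diagonal-Gaussian experts via a product of experts.

    A product of Gaussians N(mu_i, sigma_i^2) is itself Gaussian:
    its precision is the sum of the experts' precisions, and its
    mean is the precision-weighted average of the experts' means.
    """
    precisions = [np.exp(-lv) for lv in logvars]  # 1 / sigma_i^2
    total_precision = np.sum(precisions, axis=0)
    var = 1.0 / total_precision
    mu = var * np.sum([p * m for p, m in zip(precisions, mus)], axis=0)
    return mu, np.log(var)

# Hypothetical usage: fuse an image expert and an attribute expert
# with the N(0, I) prior; if a modality is unobserved, its expert is
# simply left out of the lists and the posterior falls back toward
# the prior.
d = 4
mu_img, lv_img = np.random.randn(d), np.zeros(d)
mu_att, lv_att = np.random.randn(d), np.zeros(d)
mu_pri, lv_pri = np.zeros(d), np.zeros(d)  # log-variance 0 => unit variance

mu, logvar = poe_gaussian([mu_pri, mu_img, mu_att],
                          [lv_pri, lv_img, lv_att])
```

One appeal of this design is that the same inference network handles any subset of observed attributes at test time: dropping an expert from the product is equivalent to marginalizing over the missing input, so abstract (partially specified) concepts need no special-case architecture.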


