
Learning to Predict: A Fast Re-constructive Method to Generate Multimodal Embeddings

03/25/2017
by Guillem Collell, et al.
KU Leuven

Integrating visual and linguistic information into a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision. In this paper, we present a simple method to build multimodal representations by learning a language-to-vision mapping and using its output to build multimodal embeddings. In this sense, our method provides a cognitively plausible way of building representations, consistent with the inherently re-constructive and associative nature of human memory. Using seven benchmark concept similarity tests we show that the mapped vectors not only implicitly encode multimodal information, but also outperform strong unimodal baselines and state-of-the-art multimodal methods, thus exhibiting more "human-like" judgments, particularly in zero-shot settings.
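The abstract describes a two-step recipe: learn a mapping from language embeddings to visual feature vectors, then use the mapped (predicted) visual vectors alongside the original language vectors as multimodal embeddings. The snippet below is a minimal, self-contained sketch of that idea, not the paper's actual implementation: it uses random toy matrices in place of pretrained word embeddings and CNN image features, a ridge-regularized linear map as one simple choice of mapping (the paper may use a different mapping function), and concatenation of normalized vectors as the fusion step.

```python
import numpy as np

# Toy stand-ins: in the paper's setting these would be pretrained word
# embeddings (language) and image-derived features (vision) for the same concepts.
rng = np.random.default_rng(0)
n_concepts, d_lang, d_vis = 500, 300, 128
L = rng.normal(size=(n_concepts, d_lang))   # language embeddings, one row per concept
V = rng.normal(size=(n_concepts, d_vis))    # visual feature vectors, aligned by row

# Step 1: learn a language-to-vision mapping. A closed-form ridge regression
# is used here purely for illustration.
lam = 1.0
W = np.linalg.solve(L.T @ L + lam * np.eye(d_lang), L.T @ V)  # shape (d_lang, d_vis)

def predict_visual(l_vec):
    """Map a language embedding into the visual space (also works for words
    with no images, which is what enables zero-shot use)."""
    return l_vec @ W

def multimodal(l_vec):
    """Step 2: build a multimodal embedding by concatenating the L2-normalized
    language vector with its predicted visual vector."""
    v_hat = predict_visual(l_vec)
    l_n = l_vec / np.linalg.norm(l_vec)
    v_n = v_hat / np.linalg.norm(v_hat)
    return np.concatenate([l_n, v_n])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Concept-similarity-style evaluation: compare two concepts' multimodal vectors.
sim = cosine(multimodal(L[0]), multimodal(L[1]))
print(f"multimodal similarity between concepts 0 and 1: {sim:.3f}")
```

In an evaluation like the one the abstract describes, the cosine similarities of such multimodal vectors would be correlated with human similarity ratings on benchmark word pairs.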

Related research:
- Multimodal Grounding for Language Processing (06/17/2018)
- From Modal to Multimodal Ambiguities: a Classification Approach (04/04/2017)
- Multimodal Intelligence: Representation Learning, Information Fusion, and Applications (11/10/2019)
- Probing Multimodal Embeddings for Linguistic Properties: the Visual-Semantic Case (02/22/2021)
- Strong and Simple Baselines for Multimodal Utterance Embeddings (05/14/2019)
- Odor Descriptor Understanding through Prompting (05/07/2022)