
Learning to Predict: A Fast Re-constructive Method to Generate Multimodal Embeddings

by Guillem Collell et al.
KU Leuven

Integrating visual and linguistic information into a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision. In this paper, we present a simple method to build multimodal representations by learning a language-to-vision mapping and using its output to build multimodal embeddings. In this sense, our method provides a cognitively plausible way of building representations, consistent with the inherently re-constructive and associative nature of human memory. Using seven benchmark concept similarity tests we show that the mapped vectors not only implicitly encode multimodal information, but also outperform strong unimodal baselines and state-of-the-art multimodal methods, thus exhibiting more "human-like" judgments---particularly in zero-shot settings.
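To make the idea concrete, here is a minimal, hypothetical sketch of the approach described above: fit a language-to-vision mapping on words that have paired image features, then form a multimodal embedding by concatenating a word's text vector with its predicted visual vector. The linear (ridge-regression) mapping, the dimensions, and the concatenation scheme are illustrative assumptions, not the authors' exact architecture or data.

```python
# Sketch (not the paper's implementation): learn a text -> vision mapping,
# then build multimodal embeddings from the mapped (predicted) visual vectors.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for pretrained embeddings:
#   X: text embeddings (n words x 300 dims), Y: visual features (n x 128 dims)
n, d_text, d_img = 1000, 300, 128
X = rng.standard_normal((n, d_text))
Y = rng.standard_normal((n, d_img))

# Ridge-regression mapping W from text space to vision space (closed form).
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d_text), X.T @ Y)  # (300 x 128)

def multimodal_embedding(text_vec: np.ndarray) -> np.ndarray:
    """Concatenate the text vector with its predicted visual vector.

    Works for any word that has a text embedding, including zero-shot
    words with no paired image data.
    """
    predicted_visual = text_vec @ W
    return np.concatenate([text_vec, predicted_visual])

# Example: embed a word that was never seen with an image.
zero_shot_word_vec = rng.standard_normal(d_text)
mm_vec = multimodal_embedding(zero_shot_word_vec)
print(mm_vec.shape)  # (428,) = 300 text dims + 128 predicted visual dims
```

Because the mapping only needs a word's text vector at inference time, the same procedure covers zero-shot words, which is consistent with the zero-shot evaluation highlighted in the abstract.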

