Language with Vision: a Study on Grounded Word and Sentence Embeddings

06/17/2022
by Hassan Shahmohammadi, et al.

Language grounding to vision is an active field of research aiming to enrich text-based representations of word meanings by leveraging perceptual knowledge from vision. Despite many attempts at language grounding, it is still unclear how to effectively inject visual knowledge into the word embeddings of a language in such a way that a proper balance of textual and visual knowledge is maintained. Some common concerns are the following. Is visual grounding beneficial for abstract words or is its contribution only limited to concrete words? What is the optimal way of bridging the gap between text and vision? How much do we gain by visually grounding textual embeddings? The present study addresses these questions by proposing a simple yet very effective grounding approach for pre-trained word embeddings. Our model aligns textual embeddings with vision while largely preserving the distributional statistics that characterize word use in text corpora. By applying a learned alignment, we are able to generate visually grounded embeddings for unseen words, including abstract words. A series of evaluations on word similarity benchmarks shows that visual grounding is beneficial not only for concrete words, but also for abstract words. We also show that our method for visual grounding offers advantages for contextualized embeddings, but only when these are trained on corpora of relatively modest size. Code and grounded embeddings for English are available at https://github.com/Hazel1994/Visually_Grounded_Word_Embeddings_2.
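The abstract describes learning an alignment that maps pre-trained textual embeddings toward visual space while preserving their distributional structure, and then applying that learned map to unseen (including abstract) words. As a rough illustration only, and not the authors' actual model (see their repository for that), the core idea can be sketched with a ridge-regularized linear alignment fitted on toy data; all arrays, dimensions, and the `ground` helper below are hypothetical stand-ins for real word embeddings and image features.

```python
import numpy as np

# Hypothetical toy data: text embeddings for 5 "seen" words (dim 4) and
# matching image-derived vectors (dim 3). A real system would use
# pre-trained word embeddings and features from a vision model.
rng = np.random.default_rng(0)
text_emb = rng.normal(size=(5, 4))
img_emb = rng.normal(size=(5, 3))

# Ridge-regularized least squares gives a linear alignment M mapping text
# space toward image space; the regularizer lam discourages large weights,
# keeping mapped vectors close to the original distributional structure.
lam = 0.1
M = np.linalg.solve(text_emb.T @ text_emb + lam * np.eye(4),
                    text_emb.T @ img_emb)

def ground(vec):
    # Grounded embedding: the original text vector concatenated with its
    # vision-aligned projection, so textual statistics are preserved.
    return np.concatenate([vec, vec @ M])

# Because the alignment is a learned map over the whole embedding space,
# it applies to any word vector, including unseen or abstract words.
unseen = rng.normal(size=4)
print(ground(unseen).shape)  # (7,)
```

The key design point mirrored here is that grounding is a transformation of the text space rather than a replacement of it: the original embedding survives intact alongside its visually aligned counterpart.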

Related research

06/30/2022
Visual grounding of abstract and concrete words: A response to Günther et al. (2020)
Current computational models capturing words' meaning mostly rely on tex...

11/22/2015
Visual Word2Vec (vis-w2v): Learning Visually Grounded Word Embeddings Using Abstract Scenes
We propose a model to learn visually grounded word embeddings (vis-w2v) ...

04/15/2021
Learning Zero-Shot Multifaceted Visually Grounded Word Embeddings via Multi-Task Training
Language grounding aims at linking the symbolic representation of langua...

09/08/2022
Visual Grounding of Inter-lingual Word-Embeddings
Visual grounding of Language aims at enriching textual representations o...

06/14/2023
World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Language Models
The ability to connect language units to their referents in the physical...

06/27/2018
Learning Visually-Grounded Semantics from Contrastive Adversarial Samples
We study the problem of grounding distributional representations of text...

10/30/2020
Domain-Specific Lexical Grounding in Noisy Visual-Textual Documents
Images can give us insights into the contextual meanings of words, but c...
