Visual Grounding Strategies for Text-Only Natural Language Processing

03/25/2021
by Damien Sileo, et al.

Visual grounding is a promising path toward more robust and accurate Natural Language Processing (NLP) models. Many multimodal extensions of BERT (e.g., VideoBERT, LXMERT, VL-BERT) enable joint modeling of text and images, leading to state-of-the-art results on multimodal tasks such as Visual Question Answering. Here, we leverage multimodal modeling for purely textual tasks (language modeling and classification), with the expectation that multimodal pretraining provides a grounding that can improve text processing accuracy. We propose possible strategies to this end. The first type of strategy, referred to as transferred grounding, consists of applying multimodal models to text-only tasks, using a placeholder to replace the image input. The second, which we call associative grounding, harnesses image retrieval to match texts with related images during both pretraining and text-only downstream tasks. We draw further distinctions within both strategies and then compare them according to their impact on language modeling and commonsense-related downstream tasks, showing improvements over text-only baselines.
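
The following is a minimal sketch of the transferred-grounding idea described above: a multimodal encoder is run on a text-only input by substituting a placeholder for the visual features. It assumes HuggingFace's LXMERT implementation and the "unc-nlp/lxmert-base-uncased" checkpoint; the all-zero placeholder, the region-feature shapes, and the example sentence are illustrative choices, not the paper's exact setup.

```python
# Transferred grounding (sketch): feed a zero placeholder in place of image
# features so a multimodal model can be applied to a text-only task.
# Assumption: LXMERT's Faster R-CNN feature format (2048-d region features
# plus 4-d box coordinates); shapes and the zero placeholder are illustrative.
import torch
from transformers import LxmertModel, LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

text = "A child is playing with a red ball in the park."
inputs = tokenizer(text, return_tensors="pt")

# Placeholder "image": a single dummy region with all-zero features and box.
# Associative grounding would instead insert the features of an image
# retrieved as relevant to the input text.
visual_feats = torch.zeros(1, 1, 2048)  # (batch, regions, feature_dim)
visual_pos = torch.zeros(1, 1, 4)       # (batch, regions, box coordinates)

with torch.no_grad():
    outputs = model(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        visual_feats=visual_feats,
        visual_pos=visual_pos,
    )

# Language-side representations usable for a text-only downstream task.
text_repr = outputs.language_output  # (batch, seq_len, hidden_size)
print(text_repr.shape)
```

The language-side outputs can then be pooled and fed to a standard classification head or language-modeling head, exactly as one would do with a text-only BERT encoder.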

Related research

Does Vision-and-Language Pretraining Improve Lexical Grounding? (09/21/2021)
Linguistic representations derived from text alone have been criticized ...

Multimodal Grounding for Language Processing (06/17/2018)
This survey discusses how recent developments in multimodal processing f...

Are Current Decoding Strategies Capable of Facing the Challenges of Visual Dialogue? (10/24/2022)
Decoding strategies play a crucial role in natural language generation s...

Patton: Language Model Pretraining on Text-Rich Networks (05/20/2023)
A real-world text corpus sometimes comprises not only text documents but...

Differentiable Retrieval Augmentation via Generative Language Modeling for E-commerce Query Intent Classification (08/18/2023)
Retrieval augmentation, which enhances downstream models by a knowledge ...

What A Situated Language-Using Agent Must be Able to Do: A Top-Down Analysis (02/16/2023)
Even in our increasingly text-intensive times, the primary site of langu...

ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks (08/06/2019)
We present ViLBERT (short for Vision-and-Language BERT), a model for lea...