Probing Contextualized Sentence Representations with Visual Awareness

11/07/2019
by   Zhuosheng Zhang, et al.

We present a universal framework for modeling contextualized sentence representations with visual awareness, motivated by the scarcity of manually annotated multimodal parallel data. For each sentence, we first retrieve a diverse set of images from a shared cross-modal embedding space, which is pre-trained on a large-scale collection of text-image pairs. The texts and images are then encoded by a Transformer encoder and a convolutional neural network, respectively. The two sequences of representations are fused by a simple and effective attention layer. The architecture can be easily applied to text-only natural language processing tasks without manually annotating multimodal parallel corpora. We apply the proposed method to three tasks, namely neural machine translation, natural language inference, and sequence labeling, and experimental results verify its effectiveness.
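The fusion step described above can be sketched as single-head dot-product attention, where each token representation attends over the retrieved image features and absorbs a weighted context vector. This is a minimal NumPy illustration under assumed shapes and a residual-style fusion; the paper's actual layer, head count, and projection matrices may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(text, images):
    """Fuse token representations with retrieved image features.

    text:   (n_tokens, d) Transformer encoder outputs (assumed shape)
    images: (n_images, d) CNN image features projected to the same
            dimension (an assumption for this sketch)
    Returns (n_tokens, d): tokens enriched with visual context.
    """
    d = text.shape[-1]
    scores = text @ images.T / np.sqrt(d)   # (n_tokens, n_images)
    weights = softmax(scores, axis=-1)      # each token attends over images
    context = weights @ images              # (n_tokens, d) visual context
    return text + context                   # residual-style fusion

rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 16))   # 5 tokens, hidden size 16
imgs = rng.standard_normal((3, 16))     # 3 retrieved images per sentence
fused = attention_fuse(tokens, imgs)    # same shape as the text sequence
```

Because the output keeps the shape of the text sequence, the fused representations can be dropped into any downstream text-only model (translation decoder, inference classifier, or tagger) without architectural changes.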

