pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference

10/20/2018
by Mandar Joshi, et al.

Reasoning about implied relationships (e.g. paraphrastic, common sense, encyclopedic) between pairs of words is crucial for many cross-sentence inference problems. This paper proposes new methods for learning and using embeddings of word pairs that implicitly represent background knowledge about such relationships. Our pairwise embeddings are computed as a compositional function of each word's representation, which is learned by maximizing the pointwise mutual information (PMI) with the contexts in which the two words co-occur. We add these representations to the cross-sentence attention layer of existing inference models (e.g. BiDAF for QA, ESIM for NLI), instead of extending or replacing existing word embeddings. Experiments show a gain of 2.7 F1 points on SQuAD 2.0. Our representations also aid in better generalization, with gains of around 6-7 F1 points on adversarial SQuAD datasets and 8.8 points on the Glockner et al. (2018) NLI dataset.
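
To make the training setup concrete, below is a minimal sketch in PyTorch. It is an illustration, not the authors' implementation: the MLP composition over [x; y; x*y], the mean-pooled context encoder, the single negative context per positive example, and all names (Pair2VecSketch, pair_repr, context_repr) are assumptions made for this sketch. The SGNS-style negative-sampling loss is used as a standard stand-in whose optimum drives the pair-context dot product toward PMI.

# Minimal sketch (not the authors' code): a compositional word-pair encoder
# trained with an SGNS-style objective. With one negative sample per positive,
# the optimum pushes R(x, y) . C(c) toward the PMI of the pair (x, y) with
# context c (per the Levy & Goldberg analysis of negative sampling).

import torch
import torch.nn as nn
import torch.nn.functional as F

class Pair2VecSketch(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 300):
        super().__init__()
        self.arg_emb = nn.Embedding(vocab_size, dim)   # argument words x, y
        self.ctx_emb = nn.Embedding(vocab_size, dim)   # context words

        # Compositional pair representation R(x, y) = MLP([x; y; x * y]).
        self.compose = nn.Sequential(
            nn.Linear(3 * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def pair_repr(self, x_ids, y_ids):
        x, y = self.arg_emb(x_ids), self.arg_emb(y_ids)
        return self.compose(torch.cat([x, y, x * y], dim=-1))

    def context_repr(self, ctx_ids):
        # Mean of context-word vectors; a sequence encoder could be used instead.
        return self.ctx_emb(ctx_ids).mean(dim=1)

    def loss(self, x_ids, y_ids, pos_ctx, neg_ctx):
        # Score observed (pair, context) triples high and sampled ones low.
        r = self.pair_repr(x_ids, y_ids)                              # (batch, dim)
        pos = F.logsigmoid((r * self.context_repr(pos_ctx)).sum(-1))  # observed
        neg = F.logsigmoid(-(r * self.context_repr(neg_ctx)).sum(-1)) # sampled
        return -(pos + neg).mean()

# Toy usage on random ids: a batch of 32 pairs, contexts of 5 tokens each.
model = Pair2VecSketch(vocab_size=10_000, dim=64)
x = torch.randint(0, 10_000, (32,))
y = torch.randint(0, 10_000, (32,))
pos_ctx = torch.randint(0, 10_000, (32, 5))
neg_ctx = torch.randint(0, 10_000, (32, 5))
print(model.loss(x, y, pos_ctx, neg_ctx))

At inference time, the learned pair representation for each cross-sentence word pair would be fed into the attention layer of a model such as BiDAF or ESIM alongside its usual attention features, rather than extending or replacing the word embeddings, as the abstract describes.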



Related research

02/25/2021 · Spanish Biomedical and Clinical Language Embeddings
We computed both Word and Sub-word Embeddings using FastText. For Sub-wo...

02/12/2018 · Evaluating Compositionality in Sentence Embeddings
An important frontier in the quest for human-like AI is compositional se...

08/20/2022 · Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings
Contextualized word embeddings in language models have given much advanc...

03/07/2017 · Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features
The recent tremendous success of unsupervised word embeddings in a multi...

04/26/2022 · From Hyperbolic Geometry Back to Word Embeddings
We choose random points in the hyperbolic disc and claim that these poin...

06/09/2016 · Sentence Similarity Measures for Fine-Grained Estimation of Topical Relevance in Learner Essays
We investigate the task of assessing sentence-level prompt relevance in ...

04/02/2019 · Attentive Mimicking: Better Word Embeddings by Attending to Informative Contexts
Learning high-quality embeddings for rare words is a hard problem becaus...
