Counter-fitting Word Vectors to Linguistic Constraints

03/02/2016
by Nikola Mrkšić et al.

In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors' capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to new state-of-the-art performance on the SimLex-999 dataset. We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.
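The core idea described above can be sketched as an iterative update with three forces: pull synonym pairs together, push antonym pairs apart, and regularise every word back toward its pre-trained position so the rest of the space is preserved. The sketch below is an illustrative assumption, not the authors' exact procedure (the paper optimises hinge losses over cosine distances with SGD); the function name, Euclidean nudges, and hyperparameters here are all made up for demonstration.

```python
import numpy as np

def counter_fit(vectors, synonyms, antonyms, iters=10, lr=0.05, reg=0.1):
    """Minimal counter-fitting sketch (hypothetical update rules).

    vectors  : dict mapping word -> np.ndarray (left unmodified)
    synonyms : list of (word, word) pairs to attract
    antonyms : list of (word, word) pairs to repel
    Returns a new dict of adjusted vectors.
    """
    orig = {w: v.copy() for w, v in vectors.items()}
    vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iters):
        for a, b in synonyms:
            if a in vecs and b in vecs:
                diff = vecs[b] - vecs[a]
                vecs[a] += lr * diff   # synonym attract: move a toward b
                vecs[b] -= lr * diff   # and b toward a
        for a, b in antonyms:
            if a in vecs and b in vecs:
                diff = vecs[b] - vecs[a]
                vecs[a] -= lr * diff   # antonym repel: move a away from b
                vecs[b] += lr * diff   # and b away from a
        for w in vecs:
            # vector space preservation: pull each word back toward
            # its pre-trained position so unrelated words stay put
            vecs[w] += reg * (orig[w] - vecs[w])
    return vecs
```

On toy vectors, this raises the cosine similarity of constrained synonym pairs and lowers it for antonym pairs, while the preservation term keeps the adjusted space close to the original one.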

Related research

- Desiderata for Vector-Space Word Representations (08/06/2016): "A plethora of vector-space representations for words is currently availa..."
- Extrofitting: Enriching Word Representation and its Vector Space with Semantic Lexicons (04/21/2018): "We propose post-processing method for enriching not only word representa..."
- Morph-fitting: Fine-Tuning Word Vector Spaces with Simple Language-Specific Rules (06/01/2017): "Morphologically rich languages accentuate two properties of distribution..."
- Unsupervised Post-processing of Word Vectors via Conceptor Negation (11/17/2018): "Word vectors are at the core of many natural language processing tasks. ..."
- Fuzzy paraphrases in learning word representations with a lexicon (11/02/2016): "A synonym of a polysemous word is usually only the paraphrase of one sen..."
- Multidirectional Associative Optimization of Function-Specific Word Representations (05/11/2020): "We present a neural framework for learning associations between interrel..."
- Towards Understanding Linear Word Analogies (10/11/2018): "A surprising property of word vectors is that vector algebra can often b..."
