Scoring Lexical Entailment with a Supervised Directional Similarity Network

05/23/2018
by Marek Rei, et al.

We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of general-purpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the state-of-the-art on the HyperLex dataset by approximately 25% according to Spearman correlation.
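The core idea in the abstract, learning separate task-specific transformations on top of pretrained embeddings so that a symmetric similarity measure becomes directional, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the single-hidden-layer transforms, dimensions, and parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # embedding dimension (illustrative)

def mlp_params(dim, rng):
    # parameters of a one-layer transformation (assumed form)
    return {"W": rng.normal(0, 0.1, (dim, dim)), "b": np.zeros(dim)}

def transform(x, p):
    # map a general-purpose embedding into a task-specific space
    return np.tanh(x @ p["W"] + p["b"])

def sdsn_score(x, y, px, py):
    # Separate transformations for the two argument slots make the
    # similarity directional: score(x, y) != score(y, x) in general,
    # which is what modelling entailment (hyponym -> hypernym) needs.
    xt, yt = transform(x, px), transform(y, py)
    return float(xt @ yt / (np.linalg.norm(xt) * np.linalg.norm(yt) + 1e-9))
```

In a supervised setting, the transformation parameters would be fitted by regressing `sdsn_score` against graded entailment scores (e.g. HyperLex annotations) for a subset of word pairs, then applied to unseen vocabulary.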


Related research

- Tailoring Word Embeddings for Bilexical Predictions: An Experimental Comparison (12/22/2014): We investigate the problem of inducing word embeddings that are tailored...
- Specialising Word Vectors for Lexical Entailment (10/17/2017): We present LEAR (Lexical Entailment Attract-Repel), a novel post-process...
- Tiered Clustering to Improve Lexical Entailment (12/02/2014): Many tasks in Natural Language Processing involve recognizing lexical en...
- Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment (04/24/2018): Supervised distributional methods are applied successfully in lexical en...
- Experiments with Three Approaches to Recognizing Lexical Entailment (01/31/2014): Inference in natural language often involves recognizing lexical entailm...
- Language Models Are Poor Learners of Directional Inference (10/10/2022): We examine LMs' competence of directional predicate entailments by super...
- Towards Universal Paraphrastic Sentence Embeddings (11/25/2015): We consider the problem of learning general-purpose, paraphrastic senten...
