Unsupervised Hypernym Detection by Distributional Inclusion Vector Embedding

10/02/2017
by Haw-Shiuan Chang, et al.

Modeling hypernymy, such as poodle is-a dog, is an important generalization aid for many NLP tasks, including entailment, relation extraction, and question answering. Supervised learning from labeled hypernym sources, such as WordNet, limits the coverage of these models; this limitation can be addressed by learning hypernyms from unlabeled text. Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy. This paper introduces distributional inclusion vector embedding (DIVE), a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings, learned by modeling the diversity of word contexts with specialized negative sampling. In an experimental evaluation more comprehensive than any of which we are aware - covering 11 datasets with multiple existing and newly proposed scoring metrics - we find that our method can provide up to double or triple the precision of previous unsupervised methods, and sometimes outperforms previous semi-supervised methods, yielding many new state-of-the-art results.
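The intuition behind DIVE is the distributional inclusion hypothesis: the contexts of a specific term (poodle) should be largely a subset of the contexts of its hypernym (dog), so in a non-negative embedding space the hyponym's feature mass should be covered by the hypernym's. The sketch below illustrates that idea with a simple inclusion ratio over toy non-negative vectors; it is an illustration of the hypothesis under assumed example data, not the paper's DIVE training procedure or its actual scoring functions.

```python
import numpy as np

def inclusion_score(hypo, hyper):
    """Fraction of the hyponym's non-negative feature mass that is
    covered by the hypernym vector; 1.0 indicates full inclusion."""
    hypo = np.asarray(hypo, dtype=float)
    hyper = np.asarray(hyper, dtype=float)
    return np.minimum(hypo, hyper).sum() / hypo.sum()

# Toy non-negative "embeddings" (hypothetical values for illustration):
# a specific term should be covered by its hypernym, but not vice versa.
poodle = np.array([1.0, 2.0, 0.0, 0.5])
dog    = np.array([2.0, 3.0, 1.0, 1.0])

print(inclusion_score(poodle, dog))  # 1.0 -> poodle's contexts included in dog's
print(inclusion_score(dog, poodle))  # 0.5 -> dog not included in poodle
```

The asymmetry of the score is what makes it useful for hypernym *direction* detection, not just relatedness: a symmetric similarity such as cosine would give the same value for both orderings.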


