Learning Taxonomies of Concepts and not Words using Contextualized Word Representations: A Position Paper

01/31/2019
by Lukas Schmelzeisen, et al.

Taxonomies are semantic hierarchies of concepts. One limitation of current taxonomy learning systems is that they define concepts as single words. This position paper argues that contextualized word representations, which recently achieved state-of-the-art results on many competitive NLP tasks, are a promising method to address this limitation. We outline a novel approach for taxonomy learning that (1) defines concepts as synsets, (2) learns density-based approximations of contextualized word representations, and (3) can measure similarity and hypernymy among them.
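
To make the outlined approach concrete, the following is a minimal sketch of what steps (2) and (3) could look like: fit one density per synset over the contextualized embeddings of its occurrences, then score similarity and hypernymy between densities. Everything here is an illustrative assumption rather than the authors' published method: the names (SynsetDensity, kl_divergence, similarity, hypernymy_score) are hypothetical, the densities are diagonal Gaussians, and hypernymy is scored by a KL-divergence asymmetry heuristic (a broad hypernym density should "cover" its narrower hyponyms). Contextualized embeddings (e.g., BERT hidden states for each mention of a synset's lemmas) are assumed to be extracted already.

```python
# Illustrative sketch only: density-based synset representations built from
# contextualized embeddings, with KL-divergence-based similarity and
# hypernymy scores. The diagonal-Gaussian fit and the KL asymmetry
# heuristic are assumptions of this sketch, not the authors' method.
import numpy as np


class SynsetDensity:
    """Diagonal Gaussian fit to the contextualized embeddings of one synset."""

    def __init__(self, embeddings: np.ndarray, eps: float = 1e-6):
        # embeddings: (n_occurrences, dim) contextualized vectors, e.g.
        # BERT hidden states for each mention of the synset's lemmas.
        self.mean = embeddings.mean(axis=0)
        self.var = embeddings.var(axis=0) + eps  # diagonal covariance


def kl_divergence(p: SynsetDensity, q: SynsetDensity) -> float:
    """Closed-form KL(p || q) between two diagonal Gaussians."""
    return 0.5 * float(np.sum(
        np.log(q.var / p.var) + (p.var + (p.mean - q.mean) ** 2) / q.var - 1.0
    ))


def similarity(a: SynsetDensity, b: SynsetDensity) -> float:
    """Symmetric score: higher (closer to zero) when the densities overlap more."""
    return -0.5 * (kl_divergence(a, b) + kl_divergence(b, a))


def hypernymy_score(hypo: SynsetDensity, hyper: SynsetDensity) -> float:
    """Asymmetry heuristic: a broad hypernym density should 'cover' its
    hyponym, making KL(hypo || hyper) small relative to the reverse."""
    return kl_divergence(hyper, hypo) - kl_divergence(hypo, hyper)


if __name__ == "__main__":
    # Toy demo with synthetic "embeddings": a broad concept vs. a narrow one.
    rng = np.random.default_rng(0)
    animal = SynsetDensity(rng.normal(0.0, 2.0, size=(200, 64)))  # broad
    dog = SynsetDensity(rng.normal(0.0, 1.0, size=(200, 64)))     # narrow
    print("similarity:", similarity(dog, animal))
    print("hypernymy(dog -> animal):", hypernymy_score(dog, animal))  # positive
```

The choice of diagonal Gaussians keeps the density fit and the KL divergence closed-form and cheap; a real system might instead use full covariances, mixtures to capture polysemy within a synset, or a different inclusion measure for hypernymy.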


Related research

08/02/2016 · Semantic Representations of Word Senses and Concepts
Representing the semantics of linguistic items in a machine-interpretabl...

01/21/2022 · Taxonomy Enrichment with Text and Graph Vector Representations
Knowledge graphs such as DBpedia, Freebase or Wikidata always contain a ...

06/28/2021 · Word2Box: Learning Word Representation Using Box Embeddings
Learning vector representations for words is one of the most fundamental...

08/05/2016 · De-Conflated Semantic Representations
One major deficiency of most semantic representation techniques is that ...

02/01/2020 · Concept Embedding for Information Retrieval
Concepts are used to solve the term-mismatch problem. However, we need a...

04/12/2020 · Bayesian Hierarchical Words Representation Learning
This paper presents the Bayesian Hierarchical Words Representation (BHWR...

11/20/2012 · A New Similarity Measure for Taxonomy Based on Edge Counting
This paper introduces a new similarity measure based on edge counting in...
