Distilling Semantic Concept Embeddings from Contrastively Fine-Tuned Language Models

05/16/2023
by Na Li, et al.

Learning vectors that capture the meaning of concepts remains a fundamental challenge. Somewhat surprisingly, pre-trained language models have thus far enabled only modest improvements to the quality of such concept embeddings. Current strategies for using language models typically represent a concept by averaging the contextualised representations of its mentions in some corpus. This is potentially sub-optimal for at least two reasons. First, contextualised word vectors have an unusual geometry, which hampers downstream tasks. Second, concept embeddings should capture the semantic properties of concepts, whereas contextualised word vectors are also affected by other factors. To address these issues, we propose two contrastive learning strategies, based on the view that whenever two sentences reveal similar properties, the corresponding contextualised vectors should also be similar. One strategy is fully unsupervised, estimating the properties which are expressed in a sentence from the neighbourhood structure of the contextualised word embeddings. The second strategy instead relies on a distant supervision signal from ConceptNet. Our experimental results show that the resulting vectors substantially outperform existing concept embeddings in predicting the semantic properties of concepts, with the ConceptNet-based strategy achieving the best results. These findings are furthermore confirmed in a clustering task and in the downstream task of ontology completion.
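To make the contrastive view above concrete, the sketch below (not the authors' code) illustrates the two ingredients the abstract describes: representing a concept by averaging the contextualised vectors of its mentions, and a contrastive loss that pulls together mention vectors from sentences assumed to express the same property. The encoder name, the toy sentence pairs, and the use of an InfoNCE-style loss are placeholder assumptions, not details taken from the paper.

```python
# Minimal, illustrative sketch under the assumptions stated above.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder encoder, not the paper's choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)


def mention_vector(sentence: str, concept: str) -> torch.Tensor:
    """Average the contextualised token vectors of `concept` inside `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state[0]              # (seq_len, dim)
    concept_ids = tokenizer(concept, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for start in range(len(ids) - len(concept_ids) + 1):      # locate the mention
        if ids[start:start + len(concept_ids)] == concept_ids:
            return hidden[start:start + len(concept_ids)].mean(dim=0)
    raise ValueError(f"'{concept}' not found in sentence")


def info_nce(anchors: torch.Tensor, positives: torch.Tensor, tau: float = 0.05):
    """Each anchor should be most similar to its row-aligned positive."""
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    logits = anchors @ positives.t() / tau                    # (batch, batch)
    return F.cross_entropy(logits, torch.arange(anchors.size(0)))


# Toy positive pairs: both sentences in a pair are assumed to express the same
# property ("is sweet", "is sharp"). During contrastive fine-tuning this loss
# would be backpropagated into the encoder; here we only compute it.
anchors = torch.stack([
    mention_vector("A ripe banana tastes very sweet.", "banana"),
    mention_vector("The knife blade is extremely sharp.", "knife"),
])
positives = torch.stack([
    mention_vector("Honey is sweet and sticky.", "honey"),
    mention_vector("A razor has to stay sharp.", "razor"),
])
print(info_nce(anchors, positives).item())
```

In the paper's unsupervised variant, the pairing of sentences comes from the neighbourhood structure of the contextualised vectors, while the distantly supervised variant uses ConceptNet; the InfoNCE loss above is only a generic stand-in for whatever contrastive objective the authors use.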


