Unsupervised Attention-based Sentence-Level Meta-Embeddings from Contextualised Language Models

04/16/2022
by Keigo Takahashi, et al.

A variety of contextualised language models have been proposed in the NLP community, trained on diverse corpora to produce numerous Neural Language Models (NLMs). However, different NLMs report different levels of performance in downstream NLP applications when used as text representations. We propose a sentence-level meta-embedding learning method that takes independently trained contextualised word embedding models and learns a sentence embedding that preserves the complementary strengths of the input source NLMs. Our proposed method is unsupervised and is not tied to a particular downstream task, which makes the learnt meta-embeddings applicable, in principle, to any task that requires sentence representations. Specifically, we first project the token-level embeddings obtained from the individual NLMs into a common space and learn attention weights that indicate the contribution of each source embedding towards the token-level meta-embeddings. Next, we apply mean and max pooling to produce sentence-level meta-embeddings from the token-level meta-embeddings. Experimental results on semantic textual similarity benchmarks show that our proposed unsupervised sentence-level meta-embedding method outperforms previously proposed sentence-level meta-embedding methods as well as a supervised baseline.
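To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of attention-based sentence-level meta-embedding: per-source projections into a shared space, softmax attention over the sources at each token, and mean/max pooling over tokens. The class name, the single-vector scoring function, and all dimensions are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Illustrative sketch only; names, dimensions, and the scoring function
# are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class SentenceMetaEmbedder(nn.Module):
    def __init__(self, source_dims, meta_dim=768):
        super().__init__()
        # One linear projection per source NLM into a shared meta-embedding space.
        self.projections = nn.ModuleList(
            [nn.Linear(d, meta_dim) for d in source_dims]
        )
        # A learned scoring vector that assigns each projected source
        # embedding an (unnormalised) attention score.
        self.score = nn.Linear(meta_dim, 1, bias=False)

    def forward(self, source_embeddings):
        # source_embeddings: list of tensors, one per source NLM,
        # each of shape (batch, seq_len, source_dim_i).
        projected = torch.stack(
            [proj(e) for proj, e in zip(self.projections, source_embeddings)],
            dim=2,
        )  # (batch, seq_len, num_sources, meta_dim)
        # Softmax over the source axis yields per-token attention weights
        # indicating each source's contribution to the token meta-embedding.
        weights = torch.softmax(self.score(projected), dim=2)
        token_meta = (weights * projected).sum(dim=2)  # (batch, seq_len, meta_dim)
        # Mean and max pooling over tokens, concatenated, give the
        # sentence-level meta-embedding.
        mean_pooled = token_meta.mean(dim=1)
        max_pooled = token_meta.max(dim=1).values
        return torch.cat([mean_pooled, max_pooled], dim=-1)

# Usage with two hypothetical source NLMs of hidden sizes 768 and 1024:
model = SentenceMetaEmbedder(source_dims=[768, 1024])
bert_out = torch.randn(4, 12, 768)      # e.g. BERT token embeddings
roberta_out = torch.randn(4, 12, 1024)  # e.g. RoBERTa token embeddings
sentence_emb = model([bert_out, roberta_out])  # shape (4, 2 * 768)
```

Because the attention weights are normalised over sources per token, each token's meta-embedding is a convex combination of the projected source embeddings, which lets different NLMs dominate for different tokens.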


Related research

11/09/2019
Sentence Meta-Embeddings for Unsupervised Semantic Textual Similarity
We address the task of unsupervised Semantic Textual Similarity (STS) by...

09/19/2017
Think Globally, Embed Locally --- Locally Linear Meta-embedding of Words
Distributed word embeddings have shown superior performances in numerous...

04/26/2022
Learning Meta Word Embeddings by Unsupervised Weighted Concatenation of Source Embeddings
Given multiple source word embeddings learnt using diverse algorithms an...

06/16/2023
Meta-Personalizing Vision-Language Models to Find Named Instances in Video
Large-scale vision-language models (VLM) have shown impressive results f...

10/14/2022
Holistic Sentence Embeddings for Better Out-of-Distribution Detection
Detecting out-of-distribution (OOD) instances is significant for the saf...

01/23/2021
Debiasing Pre-trained Contextualised Embeddings
In comparison to the numerous debiasing methods proposed for the static ...

03/14/2023
Finding the Needle in a Haystack: Unsupervised Rationale Extraction from Long Text Classifiers
Long-sequence transformers are designed to improve the representation of...
