Frustratingly Easy Meta-Embedding -- Computing Meta-Embeddings by Averaging Source Word Embeddings

04/14/2018
by Joshua Coates, et al.

Creating accurate meta-embeddings from pre-trained source embeddings has received attention lately. Methods based on global and locally linear transformations, as well as concatenation, have been shown to produce accurate meta-embeddings. In this paper, we show that the arithmetic mean of two distinct word embedding sets yields a meta-embedding that is comparable to or better than those produced by more complex meta-embedding learning methods. This result seems counter-intuitive, given that the vector spaces of different source embeddings are not directly comparable and so, one would expect, cannot simply be averaged. We give insight into why averaging can nonetheless produce accurate meta-embeddings despite this incomparability of the source vector spaces.
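
The method is simple enough to state in a few lines of code: for each word, take the arithmetic mean of its source vectors. The NumPy sketch below is illustrative rather than the authors' reference implementation; zero-padding lower-dimensional sources and using a zero vector for words missing from a source follow the averaging recipe described here, while the per-source l2 normalization and all names (e.g. average_meta_embedding) are assumptions made for the example.

import numpy as np

def average_meta_embedding(sources, vocab=None):
    """Average several source embeddings into one meta-embedding.

    sources: list of dicts mapping word -> 1-D numpy array. Sources may
    differ in dimensionality; shorter vectors are zero-padded up to the
    largest source dimensionality. A word absent from a source
    contributes a zero vector.
    """
    if vocab is None:  # default to the union of the source vocabularies
        vocab = set().union(*(s.keys() for s in sources))
    dim = max(next(iter(s.values())).shape[0] for s in sources)

    meta = {}
    for word in vocab:
        acc = np.zeros(dim)
        for source in sources:
            if word in source:
                v = source[word]
                v = v / (np.linalg.norm(v) + 1e-12)  # l2-normalise each source (assumed preprocessing)
                acc[: v.shape[0]] += v               # zero-padding happens implicitly via the slice
        meta[word] = acc / len(sources)              # arithmetic mean over the sources
    return meta

# Toy usage with two hypothetical sources of different dimensionality:
src1 = {"cat": np.array([0.1, 0.9]), "dog": np.array([0.8, 0.2])}
src2 = {"cat": np.array([0.5, 0.5, 0.3])}
meta = average_meta_embedding([src1, src2])  # meta["cat"] is a 3-d vector

One practical appeal of averaging over concatenation is that the meta-embedding stays at the dimensionality of the largest source instead of growing with every added source.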

Related research

04/26/2022  Learning Meta Word Embeddings by Unsupervised Weighted Concatenation of Source Embeddings
Given multiple source word embeddings learnt using diverse algorithms an...

03/03/2020  Meta-Embeddings Based On Self-Attention
Creating meta-embeddings for better performance in language modelling ha...

05/19/2022  Gender Bias in Meta-Embeddings
Combining multiple source embeddings to create meta-embeddings is consid...

08/18/2015  Learning Meta-Embeddings by Using Ensembles of Embedding Sets
Word embeddings -- distributed representations of words -- in deep learn...

04/25/2022  A Survey on Word Meta-Embedding Learning
Meta-embedding (ME) learning is an emerging approach that attempts to le...

08/13/2018  Angular-Based Word Meta-Embedding Learning
Ensembling word embeddings to improve distributed word representations h...

06/04/2018  Absolute Orientation for Word Embedding Alignment
We propose a new technique to align word embeddings which are derived fr...
