Locality Preserving Loss to Align Vector Spaces

04/07/2020
by Ashwinkumar Ganesan, et al.

We present a locality preserving loss (LPL) that improves the alignment between vector space representations (i.e., word or sentence embeddings) while separating (increasing the distance between) uncorrelated representations, compared to the standard method that minimizes only the mean squared error (MSE). The locality preserving loss optimizes the projection by maintaining, in the target domain, the local neighborhood of embeddings found in the source domain. This reduces the overall size of the dataset required to train the model. We argue that vector space alignment (with MSE and LPL losses) acts as a regularizer in certain language-based classification tasks, leading to better accuracy than the baseline, especially when the training set is small. We validate the effectiveness of LPL on a cross-lingual word alignment task, a natural language inference task, and a multi-lingual inference task.
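To make the idea concrete, below is a minimal numpy sketch of a combined alignment objective: an MSE term that pulls projected source embeddings toward their targets, plus a locality term that penalizes projected embeddings for drifting apart from their source-space nearest neighbors. The k-nearest-neighbor graph construction, the `lam` weighting, and the exact form of the locality term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def knn_graph(X, k):
    """Binary adjacency matrix: A[i, j] = 1 if j is among the
    k nearest neighbors of i in the source embedding space."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-neighbors
    idx = np.argsort(d2, axis=1)[:, :k]
    A = np.zeros((len(X), len(X)))
    for i, nbrs in enumerate(idx):
        A[i, nbrs] = 1.0
    return A

def alignment_loss(X_src, Y_tgt, W, k=3, lam=0.1):
    """MSE alignment plus a locality-preserving penalty.

    X_src: (n, d) source embeddings
    Y_tgt: (n, d) target embeddings (row-aligned with X_src)
    W:     (d, d) linear projection from source to target space
    """
    proj = X_src @ W
    mse = ((proj - Y_tgt) ** 2).mean()
    # Neighborhoods are computed in the SOURCE space, then enforced
    # on the PROJECTED embeddings, so local structure carries over.
    A = knn_graph(X_src, k)
    diffs = proj[:, None, :] - proj[None, :, :]
    lpl = (A[..., None] * diffs ** 2).sum() / A.sum()
    return mse + lam * lpl
```

In practice the projection `W` would be trained by gradient descent on this loss; the sketch only evaluates it. Note that the neighborhood graph depends only on the (fixed) source embeddings, so it can be precomputed once before training.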


research
03/08/2019

Context-Aware Cross-Lingual Mapping

Cross-lingual word vectors are typically obtained by fitting an orthogon...
research
09/15/2018

CLUSE: Cross-Lingual Unsupervised Sense Embeddings

This paper proposes a modularized sense induction and representation lea...
research
10/09/2022

Cross-Align: Modeling Deep Cross-lingual Interactions for Word Alignment

Word alignment which aims to extract lexicon translation equivalents bet...
research
06/04/2019

Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization

Cross-lingual word embeddings (CLWE) underlie many multilingual natural ...
