Robust Cross-lingual Embeddings from Parallel Sentences

12/28/2019
by Ali Sabet, et al.

Recent advances in cross-lingual word embeddings have primarily relied on mapping-based methods, which project pretrained word embeddings from different languages into a shared space through a linear transformation. However, these approaches assume that word embedding spaces are isomorphic across languages, an assumption that has been shown not to hold in practice (Søgaard et al., 2018) and that fundamentally limits their performance. This motivates investigating joint learning methods, which can overcome this impediment by simultaneously learning embeddings across languages via a cross-lingual term in the training objective. Given the abundance of parallel data available (Tiedemann, 2012), we propose a bilingual extension of the CBOW method which leverages sentence-aligned corpora to obtain robust cross-lingual word and sentence representations. Our approach significantly outperforms all other approaches on cross-lingual sentence retrieval, and on word translation it convincingly outperforms mapping methods while maintaining parity with jointly trained methods. It also achieves parity with a deep RNN method on a zero-shot cross-lingual document classification task, while requiring far fewer computational resources for training and inference. As an additional advantage, our bilingual method also improves the quality of monolingual word vectors despite training on much smaller datasets. We make our code and models publicly available.
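The abstract does not spell out the training objective, but the idea of joint learning via a cross-lingual term can be illustrated with a toy sketch: a standard CBOW loss that predicts a word from its monolingual context, plus a second term that predicts the same word from the bag of words of the aligned sentence in the other language. The names (`BiCBOW`, `step`) and the exact form of the cross-lingual term are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class BiCBOW:
    """Toy bilingual CBOW (hypothetical simplification of the paper's
    objective): a monolingual CBOW term plus a cross-lingual term that
    predicts each target word from the aligned L2 sentence."""

    def __init__(self, vocab_l1, vocab_l2, dim=16):
        self.W1 = rng.normal(0, 0.1, (vocab_l1, dim))  # L1 input embeddings
        self.W2 = rng.normal(0, 0.1, (vocab_l2, dim))  # L2 input embeddings
        self.O1 = rng.normal(0, 0.1, (vocab_l1, dim))  # L1 output embeddings

    def step(self, context_l1, target_l1, sent_l2, lr=0.1):
        """One SGD step on a single (context, target, aligned sentence) example."""
        # Monolingual CBOW: mean of L1 context vectors predicts the target.
        h_mono = self.W1[context_l1].mean(axis=0)
        # Cross-lingual term: mean of the aligned L2 sentence predicts the
        # same L1 target, tying the two spaces together during training.
        h_cross = self.W2[sent_l2].mean(axis=0)
        loss = 0.0
        for h, emb, idx in ((h_mono, self.W1, context_l1),
                            (h_cross, self.W2, sent_l2)):
            p = softmax(self.O1 @ h)              # full-softmax prediction
            loss += -np.log(p[target_l1])
            p_t = p.copy()
            p_t[target_l1] -= 1.0                 # gradient of cross-entropy
            grad_h = self.O1.T @ p_t              # gradient w.r.t. hidden mean
            self.O1 -= lr * np.outer(p_t, h)      # update output embeddings
            emb[idx] -= lr * grad_h / len(idx)    # update input embeddings
        return loss
```

A real implementation would use negative sampling rather than a full softmax and iterate over every word of both sentences, but the sketch shows how a single shared objective, rather than a post-hoc linear mapping, pulls the two embedding spaces into alignment.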


Related research

03/08/2019 · Context-Aware Cross-Lingual Mapping
Cross-lingual word vectors are typically obtained by fitting an orthogon...

10/09/2014 · BilBOWA: Fast Bilingual Distributed Representations without Word Alignments
We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simp...

10/31/2018 · Aligning Very Small Parallel Corpora Using Cross-Lingual Word Embeddings and a Monogamy Objective
Count-based word alignment methods, such as the IBM models or fast-align...

08/10/2018 · Learning to Represent Bilingual Dictionaries
Bilingual word embeddings have been widely used to capture the similarit...

03/04/2018 · Concatenated p-mean Word Embeddings as Universal Cross-Lingual Sentence Representations
Average word embeddings are a common baseline for more sophisticated sen...

01/11/2016 · Trans-gram, Fast Cross-lingual Word-embeddings
We introduce Trans-gram, a simple and computationally-efficient method t...
