Word Translation Without Parallel Data

10/11/2017
by Alexis Conneau, et al.

State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent work has shown that the need for parallel data supervision can be alleviated with character-level information. While these methods show encouraging results, they are not on par with their supervised counterparts and are limited to pairs of languages sharing a common alphabet. In this work, we show that we can build a bilingual dictionary between two languages without using any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. Our experiments demonstrate that our method also works very well for distant language pairs, such as English-Russian or English-Chinese. Finally, we show that our method is a first step towards fully unsupervised machine translation and describe experiments on the English-Esperanto language pair, for which only a limited amount of parallel data exists.
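The abstract does not spell out the alignment procedure; in the full paper, an initial mapping between the two monolingual embedding spaces is learned adversarially, refined with a closed-form orthogonal Procrustes solution over high-confidence word pairs, and translations are retrieved with the CSLS nearest-neighbor criterion. The sketch below illustrates only the simplest pieces of that pipeline, the Procrustes refinement step and a plain cosine nearest-neighbor lookup; the function names, array shapes, and the seed dictionary are illustrative assumptions, not code from the paper.

```python
import numpy as np

def procrustes_align(X, Y):
    """Closed-form orthogonal mapping W minimizing ||X W^T - Y||_F.

    X, Y: (n, d) arrays holding the source- and target-side embeddings
    of n word pairs from a (possibly noisy) seed dictionary, so that
    W @ X[i] should land close to Y[i].
    """
    # Orthogonal Procrustes solution: W = U V^T, where U S V^T = SVD(Y^T X).
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt

def nearest_target_words(src_vec, W, tgt_emb, tgt_words, k=5):
    """Map a source vector with W and return the k closest target words
    by cosine similarity (the paper uses the CSLS criterion instead)."""
    query = W @ src_vec
    sims = tgt_emb @ query / (
        np.linalg.norm(tgt_emb, axis=1) * np.linalg.norm(query) + 1e-9
    )
    top = np.argsort(-sims)[:k]
    return [(tgt_words[i], float(sims[i])) for i in top]

# Toy usage with random vectors; real inputs would be monolingual
# fastText-style embeddings for the two languages.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))  # source-side seed embeddings
Y = rng.normal(size=(1000, 300))  # target-side seed embeddings
W = procrustes_align(X, Y)
```

In the paper itself, the seed pairs are obtained without any supervision from the adversarially learned initial mapping, and the Procrustes refinement is iterated as the induced dictionary improves.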

Related research

05/02/2018 · Unsupervised Cross-Lingual Information Retrieval using Monolingual Data Only
We propose a fully unsupervised framework for ad-hoc cross-lingual infor...

05/31/2022 · Don't Forget Cheap Training Signals Before Building Unsupervised Bilingual Word Embeddings
Bilingual Word Embeddings (BWEs) are one of the cornerstones of cross-li...

09/04/2019 · Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?
Recent efforts in cross-lingual word embedding (CLWE) learning have pred...

07/06/2020 · Bilingual Dictionary Based Neural Machine Translation without Using Parallel Sentences
In this paper, we propose a new task of machine translation (MT), which ...

06/09/2021 · Crosslingual Embeddings are Essential in UNMT for Distant Languages: An English to IndoAryan Case Study
Recent advances in Unsupervised Neural Machine Translation (UNMT) have m...

05/21/2020 · MultiMWE: Building a Multi-lingual Multi-Word Expression (MWE) Parallel Corpora
Multi-word expressions (MWEs) are a hot topic in research in natural lan...

10/11/2022 · Cross-Lingual Speaker Identification Using Distant Supervision
Speaker identification, determining which character said each utterance ...
