Don't Forget Cheap Training Signals Before Building Unsupervised Bilingual Word Embeddings

05/31/2022
by Silvia Severini, et al.

Bilingual Word Embeddings (BWEs) are one of the cornerstones of cross-lingual transfer for NLP models. They can be built from monolingual corpora alone, without supervision, which has led to numerous works focusing on unsupervised BWEs. However, most current approaches to building unsupervised BWEs do not compare their results against methods based on easy-to-access cross-lingual signals. In this paper, we argue that such signals should always be considered when developing unsupervised BWE methods. The two approaches we find most effective are: 1) using identical words as seed lexicons (which unsupervised approaches incorrectly assume are unavailable for orthographically distinct language pairs) and 2) combining such lexicons with pairs extracted by matching romanized versions of words under an edit-distance threshold. We experiment on thirteen non-Latin-script languages (paired with English) and show that such cheap signals work well, outperforming more complex unsupervised methods on distant language pairs such as Chinese, Japanese, Kannada, Tamil, and Thai. They are even competitive with the high-quality lexicons used in supervised approaches. Our results show that these training signals should not be neglected when building BWEs, even for distant languages.
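For illustration, here is a minimal sketch of how such a cheap seed lexicon could be assembled from two monolingual vocabularies: identical strings shared by both sides, plus pairs whose romanized forms match under an edit-distance threshold. The use of the unidecode package for romanization, the threshold value, and the helper names are assumptions for this sketch, not the paper's exact pipeline.

```python
# Minimal sketch (not the paper's exact pipeline): build a cheap seed lexicon
# from two monolingual vocabularies using (1) identical words and
# (2) romanized forms matched under an edit-distance threshold.
# Assumed here: the `unidecode` package for romanization and a threshold of 1;
# the paper's actual romanization tool and threshold may differ.

from unidecode import unidecode


def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def cheap_seed_lexicon(src_vocab, tgt_vocab, max_dist=1):
    """Return (src, tgt) pairs from identical words and romanized near-matches."""
    pairs = set()

    # 1) Identical words present in both vocabularies (numerals, names, codes, ...).
    pairs.update((w, w) for w in set(src_vocab) & set(tgt_vocab))

    # 2) Romanize the non-Latin-script side and match it against the other side
    #    within the edit-distance threshold (naive O(|V1|*|V2|) loop; fine for a sketch).
    romanized = {w: unidecode(w).lower() for w in src_vocab}
    tgt_lower = {w.lower(): w for w in tgt_vocab}
    for src_word, rom in romanized.items():
        for tgt_norm, tgt_word in tgt_lower.items():
            if abs(len(rom) - len(tgt_norm)) <= max_dist and \
               edit_distance(rom, tgt_norm) <= max_dist:
                pairs.add((src_word, tgt_word))
    return sorted(pairs)


if __name__ == "__main__":
    src = ["токио", "2022", "компьютер"]   # toy non-Latin-script vocabulary
    tgt = ["tokio", "2022", "computer"]    # toy English-side vocabulary
    print(cheap_seed_lexicon(src, tgt))    # identical match "2022", romanized match for "токио"
```

The resulting pairs can then serve as the seed dictionary for a standard mapping-based BWE method in place of a fully unsupervised initialization.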


