Machine Translation with Cross-lingual Word Embeddings

12/10/2019
by Marco Berlot, et al.

Learning word embeddings from distributional information is a well-studied task, with many approaches reported in the literature. Far fewer studies, however, address the multilingual case. The idea is to learn a single representation for a pair of languages such that semantically similar words lie close to one another in the induced space, irrespective of language. When labeled data are missing for one language, classifiers trained on another language can then be applied directly.
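As a concrete illustration, the sketch below shows one common way to induce such a shared space: learning a linear mapping from source-language embeddings into the target-language space over a small seed dictionary (in the style of Mikolov et al., 2013), then translating by nearest neighbor. This is a minimal sketch under stated assumptions, not necessarily the method of this paper; the word lists, toy vectors, and helper names are hypothetical.

```python
import numpy as np

# Sketch of a linear cross-lingual mapping (Mikolov-style); assumes
# pre-trained monolingual embeddings and a small seed dictionary.
# Toy random vectors below stand in for real embeddings.

def learn_mapping(src_vecs: np.ndarray, tgt_vecs: np.ndarray) -> np.ndarray:
    """Least-squares fit of W so that src_vecs @ W ~= tgt_vecs."""
    W, *_ = np.linalg.lstsq(src_vecs, tgt_vecs, rcond=None)
    return W

def translate(word_vec: np.ndarray, tgt_emb: dict, W: np.ndarray) -> str:
    """Project a source vector into the target space, return the
    target word with the highest cosine similarity."""
    mapped = word_vec @ W
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(tgt_emb, key=lambda w: cos(mapped, tgt_emb[w]))

rng = np.random.default_rng(0)
dim = 50
en = {w: rng.normal(size=dim) for w in ["dog", "cat", "house"]}   # hypothetical
it = {w: rng.normal(size=dim) for w in ["cane", "gatto", "casa"]} # hypothetical
seed_pairs = [("dog", "cane"), ("cat", "gatto"), ("house", "casa")]

X = np.stack([en[s] for s, _ in seed_pairs])
Y = np.stack([it[t] for _, t in seed_pairs])
W = learn_mapping(X, Y)

print(translate(en["dog"], it, W))  # expected: "cane" on this toy data
```

On real embeddings, constraining W to be orthogonal (the Procrustes solution via SVD) typically yields a more robust mapping than unconstrained least squares, since it preserves distances within each monolingual space.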
