Language-agnostic BERT Sentence Embedding

07/03/2020
by Fangxiaoyu Feng, et al.

We adapt multilingual BERT to produce language-agnostic sentence embeddings for 109 languages. The state-of-the-art for numerous monolingual and multilingual NLP tasks is masked language model (MLM) pretraining followed by task-specific fine-tuning. While English sentence embeddings have been obtained by fine-tuning a pretrained BERT model, such models have not been applied to multilingual sentence embeddings. Our model combines masked language model (MLM) and translation language model (TLM) pretraining with a translation ranking task using bi-directional dual encoders. The resulting multilingual sentence embeddings improve average bi-text retrieval accuracy over 112 languages to 83.7% on Tatoeba. Our sentence embeddings also establish new state-of-the-art results on BUCC and UN bi-text retrieval.
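
As context for the translation ranking task mentioned above, the sketch below shows one common way a bidirectional dual-encoder ranking loss is implemented with in-batch negatives and an additive margin. It is a minimal illustration, not the authors' code: the choice of PyTorch, the margin and scale values, and the function name are assumptions.

```python
import torch
import torch.nn.functional as F

def translation_ranking_loss(src_emb, tgt_emb, margin=0.3, scale=10.0):
    # src_emb, tgt_emb: (batch, dim) embeddings of aligned translation pairs,
    # e.g. pooled outputs of a shared multilingual encoder (hypothetical setup).
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    sim = src @ tgt.t()  # (batch, batch) cosine similarities; diagonal = true pairs
    # Subtract an additive margin from the diagonal so each sentence must beat
    # its in-batch negatives by a gap before the softmax is satisfied.
    sim = (sim - margin * torch.eye(sim.size(0), device=sim.device)) * scale
    labels = torch.arange(sim.size(0), device=sim.device)
    # Bidirectional: rank targets given sources and sources given targets.
    return (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels)) / 2
```

Calling translation_ranking_loss(f(src_batch), f(tgt_batch)) on translation pairs encoded by the same encoder f trains the encoder to place each sentence closest to its translation; using the other sentences in the batch as negatives keeps the cost of negative sampling proportional to the batch size.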
