Unsupervised Domain Clusters in Pretrained Language Models

04/05/2020
by Roee Aharoni, et al.

The notion of "in-domain data" in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality. In addition, domain labels are often unavailable, making it challenging to build domain-specific systems. We show that massive pre-trained language models implicitly learn sentence representations that cluster by domains without supervision – suggesting a simple data-driven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured by both BLEU and by precision and recall of sentence selection with respect to an oracle.
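The abstract makes two claims: sentence representations from a pretrained language model cluster by domain without supervision, and a small in-domain monolingual seed set suffices for data selection. The sketch below illustrates both under stated assumptions rather than reproducing the authors' exact pipeline: it mean-pools BERT hidden states, fits a Gaussian mixture over the embeddings, and ranks candidate sentences by cosine similarity to the centroid of an in-domain seed set. The model choice ("bert-base-uncased"), the pooling strategy, the toy sentences, and the number of mixture components are all placeholders.

```python
# Minimal sketch (assumptions noted above), not the paper's exact method:
# embed sentences with a pretrained LM, cluster them without supervision,
# then rank a general corpus against a small in-domain seed set.
import numpy as np
import torch
from sklearn.mixture import GaussianMixture
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentences, batch_size=32):
    """Mean-pool the last hidden layer into one vector per sentence."""
    chunks = []
    with torch.no_grad():
        for i in range(0, len(sentences), batch_size):
            batch = tokenizer(sentences[i:i + batch_size], padding=True,
                              truncation=True, return_tensors="pt")
            hidden = model(**batch).last_hidden_state        # (B, T, H)
            mask = batch["attention_mask"].unsqueeze(-1)     # (B, T, 1)
            pooled = (hidden * mask).sum(1) / mask.sum(1)    # (B, H)
            chunks.append(pooled.cpu().numpy())
    return np.concatenate(chunks)

# 1) Unsupervised domain clusters: fit a GMM over the sentence embeddings.
corpus = [
    "The patient was administered 5 mg of the study drug.",
    "The defendant filed an appeal against the ruling.",
    "Hold the power button for ten seconds to restart the device.",
]
X = embed(corpus)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
print(gmm.predict(X))  # cluster ids that tend to align with domains

# 2) Data selection: rank sentences by cosine similarity to the centroid
#    of a small in-domain monolingual seed set (here: a medical sentence).
seed = embed(["Dosage should be reduced in patients with renal impairment."])
centroid = seed.mean(axis=0, keepdims=True)
sims = (X @ centroid.T).ravel() / (
    np.linalg.norm(X, axis=1) * np.linalg.norm(centroid) + 1e-9)
print(corpus[int(sims.argmax())])  # most "in-domain" candidate sentence
```

In a realistic setting the number of mixture components and the similarity cutoff for selection would be tuned against the small in-domain set rather than fixed as above.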

Related research:

09/16/2021 · Translation Transformers Rediscover Inherent Data Domains
Many works proposed methods to improve the performance of Neural Machine...

01/22/2020 · Unsupervised Domain Adaptation for Neural Machine Translation with Iterative Back Translation
State-of-the-art neural machine translation (NMT) systems are data-hungr...

09/11/2021 · Multilingual Translation via Grafting Pre-trained Language Models
Can pre-trained BERT for one language and GPT for another be glued toget...

09/09/2021 · Generalised Unsupervised Domain Adaptation of Neural Machine Translation with Cross-Lingual Data Selection
This paper considers the unsupervised domain adaptation problem for neur...

08/27/2019 · Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings
The recent success of neural machine translation models relies on the av...

08/05/2022 · Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models
We present Branch-Train-Merge (BTM), a communication-efficient algorithm...