
Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval

by Devang Kulshreshtha, et al.

In this paper, we propose a new domain adaptation method called back-training, a superior alternative to self-training. While self-training generates synthetic training data of the form (quality input, noisy output), back-training instead pairs noisy inputs with quality outputs. Our experiments on unsupervised domain adaptation of question generation and passage retrieval models from the Natural Questions domain to the machine learning domain show that back-training outperforms self-training by a large margin: 9.3 BLEU-1 points on generation and 7.9 accuracy points on top-1 retrieval. We release MLQuestions, a domain-adaptation dataset for the machine learning domain containing 50K unaligned passages, 35K unaligned questions, and 3K aligned passage-question pairs. Our data and code are available at
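The contrast between the two data-construction schemes can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the model stubs (`qg`, `pg`) are hypothetical stand-ins for a source-domain-trained forward model (passage → question) and its dual backward model (question → passage).

```python
def self_train_pairs(forward_model, target_inputs):
    """Self-training: pair each real (quality) target-domain input
    with a model-generated, hence noisy, output."""
    return [(x, forward_model(x)) for x in target_inputs]

def back_train_pairs(backward_model, target_outputs):
    """Back-training: pair each real (quality) target-domain output
    with a model-generated, hence noisy, input."""
    return [(backward_model(y), y) for y in target_outputs]

# Toy stand-ins for the trained models (assumptions for illustration only).
qg = lambda passage: f"noisy question about: {passage}"       # passage -> question
pg = lambda question: f"noisy passage answering: {question}"  # question -> passage

passages = ["Gradient descent iteratively minimizes a loss function."]
questions = ["What is backpropagation?"]

st = self_train_pairs(qg, passages)   # (quality passage, noisy question)
bt = back_train_pairs(pg, questions)  # (noisy passage, quality question)
```

The key asymmetry: in `st` the supervision target (the question) is synthetic, so the model learns from noisy labels, whereas in `bt` the target is a real in-domain question and only the conditioning input is noisy.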

