Transformer Based Multi-Source Domain Adaptation

09/16/2020
by Dustin Wright et al.

In practical machine learning settings, the data on which a model must make predictions often come from a different distribution than the data it was trained on. Here, we investigate the problem of unsupervised multi-source domain adaptation, where a model is trained on labelled data from multiple source domains and must make predictions on a domain for which no labelled data has been seen. Prior work with CNNs and RNNs has demonstrated the benefit of mixture of experts, where the predictions of multiple domain-expert classifiers are combined, as well as of domain adversarial training, which induces a domain-agnostic representation space. Inspired by this, we investigate how such methods can be effectively applied to large pretrained transformer models. We find that domain adversarial training affects the learned representations of these models while having little effect on their performance, suggesting that large transformer-based models are already relatively robust across domains. Additionally, we show that mixture of experts leads to significant performance improvements, and we compare several variants of mixing functions, including a novel mixture based on attention. Finally, we demonstrate that the predictions of large pretrained transformer-based domain experts are highly homogeneous, making it challenging to learn effective functions for mixing their predictions.
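The paper's specific mixing functions are not reproduced in this abstract. As an illustration only, an attention-style mixture of experts might weight each domain expert's class probabilities by a softmax over per-expert relevance scores; the function names (`softmax`, `mix_experts`) and the example numbers below are hypothetical, not taken from the paper:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mix_experts(expert_probs, scores):
    """Attention-style mixture: combine each domain expert's class
    probabilities, weighted by a softmax over relevance scores.

    expert_probs: list of per-expert probability vectors (one per expert)
    scores: one relevance score per expert (e.g. from a learned attention)
    """
    weights = softmax(scores)
    n_classes = len(expert_probs[0])
    return [
        sum(w * probs[c] for w, probs in zip(weights, expert_probs))
        for c in range(n_classes)
    ]

# Three hypothetical domain experts predicting over two classes;
# the first expert receives the highest relevance score.
mixed = mix_experts([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]], [2.0, 0.5, 0.5])
```

Because the weights come from a softmax, the mixed output remains a valid probability distribution, and experts judged more relevant to the test domain dominate the combined prediction.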

Related research

- 09/29/2020: Tackling unsupervised multi-source domain adaptation with optimism and consistency. "It has been known for a while that the problem of multi-source domain ad..."
- 09/07/2018: Multi-Source Domain Adaptation with Mixture of Experts. "We propose a mixture-of-experts approach for unsupervised domain adaptat..."
- 09/29/2022: Increasing Model Generalizability for Unsupervised Domain Adaptation. "A dominant approach for addressing unsupervised domain adaptation is to ..."
- 04/16/2022: Safe Self-Refinement for Transformer-based Domain Adaptation. "Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich sourc..."
- 05/16/2018: What's in a Domain? Learning Domain-Robust Text Representations using Adversarial Training. "Most real world language problems require learning from heterogenous cor..."
- 06/16/2022: A Simple Baseline for Adversarial Domain Adaptation-based Unsupervised Flood Forecasting. "Flood disasters cause enormous social and economic losses. However, both..."
- 10/11/2022: Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning. "Consider making a prediction over new test data without any opportunity ..."
