Multi-Source Domain Adaptation with Mixture of Experts

09/07/2018
by Jiang Guo, et al.

We propose a mixture-of-experts approach for unsupervised domain adaptation from multiple sources. The key idea is to explicitly capture the relationship between a target example and different source domains. This relationship, expressed by a point-to-set metric, determines how to combine predictors trained on various domains. The metric is learned in an unsupervised fashion using meta-training. Experimental results on sentiment analysis and part-of-speech tagging demonstrate that our approach consistently outperforms multiple baselines and can robustly handle negative transfer.
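
To make the combination mechanism concrete, here is a minimal PyTorch sketch, not the authors' implementation. It assumes a shared feature encoder, one expert classifier per source domain, and a learned Mahalanobis-style distance to each domain's mean encoding as the point-to-set metric; the paper's exact metric parameterization and the unsupervised meta-training procedure are not shown, and all names (`MoEAdapter`, `point_to_set_distance`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEAdapter(nn.Module):
    """Mixture-of-experts combination for multi-source adaptation:
    a point-to-set metric scores how close a target example is to
    each source domain, and the scores become softmax weights over
    the per-domain experts' predictions."""

    def __init__(self, encoder, experts, feat_dim):
        super().__init__()
        self.encoder = encoder                  # shared feature encoder
        self.experts = nn.ModuleList(experts)   # one classifier per source domain
        # Learnable metric factor; M = L L^T keeps the metric positive semidefinite.
        self.metric_factor = nn.Parameter(torch.eye(feat_dim))

    def point_to_set_distance(self, z, domain_feats):
        # One simple choice of point-to-set metric: Mahalanobis-style
        # distance from each target encoding to the domain's mean encoding.
        mu = domain_feats.mean(dim=0)            # (feat_dim,)
        diff = z - mu                            # (batch, feat_dim)
        m = self.metric_factor @ self.metric_factor.t()
        return torch.einsum('bi,ij,bj->b', diff, m, diff)  # (batch,)

    def forward(self, x, source_feat_sets):
        # source_feat_sets: list of (n_i, feat_dim) encodings, one per source domain
        z = self.encoder(x)                                  # (batch, feat_dim)
        dists = torch.stack(
            [self.point_to_set_distance(z, s) for s in source_feat_sets], dim=1)
        alpha = F.softmax(-dists, dim=1)         # closer domains get larger weights
        preds = torch.stack(
            [expert(z) for expert in self.experts], dim=1)   # (batch, n_domains, n_classes)
        return (alpha.unsqueeze(-1) * preds).sum(dim=1)      # weighted mixture
```

Because the weights are computed per target example, experts trained on distant source domains receive small weights, which is one plausible reading of how the approach suppresses negative transfer.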
