Source-Free Domain Adaptation for Question Answering with Masked Self-training

12/19/2022
by M. Yin, et al.

Most previous unsupervised domain adaptation (UDA) methods for question answering (QA) require access to source domain data while fine-tuning the model for the target domain. However, source domain data may contain sensitive information and its use may be restricted. In this study, we investigate a more challenging setting, source-free UDA, in which we have only the pretrained source model and target domain data, without access to the source domain data. We propose a novel self-training approach for QA models that integrates a dedicated mask module for domain adaptation. The mask is automatically adjusted to extract key domain knowledge while the model is trained on the source domain. To retain this previously learned knowledge, certain mask weights are frozen during adaptation, while the remaining weights are adjusted to mitigate domain shift using pseudo-labeled samples that the source-trained model generates in the target domain. Our empirical results on four benchmark datasets show that our approach significantly enhances the performance of pretrained QA models on the target domain and even outperforms models that have access to the source data during adaptation.
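The abstract does not include code, so below is a minimal PyTorch-style sketch of how such a masked self-training setup could look. It assumes an element-wise learnable gate over the encoder's hidden units, magnitude-based freezing of a portion of the mask entries after source training, and confidence-filtered pseudo-labels for target-domain self-training. All names (MaskedEncoder, freeze_key_mask_weights, pseudo_label) and the thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of masked self-training for source-free UDA (not the authors' code).
# A learnable element-wise mask gates the encoder's hidden states; after source training,
# high-magnitude mask entries are frozen and the remaining entries are tuned on
# confidence-filtered pseudo-labels produced by the source-trained QA model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedEncoder(nn.Module):
    """Wraps a feature encoder with a sigmoid-gated mask over its hidden units."""

    def __init__(self, encoder: nn.Module, hidden_size: int):
        super().__init__()
        self.encoder = encoder
        # One learnable gate per hidden dimension, trained jointly on the source domain.
        self.mask_logits = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x):
        h = self.encoder(x)                      # (batch, seq_len, hidden)
        gate = torch.sigmoid(self.mask_logits)   # (hidden,)
        return h * gate                          # element-wise masking


def freeze_key_mask_weights(model: MaskedEncoder, keep_ratio: float = 0.5):
    """Freeze the largest-magnitude mask entries to retain source-domain knowledge.

    A gradient hook zeroes updates for the frozen positions, so only the
    remaining mask entries adapt to the target domain. The keep_ratio criterion
    is an assumption for illustration.
    """
    with torch.no_grad():
        k = max(1, int(keep_ratio * model.mask_logits.numel()))
        threshold = model.mask_logits.abs().topk(k).values.min()
        frozen = model.mask_logits.abs() >= threshold

    def zero_frozen_grad(grad):
        return grad * (~frozen)

    model.mask_logits.register_hook(zero_frozen_grad)
    return frozen


@torch.no_grad()
def pseudo_label(start_logits, end_logits, threshold: float = 0.6):
    """Keep only target samples whose predicted answer span is confident enough."""
    start_prob, start_idx = F.softmax(start_logits, dim=-1).max(dim=-1)
    end_prob, end_idx = F.softmax(end_logits, dim=-1).max(dim=-1)
    keep = (start_prob * end_prob) >= threshold
    return start_idx, end_idx, keep
```

In this sketch, the intended flow is: train the mask jointly with the QA model on the source domain, call freeze_key_mask_weights before adaptation, then self-train on target data using only the samples that pseudo_label keeps. Freezing high-magnitude entries is a stand-in for the paper's idea of preserving key source knowledge; the authors' actual selection criterion may differ.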


