
Source-Free Domain Adaptation for Question Answering with Masked Self-training

by M. Yin, et al.

Most previous unsupervised domain adaptation (UDA) methods for question answering (QA) require access to source-domain data while fine-tuning the model for the target domain. Source-domain data may, however, contain sensitive information and be restricted. In this study, we investigate a more challenging setting, source-free UDA, in which we have only the pretrained source model and target-domain data, without access to source-domain data. We propose a novel self-training approach for QA models that integrates a unique mask module for domain adaptation. While the model is trained on the source domain, the mask is auto-adjusted to extract key domain knowledge. During adaptation, certain mask weights are frozen to retain previously learned domain knowledge, while the remaining weights are adjusted to mitigate domain shift using pseudo-labeled samples that the source-trained model generates in the target domain. Our empirical results on four benchmark datasets suggest that our approach significantly enhances the performance of pretrained QA models on the target domain, and even outperforms models that have access to the source data during adaptation.
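The two mechanisms the abstract describes — freezing the highest-importance mask weights to preserve source knowledge, and self-training on confidently pseudo-labeled target samples — can be sketched as follows. This is a hypothetical illustration under simple assumptions (magnitude as a proxy for mask-weight importance, a fixed confidence threshold for pseudo-labeling), not the authors' implementation.

```python
import numpy as np

def split_mask_weights(mask, keep_ratio=0.5):
    """Mark the largest-magnitude mask weights as frozen (assumed to
    carry source-domain knowledge); the rest remain trainable during
    target-domain adaptation. Returns a boolean 'frozen' array."""
    k = int(len(mask) * keep_ratio)
    frozen_idx = np.argsort(-np.abs(mask))[:k]  # top-k by magnitude
    frozen = np.zeros(len(mask), dtype=bool)
    frozen[frozen_idx] = True
    return frozen

def pseudo_label(probs, threshold=0.9):
    """Keep only target samples the source-trained model predicts with
    high confidence, and use its predictions as pseudo-labels."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return labels[keep], keep
```

In a full adaptation loop, gradient updates on the pseudo-labeled target samples would be applied only to the mask weights where `frozen` is `False`, leaving the source-knowledge weights untouched.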

