
Source-Free Domain Adaptation for Question Answering with Masked Self-training

12/19/2022
by   M. Yin, et al.

Most previous unsupervised domain adaptation (UDA) methods for question answering (QA) require access to source-domain data while fine-tuning the model for the target domain. Source-domain data may, however, contain sensitive information and be restricted. In this study, we investigate a more challenging setting, source-free UDA, in which we have only the pretrained source model and target-domain data, without access to source-domain data. We propose a novel self-training approach for QA models that integrates a unique mask module for domain adaptation. The mask is auto-adjusted to extract key domain knowledge while the model is trained on the source domain. To preserve this previously learned knowledge, certain mask weights are frozen during adaptation, while the remaining weights are adjusted to mitigate domain shift using pseudo-labeled samples generated in the target domain by the source-trained model. Our empirical results on four benchmark datasets suggest that our approach significantly improves the performance of pretrained QA models on the target domain, and even outperforms models that have access to the source data during adaptation.
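The two core ingredients described above — confidence-filtered pseudo-labeling on the target domain, and a mask whose source-critical weights are frozen during adaptation — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the threshold, the freeze fraction, and the magnitude-based choice of which mask weights to freeze are all assumptions made here for the example.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Standard self-training filter: keep only target samples whose
    top predicted class probability exceeds a confidence threshold.
    `threshold` is an illustrative value, not from the paper."""
    preds = probs.argmax(axis=1)   # pseudo-label = most likely class
    conf = probs.max(axis=1)       # confidence of that prediction
    keep = conf >= threshold
    return preds[keep], keep

def frozen_mask_update(mask, grad, freeze_frac=0.5, lr=0.1):
    """Update the mask with a gradient step, but freeze the
    largest-magnitude entries (assumed here to carry the key
    source-domain knowledge) so adaptation cannot overwrite them."""
    k = int(len(mask) * freeze_frac)
    frozen_idx = np.argsort(-np.abs(mask))[:k]  # top-|k| weights
    step = -lr * grad
    step[frozen_idx] = 0.0   # frozen weights keep source knowledge
    return mask + step

# Toy target-domain predictions from the source-trained model:
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90]])
labels, kept = select_pseudo_labels(probs)
print(labels, kept)  # only the two confident samples survive
```

In a full adaptation loop, the surviving pseudo-labeled samples would supply the training signal for the gradient passed to `frozen_mask_update`, so the unfrozen mask weights shift toward the target domain while the frozen ones retain what was learned on the source.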

