
Learning to Perturb Word Embeddings for Out-of-distribution QA

05/06/2021
by   Seanie Lee, et al.

QA models based on pretrained language models have achieved remarkable performance on various benchmark datasets. However, QA models do not generalize well to unseen data that falls outside the training distribution, due to distributional shifts. Data augmentation (DA) techniques which drop or replace words have been shown to be effective in regularizing the model and preventing overfitting to the training data. Yet, they may adversely affect QA tasks, since they incur semantic changes that may lead to wrong answers. To tackle this problem, we propose a simple yet effective DA method based on a stochastic noise generator, which learns to perturb the word embeddings of the input questions and context without changing their semantics. We validate the performance of QA models trained with our word embedding perturbation on a single source dataset, across five different target domains. The results show that our method significantly outperforms the baseline DA methods. Notably, the model trained with ours outperforms the model trained with more than 240K artificially generated QA pairs.
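The abstract does not include implementation details, but the general idea of a learned stochastic perturbation of word embeddings can be sketched as follows. This is a minimal, hypothetical PyTorch illustration (a per-token Gaussian noise generator trained via the reparameterization trick), not the authors' actual architecture; the class name, network shape, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class StochasticNoiseGenerator(nn.Module):
    """Hypothetical sketch: predicts a per-token Gaussian perturbation
    of word embeddings (not the paper's exact model)."""

    def __init__(self, embed_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Predict the mean and log-variance of the noise from each embedding.
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2 * embed_dim),
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, embed_dim)
        mu, log_var = self.net(embeddings).chunk(2, dim=-1)
        # Reparameterization trick: sample noise differentiably,
        # so the generator can be trained jointly with the QA model.
        eps = torch.randn_like(mu)
        noise = mu + eps * torch.exp(0.5 * log_var)
        return embeddings + noise  # perturbed embeddings, same shape
```

In a setup like this, the perturbed embeddings would replace the original ones in the QA model's forward pass during training, with the generator optimized jointly so that the perturbations act as augmentation without changing the answer the context supports.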
