
Improving Lexical Embeddings for Robust Question Answering

02/28/2022
by Weiwen Xu et al.

Recent techniques in Question Answering (QA) have achieved remarkable performance gains, with some QA models even surpassing human performance. However, whether these models truly understand language remains dubious, and they reveal limitations when facing adversarial examples. To strengthen the robustness and generalization ability of QA models, we propose a representation Enhancement via Semantic and Context constraints (ESC) approach that improves the robustness of lexical embeddings. Specifically, we insert perturbations under semantic constraints and train enhanced contextual representations with a context-constraint loss, so that the model better distinguishes the context clues pointing to the correct answer. Experimental results show that our approach achieves significant robustness improvements on four adversarial test sets.
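The abstract does not include an implementation, but the two components it names (perturbations under semantic constraints and a context-constraint loss) can be sketched. The snippet below is a minimal, hypothetical PyTorch sketch, not the authors' ESC code: it approximates the semantic constraint as a small, gradient-guided perturbation of the word embeddings bounded in L2 norm, and reads the context-constraint loss as a contrastive objective that pulls an answer-span representation toward its supporting context and away from distractor contexts. All function names, shapes, and hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch only -- not the authors' ESC implementation.
# Assumptions: word_embeds/grad are (batch, seq_len, hidden) tensors from a QA
# encoder; span/context representations are pooled (batch, hidden) vectors.
import torch
import torch.nn.functional as F


def semantic_constrained_perturbation(word_embeds, grad, epsilon=1e-3):
    """Perturb lexical embeddings along the loss gradient, but keep the step
    inside a small L2 ball so the perturbed token stays semantically close to
    the original (a stand-in for the paper's semantic constraints)."""
    direction = F.normalize(grad, dim=-1)      # unit-norm perturbation direction
    return word_embeds + epsilon * direction   # bounded perturbation


def context_constraint_loss(span_repr, gold_ctx_repr, distractor_ctx_reprs, tau=0.1):
    """One plausible reading of a 'context-constraint loss': a contrastive
    objective that ties the answer-span representation to its true context
    and pushes it away from distractor contexts (e.g. adversarial sentences)."""
    pos = F.cosine_similarity(span_repr, gold_ctx_repr, dim=-1) / tau        # (B,)
    neg = F.cosine_similarity(span_repr.unsqueeze(1),                        # (B, K)
                              distractor_ctx_reprs, dim=-1) / tau
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1)                       # gold context in column 0
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    B, L, K, H = 4, 32, 3, 768
    embeds = torch.randn(B, L, H, requires_grad=True)
    fake_grad = torch.randn_like(embeds)       # would come from a QA loss backward pass
    perturbed = semantic_constrained_perturbation(embeds, fake_grad)

    span = torch.randn(B, H)
    gold_ctx = torch.randn(B, H)
    distractors = torch.randn(B, K, H)
    print(context_constraint_loss(span, gold_ctx, distractors))
```

In a real training loop, the perturbed embeddings and the contrastive term would presumably be combined with the standard span-extraction loss; the exact constraint set and loss weighting used by ESC are described in the paper itself.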

