
Improving Lexical Embeddings for Robust Question Answering

by Weiwen Xu et al.

Recent techniques in Question Answering (QA) have achieved remarkable performance gains, with some QA models even surpassing human performance. However, whether these models truly understand language remains dubious, and they reveal clear limitations when facing adversarial examples. To strengthen the robustness and generalization ability of QA models, we propose a representation Enhancement via Semantic and Context constraints (ESC) approach to improve the robustness of lexical embeddings. Specifically, we insert perturbations under semantic constraints and train enhanced contextual representations with a context-constraint loss so the model better distinguishes the context clues that indicate the correct answer. Experimental results show that our approach yields significant robustness improvements on four adversarial test sets.
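The abstract describes two components: perturbing lexical embeddings under a semantic constraint, and a context-constraint loss that separates the correct answer's context from distractors. The paper's actual formulation is not given here, so the sketch below is only an illustrative interpretation: the perturbation is a bounded random vector projected back into the region spanned by a word's synonym embeddings, and the context constraint is a simple margin (hinge) loss. All function names, the synonym-ball projection, and the margin value are assumptions, not the authors' method.

```python
import numpy as np

def semantic_constrained_perturbation(emb, synonym_embs, epsilon=0.1, seed=0):
    """Add a norm-bounded perturbation to a word embedding, then project
    the result back into the ball covering its synonym embeddings, so the
    perturbed vector stays semantically close to the original word.
    (Illustrative sketch; the paper's exact constraint may differ.)"""
    rng = np.random.default_rng(seed)
    delta = rng.normal(size=emb.shape)
    delta = epsilon * delta / np.linalg.norm(delta)  # bound perturbation norm
    perturbed = emb + delta
    # Semantic constraint: keep the perturbed vector inside the smallest
    # ball centered at the synonym centroid that contains all synonyms.
    centroid = synonym_embs.mean(axis=0)
    radius = max(np.linalg.norm(s - centroid) for s in synonym_embs)
    offset = perturbed - centroid
    dist = np.linalg.norm(offset)
    if dist > radius:
        perturbed = centroid + offset * (radius / dist)
    return perturbed

def context_constraint_loss(ctx_correct, ctx_distractor, answer_repr, margin=1.0):
    """Hinge loss encouraging the answer representation to score higher
    against the correct context than against a distractor context."""
    correct_score = float(ctx_correct @ answer_repr)
    distractor_score = float(ctx_distractor @ answer_repr)
    return max(0.0, margin - (correct_score - distractor_score))
```

In a real QA model these pieces would operate on learned (e.g. contextualized) representations and the loss would be backpropagated; here plain NumPy vectors stand in to show the constraint and margin structure.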




Domain-agnostic Question-Answering with Adversarial Training


Natural Perturbation for Robust Question Answering


Robust Question Answering Through Sub-part Alignment


What do we expect from Multiple-choice QA Systems?


Model Agnostic Answer Reranking System for Adversarial Question Answering


Counterfactual Variable Control for Robust and Interpretable Question Answering
