Generating Semantically Valid Adversarial Questions for TableQA

05/26/2020
by Yi Zhu, et al.

Adversarial attacks on question answering systems over tabular data (TableQA) can help evaluate the extent to which they understand natural language questions and reason over tables. However, generating natural language adversarial questions is difficult, because even a single character swap can cause a large semantic difference in human perception. In this paper, we propose SAGE (Semantically valid Adversarial GEnerator), a Wasserstein sequence-to-sequence model for white-box attacks on TableQA. To preserve the meaning of the original questions, we apply minimum risk training with SIMILE and entity delexicalization. We use Gumbel-Softmax to incorporate the adversarial loss for end-to-end training. Our experiments show that SAGE outperforms existing local attack models in semantic validity and fluency while achieving a good attack success rate. Finally, we demonstrate that adversarial training with SAGE-augmented data can improve the performance and robustness of TableQA systems.
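Entity delexicalization, as named in the abstract, protects table entities (names, dates, numbers) from being altered while the rest of the question is paraphrased. The sketch below is a minimal illustration of the general idea, not the paper's implementation; the function names and placeholder format (`<ENT0>`, `<ENT1>`, ...) are assumptions for this example.

```python
def delexicalize(question, entities):
    """Replace table entities in the question with typed placeholders
    so that paraphrasing cannot corrupt them. Longer entities are
    substituted first to avoid partial matches.
    """
    mapping = {}
    for i, ent in enumerate(sorted(entities, key=len, reverse=True)):
        slot = f"<ENT{i}>"  # hypothetical placeholder scheme
        if ent in question:
            question = question.replace(ent, slot)
            mapping[slot] = ent
    return question, mapping

def relexicalize(question, mapping):
    """Restore the original entities after generation."""
    for slot, ent in mapping.items():
        question = question.replace(slot, ent)
    return question

q, m = delexicalize("Which club did Neymar join in 2017?", ["Neymar", "2017"])
# q == "Which club did <ENT0> join in <ENT1>?"
```

The generator then only rewrites the delexicalized template, and the entities are copied back verbatim, so answers that depend on exact table cell matches stay valid.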
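The Gumbel-Softmax trick mentioned above makes the discrete token-sampling step differentiable, which is what allows an adversarial loss on the victim model to be backpropagated end to end into the generator. A minimal NumPy sketch of the relaxation itself (the temperature `tau` and vocabulary size are illustrative, not values from the paper):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a soft, differentiable approximation of a one-hot sample
    from a categorical distribution given by `logits`.

    Lower temperatures `tau` push the output closer to a hard one-hot
    vector; higher temperatures make it smoother.
    """
    rng = np.random.default_rng(rng)
    # Gumbel(0, 1) noise via the inverse-CDF: g = -log(-log(U))
    u = rng.uniform(1e-10, 1.0, size=np.shape(logits))
    g = -np.log(-np.log(u))
    y = (np.asarray(logits) + g) / tau
    y = y - y.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)   # soft one-hot over vocab

# Toy vocabulary of 5 tokens: the result is a probability vector that
# can stand in for a sampled token when computing a downstream loss.
probs = gumbel_softmax(np.array([2.0, 0.5, 0.1, -1.0, 0.0]), tau=0.5, rng=0)
```

In a full attack pipeline, these soft token vectors would be fed into the victim TableQA model so its loss gradients reach the generator's parameters.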


