Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation

03/23/2022
by   Hanjie Chen, et al.

Neural language models are vulnerable to adversarial examples that are semantically similar to their original counterparts, with a few words replaced by synonyms. A common way to improve model robustness is adversarial training, which follows two steps: collecting adversarial examples by attacking a target model, and then fine-tuning the model on the dataset augmented with these adversarial examples. The objective of traditional adversarial training is to make the model produce the same correct prediction on an original/adversarial example pair. However, the consistency between the model's decision-making on the two similar texts is ignored. We argue that a robust model should behave consistently on original/adversarial example pairs, that is, make the same predictions (what) based on the same reasons (how), as reflected by consistent interpretations. In this work, we propose a novel feature-level adversarial training method named FLAT, which aims to improve model robustness in terms of both predictions and interpretations. FLAT incorporates variational word masks into neural networks to learn global word importance; the masks act as a bottleneck that teaches the model to make predictions based on important words. FLAT explicitly addresses the vulnerability caused by the mismatch between a model's understanding of a replaced word and its synonym in an original/adversarial example pair by regularizing the corresponding global word importance scores. Experiments show the effectiveness of FLAT in improving the robustness, with respect to both predictions and interpretations, of four neural network models (LSTM, CNN, BERT, and DeBERTa) against two adversarial attacks on four text classification tasks. Models trained via FLAT also show better robustness than baseline models on unforeseen adversarial examples across different attacks.
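To make the training objective concrete, the following PyTorch sketch illustrates a simplified FLAT-style loss: standard adversarial training terms on the original and adversarial examples, plus a regularizer that pulls the global importance scores of each replaced word and its synonym toward each other. The names (flat_style_loss, importance, substituted_pairs, reg_weight) are illustrative assumptions, and a plain learnable importance vector stands in for the variational word masks described in the abstract.

import torch
import torch.nn.functional as F

def flat_style_loss(model, importance, orig_batch, adv_batch, labels,
                    substituted_pairs, reg_weight=1.0):
    # Standard adversarial training: the model should predict the correct
    # label on both the original text and its adversarial counterpart.
    ce_orig = F.cross_entropy(model(orig_batch), labels)
    ce_adv = F.cross_entropy(model(adv_batch), labels)

    # Interpretation consistency: for each (original word, synonym) pair
    # swapped by the attack, push their global importance scores together.
    orig_ids = torch.tensor([o for o, _ in substituted_pairs])
    syn_ids = torch.tensor([s for _, s in substituted_pairs])
    reg = F.mse_loss(importance[orig_ids], importance[syn_ids])

    return ce_orig + ce_adv + reg_weight * reg

In the method proposed in the paper, the importance scores come from variational word masks learned jointly with the classifier, and the adversarial examples are collected by attacking the target model before fine-tuning; this sketch only conveys the shape of the combined prediction-and-interpretation objective.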


