From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework

05/29/2023
by   Yangyi Chen, et al.

Textual adversarial attacks can discover models' weaknesses by adding semantics-preserving but misleading perturbations to the inputs. The long-standing adversarial attack-and-defense arms race in Natural Language Processing (NLP) is algorithm-centric, providing valuable techniques for automatic robustness evaluation. However, the existing practice of robustness evaluation may suffer from incomplete evaluation coverage, impractical evaluation protocols, and invalid adversarial samples. In this paper, we set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to further exploit the advantages of adversarial attacks. To address the above challenges, we first determine robustness evaluation dimensions based on model capabilities and specify a reasonable algorithm to generate adversarial samples for each dimension. We then establish the evaluation protocol, including evaluation settings and metrics, under realistic demands. Finally, we use the perturbation degree of adversarial samples to control sample validity. We implement the toolkit RobTest, which realizes our automatic robustness evaluation framework. In our experiments, we conduct a robustness evaluation of RoBERTa models to demonstrate the effectiveness of our evaluation framework, and further show the rationality of each component. The code will be made public at <https://github.com/thunlp/RobTest>.
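To make the validity-control idea concrete, here is a minimal, hypothetical sketch (not the RobTest API) of one common setup: generate character-level adversarial candidates for a sentence and keep only those whose perturbation degree, measured here as the fraction of modified words, stays within a budget. All function names and the specific perturbation operation are illustrative assumptions.

```python
import random

# Hypothetical sketch, not the RobTest API: candidates whose perturbation
# degree exceeds the budget are discarded as likely-invalid samples.

def char_swap(word: str, rng: random.Random) -> str:
    """Swap two adjacent characters in a word (a common typo-style attack)."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def perturbation_degree(original: list, perturbed: list) -> float:
    """Perturbation degree as the fraction of words that were modified."""
    changed = sum(o != p for o, p in zip(original, perturbed))
    return changed / max(len(original), 1)

def generate_valid_samples(sentence: str, budget: float = 0.3,
                           n_candidates: int = 20, seed: int = 0):
    """Yield perturbed sentences whose perturbation degree <= budget."""
    rng = random.Random(seed)
    words = sentence.split()
    for _ in range(n_candidates):
        k = rng.randint(1, max(1, len(words) // 2))
        perturbed = list(words)
        for i in rng.sample(range(len(words)), k):
            perturbed[i] = char_swap(perturbed[i], rng)
        if perturbation_degree(words, perturbed) <= budget:
            yield " ".join(perturbed)

for s in generate_valid_samples("the movie was surprisingly good", budget=0.4):
    print(s)
```

Tightening the budget trades attack strength for validity: a lower threshold keeps perturbed samples closer to the original input, which is the lever the framework uses to filter out invalid adversarial samples.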


