Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks

07/31/2023
by Xinyu Zhang, et al.

Language models, especially basic text classification models, have been shown to be susceptible to textual adversarial attacks such as synonym-substitution and word-insertion attacks. To defend against such attacks, a growing body of research has been devoted to improving model robustness. However, providing provable robustness guarantees, rather than empirical robustness, remains largely unexplored. In this paper, we propose Text-CRS, a generalized certified robustness framework for natural language processing (NLP) based on randomized smoothing. To the best of our knowledge, existing certified schemes for NLP can only certify robustness against the ℓ_0 perturbations induced by synonym substitution attacks. By representing each word-level adversarial operation (i.e., synonym substitution, word reordering, insertion, and deletion) as a combination of a permutation and an embedding transformation, we propose novel smoothing theorems that derive robustness bounds in both the permutation and the embedding space against such adversarial operations. To further improve the certified accuracy and radius, we consider the numerical relationships between discrete words and select proper noise distributions for randomized smoothing. Finally, we conduct extensive experiments on multiple language models and datasets. Text-CRS addresses all four word-level adversarial operations and achieves significant accuracy improvements. Beyond outperforming the state-of-the-art certification against synonym substitution attacks, we also provide the first benchmark on the certified accuracy and radius of all four word-level operations.
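To illustrate the randomized-smoothing idea underlying Text-CRS, here is a minimal sketch in Python. It is not the paper's construction: the toy embedding table, linear classifier, Gaussian noise choice, and the helper names `base_classify` and `smoothed_predict` are illustrative assumptions. Text-CRS additionally smooths in permutation space and matches the noise distribution to each adversarial operation.

```python
# Minimal sketch of randomized smoothing for text classification.
# Assumptions (not from the paper): a toy linear classifier over averaged
# word embeddings, Gaussian noise in embedding space, and a plain
# majority vote over noise samples.
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM, CLASSES = 100, 16, 2
EMB = rng.normal(size=(VOCAB, DIM))       # toy word-embedding table
W = rng.normal(size=(DIM, CLASSES))       # toy linear-classifier weights

def base_classify(token_ids, noise=None):
    """Classify a sentence from averaged (optionally noised) embeddings."""
    x = EMB[token_ids]                    # (seq_len, DIM)
    if noise is not None:
        x = x + noise
    logits = x.mean(axis=0) @ W
    return int(np.argmax(logits))

def smoothed_predict(token_ids, sigma=0.5, n_samples=1000):
    """Majority vote over Gaussian embedding noise: the smoothed classifier."""
    votes = np.zeros(CLASSES, dtype=int)
    for _ in range(n_samples):
        noise = rng.normal(scale=sigma, size=(len(token_ids), DIM))
        votes[base_classify(token_ids, noise)] += 1
    top = int(np.argmax(votes))
    p_top = votes[top] / n_samples        # estimated top-class vote share
    return top, p_top

sentence = rng.integers(0, VOCAB, size=12)  # a random toy "sentence"
label, p = smoothed_predict(sentence)
print(f"smoothed prediction: class {label} with vote share {p:.3f}")
```

Under Gaussian smoothing, if the top class wins a vote share p_A > 1/2 (with a suitable confidence bound), the smoothed classifier is certifiably robust to embedding-space ℓ_2 perturbations of radius σ·Φ⁻¹(p_A) (Cohen et al., 2019); Text-CRS derives analogous bounds in both the permutation and the embedding space.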

