SAFER: A Structure-free Approach for Certified Robustness to Adversarial Word Substitutions

05/29/2020
by Mao Ye, et al.

State-of-the-art NLP models can often be fooled by transformations imperceptible to humans, such as synonymous word substitution. For security reasons, it is of critical importance to develop models with certified robustness that can provably guarantee that the prediction cannot be altered by any possible synonymous word substitution. In this work, we propose a certified robust method based on a new randomized smoothing technique, which constructs a stochastic ensemble by applying random word substitutions to the input sentences and leverages the statistical properties of the ensemble to provably certify robustness. Our method is simple and structure-free in that it requires only black-box queries of the model outputs, and hence can be applied to any pre-trained model (such as BERT) and any type of model (word-level or subword-level). Our method significantly outperforms recent state-of-the-art methods for certified robustness on both the IMDB and Amazon text classification tasks. To the best of our knowledge, ours is the first work to achieve certified robustness on large systems such as BERT with practically meaningful certified accuracy.
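The smoothing step the abstract describes can be pictured concretely: classify many randomly perturbed copies of the input and take the majority vote, whose vote statistics then drive the robustness certificate. Below is a minimal Python sketch under assumed names; `SYNONYMS`, `predict`, `random_substitution`, and `smoothed_predict` are illustrative stand-ins, not the paper's actual API, synonym set, or certification procedure.

```python
import random
from collections import Counter

# Hypothetical synonym table; the paper derives its perturbation sets from
# a real synonym network, so this tiny table is purely illustrative.
SYNONYMS = {
    "good": ["great", "fine", "nice"],
    "great": ["good", "fine", "nice"],
    "movie": ["film", "picture"],
}

def predict(sentence):
    # Toy stand-in for a black-box model query (e.g., a fine-tuned BERT
    # classifier). Only the output label is used, which is what makes the
    # approach structure-free.
    positive_cues = {"good", "great", "fine", "nice"}
    return "positive" if positive_cues & set(sentence.split()) else "negative"

def random_substitution(words, sub_prob=0.5):
    """Replace each word with a uniformly chosen synonym with probability sub_prob."""
    return [
        random.choice(SYNONYMS[w])
        if w in SYNONYMS and random.random() < sub_prob
        else w
        for w in words
    ]

def smoothed_predict(sentence, num_samples=100):
    """Majority vote over a stochastic ensemble of randomly perturbed inputs.

    The same vote counts supply the statistics (e.g., a lower confidence
    bound on the top-class probability) that a certification step would use.
    """
    words = sentence.split()
    votes = Counter(
        predict(" ".join(random_substitution(words)))
        for _ in range(num_samples)
    )
    return votes.most_common(1)[0][0]

print(smoothed_predict("a good movie overall"))  # -> "positive"
```

Because only black-box label queries are made, the same smoothing wrapper applies unchanged to any underlying classifier, word-level or subword-level.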


