Attacking Text Classifiers via Sentence Rewriting Sampler

04/17/2021
by Lei Xu, et al.

Most adversarial attack methods on text classification change the classifier's prediction by modifying only a few words or characters. Few attempt to attack classifiers by rewriting a whole sentence, owing to the difficulty of sentence-level rephrasing while maintaining high semantic similarity and sentence quality. To tackle this problem, we design a general sentence rewriting sampler (SRS) framework that can conditionally generate meaningful sentences. We then customize SRS to attack text classification models. Our method can effectively rewrite the original sentence in multiple ways while maintaining high semantic similarity and good sentence quality. Experimental results show that many of these rewritten sentences are misclassified by the classifier. Our method achieves a better attack success rate on 4 out of 7 datasets, as well as significantly better sentence quality on all 7 datasets.
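The attack loop the abstract describes — generate sentence-level rewrites, keep only those that stay semantically close to the original, and report any that flip the classifier's prediction — can be sketched as below. The rewriter, similarity function, and classifier here are toy stand-ins invented for illustration; they are not the paper's SRS components, which the abstract does not specify in detail.

```python
# Hedged sketch of a rewrite-based adversarial attack loop.
# toy_rewriter, toy_similarity, and toy_classifier are placeholder
# assumptions, NOT the paper's actual SRS model, similarity metric,
# or target classifier.

def toy_rewriter(sentence):
    """Yield candidate rewrites (stand-in for a sentence-level sampler).

    Here we naively drop one word at a time; a real sampler would
    generate full paraphrases.
    """
    words = sentence.split()
    for i in range(len(words)):
        yield " ".join(words[:i] + words[i + 1:])

def toy_similarity(a, b):
    """Jaccard word overlap as a crude semantic-similarity proxy."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

def toy_classifier(sentence):
    """Pretend sentiment classifier: label 1 iff 'good' appears."""
    return 1 if "good" in sentence.split() else 0

def attack(sentence, sim_threshold=0.5):
    """Return a high-similarity rewrite that flips the prediction, or None."""
    original_label = toy_classifier(sentence)
    for candidate in toy_rewriter(sentence):
        if (toy_similarity(sentence, candidate) >= sim_threshold
                and toy_classifier(candidate) != original_label):
            return candidate
    return None
```

With these stand-ins, `attack("this movie is good fun")` finds the rewrite "this movie is fun", which keeps most of the wording (Jaccard similarity 0.8) but flips the toy classifier's label from 1 to 0. The accept/reject structure — similarity gate first, then a label-flip check — is the part that mirrors the method described above.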
