A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation

08/29/2023
by Sahar Sadrizadeh, et al.

Neural Machine Translation (NMT) models have been shown to be vulnerable to adversarial attacks, wherein carefully crafted perturbations of the input can mislead the target model. In this paper, we introduce ACT, a novel adversarial attack framework against NMT systems that is guided by a classifier. In our attack, the adversary aims to craft meaning-preserving adversarial examples whose translations by the NMT model belong to a different class than the original translations in the target language. Unlike previous attacks, this approach affects the translation more substantially: it alters the overall meaning of the output, which in turn changes the class assigned by the classifier. To evaluate the robustness of NMT models to this attack, we propose enhancements to existing black-box word-replacement attacks that incorporate the output translations of the target NMT model and the output logits of a classifier into the attack process. Extensive experiments in various settings, including a comparison with existing untargeted attacks, demonstrate that the proposed attack is considerably more successful at altering the class of the output translation and has a greater effect on the translation itself. This new paradigm reveals vulnerabilities of NMT systems by focusing on the class of the translation rather than on mere translation quality, as studied traditionally.
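To make the attack pipeline concrete, below is a minimal sketch of a greedy classification-guided word-replacement loop of the kind the abstract describes: each candidate substitution is scored by translating the perturbed sentence and reading the classifier's logit for the original class of the translation. All names here (act_attack, translate, class_logits, candidates) are hypothetical illustrations rather than the authors' released code, and both the NMT model and the classifier are treated as opaque black-box callables.

    from typing import Callable, List, Tuple


    def _argmax(xs: List[float]) -> int:
        return max(range(len(xs)), key=xs.__getitem__)


    def act_attack(
        sentence: str,
        translate: Callable[[str], str],             # black-box NMT system
        class_logits: Callable[[str], List[float]],  # classifier over target-language text
        candidates: Callable[[str], List[str]],      # meaning-preserving substitutes for a word
        max_swaps: int = 5,
    ) -> Tuple[str, str]:
        """Greedily swap words so the classifier assigns the *translation* a new class."""
        orig_class = _argmax(class_logits(translate(sentence)))
        words = sentence.split()
        score = class_logits(translate(sentence))[orig_class]
        for _ in range(max_swaps):
            best = None  # (original-class logit after swap, position, replacement)
            for i, word in enumerate(words):
                for sub in candidates(word):
                    trial = " ".join(words[:i] + [sub] + words[i + 1:])
                    logit = class_logits(translate(trial))[orig_class]
                    if best is None or logit < best[0]:
                        best = (logit, i, sub)
            if best is None or best[0] >= score:
                break  # no remaining swap lowers the original class's logit
            score, pos, sub = best
            words[pos] = sub
            if _argmax(class_logits(translate(" ".join(words)))) != orig_class:
                break  # success: the translation now falls in a different class
        adversarial = " ".join(words)
        return adversarial, translate(adversarial)

In practice the candidate set would come from synonym dictionaries or a masked language model, with similarity filters to keep the perturbation meaning-preserving, and the translation queries would be cached rather than recomputed; this sketch only fixes the control flow under those stated assumptions.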


Related research

02/02/2023
TransFool: An Adversarial Attack against Neural Machine Translation Models
Deep neural networks have been shown to be vulnerable to small perturbat...

11/02/2020
Targeted Poisoning Attacks on Black-Box Neural Machine Translation
As modern neural machine translation (NMT) systems have been widely depl...

05/02/2023
Sentiment Perception Adversarial Attacks on Neural Machine Translation Systems
With the advent of deep learning methods, Neural Machine Translation (NM...

11/03/2020
Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks
Word sense disambiguation is a well-known source of translation errors i...

04/19/2022
Generating Authentic Adversarial Examples beyond Meaning-preserving with Doubly Round-trip Translation
Generating adversarial examples for Neural Machine Translation (NMT) wit...

06/14/2023
A Relaxed Optimization Approach for Adversarial Attacks against Neural Machine Translation Models
In this paper, we propose an optimization-based adversarial attack again...

11/09/2019
A Reinforced Generation of Adversarial Samples for Neural Machine Translation
Neural machine translation systems tend to fail on less decent inputs d...
