Doubly-Trained Adversarial Data Augmentation for Neural Machine Translation

10/12/2021
by Weiting Tan, et al.

Neural Machine Translation (NMT) models are known to degrade on noisy inputs. To make models robust, we generate adversarial augmentation samples that attack the model while preserving the source-side semantic meaning. To generate such samples, we propose a doubly-trained architecture that pairs two NMT models of opposite translation directions with a joint loss function, which combines the target-side attack with a source-side semantic similarity constraint. Experiments across three language pairs and two evaluation metrics show that these adversarial samples improve model robustness.
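The joint objective described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the weight `lam`, and the use of embedding cosine similarity as a stand-in for the source-side semantic constraint (which the paper enforces through the paired reverse-direction model) are all assumptions made for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened embedding matrices."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def joint_adversarial_score(target_attack_loss, src_emb, adv_src_emb, lam=1.0):
    """Combine the two terms of the joint objective.

    target_attack_loss: loss of the forward (src -> tgt) model on the
        perturbed source; larger values mean a stronger attack.
    src_emb, adv_src_emb: embeddings of the clean and perturbed source,
        standing in for the source-side semantic similarity constraint.
    lam: hypothetical weight trading off attack strength vs. similarity.
    """
    similarity = cosine_similarity(src_emb, adv_src_emb)
    # A candidate scores highly only if it both degrades the forward
    # model and stays semantically close to the original source.
    return target_attack_loss + lam * similarity
```

A perturbation search would then keep the candidate samples that maximize this score, yielding attacks that are damaging yet meaning-preserving.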

Related research

- 06/06/2019: Robust Neural Machine Translation with Doubly Adversarial Inputs
  Neural machine translation (NMT) often suffers from the vulnerability to...

- 05/10/2022: AdMix: A Mixed Sample Data Augmentation Method for Neural Machine Translation
  In Neural Machine Translation (NMT), data augmentation methods such as b...

- 11/09/2019: A Reinforced Generation of Adversarial Samples for Neural Machine Translation
  Neural machine translation systems tend to fail on less decent inputs d...

- 10/07/2019: Improving Neural Machine Translation Robustness via Data Augmentation: Beyond Back Translation
  Neural Machine Translation (NMT) models have been proved strong when tra...

- 05/31/2021: Beyond Noise: Mitigating the Impact of Fine-grained Semantic Divergences on Neural Machine Translation
  While it has been shown that Neural Machine Translation (NMT) is highly ...

- 11/03/2020: Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks
  Word sense disambiguation is a well-known source of translation errors i...

- 06/14/2023: A Relaxed Optimization Approach for Adversarial Attacks against Neural Machine Translation Models
  In this paper, we propose an optimization-based adversarial attack again...
