Imitation Attacks and Defenses for Black-box Machine Translation Systems

04/30/2020
by Eric Wallace, et al.

We consider an adversary looking to steal or attack a black-box machine translation (MT) system, either for financial gain or to exploit model errors. We first show that black-box MT systems can be stolen by querying them with monolingual sentences and training models to imitate their outputs. Using simulated experiments, we demonstrate that MT model stealing is possible even when imitation models have different input data or architectures than their victims. Applying these ideas, we train imitation models that reach within 0.6 BLEU of three production MT systems on both high-resource and low-resource language pairs. We then leverage the similarity of our imitation models to transfer adversarial examples to the production systems. We use gradient-based attacks that expose inputs leading to semantically incorrect translations, dropped content, and vulgar model outputs. To mitigate these vulnerabilities, we propose a defense that modifies translation outputs in order to misdirect the optimization of imitation models. This defense degrades imitation model BLEU and attack transfer rates, at some cost to the victim's own BLEU and inference speed.
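The stealing step the abstract describes is essentially sequence-level distillation: label monolingual sentences with the victim's translations, then train a local model to reproduce them. Below is a minimal sketch using PyTorch and Hugging Face Transformers. The query_victim stub, the Marian checkpoint, the toy sentences, and the single-example training loop are illustrative assumptions for this sketch, not the paper's exact setup.

    # Minimal sketch of black-box MT imitation ("model stealing"):
    # query the victim with monolingual sentences, then fine-tune a local
    # seq2seq model on the (source, victim-translation) pairs.
    import torch
    from transformers import MarianMTModel, MarianTokenizer

    def query_victim(source: str) -> str:
        """Stand-in for the black-box MT API; replace with real service calls."""
        canned = {
            "Das ist ein Test.": "This is a test.",
            "Wie spät ist es?": "What time is it?",
        }
        return canned[source]

    # 1. Label monolingual source data with the victim's outputs.
    monolingual = ["Das ist ein Test.", "Wie spät ist es?"]  # toy examples
    pairs = [(src, query_victim(src)) for src in monolingual]

    # 2. Distill: train the imitation model to reproduce the victim's outputs.
    tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-de-en")
    model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-de-en")
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    model.train()
    for src, victim_out in pairs:
        # text_target= produces the `labels` tensor for seq2seq training
        batch = tokenizer(src, text_target=victim_out, return_tensors="pt")
        loss = model(**batch).loss  # cross-entropy against the victim's tokens
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

Once the imitation model tracks the victim closely, standard white-box gradient-based attacks can be run locally against it, and the resulting adversarial inputs transferred to the production system, as the abstract describes.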


Related Research

03/28/2020

Adversarial Imitation Attack

Deep learning models are known to be vulnerable to adversarial examples....
08/29/2021

Beyond Model Extraction: Imitation Attack for Black-Box NLP APIs

Machine-learning-as-a-service (MLaaS) has attracted millions of users to...
05/22/2020

Simplify-then-Translate: Automatic Preprocessing for Black-Box Machine Translation

Black-box machine translation systems have proven incredibly useful for ...
11/02/2020

Targeted Poisoning Attacks on Black-Box Neural Machine Translation

As modern neural machine translation (NMT) systems have been widely depl...
09/14/2022

Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models

Neural text ranking models have witnessed significant advancement and ar...
03/06/2023

On the Feasibility of Specialized Ability Extracting for Large Language Code Models

Recent progress in large language code models (LLCMs) has led to a drama...
09/15/2021

Beyond Glass-Box Features: Uncertainty Quantification Enhanced Quality Estimation for Neural Machine Translation

Quality Estimation (QE) plays an essential role in applications of Machi...

Code Repositories

adversarial-mt

Imitation Attacks and Defenses for Black-box Machine Translation Systems


