Imitation Attacks and Defenses for Black-box Machine Translation Systems

by Eric Wallace, et al.

We consider an adversary looking to steal or attack a black-box machine translation (MT) system, either for financial gain or to exploit model errors. We first show that black-box MT systems can be stolen by querying them with monolingual sentences and training models to imitate their outputs. Using simulated experiments, we demonstrate that MT model stealing is possible even when imitation models have different input data or architectures than their victims. Applying these ideas, we train imitation models that reach within 0.6 BLEU of three production MT systems on both high-resource and low-resource language pairs. We then leverage the similarity of our imitation models to transfer adversarial examples to the production systems. We use gradient-based attacks that expose inputs which lead to semantically-incorrect translations, dropped content, and vulgar model outputs. To mitigate these vulnerabilities, we propose a defense that modifies translation outputs in order to misdirect the optimization of imitation models. This defense degrades imitation model BLEU and attack transfer rates at some cost in BLEU and inference speed.
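The core of the imitation attack described above is simple: query the black-box system with monolingual source sentences, record its translations, and treat the resulting (source, translation) pairs as a parallel corpus for training an imitation model. A minimal sketch of this data-collection step is below; `query_victim` is a hypothetical stand-in for a black-box MT API (here a toy word-substitution "translator" so the example is self-contained), and `collect_imitation_data` is an illustrative helper, not part of the paper's released code.

```python
# Toy stand-in lexicon for the black-box "victim" translator (assumption,
# used only so this sketch runs without a real MT service).
VICTIM_LEXICON = {"hello": "hallo", "world": "welt", "good": "gut"}

def query_victim(sentence):
    """Stand-in for one black-box API call to the victim MT system."""
    return " ".join(VICTIM_LEXICON.get(w, w) for w in sentence.split())

def collect_imitation_data(monolingual_corpus):
    """Label monolingual source sentences with the victim's outputs.

    The resulting (source, translation) pairs form the imitation model's
    training set; any sequence-to-sequence architecture can then be fit
    to them, even one differing from the victim's, per the abstract.
    """
    return [(src, query_victim(src)) for src in monolingual_corpus]

corpus = ["hello world", "good world"]
parallel_data = collect_imitation_data(corpus)
```

Once the imitation model is trained on such pairs, its gradients become a white-box proxy for crafting adversarial examples that transfer back to the production system.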




Adversarial Imitation Attack

Deep learning models are known to be vulnerable to adversarial examples....

Beyond Model Extraction: Imitation Attack for Black-Box NLP APIs

Machine-learning-as-a-service (MLaaS) has attracted millions of users to...

Simplify-then-Translate: Automatic Preprocessing for Black-Box Machine Translation

Black-box machine translation systems have proven incredibly useful for ...

Targeted Poisoning Attacks on Black-Box Neural Machine Translation

As modern neural machine translation (NMT) systems have been widely depl...

Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models

Neural text ranking models have witnessed significant advancement and ar...

On the Feasibility of Specialized Ability Extracting for Large Language Code Models

Recent progress in large language code models (LLCMs) has led to a drama...

Beyond Glass-Box Features: Uncertainty Quantification Enhanced Quality Estimation for Neural Machine Translation

Quality Estimation (QE) plays an essential role in applications of Machi...

Code Repositories


Imitation Attacks and Defenses for Black-box Machine Translation Systems

