Leveraging Extracted Model Adversaries for Improved Black Box Attacks

10/30/2020
by Naveen Jafer Nizar, et al.

We present a method for adversarial input generation against black box models for reading comprehension based question answering. Our approach is composed of two steps. First, we approximate a victim black box model via model extraction (Krishna et al., 2020). Second, we use our own white box method to generate input perturbations that cause the approximate model to fail. These perturbed inputs are used against the victim. In experiments we find that our method improves on the efficacy of AddAny—a white box attack—performed on the approximate model by 25% F1, and the AddSent attack—a black box attack—by 11% F1.
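The two-step pipeline the abstract describes—extract a surrogate of the victim, attack the surrogate with full access, then replay the adversarial inputs against the victim—can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the "models" below are toy keyword-matching rules standing in for trained QA systems, and all function names (`victim_answer`, `extract_surrogate`, `attack_via_surrogate`) are hypothetical.

```python
def victim_answer(question, context):
    """Stand-in for the black-box victim QA model: returns the first
    context word that also appears in the question. A real victim would
    be a trained reading-comprehension model queried over an API."""
    q_words = question.lower().rstrip("?").split()
    for word in context.lower().rstrip(".").split():
        if word in q_words:
            return word
    return ""

def extract_surrogate(sample_inputs):
    """Step 1 (model extraction): label queries with the victim's outputs
    and train a local approximation. Here we skip training and return a
    behavioral copy; a real surrogate would be a QA model fit to labels."""
    labels = {inp: victim_answer(*inp) for inp in sample_inputs}
    _ = labels  # a real implementation would train on these pairs
    return victim_answer

def attack_via_surrogate(surrogate, question, context, candidate_prefixes):
    """Step 2 (white-box attack on the surrogate): search for a distractor
    that changes the surrogate's answer—a stand-in for gradient-guided
    search such as AddAny—then reuse that perturbed input on the victim."""
    clean = surrogate(question, context)
    for prefix in candidate_prefixes:
        perturbed = prefix + " " + context
        if surrogate(question, perturbed) != clean:
            return perturbed  # candidate adversarial input to transfer
    return None
```

The key assumption the paper tests empirically is transferability: a perturbation that fools the surrogate often fools the victim too, because extraction makes the surrogate's decision surface close to the victim's.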
