Comment on Transferability and Input Transformation with Additive Noise

06/18/2022
by Hoki Kim et al.

Adversarial attacks have demonstrated the vulnerability of neural networks: by adding small perturbations to a benign example, an attacker can generate adversarial examples that cause deep learning models to misclassify. More importantly, an adversarial example generated from a specific model can also deceive other models without modification, a phenomenon known as "transferability". Here, we analyze the relationship between transferability and input transformation with additive noise, mathematically proving that the modified optimization can produce more transferable adversarial examples.
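
The abstract does not spell out the construction itself; as a rough sketch of the general idea (not the paper's exact formulation), the following PyTorch snippet averages the loss gradient over several additive-Gaussian-noise copies of the input before taking a single signed gradient step. The function name and the parameters sigma and n_samples are illustrative assumptions, not the paper's notation.

    import torch
    import torch.nn.functional as F

    def noise_averaged_fgsm(model, x, y, eps=8 / 255, sigma=0.05, n_samples=10):
        # Input transformation with additive noise (illustrative sketch):
        # average the loss gradient over x + delta, delta ~ N(0, sigma^2 I),
        # so the resulting perturbation depends less on one model's exact
        # decision surface and tends to transfer better to other models.
        # Inputs x are assumed to be images scaled to [0, 1].
        grad = torch.zeros_like(x)
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)
            noisy.requires_grad_(True)
            loss = F.cross_entropy(model(noisy), y)
            grad += torch.autograd.grad(loss, noisy)[0]
        # One signed step on the noise-averaged gradient (FGSM-style update).
        return (x + eps * (grad / n_samples).sign()).clamp(0, 1)

The same averaging can be folded into an iterative attack by replacing the single signed step with repeated small steps projected back into the epsilon-ball.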
