Associative Adversarial Learning Based on Selective Attack

12/28/2021
by   Runqi Wang, et al.

A human's attention can intuitively adapt to corrupted areas of an image by recalling a similar uncorrupted image they have previously seen. This observation motivates us to improve the attention of adversarial images by considering their clean counterparts. To accomplish this, we introduce Associative Adversarial Learning (AAL) into adversarial learning to guide a selective attack. We formulate the intrinsic relationship between attention and attack (perturbation) as a coupling optimization problem to improve their interaction. This leads to an attention backtracking algorithm that can effectively enhance the attention's adversarial robustness. Our method is generic and can be used to address a variety of tasks by simply choosing different kernels for the associative attention, which select different regions for a specific attack. Experimental results show that the selective attack improves the model's performance. Our method improves the recognition accuracy of adversarial training on ImageNet by 8.32% over the baseline, increases object detection mAP on PascalVOC by 2.02%, and improves the recognition accuracy of few-shot learning on miniImageNet by 1.63%.
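To make the idea of a "selective attack" concrete, the following is a minimal illustrative sketch (not the authors' algorithm): a sign-based, FGSM-style perturbation is applied only to the image regions with the highest attention, where attention is approximated here simply by gradient magnitude. The function name `selective_fgsm`, the `top_frac` parameter, and the gradient-magnitude attention proxy are all assumptions for illustration.

```python
import numpy as np

def selective_fgsm(x, grad, eps=0.1, top_frac=0.25):
    """Perturb only the top_frac fraction of pixels with the largest
    attention, approximating attention by |gradient| (an illustrative
    assumption, not the paper's associative attention kernel)."""
    attn = np.abs(grad)
    # Threshold that keeps only the highest-attention pixels.
    thresh = np.quantile(attn, 1.0 - top_frac)
    mask = (attn >= thresh).astype(x.dtype)
    # FGSM-style step, restricted to the selected (attacked) region.
    return x + eps * np.sign(grad) * mask

# Toy example with a linear "loss" L(x) = sum(w * x), whose gradient is w.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
w = rng.standard_normal((8, 8))
x_adv = selective_fgsm(x, w, eps=0.1, top_frac=0.25)
changed = int(np.sum(x_adv != x))  # only ~25% of pixels are touched
```

In a real setting, `grad` would be the loss gradient with respect to the input under the current model, and the mask would come from the associative attention rather than raw gradient magnitude; the sketch only shows how a mask restricts the perturbation to selected regions.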

