Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness

04/10/2020
by   Haidong Xie, et al.

With growing interest in the attack and defense of deep neural networks, researchers are focusing more on the robustness of models deployed on devices with limited memory. Thus, unlike adversarial training, which considers only the balance between accuracy and robustness, we address a more meaningful and critical issue: the balance among accuracy, efficiency, and robustness (AER). Some recent works have examined this issue but report differing observations, and the relations among AER remain unclear. This paper first investigates the robustness of pruned models with different compression ratios under the gradual pruning process and concludes that the robustness of a pruned model varies drastically with the pruning process, especially in response to high-strength attacks. Second, we test the performance of mixing clean data with adversarial examples (AEs) generated under a prescribed uniform budget into the gradual pruning process, called adversarial pruning, and find that the pruned model's robustness is highly sensitive to this budget. Furthermore, to better balance the AER, we propose an approach called blind adversarial pruning (BAP), which introduces the idea of blind adversarial training into the gradual pruning process. The main idea is to use a cutoff-scale strategy to adaptively estimate a nonuniform budget for the AEs used during pruning, thus ensuring that the strengths of the AEs remain within a reasonable range at each pruning step and ultimately improving the overall AER of the pruned model. Experimental results obtained by using BAP to prune classification models on several benchmarks demonstrate the method's competitive performance: the robustness of a model pruned by BAP is more stable across varying pruning processes, and BAP achieves a better overall AER than adversarial pruning.
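The two ingredients the abstract combines — a gradual pruning schedule and a cutoff-scale, per-example AE budget — can be sketched as follows. This is an illustrative outline only, not the paper's implementation: the polynomial sparsity schedule is a common choice for gradual pruning (in the style of Zhu and Gupta), and the function and parameter names (`eps_lo`, `eps_hi`, `estimated_margins`) are hypothetical.

```python
def gradual_sparsity(step, total_steps, final_sparsity=0.9):
    """Polynomial gradual-pruning schedule: sparsity ramps from 0 to
    final_sparsity over total_steps (a common schedule, assumed here)."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def cutoff_scale_budgets(estimated_margins, eps_lo=0.01, eps_hi=0.1):
    """Illustrative cutoff-scale idea: map each example's estimated
    perturbation margin to its own budget, cut off into [eps_lo, eps_hi]
    so the AEs used at each pruning step are neither trivially weak nor
    destructively strong."""
    return [min(max(m, eps_lo), eps_hi) for m in estimated_margins]

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of a weight list."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# At each pruning step one would: estimate margins, build the nonuniform
# budgets, generate AEs at those budgets, train on the mixed batch, then
# prune to the scheduled sparsity.
print(gradual_sparsity(5, 10))                       # mid-schedule sparsity
print(cutoff_scale_budgets([0.001, 0.05, 0.5]))      # clamped budgets
print(magnitude_prune([0.5, -0.01, 0.3, 0.02], 0.5)) # half the weights zeroed
```

The key design point mirrored here is that the budget is adaptive and nonuniform: easy examples (small margin) are not pushed to a large uniform epsilon, which is what the abstract identifies as the source of adversarial pruning's sensitivity to a single prescribed budget.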

