Pruning Adversarially Robust Neural Networks without Adversarial Examples

10/09/2022
by Tong Jian, et al.

Adversarial pruning compresses models while preserving robustness, but current methods require access to adversarial examples during pruning, which significantly hampers training efficiency. Moreover, as new adversarial attacks and training methods develop at a rapid pace, adversarial pruning methods must be modified accordingly to keep up. In this work, we propose a novel framework for pruning a previously trained robust neural network while maintaining adversarial robustness, without generating further adversarial examples. We leverage concurrent self-distillation and pruning to preserve knowledge in the original model, and regularize the pruned model via the Hilbert-Schmidt Information Bottleneck. We comprehensively evaluate our proposed framework and show its superior performance in terms of both adversarial robustness and efficiency when pruning architectures trained on the MNIST, CIFAR-10, and CIFAR-100 datasets against five state-of-the-art attacks. Code is available at https://github.com/neu-spiral/PwoA/.
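
The framework described above combines two loss terms computed on clean data only: distilling the soft predictions of the previously trained robust model into the pruned model, and regularizing the pruned model's intermediate representations with the Hilbert-Schmidt Information Bottleneck. The sketch below illustrates how such a combined objective could look in PyTorch; it is not the released PwoA code, and the function names (`hsic`, `pwoa_loss`), kernel bandwidth, and loss weightings (`alpha`, `lam_x`, `lam_y`) are illustrative assumptions.

```python
# Minimal sketch of a distillation + HSIC-bottleneck pruning objective.
# Hypothetical names and weights; not the authors' released implementation.
import torch
import torch.nn.functional as F

def gaussian_kernel(x, sigma=5.0):
    # Pairwise Gaussian (RBF) kernel matrix for a batch of flattened features.
    x = x.flatten(1)
    d2 = torch.cdist(x, x) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=5.0):
    # Biased empirical HSIC estimator: tr(Kx H Ky H) / (n - 1)^2,
    # where H = I - (1/n) * ones is the centering matrix.
    n = x.shape[0]
    kx, ky = gaussian_kernel(x, sigma), gaussian_kernel(y, sigma)
    h = torch.eye(n, device=x.device) - 1.0 / n
    return torch.trace(kx @ h @ ky @ h) / (n - 1) ** 2

def pwoa_loss(student_logits, teacher_logits, feats, inputs, labels_onehot,
              temp=4.0, alpha=1.0, lam_x=1e-3, lam_y=1e-1):
    # Self-distillation: match the pruned model's soft predictions to the
    # previously trained robust model's predictions on clean inputs only.
    kd = F.kl_div(F.log_softmax(student_logits / temp, dim=1),
                  F.softmax(teacher_logits / temp, dim=1),
                  reduction="batchmean") * temp ** 2
    # HSIC bottleneck: reduce dependence of each intermediate feature map on
    # the input while keeping dependence on the labels.
    bottleneck = sum(lam_x * hsic(z, inputs) - lam_y * hsic(z, labels_onehot)
                     for z in feats)
    return alpha * kd + bottleneck
```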

Related research

02/24/2020 · On Pruning Adversarially Robust Neural Networks
In safety-critical but computationally resource-constrained applications...

06/04/2021 · Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness
We investigate the HSIC (Hilbert-Schmidt independence criterion) bottlen...

08/19/2021 · Pruning in the Face of Adversaries
The vulnerability of deep neural networks against adversarial examples -...

06/07/2023 · CFDP: Common Frequency Domain Pruning
As the saying goes, sometimes less is more – and when it comes to neural...

08/18/2022 · Enhancing Targeted Attack Transferability via Diversified Weight Pruning
Malicious attackers can generate targeted adversarial examples by imposi...

03/20/2020 · One Neuron to Fool Them All
Despite vast research in adversarial examples, the root causes of model ...

10/21/2022 · Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks
An off-the-shelf model as a commercial service could be stolen by model ...
