
Adversarially Trained Model Compression: When Robustness Meets Efficiency

by Shupeng Gui et al.

The robustness of deep models to adversarial attacks has gained significant attention in recent years, as have model compactness and efficiency; yet the two have mostly been studied separately, with few connections drawn between them. This paper asks: how can we combine the best of both worlds and obtain a network that is both robust and compact? The answer is not as straightforward as it may seem, since the goals of robustness and compactness can conflict. We formally study this new question by proposing a novel Adversarially Trained Model Compression (ATMC) framework: a unified constrained optimization formulation, together with an efficient algorithm to solve it. An extensive, carefully designed set of experiments demonstrates that ATMC achieves a remarkably more favorable trade-off among model size, accuracy, and robustness than currently available alternatives across various settings.
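The core idea of combining adversarial training with compression as one constrained optimization can be illustrated on a toy model. The sketch below is an assumption-laden simplification, not the paper's ATMC algorithm: it adversarially trains a logistic-regression classifier with FGSM-style input perturbations while projecting the weights onto a hard sparsity constraint (magnitude pruning) after every update. All names, hyperparameters, and the choice of FGSM plus magnitude pruning are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs in 10 dimensions.
X = np.vstack([rng.normal(-1, 0.5, (50, 10)), rng.normal(1, 0.5, (50, 10))])
y = np.hstack([np.zeros(50), np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(10)
eps, lr, k = 0.1, 0.5, 3  # attack radius, learning rate, sparsity budget

for _ in range(200):
    # Inner maximization (FGSM-style): perturb inputs against the current model.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)          # d(logistic loss)/d(input)
    X_adv = X + eps * np.sign(grad_x)

    # Outer minimization: gradient step on the adversarial examples.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

    # Projection onto the compactness constraint ||w||_0 <= k
    # (keep only the k largest-magnitude weights).
    keep = np.argsort(np.abs(w))[-k:]
    mask = np.zeros_like(w)
    mask[keep] = 1.0
    w *= mask

acc = np.mean((sigmoid(X @ w) > 0.5) == y)
print(np.count_nonzero(w), acc)
```

Running the loop yields a classifier that uses at most `k` of the 10 weights yet still separates the (adversarially perturbed) blobs, illustrating the "project onto the compact set while training robustly" structure that a unified constrained formulation makes explicit.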




ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries

Zero-shot learning (ZSL) has received extensive attention recently espec...

Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free

Adversarial training and its many variants substantially improve deep ne...

Can collaborative learning be private, robust and scalable?

We investigate the effectiveness of combining differential privacy, mode...

Distribution-induced Bidirectional Generative Adversarial Network for Graph Representation Learning

Graph representation learning aims to encode all nodes of a graph into l...

Label Smoothing and Adversarial Robustness

Recent studies indicate that current adversarial attack methods are flaw...

Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin

Deep models, while being extremely versatile and accurate, are vulnerabl...

Adversarially robust and explainable model compression with on-device personalization for NLP applications

On-device Deep Neural Networks (DNNs) have recently gained more attentio...

Code Repositories


[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”
