
Adversarially Trained Model Compression: When Robustness Meets Efficiency

02/10/2019
by Shupeng Gui, et al.

The robustness of deep models to adversarial attacks has gained significant attention in recent years; so have model compactness and efficiency. Yet the two topics have mostly been studied separately, with few connections drawn between them. This paper asks: how can we combine the best of both worlds and obtain a network that is both robust and compact? The answer is not as straightforward as it may seem, since the goals of robustness and compactness can at times conflict. We formally study this new question by proposing a novel Adversarially Trained Model Compression (ATMC) framework. A unified constrained optimization formulation is designed, and an efficient algorithm is developed to solve it. An extensive set of experiments is then carefully designed and presented, demonstrating that ATMC achieves a remarkably more favorable trade-off among model size, accuracy, and robustness than currently available alternatives across various settings.
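The abstract does not spell out the unified formulation, but the general recipe of jointly pursuing robustness and compactness can be illustrated with a minimal PyTorch sketch: PGD-based adversarial training supplies the robustness objective, and a magnitude-pruning step stands in for the compression constraint as a projection after each epoch. This is not the authors' ATMC algorithm (which casts compression and adversarial training as a single constrained optimization problem); pgd_attack, magnitude_prune, robust_compress_epoch, and all hyperparameter values below are illustrative placeholders.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Craft adversarial examples with projected gradient descent (PGD)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude entries of each weight tensor."""
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:  # prune conv/linear weights, leave biases dense
                k = int(p.numel() * sparsity)
                if k > 0:
                    threshold = p.abs().flatten().kthvalue(k).values
                    p.mul_((p.abs() > threshold).float())

def robust_compress_epoch(model, loader, optimizer, sparsity=0.9, device="cpu"):
    """One epoch of adversarial training followed by a pruning projection."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)              # robustness objective
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
    magnitude_prune(model, sparsity)                 # compression constraint

In this simplified reading, adversarial training minimizes the loss on worst-case perturbed inputs while the pruning step keeps the parameter count within a budget; ATMC instead optimizes both jointly under one constrained formulation.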


Related Research

10/24/2019

ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries

Zero-shot learning (ZSL) has received extensive attention recently espec...

10/22/2020

Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free

Adversarial training and its many variants substantially improve deep ne...

05/05/2022

Can collaborative learning be private, robust and scalable?

We investigate the effectiveness of combining differential privacy, mode...

12/04/2019

Distribution-induced Bidirectional Generative Adversarial Network for Graph Representation Learning

Graph representation learning aims to encode all nodes of a graph into l...

09/17/2020

Label Smoothing and Adversarial Robustness

Recent studies indicate that current adversarial attack methods are flaw...

07/26/2019

Understanding Adversarial Robustness: The Trade-off between Minimum and Average Margin

Deep models, while being extremely versatile and accurate, are vulnerabl...

01/10/2021

Adversarially robust and explainable model compression with on-device personalization for NLP applications

On-device Deep Neural Networks (DNNs) have recently gained more attentio...

Code Repositories

ATMC

[NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, “Model Compression with Adversarial Robustness: A Unified Optimization Framework”

