An Integrated Approach to Produce Robust Models with High Efficiency

08/31/2020
by Zhijian Li, et al.

Deep Neural Networks (DNNs) need to be both efficient and robust for practical use. Quantization and structure simplification are promising ways to adapt DNNs to mobile devices, and adversarial training is the most popular method for making DNNs robust. In this work, we pursue both properties by applying a convergent relaxation quantization algorithm, Binary-Relax (BR), to a robust adversarially trained model, the ResNets Ensemble via Feynman-Kac Formalism (EnResNet). We also discover that quantization at higher precisions, such as ternary (tnn) and 4-bit, produces sparse DNNs. However, this sparsity is unstructured under adversarial training. To address the problems that adversarial training degrades DNNs' accuracy on clean images and destroys the structure of the sparsity, we design a trade-off loss function that helps DNNs preserve their natural accuracy and improve channel sparsity. With our trade-off loss function, we achieve both goals with no loss of robustness under weak attacks and only a minor loss of robustness under strong attacks. Combining quantized EnResNet with the trade-off loss function, we obtain robust models with high efficiency.
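The abstract does not spell out the Binary-Relax update, but the published BR algorithm replaces the hard projection of weights onto the quantized set with a relaxed projection whose relaxation parameter grows during training, before switching to exact quantization. Below is a minimal NumPy sketch of that idea; the function names, the binary quantized set, and the scale choice mean(|w|) are illustrative assumptions, not code from the paper.

```python
import numpy as np

def binarize(w):
    # Hard projection onto the binary set {-s, +s}; a common scale
    # choice is the mean absolute weight (an assumption here).
    s = np.mean(np.abs(w))
    return s * np.sign(w)

def binary_relax(w, lam):
    # Relaxed projection: a convex combination of the float weights
    # and their hard binarization. Small lam keeps weights close to
    # float; lam -> infinity recovers exact quantization. In BR, lam
    # is increased over epochs before a final hard-projection phase.
    return (w + lam * binarize(w)) / (1.0 + lam)

# Toy usage: weights drift toward binary values as lam grows.
w = np.random.randn(5)
for lam in (0.1, 1.0, 10.0):
    print(lam, binary_relax(w, lam))
```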
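Similarly, the abstract only names the trade-off loss. One plausible form, written as a PyTorch sketch under stated assumptions, mixes the natural (clean) and adversarial cross-entropy losses and adds a group-sparsity penalty over convolutional output channels to encourage structured (channel) sparsity. The weights alpha and beta are hypothetical knobs, not the paper's notation.

```python
import torch
import torch.nn.functional as F

def tradeoff_loss(model, x_clean, x_adv, y, alpha=0.5, beta=1e-4):
    # Convex combination of natural and adversarial losses.
    loss_nat = F.cross_entropy(model(x_clean), y)
    loss_adv = F.cross_entropy(model(x_adv), y)

    # Group Lasso over conv output channels: the sum of per-channel
    # L2 norms, which pushes whole channels toward zero.
    group_penalty = 0.0
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            # weight shape: (out_channels, in_channels, kH, kW)
            group_penalty = group_penalty + m.weight.flatten(1).norm(dim=1).sum()

    return alpha * loss_nat + (1 - alpha) * loss_adv + beta * group_penalty
```

In training, x_adv would be generated from x_clean by a standard attack such as PGD, as is usual in adversarial training.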


