EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks

04/21/2020
by Sanchari Sen, et al.

Ensuring robustness of Deep Neural Networks (DNNs) is crucial to their adoption in safety-critical applications such as self-driving cars, drones, and healthcare. Notably, DNNs are vulnerable to adversarial attacks, in which small input perturbations can produce catastrophic misclassifications. In this work, we propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks. EMPIR is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full-precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. EMPIR overcomes this limitation to achieve the 'best of both worlds', i.e., the higher unperturbed accuracies of the full-precision models combined with the higher robustness of the low-precision models, by composing them in an ensemble. Further, as low-precision DNN models have significantly lower computational and storage requirements than full-precision models, EMPIR models only incur modest compute and memory overheads compared to a single full-precision model (<25% in our evaluations). We evaluate EMPIR across a suite of DNNs for 3 different image recognition tasks (MNIST, CIFAR-10 and ImageNet) and under 4 different adversarial attacks. Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets, respectively, when compared to single full-precision models, without sacrificing accuracy on the unperturbed inputs.
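To make the ensembling idea concrete, the following minimal Python/NumPy sketch builds an ensemble from one full-precision model and two quantized copies of it, then combines their class probabilities. The LinearSoftmaxModel stand-in, the symmetric uniform quantizer, and plain probability averaging as the combination rule are illustrative assumptions for this sketch, not the paper's exact architecture or output-combination mechanism.

import numpy as np

def quantize(w, num_bits):
    # Symmetric uniform quantization of a weight tensor to num_bits
    # (one common scheme; the paper's quantization method may differ).
    levels = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

class LinearSoftmaxModel:
    # Hypothetical stand-in for a trained DNN: a single linear layer
    # followed by softmax, returning class probabilities.
    def __init__(self, w):
        self.w = w
    def predict(self, x):
        logits = x @ self.w
        e = np.exp(logits - logits.max())
        return e / e.sum()

def ensemble_predict(models, x):
    # EMPIR-style combination, sketched as averaging the per-model
    # class probabilities and taking the argmax.
    avg = np.mean([m.predict(x) for m in models], axis=0)
    return int(np.argmax(avg))

# One full-precision member plus 4-bit and 2-bit quantized members.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 10))                # 16 features, 10 classes
members = [LinearSoftmaxModel(w),                # full precision
           LinearSoftmaxModel(quantize(w, 4)),   # 4-bit weights
           LinearSoftmaxModel(quantize(w, 2))]   # 2-bit weights
x = rng.standard_normal(16)
print(ensemble_predict(members, x))

Because the low-precision members share weights with (or are far smaller than) the full-precision member, adding them to the ensemble costs much less than duplicating the full-precision model, which is the source of the modest overheads cited in the abstract.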

