Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks

04/25/2023
by Ferheen Ayaz, et al.

Reducing the memory footprint of Machine Learning (ML) models, particularly Deep Neural Networks (DNNs), is essential to enable their deployment on resource-constrained tiny devices. However, DNN models are vulnerable to adversarial attacks: they can be fooled by adding slight perturbations to their inputs. The challenge is therefore how to create accurate, robust, and tiny DNN models deployable on resource-constrained embedded devices. This paper reports the results of devising a tiny DNN model, robust to adversarial black-box and white-box attacks, trained with an automatic quantization-aware training framework, i.e. QKeras, with the deep quantization loss accounted for in the learning loop, thereby making the designed DNNs more accurate for deployment on tiny devices. We investigated how QKeras and an adversarial robustness technique, Jacobian Regularization (JR), can provide a co-optimization strategy that exploits the DNN topology and a per-layer JR approach to produce robust yet tiny deeply quantized DNN models. As a result, a new DNN model implementing this co-optimization strategy was conceived, developed, and tested on three datasets containing both image and audio inputs, and its performance was compared with existing benchmarks against various white-box and black-box attacks. Experimental results demonstrated that, on average, our proposed DNN model was 8.3% more accurate than the MLCommons/Tiny benchmarks in the presence of white-box and black-box attacks on the CIFAR-10 image dataset and on a subset of the Google Speech Commands audio dataset, respectively. It was also 6.5% more accurate on the SVHN image dataset.
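To make the co-optimization idea concrete, the sketch below shows, in outline, how a deeply quantized QKeras model can be trained with a Jacobian Regularization penalty added to the task loss. This is a minimal illustration, not the authors' released code: the layer sizes, bit widths, and the penalty weight jr_lambda are illustrative assumptions, and the JR term uses the standard random-projection estimate of the Jacobian's Frobenius norm rather than the paper's exact per-layer formulation.

```python
# Minimal sketch: QKeras quantization-aware model + Jacobian Regularization (JR) loss.
# Layer sizes, bit widths, and jr_lambda are illustrative assumptions.
import tensorflow as tf
from qkeras import QConv2D, QDense, QActivation, quantized_bits


def build_quantized_model(input_shape=(32, 32, 3), num_classes=10, bits=4):
    """Small image classifier with weights and activations quantized to `bits` bits."""
    q = quantized_bits(bits, 0, 1)  # signed fixed-point quantizer
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        QConv2D(16, 3, padding="same", kernel_quantizer=q, bias_quantizer=q),
        QActivation(f"quantized_relu({bits},0)"),
        tf.keras.layers.MaxPooling2D(),
        QConv2D(32, 3, padding="same", kernel_quantizer=q, bias_quantizer=q),
        QActivation(f"quantized_relu({bits},0)"),
        tf.keras.layers.Flatten(),
        QDense(num_classes, kernel_quantizer=q, bias_quantizer=q),
    ])


def jacobian_penalty(model, x, num_proj=1):
    """Random-projection estimate of ||d f(x)/d x||_F^2 (standard JR estimator)."""
    penalty = 0.0
    batch = tf.cast(tf.shape(x)[0], tf.float32)
    for _ in range(num_proj):
        with tf.GradientTape() as tape:
            tape.watch(x)
            logits = model(x, training=True)
            # Project the outputs onto a random unit vector before differentiating.
            v = tf.random.normal(tf.shape(logits))
            v = v / (tf.norm(v, axis=-1, keepdims=True) + 1e-12)
            proj = tf.reduce_sum(v * logits)
        grad = tape.gradient(proj, x)
        penalty += tf.reduce_sum(tf.square(grad)) / batch
    return penalty / num_proj


def train_step(model, optimizer, x, y, jr_lambda=0.01):
    """One optimization step on cross-entropy plus the JR penalty."""
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        ce = tf.keras.losses.sparse_categorical_crossentropy(y, logits, from_logits=True)
        loss = tf.reduce_mean(ce) + jr_lambda * jacobian_penalty(model, x)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Penalizing the input-output Jacobian flattens the model's local sensitivity to small input perturbations, which is what provides robustness to the white-box and black-box attacks discussed above, while the QKeras quantizers keep the quantization error inside the training loop.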


