Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks

02/14/2018
by Qi Liu, et al.

Deep neural networks (DNNs) now deliver human-level performance on many complex cognitive tasks in real-world applications, but they also introduce ever-growing security concerns. For example, emerging adversarial attacks show that very small, often imperceptible input perturbations can easily mislead the cognitive function of deep learning systems (DLS). Existing adversarial studies of DNNs are narrowly performed on ideal software-level models and focus on a single uncertainty factor, namely input perturbations; the impact of DNN model reshaping on adversarial attacks, which is introduced by hardware-favorable techniques such as hash-based weight compression during modern DNN hardware implementation, has never been discussed.

In this work, we investigate for the first time the multi-factor adversarial attack problem in practical model-optimized deep learning systems by jointly considering DNN model reshaping (e.g., HashNet-based deep compression) and input perturbations. We first augment adversarial example generation methods dedicated to compressed DNN models by combining software-based approaches with a mathematical model of the DNN reshaping. We then conduct a comprehensive robustness and vulnerability analysis of deeply compressed DNN models under the derived adversarial attacks. Finally, we develop a defense technique named "gradient inhibition" that hinders the generation of adversarial examples and thus effectively mitigates adversarial attacks against both software- and hardware-oriented DNNs. Simulation results show that "gradient inhibition" can sharply decrease the average success rate of adversarial attacks from a baseline of 87.99% with marginal accuracy degradation across various DNN benchmarks.
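The "hash-based weight compression" the abstract refers to follows the HashedNets idea: instead of storing a full weight matrix, every virtual weight position is mapped by a fixed hash function into a small vector of shared real parameters. The sketch below is a minimal NumPy illustration of that weight sharing, not the paper's implementation; a seeded random index generator stands in for the real hash functions, and all names (hashed_layer_weights, shared, etc.) are illustrative.

```python
import numpy as np

def hashed_layer_weights(n_in, n_out, shared, seed=0):
    """Expand a small vector of shared parameters into a full weight
    matrix via the hashing trick (HashedNets-style compression).

    Each virtual weight W[i, j] is fetched as sign(i, j) * shared[h(i, j)],
    so a dense n_in x n_out layer is stored with only len(shared) values.
    A seeded RNG stands in here for the fixed hash functions h and sign.
    """
    K = shared.size
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, K, size=(n_in, n_out))        # h(i, j) mod K
    sign = rng.choice([-1.0, 1.0], size=(n_in, n_out))  # sign hash
    return sign * shared[idx]

# Example: a 784 x 256 layer stored with only 1,000 real parameters.
shared = np.random.randn(1_000) * 0.05
W = hashed_layer_weights(784, 256, shared)
print(W.shape)  # (784, 256)
```

Because many virtual weights alias onto the same shared parameter, perturbing one stored value moves every weight it backs, which is exactly the kind of model reshaping the paper argues an attacker-aware analysis must account for.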
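For the attack side, a natural baseline for generating adversarial examples against the deployed, compressed network is the standard fast-gradient-sign method (FGSM): perturb the input along the sign of the loss gradient, x' = x + eps * sign(dL/dx). The PyTorch sketch below shows that baseline only; it is a hypothetical helper, not the paper's augmented generation method.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    """Craft a fast-gradient-sign adversarial input against the deployed
    (possibly weight-compressed) model: x' = x + eps * sign(dL/dx)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Autograd differentiates through the shared-weight lookup, so the
    # perturbation automatically reflects the model reshaping.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Since autograd differentiates through whatever parameterization the model uses, the same routine can probe both the ideal software-level model and its hash-compressed counterpart, which is in the spirit of the robustness comparison described above.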

Related research

05/27/2017  A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks
            Some recent works revealed that deep neural networks (DNNs) are vulnerab...

04/21/2022  A Mask-Based Adversarial Defense Scheme
            Adversarial attacks hamper the functionality and accuracy of Deep Neural...

08/03/2020  Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks
            Recent studies identify that Deep learning Neural Networks (DNNs) are vu...

09/29/2020  Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment
            Manual annotation of ICD-9 codes is a time consuming and error-prone pro...

08/23/2021  Kryptonite: An Adversarial Attack Using Regional Focus
            With the Rise of Adversarial Machine Learning and increasingly robust ad...

06/14/2020  Adversarial Sparsity Attacks on Deep Neural Networks
            Adversarial attacks have exposed serious vulnerabilities in Deep Neural ...

02/23/2019  A Deep, Information-theoretic Framework for Robust Biometric Recognition
            Deep neural networks (DNN) have been a de facto standard for nowadays bi...
