Towards Efficiently Evaluating the Robustness of Deep Neural Networks in IoT Systems: A GAN-based Method

11/19/2021
by   Tao Bai, et al.

Intelligent Internet of Things (IoT) systems based on deep neural networks (DNNs) have been widely deployed in the real world. However, DNNs are vulnerable to adversarial examples, which raises concerns about the reliability and security of intelligent IoT systems. Testing and evaluating the robustness of such systems therefore becomes necessary. Various attacks and evaluation strategies have been proposed recently, but the efficiency problem remains largely unsolved: existing methods are either computationally expensive or time-consuming, which makes them impractical. In this paper, we propose a novel framework called Attack-Inspired GAN (AI-GAN) to generate adversarial examples conditionally. Once trained, it can efficiently generate adversarial perturbations given input images and target classes. We evaluate AI-GAN on different datasets in white-box settings, in black-box settings, and against target models protected by state-of-the-art defenses. In extensive experiments, AI-GAN achieves high attack success rates, outperforms existing methods, and significantly reduces generation time. Moreover, for the first time, AI-GAN successfully scales to complex datasets such as CIFAR-100 and ImageNet, with success rates of about 90% across all classes.
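As a rough illustration of what "generating adversarial perturbations conditionally" means at inference time, the sketch below shows how a trained conditional generator might be queried with a batch of images and attack targets. The generator interface, the one-hot conditioning, the L-infinity budget eps, and the clamping steps are assumptions made for illustration; the abstract only states that perturbations are produced from an input image and a target class, and the paper's actual architecture and training objective are not described here.

```python
# Minimal inference-time sketch in the spirit of AI-GAN (PyTorch).
# Everything below the abstract's "image + target class -> perturbation"
# interface is an assumption, not the paper's actual implementation.
import torch
import torch.nn.functional as F

@torch.no_grad()
def craft_adversarial(generator, images, target_classes, num_classes, eps=8 / 255):
    """Generate targeted adversarial examples with a trained conditional generator.

    images:         clean inputs in [0, 1], shape (B, C, H, W)
    target_classes: desired (wrong) labels, shape (B,)
    eps:            assumed L-infinity perturbation budget
    """
    # Condition the generator on the attack target via a one-hot encoding.
    onehot = F.one_hot(target_classes, num_classes).float()

    # A single forward pass yields the perturbation; no per-image
    # optimization loop is needed, which is where the claimed efficiency
    # gain over iterative attacks (e.g., PGD, C&W) would come from.
    perturbation = generator(images, onehot)

    # Keep the perturbation small and the result a valid image.
    perturbation = torch.clamp(perturbation, -eps, eps)
    return torch.clamp(images + perturbation, 0.0, 1.0)
```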


