FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications

06/28/2020 · Yunfei Song et al. · Huawei Technologies Co., Ltd. and East China Normal University

Along with the proliferation of Artificial Intelligence (AI) and Internet of Things (IoT) techniques, various kinds of adversarial attacks are increasingly emerging to fool Deep Neural Networks (DNNs) used by Industrial IoT (IIoT) applications. Due to biased training data or vulnerable underlying models, imperceptible modifications of inputs made by adversarial attacks may result in devastating consequences. Although existing methods are promising in defending against such malicious attacks, most of them can only deal with a limited set of existing attack types, which makes the deployment of large-scale IIoT devices a great challenge. To address this problem, we present an effective federated defense approach named FDA3 that can aggregate defense knowledge against adversarial examples from different sources. Inspired by federated learning, our proposed cloud-based architecture enables the sharing of defense capabilities against different attacks among IIoT devices. Comprehensive experimental results show that the DNNs generated by our approach can not only resist more malicious attacks than existing attack-specific adversarial training methods, but also protect IIoT applications from new attacks.



I Introduction

Deep Learning (DL) techniques are increasingly deployed in safety-critical Cyber-Physical Systems (CPS) and Internet of Things (IoT) areas such as autonomous driving, commercial surveillance, and robotics, where the prediction correctness on inputs is of crucial importance [1, 2, 3, 4]. However, along with the prosperity of Industrial IoT (IIoT) applications, they are inevitably becoming the main targets of malicious adversaries [5, 6]. Whether adversarial attacks are intentional or unintentional, due to biased training data or overfitting/underfitting models, slightly modified inputs often make vulnerable IoT applications exhibit incorrect or unexpected behaviors, which may cause disastrous consequences.

Most existing adversarial attacks focus on generating IIoT inputs with perturbations, named "adversarial examples" [7], to fool Deep Neural Networks (DNNs). Such adversarial examples can mislead classifier models into predicting incorrect outputs, while they are not distinguishable by human eyes. To resist these attacks, various defense methods have been proposed, e.g., ensemble diversity [8], PuVAE [9], and adversarial training. However, most of them are not suitable for IIoT applications. This is mainly because: i) most defense methods focus on defending against one specific type of attack; and ii) IIoT applications are usually scattered across different places and face various adversarial attacks. In this situation, IIoT devices of the same type would have to be equipped with different DNNs to adapt to different environments. Things become even worse as new adversarial attacks keep emerging, since it is hard for IIoT designers to quickly find a new solution to defend against such attacks.

As a distributed machine learning approach, Federated Learning (FL) [10] enables training of a high-quality centralized model over a large quantity of decentralized data residing on IIoT devices. It has been widely studied to address the fundamental problems of privacy, ownership, and locality of data for cloud-based architectures, where the number of participating devices is huge but the Internet connection is slow or unreliable. Based on the federated averaging technique [11], FL allows the training of a DNN without revealing the data stored on IIoT devices. The weights of new DNNs are synthesized using FL in the cloud, constructing a global model which is then pushed back to different IIoT devices for inference. However, so far none of the existing FL-based approaches has investigated the defense against adversarial attacks for IIoT applications.
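To make the federated averaging idea concrete, the following is a minimal sketch in plain NumPy (the helper name `federated_average` is hypothetical; real FL systems such as [11] add secure aggregation, client sampling, and communication protocols on top of this):

```python
import numpy as np

def federated_average(device_weights, num_examples):
    """Aggregate per-device model weights, weighting each device by
    the number of local training examples (FedAvg-style)."""
    total = sum(num_examples)
    layers = len(device_weights[0])
    return [
        sum(w[l] * (n / total) for w, n in zip(device_weights, num_examples))
        for l in range(layers)
    ]

# Two devices, each holding the weights of a one-layer model.
w_a = [np.array([1.0, 3.0])]
w_b = [np.array([3.0, 5.0])]
# Device b holds 3x the data, so it dominates the average.
global_w = federated_average([w_a, w_b], num_examples=[100, 300])
```

Only the weights (or, equivalently, gradients) cross the network; the raw examples stay on the devices, which is the property the rest of the paper builds on.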

Since cloud-based architectures can extend the processing capabilities of IoT devices by offloading part of their computation tasks to remote cloud servers, the combination of cloud computing and IoT is becoming a popular paradigm that enables large-scale intelligent IIoT applications, where IIoT devices are connected with the cloud in a CPS context [12]. However, no matter whether the role of cloud servers is training or inference, IIoT devices are required to send original data to cloud servers, where network latency and data privacy issues cannot be neglected. Moreover, if the devices of an IIoT application adopt DNNs of the same type, the robustness of the application can easily be violated due to the variety of adversarial attacks. Therefore, how to generate a robust DNN for a large quantity of IIoT devices of the same type while protecting the privacy of these devices is becoming a challenge. Inspired by the concept of federated learning, this paper presents an effective federated defense framework named FDA3 for large-scale IIoT applications. It makes the following three major contributions:

  1. We propose a new loss function for the adversarial training on IIoT devices, which fully takes the diversities of adversarial attacks into account.

  2. We present an efficient federated adversarial learning scheme that can derive robust DNNs to resist a wide spectrum of adversarial attacks.

  3. Based on the cloud-based architecture, we introduce a novel federated defense framework for large-scale IIoT applications.

Experimental results on two well-known benchmark datasets show that the DNNs generated by our proposed approach on IIoT devices can resist more adversarial attacks than state-of-the-art methods. Moreover, the robustness of the generated DNNs improves as the size of the investigated IIoT application grows.

The rest of this paper is organized as follows. Section II introduces the related work on defense mechanisms against adversarial attacks for DNN-based IoT designs. Section III introduces our federated defense framework in detail. Section IV presents experimental results, showing the effectiveness and scalability of our approach. Finally, Section V concludes the paper.

II Related Work

As more and more safety-critical IoT applications adopt DNNs, the robustness of DNNs is becoming a major concern in IoT design [13, 4, 14]. The vulnerability of DNNs has been widely exploited by various malicious adversaries, who can generate physical adversarial examples to fool DNNs [15, 16]. Typically, existing attack methods can be classified into two categories, i.e., white-box attacks, which assume that DNN structures are available, and black-box attacks, which work without knowledge of DNN architectures. For example, the well-known white-box attack Fast Gradient Sign Method (FGSM) [17] adds adversarial perturbations in the direction of the loss gradients. In [15], Kurakin et al. introduced the Basic Iterative Method (BIM), which applies FGSM multiple times with a small step size. In [18], the Jacobian-based Saliency Map Attack (JSMA) is introduced to identify the input features that most significantly impact the output classification; JSMA crafts adversarial examples based on computing forward derivatives. To minimize the disturbance while achieving better attack effects, Carlini and Wagner [19] designed an optimization-based attack named CW2. In [20], DeepFool is proposed, which uses iterative linearization and geometric formulas to generate adversarial examples. Unlike the above attacks, SIMBA [21] is a simple but effective black-box attack. Instead of following gradient directions as FGSM does, SIMBA picks random directions to perturb images.

To address adversarial attacks, various defense mechanisms have been investigated. Typically, they can be classified into three categories [9]. The first type (e.g., ensemble diversity [8], Jacobian Regularization [22]) optimizes the gradient calculation of target classifiers. However, when processing nature images, the performance of these defense approaches may degrade. The second type (e.g., PuVAE [9], feature squeezing [23]) tries to purify the inputs of target classifiers via extra auto-encoders or filters. Nonetheless, the extra facilities inevitably increase the workload of the host IIoT devices. The third type regularizes target classifiers by modifying the training data. As an outstanding example of the third type, adversarial training [17, 24] aims to achieve robust classifiers by training on both adversarial and nature examples. However, existing adversarial training methods for specific IIoT devices only focus on a limited number of adversarial attack types. Therefore, models trained using the above methods usually cannot be directly used by devices deployed in new environments.

Aiming at improving training performance, federated learning [25, 10] is becoming widely used for generating centralized models while ensuring high data privacy for large-scale distributed applications [11]. Although it has been studied in IIoT design, prior work focuses on data privacy issues [26] rather than adversarial attacks. To the best of our knowledge, our work is the first attempt to adopt the concept of federated learning to construct a defense framework against various adversarial attacks for large-scale IIoT applications.

III Our Federated Defense Approach

Through privacy information leakage, adversaries can obtain DNN model information from IIoT devices for attack purposes. In our approach, with the help of the privacy protection mechanisms provided by IIoT devices, we assume that the model cracking time is longer than the model update period. In this case, adversaries can never obtain the latest version of the models used by IIoT devices. However, adversaries can use transfer attacks to fool DNN models based on the privacy information they have obtained. This paper focuses on how to retrain the threatened model so that it can resist various types of adversarial examples generated by adversaries. The following subsections introduce our cloud-based federated defense architecture, the loss function for device-level federated adversarial training, and the model update and synchronization processes in detail.

III-A The Architecture of FDA3

Figure 1 details the framework of FDA3 together with its workflow, which is inspired by the adversarial training and federated learning methods. The architecture of our approach consists of two parts, i.e., IIoT devices and their cloud server. Besides performing inference, the DNNs residing in IIoT devices are responsible for the DNN evolution that resists adversarial examples. Initially, all the devices share the same DNN. Since they are deployed in different environments, they may encounter different input examples and different types of attacks. Such imbalances make federated learning an ideal solution for aggregating different defense capabilities.

Fig. 1: The framework and workflow of FDA3

The cloud server consists of two modules, i.e., the attack monitor module and the federated defense model generation module. The attack monitor module records the latest attack information for IIoT devices according to their locations or types. The module manages a library consisting of all the reported attack schemes (i.e., source code or executable programs). Such information can be collected by IIoT device producers or from third-party institutions. When the attack monitor module detects a new attack relevant to a device, it requires the device to download the corresponding attack scheme for the purpose of adversarial retraining. Similar to federated learning, the federated defense model generation module periodically collects device gradient information and aggregates it to obtain an updated model with better robustness. The module then dispatches the newly formed model to all the connected IIoT devices for model synchronization.

During its execution, an IIoT device keeps a buffer holding a set of nature examples that are collected at random when their prediction confidence is high. Periodically, all the IIoT devices are retrained and synchronized in a federated learning manner. This process consists of three steps. First, based on the attack schemes assigned by the cloud server, each device locally generates corresponding adversarial examples to form a retraining set, whose elements are pairs of nature examples and corresponding adversarial examples. Second, the local adversarial training process periodically uploads the newly obtained gradient information from the IIoT devices to the cloud server for model update and synchronization. Finally, similar to federated learning, the model generated by our federated defense approach is deployed on each connected IIoT device. Note that when a new IoT device joins the IIoT application, it needs to download the new model from the server. Due to the diversity of the participating devices, the new model is more robust and can resist more attacks of different types. Since the interactions between the cloud server and IIoT devices involve only gradient information, the data privacy of IIoT devices can be guaranteed.

III-B Loss Function Modeling

In machine learning, loss functions used for classification represent the cost paid for inaccurate predictions. In the context of deep learning, loss functions are used in the training phase to optimize the model's prediction accuracy. When adversarial attacks are taken into account, the loss functions for adversarial training differ from traditional ones. Most existing loss functions for adversarial training consist of two components, i.e., the normal loss function and the adversarial loss function [17]. They can be formulated as:

$\ell_{total}(x, x', y; \theta) = (1-\lambda)\,\ell(x, y; \theta) + \lambda\,\ell(x', y; \theta)$   (1)

The notation $\ell(x, y; \theta)$ denotes the normal loss function, where $x$ and $y$ represent the nature (normal) example and its classification label when the model's parameter is $\theta$. Similar to the definition of $\ell(x, y; \theta)$, the notation $\ell(x', y; \theta)$ denotes the adversarial loss function, where $x'$ denotes the adversarial example generated from $x$. The notation $\ell_{total}$ represents the overall loss function of adversarial training. The hyperparameter $\lambda$ ($0 \le \lambda \le 1$) in Equation (1) sets the proportion of the adversarial loss function in the overall loss function: a higher value of $\lambda$ indicates a higher weight of the adversarial loss function in the overall loss function.
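The two-component loss above can be illustrated with a small sketch (the function names are hypothetical, and cross-entropy is chosen here only for concreteness; the paper does not fix a particular base loss):

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the true label under predicted probabilities."""
    return -np.log(probs[label])

def adv_training_loss(p_nature, p_adv, label, lam=0.5):
    """Overall adversarial-training loss: a lam-weighted mix of the normal
    loss (on the nature example's prediction) and the adversarial loss
    (on the perturbed counterpart's prediction), as in Equation (1)."""
    return (1 - lam) * cross_entropy(p_nature, label) \
         + lam * cross_entropy(p_adv, label)

# With lam = 0.5 (the setting used in Section IV), both terms contribute equally.
p_nat = np.array([0.9, 0.1])   # confident prediction on the nature example
p_adv = np.array([0.5, 0.5])   # degraded prediction on the adversarial example
loss = adv_training_loss(p_nat, p_adv, label=0, lam=0.5)
```

Setting `lam=0` recovers plain training on nature examples; `lam=1` trains on adversarial examples only.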

For different adversarial attacks, there exist different ways to achieve an optimal $\theta$ that can minimize $\ell_{total}$. For FGSM attacks, which are bounded in the $L_\infty$ norm, the optimal $\theta$ can be calculated using:

$\theta^* = \arg\min_{\theta} \; \mathbb{E}_{(x, y) \in D} \left[ \max_{\|x' - x\|_\infty \le \epsilon} \ell_{total}(x, x', y; \theta) \right]$   (2)

where $D$ denotes the training set and the $L_\infty$ bound $\epsilon$ specifies the allowable difference between nature and adversarial examples. Note that Equation (2) focuses on adversarial attacks of the $L_\infty$ norm (e.g., FGSM, BIM). It cannot deal with adversarial attacks of the $L_0$ norm (e.g., JSMA) or the $L_2$ norm (e.g., CW2, DeepFool, SIMBA). To cover all possible attacks for an IIoT application with $K$ devices, we extend $\ell_{total}$ defined in Equation (1) as follows:
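For intuition, the one-step FGSM perturbation that approximately solves the inner maximization can be sketched as follows (a toy example with a hand-computed gradient of a quadratic loss, not an attack on a real model):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """One-step FGSM: move each input dimension by eps in the sign
    direction of the loss gradient, staying inside the L-infinity ball
    of radius eps around x."""
    return x + eps * np.sign(grad)

# Toy loss: 0.5 * ||x - t||^2, whose gradient w.r.t. x is (x - t).
x = np.array([0.2, 0.8])
t = np.array([0.5, 0.5])
grad = x - t                        # gradient of the loss at x
x_adv = fgsm_perturb(x, grad, eps=0.1)
# Every coordinate of x_adv differs from x by exactly eps = 0.1.
```

For a DNN, `grad` would instead be the backpropagated gradient of the loss with respect to the input image.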


The notation denotes the loss function of our federated defense approach, which equals to the arithmetic average of all the loss functions of devices participating in adversarial training. Here, denotes the set of all the adversarial examples generated by the devices, and and denote the nature examples and their corresponding labels, respectively. The symbols , , denote the nature examples, adversarial examples and corresponding labels of the device. The optimization target of our federated adversarial training is to figure out an optimal which can be formulated as:


In Equation (4) we use , and to indicate generated adversarial examples of norm , norm and from , respectively. We can find that Equation (4) tries to figure out one comprehensive defense model that can resist a wide range of known attacks with higher accuracy than locally retrained models.

III-C Federated Defense Model Generation

Our federated defense approach consists of two parts, i.e., IIoT devices and their cloud server. During execution, IIoT devices randomly collect a set of nature examples with high confidence on-the-fly and save them in local memory. Based on the attack schemes assigned by the cloud server, IIoT devices generate adversarial examples for their model retraining. Note that, due to the limited resources (e.g., memory size, computing power) of IIoT devices, cloud servers usually assign only a limited number of attack schemes to each device. Similar to federated learning, the adversarial training process of our federated defense method involves multiple epochs, where an epoch may involve multiple iterations based on the user-specified batch size. In our approach, we consider each iteration as a round. Within a round, all the IIoT devices send the gradient information obtained from their locally retrained models to the cloud server, and then the cloud server aggregates the gradients and synchronizes the updated model with all the IIoT devices.

0:  Input: i) S, cloud server;   ii) A, assigned adversarial attack types;   iii) b, batch size;   iv) λ, hyperparameter;   v) E, # of epochs;   vi) k, device index;   vii) θ_k, device model weights;
1:  while true do
2:     X_k ← collect nature examples with high prediction confidence
3:     X'_k ← GenAdvExamples(X_k, A, θ_k)   ▹ transfer attacks, one per type in A
4:     Y_k ← Predict(X_k, θ_k)   ▹ labels for the nature examples
5:     Y'_k ← Predict(X'_k, θ_k)   ▹ predictions for the adversarial examples
6:     (X_k, Y_k) ← Duplicate(X_k, Y_k, |A|)   ▹ align nature examples with X'_k
7:     R ← |X_k| / b   ▹ # of rounds per epoch
8:     for e = 1 to E do
9:        for r = 1 to R do
10:          (B, B', Y_B) ← NextBatch(X_k, X'_k, Y_k, b)
11:          P ← Predict(B, θ_k)
12:          P' ← Predict(B', θ_k)
13:          ℓ_n ← Loss(P, Y_B)
14:          ℓ_a ← Loss(P', Y_B)
15:          ℓ ← (1 − λ)·ℓ_n + λ·ℓ_a   ▹ Equation (1)
16:          g_k ← ∇_θ ℓ
17:          Send(S, g_k)   ▹ send gradients to cloud server
18:          θ_k ← SyncModel(S)   ▹ blocking; wait for the aggregated model
19:       end for
20:    end for
21: end while
Algorithm 1 Adversarial Training Procedure for IIoT Devices

Algorithm 1 details the local adversarial training procedure for IIoT devices. Note that we assume the IIoT device has been connected to the server and that its model is initially the same as the server's. In step 2, the device collects nature examples with high prediction confidence at random. Based on the old model and the attack schemes assigned by the cloud server, step 3 generates the adversarial examples using transfer attacks. Note that if more than one attack scheme is assigned, step 3 generates adversarial examples of each assigned type for every collected example. Since all the collected examples have high prediction confidence, step 4 uses the model to figure out their labels. Similar to step 4, step 5 obtains the prediction results for all the generated adversarial examples. Step 6 enlarges the nature examples and their labels by duplicating them once per assigned attack type, for the loss function calculation in steps 13-14. Steps 8-20 iteratively interact with the cloud server, where steps 10-18 form one round of gradient aggregation and model update. Step 10 divides the nature and adversarial examples into batches. Steps 11-12 figure out the prediction labels for the nature and adversarial examples in the same batch, respectively. Steps 13-15 calculate the overall loss based on the nature and adversarial examples in the batch using Equation (1). Step 16 computes the gradient information, and step 17 sends it to the cloud server. Step 18 updates the local model using the aggregated gradient information sent back by the cloud server; note that the model synchronization in step 18 is a blocking operation that waits for the reply from the cloud server.
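The per-round device logic of Algorithm 1 can be sketched with a toy linear model (a simulation with hypothetical helper names; the real system would train a DNN and exchange gradients over the network):

```python
import numpy as np

def device_round(theta, x_nat, x_adv, y, lam=0.5):
    """One device round: compute the combined loss gradient on a batch of
    (nature, adversarial) pairs for a linear model f(x) = theta . x
    trained with mean squared error, and return the gradient that would
    be sent to the cloud server (steps 13-17 of Algorithm 1)."""
    def grad_sq(theta, x, y):
        # Gradient of mean 0.5*(theta.x - y)^2 over the batch w.r.t. theta.
        err = x @ theta - y
        return x.T @ err / len(y)
    # Weighted mix of the normal and adversarial gradients, as in Equation (1).
    return (1 - lam) * grad_sq(theta, x_nat, y) + lam * grad_sq(theta, x_adv, y)

theta = np.zeros(2)                          # current (synchronized) model
x_nat = np.array([[1.0, 0.0], [0.0, 1.0]])   # batch of nature examples
x_adv = x_nat + 0.1                          # crude stand-in for perturbed inputs
y = np.array([1.0, 1.0])
g = device_round(theta, x_nat, x_adv, y)     # gradient to send to the server
```

The device then blocks until the server returns the aggregated model, mirroring step 18.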

0:  Input: i) θ, weight of the server model;   ii) b, batch size;   iii) E, # of epochs;   iv) K, # of devices;   v) n, # of nature examples on one device;
1:  while true do
2:     for e = 1 to E do
3:        for r = 1 to n / b do   ▹ rounds per epoch
4:           for k = 1 to K do
5:              g_k ← Receive(k)   ▹ blocking; receive device k's gradients
6:           end for
7:           g ← (1/K) · Σ_k g_k   ▹ gradient aggregation, Equation (3)
8:           θ ← Update(θ, g)   ▹ apply the aggregated gradients
9:           for k = 1 to K do
10:             Send(k, θ)   ▹ synchronize the updated model weights
11:          end for
12:       end for
13:    end for
14: end while
Algorithm 2 Model Generation Procedure for Cloud Server
Algorithm 2 Model Generation Procedure for Cloud Server

Algorithm 2 presents the model generation procedure conducted by the cloud server, involving both model aggregation and model update operations. As shown in steps 2-13, the server performs multiple rounds of interactions with all the devices to form a robust model for federated defense. After receiving gradients from all the devices in step 5, step 7 aggregates the gradients according to Equation (3), and step 8 applies the aggregation result to the current model weights. Note that the receive operation in step 5 is blocking. When one round is finished, steps 9-11 send the newly updated model weight information to each connected IIoT device for model synchronization.
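The server-side aggregation and update of one round can be sketched as follows (a minimal simulation with hypothetical names; a fixed learning rate stands in for the server's actual update rule, which the paper does not spell out):

```python
import numpy as np

def server_round(theta, device_grads, lr=0.1):
    """One server round of Algorithm 2: average the gradients received
    from all devices (uniform average, as in Equation (3)), apply them
    to the global model, and return the weights to broadcast back."""
    g = np.mean(device_grads, axis=0)   # gradient aggregation over K devices
    return theta - lr * g               # apply aggregated gradients

theta = np.array([1.0, 1.0])                                  # global model
grads = [np.array([0.2, 0.0]), np.array([0.0, 0.2])]          # from 2 devices
theta = server_round(theta, grads)    # updated weights sent to every device
```

Each device that receives `theta` resumes its next local round from the same synchronized model, which is what keeps the defense knowledge shared across devices.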

IV Experiments

To evaluate the effectiveness of our approach, we implemented our FDA3 approach on top of a cloud-based architecture, which consists of a cloud server and a set of connected IIoT devices. Since we focus on classification accuracy, the behaviors of the cloud server and IIoT devices are all simulated on a workstation with an Intel i7-9700K CPU, 16GB memory, an NVIDIA GeForce GTX1080Ti GPU, and the Ubuntu operating system. We adopted TensorFlow (version 1.12.0) and Keras (version 2.2.4) to construct all DNN models in our framework. To validate our approach, we conducted two case studies using LeNet [27] for the MNIST dataset [28] and ResNet [29] for CIFAR10 [30], respectively. The initial LeNet model is trained using the 60000 training examples of the MNIST dataset, and the initial ResNet model is trained using the 50000 training examples of the CIFAR10 dataset. Note that the MNIST and CIFAR10 datasets each have a test set of 10000 examples. We divided each test set into two halves, where 5000 examples are used for retraining and the remaining 5000 examples are used for testing.

IV-A Performance Comparison

In the first experiment, we considered an IIoT application that has 10 devices connected to the same cloud server for federated defense. Since most IIoT devices are memory-constrained, we assumed that an IIoT device can only keep 100 nature examples for adversarial training. We considered five well-known types of attacks in the experiment, i.e., FGSM [17], BIM [15], JSMA [18], CW2 [19], and DeepFool [20], where each type was used to attack two of the ten IIoT devices. To enable adversarial training, for each device we generated 100 adversarial examples from its 100 nature examples using its assigned attack scheme. Note that all the adversarial examples here were generated by transfer attacks, assuming that malicious adversaries can obtain the initial model but cannot access the intermediate retrained models. Similar to [17], we set the hyperparameter λ to 0.5, which means that the normal and adversarial losses contribute to the total loss equally.

For the federated adversarial training, we set the batch size to 100 pairs of nature and adversarial examples. In this case, an epoch consists of only one iteration, which can be considered as one round of retraining over all the collected example pairs on a device. We set the number of epochs to 50. Once an epoch finishes, each IIoT device sends its updated gradient information to the cloud server for aggregation.

To enable a performance comparison with the model derived by our federated defense approach, we also generated models by training on the 100 example pairs locally for each attack type individually. We use the notation None to denote the initial model without any retraining, and X+AdvTrain to indicate the model of an IIoT device that is retrained locally on the 100 adversarial examples generated by attacks of type X. The notation FL+AdvTrain denotes the model generated by our federated defense approach. For a fair comparison, we also applied all five attack types to one randomly selected IIoT device among the ten. In this case, we generated 500 adversarial examples to retrain the model locally (with batches of 100 example pairs) and obtained the model ALL+AdvTrain. Figure 2 shows the inference accuracy results for the MNIST dataset. On the X-axis, the notation Nature denotes the 5000 test examples from the MNIST dataset without any attack, while the other notations indicate the adversarial test sets generated from Nature by a specific attack type.

Fig. 2: Performance comparison between different defense methods for MNIST dataset

From Figure 2, we can find that FL+AdvTrain achieves the best prediction accuracy on all the test sets except Nature. On the Nature test set, the None model slightly outperforms our approach by 0.08%, because the FL+AdvTrain model includes new adversarial examples in its retraining. On the five adversarial test sets, FL+AdvTrain outperforms the other seven models significantly. On the DeepFool test set, for example, the FL+AdvTrain model achieves 13.5% better accuracy than the DeepFool+AdvTrain model, even though DeepFool+AdvTrain is retrained specifically against DeepFool attacks. Note that, compared with the ALL+AdvTrain model, our FL+AdvTrain shows much better accuracy for all the investigated attack types. In other words, for IIoT deployment, our federated defense is a better choice than any of the locally retrained, device-specific models.

Fig. 3: Performance comparison between different defense methods for CIFAR10 dataset

We also checked the performance of our federated defense method on the CIFAR10 dataset. Figure 3 shows the comparison results. Similar to the observations from Figure 2, our approach outperforms the other seven methods. In this experiment, we can also observe that the ALL+AdvTrain model shows better accuracy than the other attack-specific models.

Since IIoT devices are deployed in uncertain environments where new attacks keep emerging, we also checked the robustness of the models generated by our approach against new attacks. Figure 4 presents the robustness comparison between the different defense methods against a new type of attack, i.e., SIMBA [21]. We applied the eight models generated for Figure 2 and Figure 3 to this new attack: we generated 5000 test examples using SIMBA and evaluated each model on this test set individually. For both the MNIST and CIFAR10 datasets, our FL+AdvTrain model can resist more SIMBA attacks than the other models. On the MNIST test set, for example, our method achieves an accuracy of 73.8%, while the ALL+AdvTrain model achieves only 71.9%. In other words, our federated defense method is more robust in defending against new attacks than the local adversarial training methods.

Fig. 4: Performance comparison between different defense methods for SIMBA attack

IV-B Scalability Analysis

The first experiment only investigates an IIoT application with 10 devices. However, a typical IIoT application may involve dozens or hundreds of devices. Therefore, to verify whether our approach can be applied to large-scale IIoT applications, we conducted a second experiment to check its scalability. Figure 5 shows the trend of prediction accuracy as the number of IIoT devices increases, over the MNIST dataset. In this experiment, we used our FDA3 approach to generate the model for the IIoT devices. Similar to the scheme used in the first experiment, we considered five attack types (i.e., FGSM, BIM, JSMA, CW2, DeepFool) for the adversarial example generation, and we assumed that one fifth of the devices were attacked by each specific attack type. For example, if 10 devices are involved in an IIoT application, there are 2 devices for each of the five investigated attack types. We investigated 7 types of test examples, where Nature denotes a test set of 5000 nature examples, and the other six test sets of 5000 examples each are labeled with their attack type.

Fig. 5: The impact of the number of IIoT devices for our federated defense methods on MNIST dataset

From Figure 5, we can find that the prediction accuracy on Nature is the highest. Moreover, when more devices are engaged in federated defense, the accuracy on the Nature test set can still be slightly improved. The same trend can be observed on the other six adversarial test sets. On the JSMA test set, for example, when the number of IIoT devices increases from 10 to 50, the accuracy improves from 85.4% to 87.8%. Note that the attack type SIMBA is not considered in the federated defense; therefore, the prediction accuracy on the SIMBA adversarial test set is the lowest. Nevertheless, its accuracy also improves as the number of devices grows, from 73.8% to 74.8%.

Fig. 6: The impact of the number of IIoT devices for our federated defense methods on CIFAR10 dataset

Figure 6 shows the results of our federated defense method on the CIFAR10 dataset. We can observe a similar trend to the results shown in Figure 5. On the BIM test set, for example, when the number of IIoT devices increases from 10 to 50, the accuracy improves from 73.5% to 81.3%. Moreover, on the SIMBA test set, the accuracy improves significantly from 42.2% to 49.1%. In other words, the more devices with high diversity are involved in the federated defense, the more attacks the obtained model can resist. Therefore, our approach is especially promising for large-scale IIoT applications.

V Conclusion

Although DNN-based techniques are becoming popular in IIoT applications, they suffer from an increasing number of adversarial attacks. How to generate DNNs that are immune to various types of attacks (especially newly emerging ones) is becoming a major bottleneck in the deployment of safety-critical IIoT applications. To address this problem, this paper proposes a novel federated defense approach for cloud-based IIoT applications. Based on a modified federated learning framework and our proposed loss function for adversarial learning, our approach can effectively synthesize DNNs that accurately resist existing adversarial attacks while the data privacy of the different IIoT devices is guaranteed. Experimental results on two well-known benchmarks demonstrate that our approach not only improves the overall defense performance against various existing adversarial attacks, but also accurately detects DNN misbehaviors caused by new kinds of attacks.


This work received financial support in part from the National Key Research and Development Program of China (Grant #2018YFB2101300) and the Natural Science Foundation of China (Grant #61872147). Mingsong Chen is the corresponding author.


  • [1] P. Li, Z. Chen, L. T. Yang, Q. Zhang and M. J. Deen. “Deep Convolutional Computation Model for Feature Learning on Big Data in Internet of Things”. IEEE Transactions on Industrial Informatics, vol. 14, no. 2, pp.790–798, 2018.
  • [2] H. Gao, B. Cheng, J. Wang, K. Li, J. Zhao and D. Li. “Object Classification Using CNN-Based Fusion of Vision and LIDAR in Autonomous Vehicle Environment”. IEEE Transactions on Industrial Informatics, vol. 14, no. 9, pp. 4224–4231, 2018.
  • [3] R. Hadidi, J. Cao, M. S. Ryoo, and H. Kim. “Robustly executing DNNs in IoT systems using coded distributed computing”. in Proc. of ACM/IEEE Design Automation Conference (DAC), no. 234, 2019.
  • [4] K. Pei, Y. Cao, J. Yang, and S. Jana. “Deepxplore: Automated whitebox testing of deep learning systems”. in Proc. of ACM Symposium on Operating Systems Principles (SOSP), pp. 1–18, 2017.
  • [5] L. Xu, W. He and S. Li. “Internet of Things in Industries: A Survey”. IEEE Transactions on Industrial Informatics, vol. 10, no. 4, pp.2233–2243, 2014.
  • [6] F. Farivar, M. S. Haghighi, A. Jolfaei and M. Alazab. “Artificial Intelligence for Detection, Estimation, and Compensation of Malicious Attacks in Nonlinear Cyber-Physical Systems and Industrial IoT”. IEEE Transactions on Industrial Informatics, vol. 16, no. 4, pp. 2716–2725, 2020.
  • [7] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. “Intriguing properties of neural networks”. in Proc. of International Conference on Learning Representations (ICLR), 2014.
  • [8] T. Pang, K. Xu, C. Du, N. Chen, and J. Zhu. “Improving adversarial robustness via promoting ensemble diversity”. in Proc. of International Conference on Machine Learning (ICML), pp. 4970–4979, 2019.
  • [9] U. Hwang, J. Park, H. Jang, S. Yoon, and N. I. Cho. “PuVAE: A variational autoencoder to purify adversarial examples”. IEEE Access, vol. 7, pp. 126582–126593, 2019.
  • [10] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. Y. Arcas. “Communication-efficient learning of deep networks from decentralized data”. in Proc. of International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 1273–1282, 2017.
  • [11] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konecny, S. Mazzocchi, H. B. McMahan, T. V. Overveldt, D. Petrou, D. Ramage, and J. Roselander. “Towards federated learning at scale: System design”. arXiv:1902.01046, 2019.
  • [12] A. Botta, W. Donato, V. Persico, and A. Pescapé. “Integration of Cloud computing and Internet of Things: A survey”. Future Generation Computer Systems (FGCS), vol. 56, pp. 684–700, 2016.
  • [13] C. M. Dourado Jr, S. P. P. da Silva, R. V. M. da Nóbrega, A. C. D. S. Barros, P. P. Rebouças Filho, and V. H. C. de Albuquerque. “Deep learning IoT system for online stroke detection in skull computed tomography images”. Computer Networks, vol. 152, pp.25–39, 2019.
  • [14] L. Ma, F. Xu, F. Zhang, J. Sun, M. Xue, B. Li, C. Chen, T. Su, L. Li, Y. Liu, J. Zhao, Y. Wang. “DeepGauge: multi-granularity testing criteria for deep learning systems”. in Proc. of International Conference on Automated Software Engineering (ASE), pp. 120–131, 2018.
  • [15] A. Kurakin, I. J. Goodfellow, and S. Bengio. “Adversarial examples in the physical world”. in Proc. of International Conference on Learning Representations (ICLR), 2017.
  • [16] J. Zhang, and C. Li. “Adversarial Examples: Opportunities and Challenges”. IEEE Trans. on Neural Networks and Learning Systems, pp. 1–17, accepted, 2019.
  • [17] I. J. Goodfellow, J. Shlens, and C. Szegedy. “Explaining and harnessing adversarial examples”. in Proc. of International Conference on Learning Representations (ICLR), 2015.
  • [18] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. “The limitations of deep learning in adversarial settings”. in Proc. of European Symposium on Security and Privacy (EuroS&P), pp. 372–387, 2016.
  • [19] N. Carlini and D. Wagner. “Towards evaluating the robustness of neural networks”. in Proc. of IEEE Symposium on Security and Privacy (S&P), pp. 39–57, 2017.
  • [20] S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. “DeepFool: a simple and accurate method to fool deep neural networks”. in Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574–2582, 2016.
  • [21] C. Guo, J. R. Gardner, Y. You, A. G. Wilson, and K. Q. Weinberger. “Simple black-box adversarial attacks”. in Proc. of International Conference on Machine Learning (ICML), pp. 2484–2493, 2019.
  • [22] D. Jakubovitz and R. Giryes. “Improving dnn robustness to adversarial attacks using jacobian regularization”. in Proc. of European Conference on Computer Vision (ECCV), pp. 514–529, 2018.
  • [23] W. Xu, D. Evans, and Y. Qi. “Feature squeezing: Detecting adversarial examples in deep neural networks”. in Proc. of Annual Network and Distributed System Security Symposium (NDSS), 2018.
  • [24] F. Tramèr, A. Kurakin, N. Papernot, I. J. Goodfellow, D. Boneh, and P. McDaniel. “Ensemble adversarial training: Attacks and defenses”. in Proc. of International Conference on Learning Representations (ICLR), 2018.
  • [25] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon. “Federated learning: Strategies for improving communication efficiency”. arXiv:1610.05492, 2016.
  • [26] U. M. Aïvodji, S. Gambs, and A. Martin. “IOTFLA : A secured and privacy-preserving smart home architecture implementing federated learning”. in Proc. of IEEE Symposium on Security and Privacy (S&P) Workshops, pp. 175–180, 2019.
  • [27] Y. LeCun, L. Bottou, Y. Bengio and P. Haffner. “Gradient-based learning applied to document recognition”. Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  • [28] Y. LeCun, C. Cortes, and C. J. C. Burges. “The MNIST database of handwritten digits” [Online]. Available: http://yann.lecun.com/exdb/mnist/.
  • [29] “Residual Network” [Online]. Available: https://github.com/BIGBALLON/cifar-10-cnn.
  • [30] A. Krizhevsky, V. Nair, and G. Hinton. “The CIFAR-10 dataset” [Online]. Available: https://www.cs.toronto.edu/~kriz/cifar.html.