Data Poisoning Attacks Against Federated Learning Systems

07/16/2020 · Vale Tolpegin et al. · Georgia Institute of Technology

Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server. However, the distributed nature of FL gives rise to new threats caused by potentially malicious participants. In this paper, we study targeted data poisoning attacks against FL systems in which a malicious subset of the participants aim to poison the global model by sending model updates derived from mislabeled data. We first demonstrate that such data poisoning attacks can cause substantial drops in classification accuracy and recall, even with a small percentage of malicious participants. We additionally show that the attacks can be targeted, i.e., they have a large negative impact only on classes that are under attack. We also study attack longevity in early/late round training, the impact of malicious participant availability, and the relationships between the two. Finally, we propose a defense strategy that can help identify malicious participants in FL to circumvent poisoning attacks, and demonstrate its effectiveness.


1 Introduction

Machine learning (ML) has become ubiquitous in today's society as a range of industries deploy predictive models into their daily workflows. This environment has put a premium not only on ML model training and hosting technologies, but also on the rich data that companies collect about their users to train and inform such models. Companies and users alike are consequently faced with two fundamental questions in this reality of ML: (1) How can privacy concerns around such pervasive data collection be moderated without sacrificing the efficacy of ML models? and (2) How can ML models be trusted as accurate predictors?

Federated ML has seen increased adoption in recent years [17, 40, 9] in response to the growing legislative demand to address user privacy [1, 26, 38]. Federated learning (FL) allows data to remain at the edge, with only model parameters being shared with a central server. Specifically, there is no centralized data curator who collects and verifies an aggregate dataset. Instead, each data holder (participant) is responsible for conducting training on their local data. At regular intervals, participants then send model parameter values to a central parameter server or aggregator, where a global model is created through aggregation of the individual updates. A global model can thus be trained over all participants' data without any individual participant needing to share their private raw data.

While FL systems allow participants to keep their raw data local, a significant vulnerability is introduced at the heart of question (2). Consider the scenario wherein a subset of participants are either malicious or have been compromised by some adversary. These participants may then hold mislabeled or poisonous samples in their local training data. With no central authority able to validate data, these malicious participants can consequently poison the trained global model. For example, consider Microsoft's AI chat bot Tay. Tay was released on Twitter with the underlying natural language processing model set to learn from the Twitter users it interacted with. Thanks to malicious users, Tay was quickly manipulated to learn offensive and racist language [41].

In this paper, we study the vulnerability of FL systems to malicious participants seeking to poison the globally trained model. We make minimal assumptions on the capability of a malicious FL participant: each can only manipulate the raw training data on their device. This allows non-expert malicious participants to achieve poisoning with no knowledge of the model type, parameters, or FL process. Under this set of assumptions, label flipping attacks become a feasible strategy for implementing data poisoning; such attacks have been shown to be effective against traditional, centralized ML models [5, 44, 50, 52]. We investigate their application to FL systems using complex deep neural network models.

We demonstrate our FL poisoning attacks using two popular image classification datasets: CIFAR-10 and Fashion-MNIST. Our results yield several interesting findings. First, we show that attack effectiveness (the decrease in model utility) depends on the percentage of malicious users, and that the attack is effective even when this percentage is small. Second, we show that attacks can be targeted, i.e., they have a large negative impact on the subset of classes that are under attack, but have little to no impact on remaining classes. This is desirable for adversaries who wish to poison a subset of classes while not completely corrupting the global model, so as to avoid easy detection. Third, we evaluate the impact of attack timing (poisoning in early or late rounds of FL training) and the impact of malicious participant availability (whether malicious participants can increase their availability and selection rate to increase effectiveness). Motivated by our finding that the global model may still converge accurately after early-round poisoning stops, we conclude that the largest poisoning impact can be achieved if malicious users participate in later rounds and with high availability.

Given the highly effective poisoning threat to FL systems, we then propose a defense strategy for the FL aggregator to identify malicious participants using their model updates. Our defense is based on the insight that updates sent from malicious participants have unique characteristics compared to honest participants’ updates. Our defense extracts relevant parameters from the high-dimensional update vectors and applies PCA for dimensionality reduction. Results on CIFAR-10 and Fashion-MNIST across varying malicious participant rates (2-20%) show that the aggregator can obtain clear separation between malicious and honest participants’ respective updates using our defense strategy. This enables the FL aggregator to identify and block malicious participants.

The rest of this paper is organized as follows. In Section 2, we introduce the FL setting, threat model, attack strategy, and attack evaluation metrics. In Section 3, we demonstrate the effectiveness of FL poisoning attacks and analyze their impact with respect to malicious participant percentage, choice of classes under attack, attack timing, and malicious participant availability. In Section 4, we describe and empirically demonstrate our defense strategy. We discuss related work in Section 5 and conclude in Section 6. Our source code is available at https://github.com/git-disl/DataPoisoning_FL.

2 Preliminaries and Attack Formulation

2.1 Federated Machine Learning

FL systems allow global model training without the sharing of raw private data. Instead, individual participants only share model parameter updates. Consider a deep neural network (DNN) model. DNNs consist of multiple layers of nodes where each node is a basic functional unit with a corresponding set of parameters. Nodes receive input from the immediately preceding layer and send output to the following layer, with the first layer's nodes receiving input from the training data and the final layer's nodes generating the predictive result.

In a traditional DNN learning scenario, there exists a training dataset $D$ and a loss function $L$. Each instance $(x, y) \in D$ is defined as a set of features $x$ and a class label $y \in C$, where $C$ is the set of all possible class values. The final layer of a DNN architecture for such a dataset will consequently contain $|C|$ nodes, each corresponding to a different class in $C$. The loss of this DNN given parameters $\theta$ on $D$ is denoted $L(\theta, D)$.

When an instance $x$ is fed through the DNN with model parameters $\theta$, the output is a set of predicted probabilities $p = (p_1, \dots, p_{|C|})$. Each value $p_j$ is the predicted probability that $x$ has the class value $c_j$, and $p$ contains a probability for each class value in $C$. Each predicted probability $p_j$ is computed by a node in the final layer of the DNN architecture using input received from the preceding layer and the node's corresponding parameters in $\theta$. The predicted class for instance $x$ given a model with parameters $\theta$ then becomes $\hat{y} = \arg\max_{c_j \in C} p_j$. Given a cross entropy loss function, the loss on $(x, y)$ can consequently be calculated as $L(\theta, (x, y)) = -\sum_{j=1}^{|C|} \mathbb{1}[y = c_j] \log(p_j)$, where $\mathbb{1}[y = c_j]$ is 1 if $y = c_j$ and 0 otherwise. The goal of training a DNN model then becomes finding the parameter values $\theta$ which minimize the chosen loss function $L$.
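As a small illustration of these computations (a sketch with made-up probability values, not taken from the paper), the predicted class and cross entropy loss for a single instance can be computed as follows:

import numpy as np

# Hypothetical softmax output of the final DNN layer for one instance over
# |C| = 3 classes; the values are for illustration only.
p = np.array([0.2, 0.7, 0.1])
y_true = 1                      # index of the true class
y_pred = int(np.argmax(p))      # predicted class = argmax over predicted probabilities
loss = -np.log(p[y_true])       # cross entropy reduces to -log(p_y) for the true class
print(y_pred, loss)             # 1, ~0.357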

The process of minimizing this loss is typically done through an iterative process called stochastic gradient descent (SGD). At each step, the SGD algorithm (1) selects a batch of samples $B \subseteq D$, (2) computes the corresponding gradient $g_B = \nabla_\theta \frac{1}{|B|} \sum_{(x, y) \in B} L(\theta, (x, y))$, and (3) then updates $\theta$ in the direction $-g_B$. In practice, $D$ is shuffled and then evenly divided into batches of a fixed size such that each sample occurs in exactly one batch. Applying SGD iteratively to each of the pre-determined batches is then referred to as one epoch.
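The following is a minimal PyTorch sketch of one such epoch; the model, data loader, and learning rate are placeholders rather than the paper's exact configuration:

import torch
import torch.nn.functional as F

def run_one_epoch(model, data_loader, lr=0.01):
    """One epoch of SGD: iterate once over all pre-determined batches."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for x_batch, y_batch in data_loader:   # (1) select a batch of samples
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_batch), y_batch)
        loss.backward()                    # (2) compute the corresponding gradient
        optimizer.step()                   # (3) update parameters opposite the gradient
    return model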

In FL environments however, the training dataset is not wholly available at the aggregator. Instead, participants $P_1, \dots, P_N$ each hold their own private training dataset $D_i$. Rather than sharing their private raw data, participants instead execute the SGD training algorithm locally and then upload updated parameters to a centralized server (aggregator). Specifically, in the initialization phase (i.e., round 0), the aggregator generates a DNN architecture with parameters $\theta^0$ which is advertised to all participants. At each global training round $t$, a subset $P^t$ consisting of $k$ participants is selected based on availability. Each participant $P_i \in P^t$ executes one epoch of SGD locally on $D_i$ to obtain updated parameters $\theta_i^t$, which are sent to the aggregator. The aggregator sets the global parameters to the average of the received updates, i.e., $\theta^t = \frac{1}{k} \sum_{P_i \in P^t} \theta_i^t$. The global parameters $\theta^t$ are then advertised to all participants. These global parameters at the end of round $t$ are used in the next training round $t+1$. After $T$ total global training rounds, the model is finalized with parameters $\theta^T$.
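A simplified sketch of the aggregation step described above is shown below; the function names and the state-dict averaging are our own rendering, not necessarily the authors' implementation:

import copy
import torch

def aggregate(global_model, local_state_dicts):
    """Average the k selected participants' parameters into new global parameters."""
    new_state = copy.deepcopy(global_model.state_dict())
    for name in new_state:
        stacked = torch.stack([sd[name].float() for sd in local_state_dicts])
        new_state[name] = stacked.mean(dim=0).to(new_state[name].dtype)
    return new_state

# One FL round (sketch): each selected participant trains locally for one epoch
# and sends back its state_dict; the aggregator then averages and re-advertises:
# global_model.load_state_dict(aggregate(global_model, local_state_dicts))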

2.2 Threat and Adversary Model

Threat Model: We consider the scenario in which a subset of FL participants are malicious or are controlled by a malicious adversary. We denote the percentage of malicious participants among all participants as $m$. Malicious participants may be injected into the system by adding adversary-controlled devices, compromising benign participants' devices, or incentivizing (bribing) benign participants to poison the global model for a certain number of FL rounds. We consider the aggregator to be honest and not compromised.

Adversarial Goal: The goal of the adversary is to manipulate the learned parameters such that the final global model has high errors for particular classes (a subset of $C$). The adversary is thereby conducting a targeted poisoning attack. This differs from untargeted attacks, which instead seek indiscriminately high global model errors across all classes [6, 14, 51]. Targeted attacks have the desirable property that they decrease the possibility of the poisoning attack being detected by minimizing influence on non-targeted classes.

Adversary Knowledge and Capability: We consider a realistic adversary model with the following constraints. Each malicious participant can manipulate the training data on their own device, but cannot access or manipulate other participants’ data or the model learning process, e.g., SGD implementation, loss function, or server aggregation process. The attack is not specific to the DNN architecture, loss function or optimization function being used. It requires training data to be corrupted, but the learning algorithm remains unaltered.

2.3 Label Flipping Attacks in Federated Learning

We use a label flipping attack to implement targeted data poisoning in FL. Given a source class $c_{src}$ and a target class $c_{target}$ from $C$, each malicious participant $P_i$ modifies their dataset $D_i$ as follows: for all instances in $D_i$ whose class is $c_{src}$, change their class to $c_{target}$. We denote this attack by $c_{src} \rightarrow c_{target}$. For example, in CIFAR-10 image classification, airplane → bird denotes that images whose original class labels are airplane will be poisoned by malicious participants by changing their class to bird. The goal of the attack is to make the final global model more likely to misclassify airplane images as bird images at test time.
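A minimal sketch of this per-participant poisoning step is given below; the dataset is assumed to be a list of (example, label) pairs, and the names are illustrative:

def flip_labels(local_dataset, src_class, target_class):
    """Label flipping: relabel every source-class example as the target class."""
    return [(x, target_class if y == src_class else y) for x, y in local_dataset]

# Example: the airplane -> bird attack on CIFAR-10 (class 0 -> class 2)
# poisoned_data = flip_labels(participant_data, src_class=0, target_class=2)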

Label flipping is a well-known attack in centralized ML [43, 44, 50, 52]. It is also suitable for the FL scenario given the adversarial goal and capabilities above. Unlike other types of poisoning attacks, label flipping does not require the adversary to know the global distribution of the training data $D$, the DNN architecture, the loss function $L$, etc. It is time- and energy-efficient, an attractive feature considering FL is often executed on edge devices. It is also easy to carry out for non-experts and does not require modification of or tampering with participant-side FL software.

$M$, $M^*$ | Model; model trained with no poisoning
$k$ | Number of FL participants in each round
$T$ | Total number of rounds of FL training
$P^t$ | FL participants queried at round $t$, $P^t \subseteq \{P_1, \dots, P_N\}$
$\theta^t$, $\theta_i^t$ | Global model parameters after round $t$ and local model parameters at participant $P_i$ after round $t$
$m$ | Percentage of malicious participants
$c_{src}$, $c_{target}$ | Source and target class in label flipping attack
$Acc$ | Global model accuracy
$Recall_c$ | Class recall for class $c$
$BMC_{c_i \rightarrow c_j}$ | Baseline misclassification count from class $c_i$ to class $c_j$
Table 1: Notations used throughout the paper.

Attack Evaluation Metrics: At the end of $T$ rounds of FL, the model is finalized with parameters $\theta^T$. Let $D_{test}$ denote the test dataset used in evaluating the global model $M$, where $D_{test} \cap D_i = \emptyset$ for all participant datasets $D_i$. In the next sections, we provide a thorough analysis of label flipping attacks in FL. To do so, we use a number of evaluation metrics.
Global Model Accuracy ($Acc$): The global model accuracy is the percentage of instances $x \in D_{test}$ for which the global model $M$ with final parameters $\theta^T$ predicts class $\hat{y}$ and $\hat{y}$ is indeed the true class label of $x$.
Class Recall ($Recall_c$): For any class $c \in C$, its class recall is the percentage $\frac{TP_c}{TP_c + FN_c} \times 100\%$, where $TP_c$ is the number of instances $x \in D_{test}$ for which $M$ predicts $c$ and $c$ is the true class label of $x$; and $FN_c$ is the number of instances $x \in D_{test}$ for which $M$ predicts a class other than $c$ and the true class label of $x$ is $c$.
Baseline Misclassification Count ($BMC_{c_i \rightarrow c_j}$): Let $M^*$ be a global model trained for $T$ rounds using FL without any malicious attack. For classes $c_i, c_j \in C$, the baseline misclassification count from $c_i$ to $c_j$, denoted $BMC_{c_i \rightarrow c_j}$, is defined as the number of instances $x \in D_{test}$ for which $M^*$ predicts $c_j$ and the true class of $x$ is $c_i$.
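A sketch of how these metrics could be computed from a model's predictions on $D_{test}$ follows; the array names are ours:

import numpy as np

def global_accuracy(y_true, y_pred):
    return 100.0 * np.mean(y_true == y_pred)

def class_recall(y_true, y_pred, c):
    tp = np.sum((y_pred == c) & (y_true == c))
    fn = np.sum((y_pred != c) & (y_true == c))
    return 100.0 * tp / (tp + fn)

def misclassification_count(y_true, y_pred, c_src, c_target):
    # Evaluated with a non-poisoned model M*, this gives the baseline count BMC.
    return int(np.sum((y_true == c_src) & (y_pred == c_target)))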

Table 1 provides a summary of the notation used in the rest of this paper.

3 Analysis of Label Flipping Attacks in FL

3.1 Experimental Setup

Datasets and DNN Architectures: We conduct our attacks using two popular image classification datasets: CIFAR-10 [22] and Fashion-MNIST [49]. CIFAR-10 consists of 60,000 color images in 10 object classes such as deer, airplane, and dog, with 6,000 images per class. The complete dataset is pre-divided into 50,000 training images and 10,000 test images. Fashion-MNIST consists of a training set of 60,000 images and a test set of 10,000 images. Each image in Fashion-MNIST is gray-scale and associated with one of 10 classes of clothing such as pullover, ankle boot, or bag. In experiments with CIFAR-10, we use a convolutional neural network with six convolutional layers, batch normalization, and two fully connected dense layers. This DNN architecture achieves a test accuracy of 79.90% in the centralized learning scenario, i.e., without poisoning. In experiments with Fashion-MNIST, we use a two-layer convolutional neural network with batch normalization, an architecture which achieves 91.75% test accuracy in the centralized scenario without poisoning. Further details of the datasets and DNN model architectures can be found in Appendix 0.A.

Federated Learning Setup: We implement FL in Python using the PyTorch [35] library. By default, we have $N$ participants and one central aggregator, with $k$ participants selected in each round. We use an independent and identically distributed (iid) data distribution, i.e., we assume the total training dataset is uniformly randomly distributed among all participants with each participant receiving a unique subset of the training data. The testing data is used for model evaluation only and is therefore not included in any participant $P_i$'s training dataset $D_i$. Observing that both DNN models converge after fewer than 200 training rounds, we set our FL experiments to run for $T = 200$ rounds in total.
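The iid partitioning described above can be sketched with standard torch utilities as follows; the exact splitting logic in the authors' code may differ:

import torch
from torch.utils.data import random_split

def partition_iid(train_dataset, num_participants, seed=0):
    """Uniformly at random split the training set into disjoint participant shards."""
    generator = torch.Generator().manual_seed(seed)
    shard = len(train_dataset) // num_participants
    sizes = [shard] * num_participants
    sizes[-1] += len(train_dataset) - sum(sizes)   # absorb any remainder
    return random_split(train_dataset, sizes, generator=generator)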

Label Flipping Process: In order to simulate the label flipping attack in an FL system with $N$ participants of which $m\%$ are malicious, at the start of each experiment we randomly designate $m\% \times N$ of the participants as malicious. The rest are honest. To address the impact of the random selection of malicious participants, by default we repeat each experiment 10 times and report the average results. Unless otherwise stated, we use the same default value of $m$ across experiments.

For both datasets we consider three label flipping attack settings representing a diverse set of conditions on which to base adversarial attacks. These conditions include (1) a source class → target class pairing whose source class was very frequently misclassified as the target class in federated, non-poisoned training, (2) a pairing where the source class was very infrequently misclassified as the target class, and (3) a pairing between these two extremes. Specifically, for CIFAR-10 we test (1) 5: dog → 3: cat, (2) 0: airplane → 2: bird, and (3) 1: automobile → 9: truck. For Fashion-MNIST we experiment with (1) 6: shirt → 0: t-shirt/top, (2) 1: trouser → 3: dress, and (3) 4: coat → 6: shirt.

3.2 Label Flipping Attack Feasibility

We start by investigating the feasibility of poisoning FL systems using label flipping attacks. Figure 1 outlines the global model accuracy and source class recall in scenarios with the malicious participant percentage $m$ ranging from 2% to 50%. The results demonstrate that as $m$ increases, the global model utility (test accuracy) decreases. Even with small $m$, we observe a decrease in model accuracy compared to a non-poisoned model (denoted by $m = 0$ in the graphs), and there is an even larger decrease in source class recall. In experiments with CIFAR-10, once $m$ reaches 40%, the recall of the source class decreases to 0% and the global model accuracy decreases from 78.3% in the non-poisoned setting to 74.4% in the poisoned setting. Experiments conducted on Fashion-MNIST show a similar pattern of utility loss, with source class recall dropping progressively as $m$ grows. It is therefore clear that an adversary who controls even a minor proportion of the total participant population is capable of significantly impacting global model utility.

Figure 1 (panels (a)-(b): CIFAR-10; panels (c)-(d): Fashion-MNIST): Evaluation of attack feasibility and impact of malicious participant percentage on attack effectiveness. CIFAR-10 experiments are for the 5 → 3 setting while Fashion-MNIST experiments are for the 4 → 6 setting. Results are averaged from 10 runs for each setting of $m$. The black bars are the mean over the 10 runs and the green error bars denote standard deviation.

While both datasets are vulnerable to label flipping attacks, the degree of vulnerability varies between datasets, with CIFAR-10 demonstrating more vulnerability than Fashion-MNIST. For example, consider the 30% malicious scenario: Figure 1(b) shows the source class recall for the CIFAR-10 dataset drops to 19.7%, while Figure 1(d) shows a much smaller decrease for the Fashion-MNIST dataset, with 58.2% source class recall under the same experimental settings.

Percentage of Malicious Participants ($m$)
$c_{src} \rightarrow c_{target}$ | BMC | 2% | 4% | 10% | 20% | 30% | 40% | 50%
CIFAR-10
0 → 2 | 16 | 1.42% | 2.93% | 10.2% | 14.1% | 48.3% | 73% | 70.5%
1 → 9 | 56 | 0.69% | 3.75% | 6.04% | 15% | 36.3% | 49.2% | 54.7%
5 → 3 | 200 | 0% | 3.21% | 7.92% | 25.4% | 49.5% | 69.2% | 69.2%
Fashion-MNIST
1 → 3 | 18 | 0.12% | 0.42% | 2.27% | 2.41% | 40.3% | 45.4% | 42%
4 → 6 | 51 | 0.61% | 7.16% | 16% | 29.2% | 28.7% | 37.1% | 58.9%
6 → 0 | 118 | -1% | 2.19% | 7.34% | 9.81% | 19.9% | 39% | 43.4%
Table 2: Loss in source class recall for three source → target class settings with differing baseline misclassification counts (BMC) in CIFAR-10 and Fashion-MNIST. Loss is averaged over 10 runs. Highlighted bold entries are the highest loss in each setting.

On the other hand, vulnerability variation based on source and target class settings is less clear. In Table 2, we report the results of three different combinations of source → target attacks for each dataset. Consider the two extreme settings for the CIFAR-10 dataset: on the low end, the 0 → 2 setting has a baseline misclassification count of 16, while on the high end the count is 200 for the 5 → 3 setting. Because of the DNN's relative difficulty in differentiating class 5 from class 3 in the non-poisoned setting, it could be anticipated that conducting a label flipping attack within the 5 → 3 setting would result in the greatest impact on source class recall. However, this was not the case. Table 2 shows that in only two out of the six experimental scenarios did 5 → 3 record the largest drop in source class recall. In fact, four scenarios' results show the 0 → 2 setting, the setting with the lowest baseline misclassification count, as the most effective option for the adversary. Experiments with Fashion-MNIST show a similar trend, with label flipping attacks conducted in the 4 → 6 setting being the most successful rather than the 6 → 0 setting, which has more than twice the number of baseline misclassifications. These results indicate that identifying the most vulnerable source and target class combination may be a non-trivial task for the adversary, and that there is not necessarily a correlation between non-poisoned misclassification performance and attack effectiveness.

$c_{src} \rightarrow c_{target}$ | Source class | Target class | All other classes
CIFAR-10
0 → 2 | -6.28% | 1.58% | 0.34%
1 → 9 | -6.22% | 2.28% | 0.16%
5 → 3 | -6.12% | 3.00% | 0.17%
Fashion-MNIST
1 → 3 | -2.23% | 0.25% | 0.01%
4 → 6 | -9.96% | 2.40% | 0.09%
6 → 0 | -8.87% | 2.59% | 0.20%
Table 3: Changes due to poisoning in source class recall, target class recall, and total recall for all remaining classes (non-source, non-target). Results are averaged from 10 runs in each setting. The maximum standard deviation observed was 1.45% in source class recall and 1.13% in target class recall.
Figure 2 (panels (a): CIFAR-10, (b): Fashion-MNIST): Relationship between global model accuracy and source class recall across changing percentages of malicious participants for CIFAR-10 and Fashion-MNIST. As each dataset has 10 classes, the scale for $Acc$ vs $Recall_{c_{src}}$ is 1:10.

We additionally study a desirable feature of the label flipping attack: it appears to be targeted. Specifically, Table 3 reports the following quantities for each source → target flipping scenario: loss in source class recall, loss in target class recall, and loss in recall of all remaining classes. We observe that the attack causes substantial change in source class recall (a drop of several percentage points in most cases) and in target class recall. However, the attack impact on the recall of the remaining classes is an order of magnitude smaller. CIFAR-10 experiments show a maximum of 0.34% change in class recalls attributable to non-source and non-target classes, and Fashion-MNIST experiments similarly show a maximum change of 0.2% attributable to non-source and non-target classes, both of which are relatively minor compared to the source and target classes. Thus, the attack causes the global model to misclassify instances belonging to $c_{src}$ as $c_{target}$ at test time while other classes remain relatively unimpacted, demonstrating its targeted nature towards $c_{src}$ and $c_{target}$. Considering the large impact of the attack on source class recall, changes in source class recall therefore make up the vast majority of the decrease in global model accuracy caused by label flipping attacks in FL systems. This observation can also be seen in Figure 2, where the change in global model accuracy closely follows the change in source class recall.

The targeted nature of the label flipping attack allows adversaries to remain under the radar in many FL systems. Consider systems where the data contain 100 classes or more, as is the case in CIFAR-100 [22] and ImageNet [13]. In such cases, targeted attacks become much more stealthy due to their limited impact on classes other than the source and target.

3.3 Attack Timing in Label Flipping Attacks

While label flipping attacks can occur at any point in the learning process and last for arbitrary lengths, it is important to understand the capabilities of adversaries who are available for only part of the training process. For instance, Google’s Gboard application of FL requires all participant devices be plugged into power and connected to the internet via WiFi [9]. Such requirements create cyclic conditions where many participants are not available during the day, when phones are not plugged in and are actively in use. Adversaries can take advantage of this design choice, making themselves available at times when honest participants are unable to.

We consider two scenarios in which the adversary is restricted in when malicious participants can be made available: one in which the adversary makes malicious participants available only before the 75th training round, and one in which malicious participants are available only after the 75th training round. As the rate of global model accuracy improvement has decreased for both datasets by training round 75, we choose this point to highlight how pre-established model stability may affect an adversary's ability to launch an effective label flipping attack. Results for the first scenario are given in Figure 3, whereas the results for the second scenario are given in Figure 4.

Figure 3 (panels (a): CIFAR-10, (b): Fashion-MNIST): Source class recall by round for experiments with "early round poisoning", i.e., malicious participation only in the first 75 rounds. The blue line indicates the round at which malicious participation is no longer allowed.
Figure 4 (panels (a): CIFAR-10, (b): Fashion-MNIST): Source class recall by round for experiments with "late round poisoning", i.e., malicious participation only after round 75. The blue line indicates the round at which malicious participation starts.

In Figure 3, we compare source class recall in a non-poisoned setting versus with poisoning only before round 75. Results on both CIFAR-10 and Fashion-MNIST show that while there are observable drops in source class recall during the rounds with poisoning (1-75), the global model is able to recover quickly after poisoning finishes (after round 75). Furthermore, the final convergence of the models (towards the end of training) is not impacted, given that the models with and without poisoning converge to roughly the same recall values. We do note that some CIFAR-10 experiments exhibited delayed convergence by an additional 50-100 training rounds, but these cases were rare and still eventually achieved the accuracy and recall levels of a non-poisoned model despite the delayed convergence.

Source Class Recall
$c_{src} \rightarrow c_{target}$ | ≥ 1 malicious participant in round $T$ | All honest participants in round $T$
CIFAR-10
0 → 2 | 73.90% | 82.45%
1 → 9 | 77.30% | 89.40%
5 → 3 | 57.50% | 73.10%
Fashion-MNIST
1 → 3 | 84.32% | 96.25%
4 → 6 | 51.50% | 89.60%
6 → 0 | 49.80% | 73.15%
Table 4: Final source class recall when at least one malicious party participates in the final round versus when all participants in round $T$ are non-malicious. Results averaged over 10 runs for each experimental setting.

In Figure 4, we compare source class recall in a non-poisoned setting versus with poisoning limited to the 75th and later training rounds. These results show that such late poisoning has limited longevity, a phenomenon which can be seen in the quick and dramatic changes in source class recall. Specifically, source class recall quickly returns to baseline levels once fewer malicious participants are selected in a training round, even immediately following a round in which a large number of malicious participants caused a dramatic drop. However, the final poisoned model in the late-round poisoning scenario may show a substantial difference in accuracy or recall compared to a non-poisoned model. This is evidenced by the CIFAR-10 experiment in Figure 4, in which the source recall of the poisoned model is 10% lower compared to the non-poisoned model.

Furthermore, we observe that model convergence on both datasets is negatively impacted, as evidenced by the large variances in recall values between consecutive rounds. Consider Table 4, where results are compared when either (1) at least one malicious participant is selected for the final round $P^T$ or (2) $P^T$ is made up entirely of honest participants. When at least one malicious participant is selected, the final source class recall is, on average, 12.08% lower on the CIFAR-10 dataset and 24.46% lower on the Fashion-MNIST dataset. The utility impact of the label flipping attack is therefore predominantly tied to the number of malicious participants selected in the last few rounds of training.

3.4 Malicious Participant Availability

Given the impact of malicious participation in late training rounds on attack effectiveness, we now introduce a malicious participant availability parameter $\alpha$. By varying $\alpha$ we can simulate the adversary's ability to control compromised participants' availability (i.e., ensuring connectivity or power access) at various points in training. Specifically, $\alpha$ represents malicious participants' availability and therefore their likelihood of being selected relative to honest participants. For example, if $\alpha = 0.6$, then when selecting each participant for round $t$, there is a 0.6 probability that the selected participant will be one of the malicious participants. Larger $\alpha$ implies a higher likelihood of malicious participation. In cases where the pool of malicious participants is exhausted, the number of malicious participants in $P^t$ is bounded by $m\% \times N$.
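The biased selection implied by $\alpha$ can be simulated roughly as follows; this is our own sketch, not necessarily the authors' implementation:

import random

def select_round_participants(honest, malicious, k, alpha):
    """Pick k distinct participants; each slot is filled by a malicious participant
    with probability alpha, bounded by the number of malicious participants left."""
    selected, mal_pool, hon_pool = [], list(malicious), list(honest)
    for _ in range(k):
        if mal_pool and random.random() < alpha:
            selected.append(mal_pool.pop(random.randrange(len(mal_pool))))
        else:
            selected.append(hon_pool.pop(random.randrange(len(hon_pool))))
    return selected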

Figure 5 (panels (a): CIFAR-10, (b): Fashion-MNIST): Evaluation of the impact of malicious participants' availability on source class recall. Results are averaged from 3 runs for each setting.

Figure 5 reports results for varying values of $\alpha$ in late round poisoning, i.e., malicious participation is limited to rounds $t \geq 75$. Specifically, we are interested in studying those scenarios where an adversary boosts the availability of the malicious participants enough that their selection becomes more likely than that of the non-malicious participants; hence in Figure 5 we use $\alpha > 0.5$. The reported source class recalls in Figure 5 are averaged over the last 125 rounds (total 200 rounds minus the first 75 rounds) to remove the impact of individual round variability; further, each experiment setting is repeated 3 times and results are averaged. The results show that, when the adversary maintains sufficient representation in the participant pool (i.e., a sufficiently large $m$), manipulating the availability of malicious participants can yield significantly higher impact on global model utility, with source class recall losses in excess of 20%. On both datasets, the negative impact on source class recall is highest with the largest $\alpha$, followed by progressively smaller values of $\alpha$, i.e., in decreasing order of malicious participant availability. Thus, in order to mount an impactful attack, it is in the best interest of the adversary to perform the attack with the highest malicious participant availability in late rounds. We note that when $k$ is significantly larger than the number of malicious participants, increasing availability ($\alpha$) will be insufficient for meaningfully increasing malicious participant selection in individual training rounds. Therefore, experiments with small $m$ show little variation despite changes in $\alpha$.

To more acutely demonstrate the impact of $\alpha$, Figure 6 reports source class recall by round for a lower and a higher value of $\alpha$ on both the CIFAR-10 and Fashion-MNIST datasets. On both datasets, when malicious participants are available more frequently, the source class recall is effectively shifted lower in the graph, i.e., source class recall values with the higher $\alpha$ are often much smaller than those with the lower $\alpha$. We note that the high round-by-round variance in both graphs is due to the probabilistic variability in the number of malicious participants in individual training rounds. When fewer malicious participants are selected in one training round relative to the previous round, source recall increases. When more malicious participants are selected in an individual round relative to the previous round, source recall falls.

Figure 6 (panels (a): CIFAR-10, (b): Fashion-MNIST): Source class recall by round when malicious participants' availability is close to that of honest participants vs significantly increased. The blue line indicates the round in which the attack starts.
Figure 7 (panels (a): CIFAR-10, (b): Fashion-MNIST): Relationship between the change in source class recall in consecutive rounds versus the change in the number of malicious participants in consecutive rounds. Specifically, the y-axis represents ($Recall_{c_{src}}$ @ round $t$) - ($Recall_{c_{src}}$ @ round $t-1$), while the x-axis represents (# of malicious participants in $P^t$) - (# of malicious participants in $P^{t-1}$).

We further explore and illustrate our last remark with respect to the impact of malicious parties' participation in consecutive rounds in Figure 7. In this figure, the x-axis represents the change in the number of malicious clients participating in consecutive rounds, i.e., (# of malicious participants in $P^t$) - (# of malicious participants in $P^{t-1}$). The y-axis represents the change in source class recall between these consecutive rounds, i.e., ($Recall_{c_{src}}$ @ round $t$) - ($Recall_{c_{src}}$ @ round $t-1$). The reported results are then averaged across multiple runs of FL and all cases in which each participation difference was observed. The results confirm our intuition that, when $P^t$ contains more malicious participants than $P^{t-1}$, there is a substantial drop in source class recall. For large differences (such as +3 or +4), the drop can be as high as 40% or 60%. In contrast, when $P^t$ contains fewer malicious participants than $P^{t-1}$, there is a substantial increase in source class recall, which can be as high as 60% or 40% when the difference is -4 or -3. Altogether, this demonstrates that the DNN can recover significantly in even a few rounds of FL training if a large enough decrease in malicious participation is achieved.

4 Defending Against Label Flipping Attacks

Given a highly effective adversary, how can an FL system defend against the label flipping attacks discussed thus far? To that end, we propose a defense which enables the aggregator to identify malicious participants.

def evaluate_updates(R: set of vulnerable training rounds, P: participant set):
    U = []                                      # list of relevant update deltas
    for r in R:
        P_r = participants queried in training round r
        theta = global model parameters after training round r - 1
        for p in P_r:
            theta_p = updated parameters after train_DNN(p, theta)
            delta_src = (theta_p - theta) restricted to the source class output node
            add delta_src to U
    U = standardize(U)
    plot(PCA(U, components=2))
Algorithm 1: Identifying Malicious Model Updates in FL

After identifying malicious participants, the aggregator may blacklist them or ignore their updates in future rounds. We showed in Sections 3.3 and 3.4 that high-utility model convergence can be eventually achieved after eliminating malicious participation. The feasibility of such a recovery from early round attacks supports use of the proposed identification approach as a defense strategy.

Our defense is based on the following insight: the parameter updates sent from malicious participants have unique characteristics compared to honest participants' updates for a subset of the parameter space. However, since DNNs have many parameters (i.e., $\theta$ is extremely high-dimensional), it is non-trivial to analyze parameter updates by hand. Thus, we propose an automated strategy for identifying the relevant parameter subset and for studying participant updates using dimensionality reduction (PCA).
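For a PyTorch classifier whose final layer is a fully connected layer, the relevant subset of a participant's update could be extracted roughly as follows; the layer names ("fc.weight", "fc.bias") are assumptions about the model definition:

import torch

def source_class_delta(local_state, global_state, src_class,
                       weight_key="fc.weight", bias_key="fc.bias"):
    """Update delta restricted to the output node of the suspected source class."""
    w_delta = local_state[weight_key][src_class] - global_state[weight_key][src_class]
    b_delta = local_state[bias_key][src_class] - global_state[bias_key][src_class]
    return torch.cat([w_delta.flatten(), b_delta.view(1)]).cpu().numpy()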

Figure 8 (panels (a)-(d): CIFAR-10 with $m$ = 2%, 4%, 10%, 20%; panels (e)-(h): Fashion-MNIST with $m$ = 2%, 4%, 10%, 20%): PCA plots with 2 components demonstrating the ability of Algorithm 1 to identify updates originating from a malicious versus honest participant. Plots represent relevant gradients collected from all training rounds $r \in R$. Blue Xs represent gradients from malicious participants while yellow Os represent gradients from honest participants.

The description of our defense strategy is given in Algorithm 1. Let $R$ denote the set of vulnerable FL training rounds and $c_{src}$ be the class that is suspected to be the source class of a poisoning attack. We note that if $c_{src}$ is unknown, the aggregator can defend against potential attacks by considering each class $c \in C$ as a possible source class. We also note that for a given $c_{src}$, Algorithm 1 considers label flipping for all possible $c_{target}$. An aggregator therefore conducts $|C|$ independent iterations of Algorithm 1, which can be conducted in parallel. For each round $r \in R$ and participant $P_i \in P^r$, the aggregator computes the delta in the participant's model update compared to the global model, i.e., $\delta_i^r = \theta_i^r - \theta^{r-1}$. Recall from Section 2.1 that a predicted probability for any given class is computed by a specific node in the final layer of the DNN architecture. Given the aggregator's goal of defending against the label flipping attack from $c_{src}$, only the subset of the parameters in $\delta_i^r$ corresponding to $c_{src}$ is extracted. The outcome of the extraction is added to a global list $U$ built by the aggregator. After $U$ is constructed across multiple rounds and participant deltas, it is standardized by removing the mean and scaling to unit variance. The standardized list $U$ is fed into Principal Component Analysis (PCA), which is a popular ML technique used for dimensionality reduction and pattern visualization. For ease of visualization, we use two components and plot the results in two dimensions.
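The standardization and projection step can be sketched with scikit-learn as follows; how the aggregator then separates the two clusters (visually or with a clustering algorithm) is left open:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def project_updates(update_list):
    """Standardize the collected per-participant deltas and project to 2 components."""
    X = np.vstack(update_list)                    # one row per (round, participant) delta
    X = StandardScaler().fit_transform(X)         # zero mean, unit variance per feature
    return PCA(n_components=2).fit_transform(X)   # 2-D points for plotting / clustering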

In Figure 8, we show the results of Algorithm 1 on CIFAR-10 and Fashion-MNIST across varying malicious participation rates $m$, with $R$ covering nearly all training rounds. Even in scenarios with low $m$, as shown in Figures 8(a) and 8(e), our defense is capable of differentiating between malicious and honest participants. In all graphs, the PCA outcome shows that malicious participants' updates belong to a visibly different cluster compared to honest participants' updates, which form their own cluster. Another interesting observation is that our defense does not suffer from the "gradient drift" problem. Gradient drift is a potential challenge in designing a robust defense, since changes in model updates may be caused both by actual DNN learning and convergence (which is desirable) and by a malicious poisoning attempt (which our defense is trying to identify and prevent). Our results show that, even though the defense is tested over a long period (190 training rounds), it remains capable of separating malicious and honest participants, demonstrating its robustness to gradient drift.

An FL system aggregator can therefore effectively identify malicious participants, and consequently restrict their participation in model training, by conducting such gradient clustering prior to aggregating parameter updates at each round. Clustering model gradients for malicious participant identification presents a strong defense as it does not require access to a public validation dataset, as is required in [3], which is not necessarily possible to acquire.

5 Related Work

Poisoning attacks are highly relevant in domains such as spam filtering [10, 32], malware and network anomaly detection [11, 24, 39], disease diagnosis [29], computer vision [34], and recommender systems [15, 54]. Several poisoning attacks were developed for popular ML models including SVMs [6, 12, 44, 45, 50, 52], regression [19], dimensionality reduction [51], linear classifiers [12, 23, 57], unsupervised learning [7], and more recently, neural networks [12, 30, 42, 45, 53, 58]. However, most of the existing work is concerned with poisoning ML models in the traditional setting where training data is first collected by a centralized party. In contrast, our work studies poisoning attacks in the context of FL. As a result, many of the poisoning attacks and defenses that were designed for traditional ML are not suitable for FL. For example, attacks that rely on crafting optimal poison instances by observing the training data distribution are inapplicable since a malicious FL participant may only access and modify the training data s/he holds. Similarly, server-side defenses that rely on filtering and eliminating poison instances through anomaly detection or k-NN [36, 37] are inapplicable to FL since the server only observes parameter updates from FL participants, not their individual instances.

The rising popularity of FL has led to the investigation of different attacks in the context of FL, such as backdoor attacks [2, 46], gradient leakage attacks [18, 27, 59] and membership inference attacks [31, 47, 48]. Most closely related to our work are poisoning attacks in FL. There are two types of poisoning attacks in FL: data poisoning and model poisoning. Our work falls under the data poisoning category. In data poisoning, a malicious FL participant manipulates their training data, e.g., by adding poison instances or adversarially changing existing instances [16, 43]. The local learning process is otherwise not modified. In model poisoning, the malicious FL participant modifies its learning process in order to create adversarial gradients and parameter updates. [4] and [14] demonstrated the possibility of causing high model error rates through targeted and untargeted model poisoning attacks. While model poisoning is also effective, data poisoning may be preferable or more convenient in certain scenarios, since it does not require adversarial tampering of model learning software on participant devices, it is efficient, and it allows for non-expert poisoning participants.

Finally, FL poisoning attacks have connections to the concept of Byzantine threats, in which one or more participants in a distributed system fail or misbehave. In FL, Byzantine behavior was shown to lead to sub-optimal models or non-convergence [8, 20]. This has spurred a line of work on Byzantine-resilient aggregation for distributed learning, such as Krum [8], Bulyan [28], trimmed mean, and coordinate-wise median [55]. While model poisoning may remain successful despite Byzantine-resilient aggregation [4, 14, 20], it is unclear whether optimal data poisoning attacks can be found to circumvent an individual Byzantine-resilient scheme, or whether one data poisoning attack may circumvent multiple Byzantine-resilient schemes. We plan to investigate these issues in future work.

6 Conclusion

In this paper we studied data poisoning attacks against FL systems. We demonstrated that FL systems are vulnerable to label flipping poisoning attacks and that these attacks can significantly negatively impact the global model. We also showed that the negative impact on the global model increases as the proportion of malicious participants increases, and that it is possible to achieve targeted poisoning impact. Further, we demonstrated that adversaries can enhance attack effectiveness by increasing the availability of malicious participants in later rounds. Finally, we proposed a defense which helps an FL aggregator separate malicious from honest participants. We showed that our defense is capable of identifying malicious participants and it is robust to gradient drift.

As poisoning attacks against FL systems continue to emerge as important research topics in the security and ML communities [14, 4, 33, 56, 21], we plan to continue our work in several ways. First, we will study the impacts of the attack and defense on diverse FL scenarios differing in terms of data size, distribution among FL participants (iid vs non-iid), data type, total number of instances available per class, etc. Second, we will study more complex adversarial behaviors, such as each malicious participant changing the labels of only a small portion of source samples or using more sophisticated poisoning strategies to avoid being detected. Third, while we designed and tested our defense against the label flipping attack, we hypothesize the defense will be useful against model poisoning attacks since malicious participants' gradients are often dissimilar to those of honest participants. Since our defense identifies dissimilar or anomalous gradients, we expect it to be effective against other types of FL attacks that cause dissimilar or anomalous gradients. In future work, we will study the applicability of our defense against such other FL attacks, including model poisoning, untargeted poisoning, and backdoor attacks.

Acknowledgements. This research is partially sponsored by NSF CISE SaTC 1564097. The second author acknowledges an IBM PhD Fellowship Award and the support from the Enterprise AI, Systems & Solutions division led by Sandeep Gopisetty at IBM Almaden Research Center. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or other funding agencies and companies mentioned above.

References

  • [1] A. Act (1996) Health insurance portability and accountability act of 1996. Public law 104, pp. 191. Cited by: §1.
  • [2] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov (2018) How to backdoor federated learning. arXiv preprint arXiv:1807.00459. Cited by: §5.
  • [3] N. Baracaldo, B. Chen, H. Ludwig, and J. A. Safavi (2017) Mitigating poisoning attacks on machine learning models: a data provenance based approach. In 10th ACM Workshop on Artificial Intelligence and Security, pp. 103–110. Cited by: §4.
  • [4] A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo (2019) Analyzing federated learning through an adversarial lens. In International Conference on Machine Learning, pp. 634–643. Cited by: §5, §5, §6.
  • [5] B. Biggio, B. Nelson, and P. Laskov (2011) Support vector machines under adversarial label noise. In Asian conference on machine learning, pp. 97–112. Cited by: §1.
  • [6] B. Biggio, B. Nelson, and P. Laskov (2012) Poisoning attacks against support vector machines. In Proceedings of the 29th International Coference on International Conference on Machine Learning, pp. 1467–1474. Cited by: §2.2, §5.
  • [7] B. Biggio, I. Pillai, S. Rota Bulò, D. Ariu, M. Pelillo, and F. Roli (2013) Is data clustering in adversarial settings secure?. In Proceedings of the 2013 ACM Workshop on Artificial Intelligence and Security, pp. 87–98. Cited by: §5.
  • [8] P. Blanchard, R. Guerraoui, J. Stainer, et al. (2017) Machine learning with adversaries: byzantine tolerant gradient descent. In NeurIPS, pp. 119–129. Cited by: §5.
  • [9] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. M. Kiddon, J. Konečný, S. Mazzocchi, B. McMahan, T. V. Overveldt, D. Petrou, D. Ramage, and J. Roselander (2019) Towards federated learning at scale: system design. In SysML 2019, Note: To appear External Links: Link Cited by: §1, §3.3.
  • [10] E. Bursztein (2018) Attacks against machine learning - an overview. Note: https://elie.net/blog/ai/attacks-against-machine-learning-an-overview/[Online] Cited by: §5.
  • [11] S. Chen, M. Xue, L. Fan, S. Hao, L. Xu, H. Zhu, and B. Li (2018) Automated poisoning attacks and defenses in malware detection systems: an adversarial machine learning approach. computers & Security 73, pp. 326–344. Cited by: §5.
  • [12] A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, and F. Roli (2019) Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks. In 28th USENIX Security Symposium, pp. 321–338. Cited by: §5.
  • [13] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. Cited by: Appendix 0.A, §3.2.
  • [14] M. Fang, X. Cao, J. Jia, and N. Z. Gong (2020) Local model poisoning attacks to byzantine-robust federated learning. In To appear in USENIX Security Symposium, Cited by: §2.2, §5, §5, §6.
  • [15] M. Fang, G. Yang, N. Z. Gong, and J. Liu (2018) Poisoning attacks to graph-based recommender systems. In Proceedings of the 34th Annual Computer Security Applications Conference, pp. 381–392. Cited by: §5.
  • [16] C. Fung, C. J. Yoon, and I. Beschastnikh (2018) Mitigating sybils in federated learning poisoning. arXiv preprint arXiv:1808.04866. Cited by: §5.
  • [17] A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, and D. Ramage (2018) Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604. Cited by: §1.
  • [18] B. Hitaj, G. Ateniese, and F. Perez-Cruz (2017) Deep models under the gan: information leakage from collaborative deep learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 603–618. Cited by: §5.
  • [19] M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li (2018) Manipulating machine learning: poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and Privacy (SP), pp. 19–35. Cited by: §5.
  • [20] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, et al. (2019) Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977. Cited by: §5.
  • [21] Y. Khazbak, T. Tan, and G. Cao (2020) MLGuard: mitigating poisoning attacks in privacy preserving distributed collaborative learning. Cited by: §6.
  • [22] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Cited by: §3.1, §3.2.
  • [23] C. Liu, B. Li, Y. Vorobeychik, and A. Oprea (2017) Robust linear regression against training data poisoning. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 91–102. Cited by: §5.
  • [24] D. Maiorca, B. Biggio, and G. Giacinto (2019) Towards adversarial malware detection: lessons learned from pdf-based attacks. ACM Computing Surveys (CSUR) 52 (4), pp. 1–36. Cited by: §5.
  • [25] S. Marcel and Y. Rodriguez (2010) Torchvision: the machine-vision package of torch. In 18th ACM International Conference on Multimedia, pp. 1485–1488. Cited by: Appendix 0.A.
  • [26] K. Mathews and C. Bowman (2018) The california consumer privacy act of 2018. Cited by: §1.
  • [27] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov (2019) Exploiting unintended feature leakage in collaborative learning. In 2019 IEEE Symposium on Security and Privacy (SP), pp. 691–706. Cited by: §5.
  • [28] E. M. E. Mhamdi, R. Guerraoui, and S. Rouault (2018) The hidden vulnerability of distributed learning in byzantium. arXiv preprint arXiv:1802.07927. Cited by: §5.
  • [29] M. Mozaffari-Kermani, S. Sur-Kolay, A. Raghunathan, and N. K. Jha (2014) Systematic poisoning attacks on and defenses for machine learning in healthcare. IEEE Journal of Biomedical and Health Informatics 19 (6), pp. 1893–1905. Cited by: §5.
  • [30] L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E. C. Lupu, and F. Roli (2017) Towards poisoning of deep learning algorithms with back-gradient optimization. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 27–38. Cited by: §5.
  • [31] M. Nasr, R. Shokri, and A. Houmansadr (2019) Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning. In 2019 IEEE Symposium on Security and Privacy (SP), pp. 739–753. Cited by: §5.
  • [32] B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. Rubinstein, U. Saini, C. A. Sutton, J. D. Tygar, and K. Xia (2008) Exploiting machine learning to subvert your spam filter. LEET 8, pp. 1–9. Cited by: §5.
  • [33] T. D. Nguyen, P. Rieger, M. Miettinen, and A. Sadeghi (2020) Poisoning attacks on federated learning-based iot intrusion detection system. Cited by: §6.
  • [34] N. Papernot, P. McDaniel, A. Sinha, and M. P. Wellman (2018) SoK: security and privacy in machine learning. In 2018 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 399–414. Cited by: §5.
  • [35] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. (2019) PyTorch: an imperative style, high-performance deep learning library. In NeurIPS, pp. 8024–8035. Cited by: §3.1.
  • [36] A. Paudice, L. Muñoz-González, A. Gyorgy, and E. C. Lupu (2018) Detection of adversarial training examples in poisoning attacks through anomaly detection. arXiv preprint arXiv:1802.03041. Cited by: §5.
  • [37] A. Paudice, L. Muñoz-González, and E. C. Lupu (2018) Label sanitization against label flipping poisoning attacks. In ECML-PKDD, pp. 5–15. Cited by: §5.
  • [38] G. D. P. Regulation (2016) Regulation (eu) 2016/679 of the european parliament and of the council of 27 april 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing directive 95/46. Official Journal of the European Union (OJ) 59 (1-88), pp. 294. Cited by: §1.
  • [39] B. I. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S. Lau, S. Rao, N. Taft, and J. D. Tygar (2009) Antidote: understanding and defending against poisoning of anomaly detectors. In Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, pp. 1–14. Cited by: §5.
  • [40] T. Ryffel, A. Trask, M. Dahl, B. Wagner, J. Mancuso, D. Rueckert, and J. Passerat-Palmbach (2018) A generic framework for privacy preserving deep learning. arXiv preprint arXiv:1811.04017. Cited by: §1.
  • [41] A. Schlesinger, K. P. O’Hara, and A. S. Taylor (2018) Let’s talk about race: identity, chatbots, and ai. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1–14. Cited by: §1.
  • [42] A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein (2018) Poison frogs! targeted clean-label poisoning attacks on neural networks. In Advances in Neural Information Processing Systems, pp. 6103–6113. Cited by: §5.
  • [43] S. Shen, S. Tople, and P. Saxena (2016) Auror: defending against poisoning attacks in collaborative deep learning systems. In Proceedings of the 32nd Annual Conference on Computer Security Applications, pp. 508–519. Cited by: §2.3, §5.
  • [44] J. Steinhardt, P. W. W. Koh, and P. S. Liang (2017) Certified defenses for data poisoning attacks. In NeurIPS, pp. 3517–3529. Cited by: §1, §2.3, §5.
  • [45] O. Suciu, R. Marginean, Y. Kaya, H. Daume III, and T. Dumitras (2018) When does machine learning fail? generalized transferability for evasion and poisoning attacks. In 27th USENIX Security Symposium, pp. 1299–1316. Cited by: §5.
  • [46] Z. Sun, P. Kairouz, A. T. Suresh, and H. B. McMahan (2019) Can you really backdoor federated learning?. arXiv preprint arXiv:1911.07963. Cited by: §5.
  • [47] S. Truex, L. Liu, M. E. Gursoy, L. Yu, and W. Wei (2018) Towards demystifying membership inference attacks. arXiv preprint arXiv:1807.09173. Cited by: §5.
  • [48] S. Truex, L. Liu, M. E. Gursoy, L. Yu, and W. Wei (2019) Demystifying membership inference attacks in machine learning as a service. IEEE Transactions on Services Computing. Cited by: §5.
  • [49] H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. Cited by: §3.1.
  • [50] H. Xiao, H. Xiao, and C. Eckert (2012) Adversarial label flips attack on support vector machines. In ECAI, pp. 870–875. Cited by: §1, §2.3, §5.
  • [51] H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, and F. Roli (2015) Is feature selection secure against training data poisoning?. In International Conference on Machine Learning, pp. 1689–1698. Cited by: §2.2, §5.
  • [52] H. Xiao, B. Biggio, B. Nelson, H. Xiao, C. Eckert, and F. Roli (2015) Support vector machines under adversarial label contamination. Neurocomputing 160, pp. 53–62. Cited by: §1, §2.3, §5.
  • [53] C. Yang, Q. Wu, H. Li, and Y. Chen (2017) Generative poisoning attack method against neural networks. arXiv preprint arXiv:1703.01340. Cited by: §5.
  • [54] G. Yang, N. Z. Gong, and Y. Cai (2017) Fake co-visitation injection attacks to recommender systems. In NDSS, Cited by: §5.
  • [55] D. Yin, Y. Chen, R. Kannan, and P. Bartlett (2018) Byzantine-robust distributed learning: towards optimal statistical rates. In International Conference on Machine Learning, pp. 5650–5659. Cited by: §5.
  • [56] L. Zhao, S. Hu, Q. Wang, J. Jiang, S. Chao, X. Luo, and P. Hu (2020) Shielding collaborative learning: mitigating poisoning attacks through client-side detection. IEEE Transactions on Dependable and Secure Computing. Cited by: §6.
  • [57] M. Zhao, B. An, W. Gao, and T. Zhang (2017) Efficient label contamination attacks against black-box learning models. In IJCAI, pp. 3945–3951. Cited by: §5.
  • [58] C. Zhu, W. R. Huang, H. Li, G. Taylor, C. Studer, and T. Goldstein (2019) Transferable clean-label poisoning attacks on deep neural nets. In International Conference on Machine Learning, pp. 7614–7623. Cited by: §5.
  • [59] L. Zhu, Z. Liu, and S. Han (2019) Deep leakage from gradients. In Advances in Neural Information Processing Systems, pp. 14747–14756. Cited by: §5.

Appendix 0.A DNN Architectures and Configuration

All NNs were trained using PyTorch version 1.2.0 with random weight initialization. Training and testing were completed using an NVIDIA 980 Ti GPU accelerator. When necessary, all CUDA tensors were mapped to CPU tensors before exporting to NumPy arrays. Default drivers provided by Ubuntu 19.04 and the built-in GPU support in PyTorch were used to accelerate training. Details can be found in our repository: https://github.com/git-disl/DataPoisoning_FL.

Fashion-MNIST: We do not conduct data pre-processing. We use a Convolutional Neural Network with the architecture described in Table 6. In the table, Conv = Convolutional Layer, and Batch Norm = Batch Normalization.

CIFAR-10: We conduct data pre-processing prior to training. Data is normalized with mean [0.485, 0.456, 0.406] and standard deviation [0.229, 0.224, 0.225]. These values reflect the mean and standard deviation of the ImageNet dataset [13] and are commonplace, even expected, when using Torchvision [25] models. We additionally perform data augmentation with random horizontal flipping and random cropping with size 32 and default padding. Our CNN is detailed in Table 5.
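These preprocessing steps correspond roughly to the following torchvision transforms; the padding value of 4 is our assumption, since the exact padding is not restated here:

from torchvision import transforms

cifar_train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(32, padding=4),          # padding=4 is an assumed value
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])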

Layer Type | Size
Conv + ReLU + Batch Norm | 3x3x32
Conv + ReLU + Batch Norm | 3x32x32
Max Pooling | 2x2
Conv + ReLU + Batch Norm | 3x32x64
Conv + ReLU + Batch Norm | 3x64x64
Max Pooling | 2x2
Conv + ReLU + Batch Norm | 3x64x128
Conv + ReLU + Batch Norm | 3x128x128
Max Pooling | 2x2
Fully Connected | 2048
Fully Connected + Softmax | 128 / 10
Table 5: CIFAR-10 CNN.
Layer Type | Size
Conv + ReLU + Batch Norm | 5x1x16
Max Pooling | 2x2
Conv + ReLU + Batch Norm | 5x16x32
Max Pooling | 2x2
Fully Connected | 1568 / 10
Table 6: Fashion-MNIST CNN.
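Based on Table 6, the Fashion-MNIST network could be written as the following PyTorch module; the kernel and channel sizes follow the table, while padding=2 is inferred so that the flattened feature size matches the 1568-unit fully connected layer:

import torch.nn as nn

class FashionMNISTCNN(nn.Module):
    """Sketch of the Table 6 architecture for 28x28 gray-scale inputs."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.BatchNorm2d(16),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.BatchNorm2d(32),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)   # 1568 -> 10

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))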