MANDERA: Malicious Node Detection in Federated Learning via Ranking

Federated learning is a distributed learning paradigm that seeks to preserve the privacy of each participating node's data. However, federated learning is vulnerable to attacks, in particular to the model integrity attacks of interest here. In this paper, we propose a novel method for malicious node detection called MANDERA. By transferring the original message matrix into a ranking matrix, whose columns show the relative rankings of all local nodes along different parameter dimensions, our approach seeks to distinguish the malicious nodes from the benign ones with high efficiency based on key characteristics of the rank domain. We prove, under mild conditions, that MANDERA is guaranteed to detect all malicious nodes under typical Byzantine attacks with no prior knowledge or history about the participating nodes. The effectiveness of the proposed approach is further confirmed by experiments on two classic datasets, CIFAR-10 and Fashion-MNIST. Compared to state-of-the-art methods in the literature for defending against Byzantine attacks, MANDERA is unique in identifying the malicious nodes via ranking and in its robustness, effectively defending against a wide range of attacks.


1 Introduction

Federated learning (FL) has seen a steady rise in use across a plethora of applications. FL departs from conventional centralized learning by allowing multiple participating nodes to learn on a local collection of training data, before each respective node's updates are sent to a global coordinator for aggregation. The global model collectively learns from each of these individual nodes before the updated global model is relayed back to the participating nodes. By aggregating over multiple nodes, the resulting model achieves greater performance than any node could by learning on its local subset alone. FL presents two key advantages: increased privacy for the contributing nodes, as local data is not communicated to the global coordinator, and a reduction in computation by the global node, as the computation is offloaded to the contributing nodes.
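As a point of reference, the aggregation step described above can be sketched as a simple weighted average of the nodes' submissions (a minimal sketch; the function name and weighting scheme are illustrative rather than a specific framework's API):

```python
import numpy as np

def federated_average(local_params, weights=None):
    """One aggregation round: combine the nodes' local parameter vectors
    into an updated global parameter vector via a (weighted) average."""
    P = np.stack(local_params)        # (n_nodes, n_params)
    if weights is None:
        weights = np.ones(P.shape[0])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                   # normalize node weights
    global_params = w @ P             # weighted average across nodes
    return global_params              # relayed back to all participating nodes
```

Each participating node would then resume local training from the returned global parameters.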

However, malicious actors in the collaborative process may seek to poison the global model, either to degrade its output performance (Chen et al., 2017; Fang et al., 2020; Tolpegin et al., 2020b) or to embed hidden backdoors within it (Bagdasaryan et al., 2020). A Byzantine attack aims to devastate the performance of the global model by manipulating the gradient values sent by the malicious nodes in a certain fashion. As these attacks have emerged, researchers have sought to defend FL from their negative impacts.

In the literature, there are two typical defense strategies: malicious node detection and robust learning. Malicious node detection defends by identifying malicious nodes and removing them from the aggregation (Blanchard et al., 2017; Guerraoui et al., 2018; Li et al., 2020; So et al., 2021). Robust learning (Blanchard et al., 2017; Yin et al., 2018; Guerraoui et al., 2018; Fang et al., 2020), on the other hand, tolerates a proportion of malicious nodes and defends by reducing their negative impact via various robust learning methods (Wu et al., 2020b; Xie et al., 2019, 2020; Cao et al., 2021).

In this paper, we focus on defending against Byzantine attacks via malicious node detection. There has been a collection of efforts along this research line. Blanchard et al. (2017) propose a defense referred to as Krum, which treats local nodes whose update vectors are too far away from the aggregated barycenter as malicious nodes and precludes them from the downstream aggregation. Guerraoui et al. (2018) propose Bulyan, a process that performs aggregation on subsets of node updates (by iteratively leaving each node out) to find a set of nodes with the most aligned updates under a given aggregation rule. Xie et al. (2019) compute a Stochastic Descendant Score (SDS) based on the estimated descendant of the loss function and the magnitude of the update submitted to the global node, and only include a predefined number of nodes with the highest SDS in the aggregation. Chen et al. (2021) propose a zero-knowledge approach that detects and removes malicious nodes by solving a weighted clustering problem; the resulting clusters update the model individually, their accuracy is checked against a validation set, and all nodes in any cluster with a significant negative accuracy impact are rejected and removed from the aggregation step.

Although the aforementioned methods try to detect malicious nodes in different ways, they all share a common nature: the detection is based directly on the gradient updates. However, it is usually the case that different dimensions of the gradients span quite different ranges of values and follow very different distributions. This phenomenon makes it very challenging to precisely detect malicious nodes directly from the node updates, as a few dimensions often dominate the final result. Although the weighted clustering method proposed by Chen et al. (2021) partially avoids this problem by re-weighting different update dimensions, it is often not trivial to determine the weights in a principled way.

In this paper, we propose to resolve this critical problem from a novel perspective. Instead of working on the node updates directly, we propose to extract information about malicious nodes indirectly by transforming the node updates from numeric gradient values to the rank domain. Compared to the original numeric gradient values, whose distribution is difficult to model, the ranks are much easier to handle both theoretically and practically. Moreover, as ranks are scale-free, we no longer need to worry about the scale difference across different dimensions. We prove, under mild conditions, that the first two moments of the transformed rank vectors carry key information to detect the malicious nodes under a wide range of Byzantine attacks. Based on these theoretical results, a highly efficient method called MANDERA is proposed to separate the malicious nodes from the benign ones by clustering all local nodes into two groups based on the moments of their rank vectors. Under the assumption that malicious nodes are the minority in the node pool, we can simply treat all nodes in the smaller cluster as malicious nodes and remove them from the aggregation.

Figure 1: An Overview of MANDERA

The contributions of this work are as follows. (1) We propose the first algorithm leveraging the rank domain of model updates to detect malicious nodes (Figure 1). (2) We provide theoretical guarantees for the detection of malicious nodes based on the rank domain under Byzantine attacks. (3) Our method does not assume knowledge of the number of malicious nodes, which is required in the learning process of prior methods. (4) We experimentally demonstrate the effectiveness and robustness of our defense against Byzantine attacks, including the Gaussian attack, sign flipping attack and zero gradient attack, in addition to a more subtle label flipping data poisoning attack. (5) An experimental comparison between MANDERA and a collection of robust aggregation techniques is provided. The computation times are also compared, demonstrating the gains MANDERA achieves by operating in the rank domain.

2 Defense Formalization

2.1 Notations

Suppose there are n local nodes in the federated learning framework, where n_b nodes are benign nodes whose indices are denoted by I_b, and the other n_m = n − n_b nodes are malicious nodes whose indices are denoted by I_m. The training model is denoted by F(θ; X), where θ is a p-dimensional parameter vector and X is a data matrix. Denote the message matrix received from all local nodes by the central server as M, whose i-th row M_i = (M_i1, …, M_ip) denotes the message received from node i. For a benign node i ∈ I_b, let X_i be the data matrix on it with N_i as the sample size; we have M_i = ∂F(θ; X_i)/∂θ, i.e., the gradient of the training model evaluated on the local data. A malicious node i ∈ I_m, however, tends to attack the learning system by manipulating M_i in some way. Hereinafter, we denote N_min = min_{i ∈ I_b} N_i to be the minimal sample size of the benign nodes.

Given a vector of n real numbers a = (a_1, …, a_n), define its ranking vector as Rank(a), where the ranking operator Rank maps a into the permutation space of (1, …, n), i.e., the set of all permutations of (1, …, n). For example, Rank((1.2, 0.7, 3.1)) = (2, 1, 3). We adopt average ranking when there are ties. With the Rank operator, we can transfer the message matrix M to a ranking matrix R by replacing each column M_·j with the corresponding ranking vector R_·j = Rank(M_·j). Further define e_i and v_i to be the mean and variance of node i's rank vector R_i = (R_i1, …, R_ip), respectively. As shown in later subsections, we can judge whether node i is a malicious node based on (e_i, v_i) under various attack types. In the following, we will highlight the behaviour of the benign nodes first, and then discuss the behaviour of the malicious nodes and their interactions with the benign nodes under various Byzantine attacks.
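As a concrete illustration of the rank transform just defined (a minimal sketch with toy numbers; the variable names are ours):

```python
import numpy as np
from scipy.stats import rankdata

# Toy message matrix M: 5 nodes (rows) x 4 parameter dimensions (columns).
M = np.array([
    [ 0.10, -0.20,  0.05,  0.30],
    [ 0.12, -0.18,  0.07,  0.29],
    [ 0.11, -0.21,  0.05,  0.31],   # ties with node 0 in the third column
    [ 5.00,  4.00, -6.00,  7.00],   # an outlying (e.g. malicious) update
    [ 0.09, -0.19,  0.06,  0.28],
])

# Rank each column across nodes; ties receive their average rank.
R = rankdata(M, method="average", axis=0)

# Per-node rank mean e_i and rank variance v_i (row-wise moments of R).
e = R.mean(axis=1)
v = R.var(axis=1)
print(R, e, v, sep="\n")
```

In this toy example the outlying row takes extreme ranks in every column, which is the kind of signal the moments of the rank vectors are meant to capture.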

2.2 Behaviour of benign nodes

As the behaviour of benign nodes does not depend on the type of Byzantine attack, we can study the statistical properties of M_i for a benign node i before specifying a concrete attack type. For any benign node i ∈ I_b, the message generated for parameter dimension j is

M_ij = (1/N_i) ∑_{k=1}^{N_i} ∂F(θ; x_ik)/∂θ_j,    (1)

where x_ik denotes the k-th sample on node i. Throughout this paper, we always assume that the x_ik's are independent and identically distributed (IID) samples drawn from a data distribution P. Under the independent data assumption, Equation 1 tells us that M_ij is the sample mean of N_i IID random variables, so directly applying the Strong Law of Large Numbers (SLLN) and the Central Limit Theorem (CLT) leads to the lemma below immediately.

Lemma 1.

Under the independent data assumption, further denote μ_j and σ_j^2 to be the mean and variance of the per-sample gradient ∂F(θ; x)/∂θ_j with x ~ P. With N_i going to infinity, we have for each benign node i ∈ I_b and each parameter dimension j that

M_ij → μ_j almost surely, and √N_i (M_ij − μ_j) converges in distribution to N(0, σ_j^2).    (2)

2.3 Behaviour of malicious nodes under the Gaussian attack

Definition 1 (Gaussian attack).

In a Gaussian attack, the attacker manipulates the malicious nodes to send Gaussian random messages to the global coordinator, i.e., the messages {M_i : i ∈ I_m} are independent random samples from the Gaussian distribution N(μ̄, Σ_a), where μ̄ = (1/n_b) ∑_{i ∈ I_b} M_i is the average message of the benign nodes and Σ_a is a covariance matrix determined by the attacker.
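To make the definition concrete, a minimal simulation sketch (our naming; the isotropic covariance below is only an illustrative choice for the attacker-specified covariance matrix):

```python
import numpy as np

def gaussian_attack(benign_msgs, n_malicious, sigma, seed=None):
    """Generate Gaussian-attack messages for n_malicious nodes.

    benign_msgs: (n_benign x p) matrix of benign messages.
    Each malicious message is drawn from N(mean of benign messages, sigma^2 * I),
    an isotropic special case of the attacker-chosen covariance.
    """
    rng = np.random.default_rng(seed)
    mu = benign_msgs.mean(axis=0)   # benign average, one value per dimension
    noise = rng.normal(scale=sigma, size=(n_malicious, benign_msgs.shape[1]))
    return mu + noise
```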

Considering that M_ij → μ_j almost surely (a.s.) as N_i goes to infinity for all benign nodes i ∈ I_b based on Lemma 1, it is straightforward to see that μ̄_j → μ_j as well, and that the distribution of M_ij for each malicious node i ∈ I_m converges to a Gaussian distribution centered at μ_j. Lemma 2 provides the details.

Lemma 2.

Under the same assumptions as in Lemma 1, with the benign sample sizes going to infinity, we have for each malicious node i ∈ I_m and each parameter dimension j under the Gaussian attack that

M_ij converges in distribution to N(μ_j, Σ_a,jj),    (3)

where Σ_a,jj denotes the j-th diagonal entry of Σ_a.

Lemma 1 and Lemma 2 tell us that, for each parameter dimension j, the messages M_1j, …, M_nj are (approximately) independent Gaussian random variables with the same mean μ_j but different variances (σ_j^2/N_i for the benign nodes and Σ_a,jj for the malicious nodes) under the Gaussian attack. Due to the symmetry of the Gaussian distribution, it is straightforward to see that E(R_ij) = (n + 1)/2 for every node i.

Moreover, the exchangeability of the benign nodes and the exchangeability of the malicious nodes, when the benign sample sizes are reasonably large, tell us that for each parameter dimension j the variance of R_ij takes one value shared by all benign nodes and another value shared by all malicious nodes, where both values are complex functions of n_b, n_m, {σ_j^2/N_i} and Σ_a,jj. Further assume that the columns of R are independent of each other; then e_i is the average of p independent random variables with a common mean. Thus, according to the Kolmogorov Strong Law of Large Numbers (KSLLN), e_i converges to a constant almost surely, which in turn indicates that v_i also converges to some constant almost surely. Theorem 1 summarizes the results formally, with the detailed proof provided in Appendix C.

Theorem 1.

Assuming the columns of the ranking matrix R are independent of each other, under the Gaussian attack we have for each local node i that

e_i → (n + 1)/2 almost surely,    (4)
v_i → v_b · 1(i ∈ I_b) + v_m · 1(i ∈ I_m) almost surely,    (5)

where 1(·) stands for the indicator function and v_b, v_m are two positive constants determined by n_b, n_m, the benign sample sizes {N_i} and the attack covariance Σ_a.

Considering that v_b = v_m if and only if the attacker's variance parameters fall into a lower-dimensional manifold whose measure is zero under the Lebesgue measure, we have v_b ≠ v_m if the attacker specifies the Gaussian covariance Σ_a arbitrarily in the Gaussian attack. Thus, Theorem 1 in fact suggests that the benign nodes and the malicious nodes differ in the limiting value of v_i, and therefore provides a guideline to detect the malicious nodes. Although we do need the sample sizes N_i and the number of parameter dimensions p to go to infinity to obtain the theoretical results in Theorem 1, in practice the malicious node detection algorithm based on the theorem typically works very well when the N_i's and p are reasonably large and the N_i's are not dramatically far away from each other.

The independent rank assumption in Theorem 1, which assumes that the columns of R are independent of each other, may look restrictive. However, it is in fact a mild condition that can be easily satisfied in practice for the following reasons. First, for a benign node i, M_ij and M_ik are often nearly independent, as the correlation between two model parameters θ_j and θ_k is often very weak in a large deep neural network with a huge number of parameters. To verify the statement, we implemented independence tests for 100,000 column pairs randomly chosen from the message matrix M generated from the Fashion-MNIST data. The distribution of the p-values of these tests is demonstrated in Figure 2 via a histogram, which is very close to a uniform distribution, indicating that the columns of M are indeed nearly independent in practice. Second, even if some pairs of columns show strong correlation, the magnitude of the correlation would be greatly reduced during the transformation from M to R, as the final ranking also depends on many other factors.
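A sketch of this kind of check (our implementation choice; a rank-correlation test per randomly sampled column pair is one reasonable way to carry out the independence tests mentioned above):

```python
import numpy as np
from scipy.stats import spearmanr

def column_pair_pvalues(M, n_pairs=100_000, seed=None):
    """p-values of rank-correlation tests on randomly chosen column pairs of M.

    Under (near) independence the p-values should look roughly uniform on [0, 1].
    """
    rng = np.random.default_rng(seed)
    p = M.shape[1]
    pvals = np.empty(n_pairs)
    for t in range(n_pairs):
        j, k = rng.choice(p, size=2, replace=False)
        _, pvals[t] = spearmanr(M[:, j], M[:, k])
    return pvals

# A histogram of the returned p-values that is close to uniform supports the
# independence assumption, as in Figure 2.
```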

Figure 2: Independence tests for 100,000 column pairs randomly chosen from the message matrix generated from the Fashion-MNIST data support the independence assumption made in Theorem 1.

2.4 Malicious node detection for sign flipping attack

Definition 2 (Sign flipping attack).

The sign flipping attack aims to generate the gradient values of the malicious nodes by flipping the sign of the average of all the benign nodes' gradients at each epoch, i.e., specifying M_i = −ε · (1/n_b) ∑_{i' ∈ I_b} M_{i'} for any i ∈ I_m, where ε > 0.
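A minimal sketch of this attack under the notation above (our naming; epsilon is the positive scaling factor in the definition):

```python
import numpy as np

def sign_flipping_attack(benign_msgs, n_malicious, epsilon):
    """Every malicious node sends the negated (and scaled) mean of the benign messages."""
    flipped = -epsilon * benign_msgs.mean(axis=0)
    return np.tile(flipped, (n_malicious, 1))
```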

Based on the above definition, the update message of a malicious node i ∈ I_m under the sign flipping attack is

M_ij = −ε · (1/n_b) ∑_{i' ∈ I_b} M_{i'j}    (6)

for each parameter dimension j. For a fixed set of benign messages, M_i is also a fixed vector without randomness, as it is a deterministic function of the benign messages. On the other hand, however, we can also treat M_i as a random vector, since the randomness of the benign messages is transferred to M_i via the link function in Equation 6. In fact, for any parameter dimension j, considering that M_{i'j} is approximately Gaussian for any benign node i' according to Lemma 1, it is straightforward to see that M_ij can also be well approximated by a Gaussian distribution. Lemma 3 summarizes the result formally.

Lemma 3.

Under the sign flipping attack, for each malicious node i ∈ I_m and any parameter dimension j, M_ij is a deterministic function of the benign messages {M_{i'j} : i' ∈ I_b}, whose distribution, as the benign sample sizes grow large, is well approximated by

M_ij ≈ N(−ε μ_j, ε^2 σ_j^2 / (n_b Ñ)),    (7)

where Ñ is the harmonic mean of the benign sample sizes {N_{i'} : i' ∈ I_b}.

Lemma 1 and Lemma 3 tell us that, for each parameter dimension j, the distribution of the messages {M_ij}_{i=1}^n is a mixture of Gaussian components centered at μ_j plus a point mass located at −ε μ_j. If the N_i's are reasonably large, the variances σ_j^2/N_i would be very close to zero, and the probability mass of the mixture distribution would concentrate at two local centers, μ_j and −ε μ_j, one for the benign nodes and the other one for the malicious nodes. This intuition provides us the guidance to identify the malicious nodes under this attack pattern. Transforming M to the rank domain, the above intuition leads to different behaviour patterns of the benign nodes and the malicious nodes in the rank matrix R, which in turn result in different limiting behaviours of (e_i, v_i) for the benign and malicious nodes. Theorem 2 summarizes the results formally, with the detailed proof provided in Appendix D.

Theorem 2.

With the same independent rank assumption as posed in Theorem 1, under the sign flipping attack, we have for each local node i that

e_i → e_b · 1(i ∈ I_b) + e_m · 1(i ∈ I_m) almost surely,    (8)
v_i → v_b · 1(i ∈ I_b) + v_m · 1(i ∈ I_m) almost surely,    (9)

where e_b, e_m, v_b and v_m are limiting constants; the latter two are quadratic functions whose concrete forms depend on n_b and n_m.

Considering that the limits for the benign nodes and the malicious nodes coincide only in degenerate cases (for instance, only when n_m happens to be the solution of a specific quadratic equation), the probability of such a coincidence is zero. Such a phenomenon suggests that we can detect the malicious nodes based on the moments (e_i, v_i) to defend against the sign flipping attack as well. Noticeably, the limiting behaviour of e_i and v_i does not depend on the specification of ε, which defines the sign flipping attack. Although such a fact looks a bit abnormal at first glance, it is totally understandable once we realize that, with the variance of M_ij shrinking to zero as N_i goes to infinity for each benign node i, any difference between μ_j and −ε μ_j would result in the same rank vector in the rank domain.

2.5 Malicious node detection for zero gradient attack

Definition 3 (Zero gradient attack).

The zero gradient attack aims to make the aggregated message be zero, i.e., ∑_{i=1}^n M_i = 0, at each epoch, by specifying M_i = −(1/n_m) ∑_{i' ∈ I_b} M_{i'} for all i ∈ I_m.

Apparently, the zero gradient attack defined above is a special case of the sign flipping attack obtained by specifying ε = n_b / n_m. Since the conclusions of Theorem 2 remain unchanged for different specifications of ε, as we have discussed, we have the following corollary for the zero gradient attack.
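Under this reading, a minimal sketch (our naming) makes the cancellation explicit:

```python
import numpy as np

def zero_gradient_attack(benign_msgs, n_malicious):
    """Malicious messages chosen so that the plain average over all nodes is zero."""
    n_benign = benign_msgs.shape[0]
    # Equivalent to the sign flipping attack with epsilon = n_benign / n_malicious.
    flipped = -(n_benign / n_malicious) * benign_msgs.mean(axis=0)
    return np.tile(flipped, (n_malicious, 1))

# Sanity check of the cancellation:
# all_msgs = np.vstack([benign_msgs, zero_gradient_attack(benign_msgs, n_malicious)])
# np.allclose(all_msgs.mean(axis=0), 0.0)  # -> True
```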

Corollary 1.

Under the zero gradient attack, the e_i's and v_i's follow exactly the same limiting behaviours as described in Theorem 2.

2.6 MANDERA

Theorems 1 and 2 and Corollary 1 imply that, under these three attacks (Gaussian attack, zero gradient attack and sign flipping attack), the first two moments of node i's rank vector, i.e., (e_i, v_i), converge to two different limits for the benign nodes and the malicious nodes, respectively. Thus, for a real dataset where the N_i's and p are all finite but reasonably large, the scatter plot of the (e_i, v_i)'s would demonstrate a clustering structure: one cluster for the benign nodes and the other cluster for the malicious nodes. Figure 3 illustrates such a scatter plot for the 100 local nodes in a typical epoch of training on the Fashion-MNIST dataset under different FL settings (to keep the two dimensions of the scatter plot on the same scale, we replace v_i by its square root √v_i). Clearly, a simple clustering procedure would detect the malicious nodes from the scatter plot. Based on this intuition, we propose MAlicious Node DEtection via RAnking (MANDERA) to detect the malicious nodes, whose workflow is detailed in Algorithm 1.

Input: Message matrix M.
1:  Convert the message matrix M to the ranking matrix R by applying the Rank operator to each column.
2:  Compute the mean and standard deviation (SD) of each row of R, i.e., e_i and √v_i for i = 1, …, n;
3:  Run the K-means clustering algorithm on {(e_i, √v_i)}_{i=1}^n with K = 2, and denote the classification results as C.
Output: Classification C.
Algorithm 1 Malicious node detection via ranking (MANDERA)
Remark.

MANDERA can be applied to either a single epoch or multiple epochs. In single-epoch mode, the input data is the message matrix received from a single epoch. In multiple-epoch mode, the input is the column-wise concatenation of the message matrices from multiple epochs. By default, the experiments below all use a single epoch to detect the malicious nodes.
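A minimal Python sketch of Algorithm 1 (function and variable names are ours; the smaller cluster is flagged as malicious, consistent with the minority assumption discussed above and in Section 3.2):

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.cluster import KMeans

def mandera_detect(M):
    """Flag suspected malicious nodes from an (n_nodes x n_params) message matrix M.

    Returns a boolean array of length n_nodes (True = flagged as malicious).
    Sketch of Algorithm 1: rank transform, per-node rank moments, 2-means clustering.
    """
    # Step 1: rank each parameter dimension (column) across nodes, averaging ties.
    R = rankdata(M, method="average", axis=0)

    # Step 2: per-node mean and standard deviation of the rank vectors (rows of R).
    e = R.mean(axis=1)
    s = R.std(axis=1)
    features = np.column_stack([e, s])

    # Step 3: cluster the n nodes into two groups in the (mean, SD) plane.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

    # Treat the smaller cluster as the malicious group (malicious nodes assumed a minority).
    counts = np.bincount(labels, minlength=2)
    malicious_label = int(np.argmin(counts))
    return labels == malicious_label
```

In multiple-epoch mode, M would simply be the column-wise concatenation of the per-epoch message matrices before calling the function.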

3 Experiments

We evaluate the efficacy of detecting malicious nodes within the federated learning framework with the use of two datasets. The first is the Fashion-MNIST dataset (Xiao et al., 2017), a dataset of 60,000 training and 10,000 testing samples divided into 10 classes of apparel. The second is CIFAR-10 (Krizhevsky et al., 2009), a dataset of 60,000 small object images also containing 10 object classes. In these experiments we mainly adopt the implementations of the Byzantine attacks released by Wu et al. (2020b, a) and of the label flipping attack by Tolpegin et al. (2020b, a). In our experiments, we set the covariance matrix of the Gaussian attack to a multiple of the identity matrix and fix the scaling factor ε of the sign flipping attack. For all experiments we fix 100 participating nodes, of which a variable number are poisoned. The training process is run until 25 epochs have elapsed. We describe the structure of the networks in Appendix A.

3.1 Illustration of the average ranking and standard deviation of ranking

Figure 3: Scatter plots of (e_i, √v_i) for the 100 nodes under four types of attack, shown as illustrative examples of the ranking mean and variance from the 1st epoch of training on the Fashion-MNIST dataset.

Section 2 speculated that the distributions of the parameter ranks differ sufficiently to separate malicious from benign nodes. We validate this hypothesis in Figure 3 by illustrating the difference between the benign nodes and the malicious nodes in terms of the mean of the gradients' rankings and the standard deviation of the gradients' rankings.

It can be observed from Figure 3 that, under the Gaussian and label flipping attacks, the average rankings of the malicious nodes follow a distribution similar to that of the benign nodes, which makes it problematic to distinguish between the two types of nodes if only the average ranking information is used. On the other hand, Figure 3 displays a larger separation between the distributions of the standard deviation of the rankings. It is noted that under all four attacks the two distributions converge towards each other as the number of malicious nodes increases, increasing the difficulty of defense for both MANDERA and all other defenses. However, the likelihood of an attacker controlling increasingly large numbers of malicious nodes also decreases.

3.2 Malicious node detection by MANDERA

(a) CIFAR-10
(b) Fashion-MNIST
Figure 4: Classification performance of our proposed approach MANDERA (Algorithm 1) under four types of attack. GA: Gaussian attack; ZG: zero-gradient attack; SF: sign-flipping; LF: label-flipping. The boxplots bound the 25th (Q1) and 75th (Q3) percentiles, with the central line representing the 50th percentile (median). The whisker end points represent Q1 − 1.5(Q3 − Q1) and Q3 + 1.5(Q3 − Q1), respectively.

We test the performance of MANDERA on the update gradients of a model under attack. In this section, MANDERA acts as an observer that identifies malicious nodes from the set of gradients of a single epoch, without intervening in the learning process. Each configuration of 25 training epochs with a given number of malicious nodes was repeated 20 times. Figure 4 demonstrates the classification performance (metrics defined in Appendix B) of MANDERA for different numbers of participating malicious nodes and the four poisoning attacks: Gaussian attack (GA), zero gradient attack (ZG), sign flipping attack (SF) and label flipping attack (LF).

While we have formally demonstrated the efficacy of MANDERA in accurately detecting potentially malicious nodes participating in the federated learning process, in practice, to leverage an unsupervised K-means clustering algorithm, we must also identify which of the two groups is the malicious one. Our strategy is to identify the group containing the most exactly matching gradients, or otherwise the smaller group (we regard a system with over 50% of its nodes compromised as having larger issues than just poisoning attacks).[1]

[1] More informed approaches to selecting the malicious cluster can be tested in future work. For example, Figure 3 displays less variation of the rank variance within the malicious cluster than among the benign nodes. This could make the selection of the malicious group more robust, enabling selection of malicious groups comprising more than 50% of the nodes.
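One simple way to implement this selection rule (a sketch; the duplicate-gradient criterion is our reading of "the group with the most exactly matching gradients"):

```python
import numpy as np

def pick_malicious_cluster(M, labels):
    """Choose which of the two K-means clusters to treat as malicious.

    Prefer the cluster containing the most exactly duplicated gradient rows
    (Byzantine nodes often submit identical vectors); otherwise fall back to
    the smaller cluster.
    """
    dup_counts = []
    for c in (0, 1):
        rows = M[labels == c]
        n_unique = np.unique(rows, axis=0).shape[0]
        dup_counts.append(rows.shape[0] - n_unique)   # number of duplicated rows
    if dup_counts[0] != dup_counts[1]:
        return int(np.argmax(dup_counts))
    return int(np.argmin(np.bincount(labels, minlength=2)))  # smaller cluster
```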

From Figure 4, it is immediately evident that the recall of the malicious nodes under the Byzantine attacks is exceptional. However, benign nodes are occasionally misclassified as malicious under the SF attack and, to a lesser extent, the ZG attack for both datasets. For all attacks, the recall of malicious nodes trends down as the number of malicious nodes increases. The data poisoning attack LF is consistently more difficult to detect; however, we note that the LF attack also has a more subtle influence on the model than the Byzantine attacks.

3.3 MANDERA for defending against poisoning attacks

(a) CIFAR-10 Dataset
(b) Fashion-MNIST dataset
Figure 5: Model accuracy at each epoch of training; each curve represents a different defense against the poisoning attacks.

In this section, we encapsulate MANDERA into a module placed prior to the aggregation step; MANDERA has the sole objective of identifying malicious nodes and excluding their updates from the global aggregation step. Each configuration of 25 training epochs, a given poisoning attack, a defense method and a given number of malicious nodes was repeated 10 times. We compare MANDERA against four other robust aggregation defense methods: Krum (Blanchard et al., 2017), Bulyan (Guerraoui et al., 2018), trimmed mean (Yin et al., 2018) and median (Yin et al., 2018). The first two discard an assumed number of malicious nodes, whereas the latter two only aggregate robustly.
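For reference, minimal sketches of the two aggregation-only baselines, coordinate-wise trimmed mean and median, following the general recipe of Yin et al. (2018); the trim fraction below is an illustrative choice:

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median of an (n_nodes x n_params) update matrix."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim_frac=0.1):
    """Coordinate-wise trimmed mean: drop the largest and smallest trim_frac
    fraction of values in each coordinate, then average the rest."""
    n = updates.shape[0]
    k = int(np.floor(trim_frac * n))
    sorted_updates = np.sort(updates, axis=0)   # sort each coordinate across nodes
    kept = sorted_updates[k:n - k] if k > 0 else sorted_updates
    return kept.mean(axis=0)
```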

From Figure 5, it is observed that MANDERA performs about the same as the best performing defense mechanisms, close to the performance of a model not under attack. MANDERA's accuracy varies slightly under the LF attack on the Fashion-MNIST data with 30 malicious nodes; this is consistent with the larger accuracy ranges previously observed in Figure 3(b).

3.4 Computational Efficiency

We have previously observed that MANDERA performs on par with the current highest performing poisoning attack defenses. Another benefit arises from the simplification of the mitigation strategy through the introduction of ranking at the core of the algorithm: sorting and ranking algorithms are fast. Additionally, we only apply clustering on the two dimensions of rank mean and standard deviation, in contrast to other works that cluster on the entire node update (Chen et al., 2021). The times in Table 1 for MANDERA, Krum and Bulyan do not include the parameter/gradient aggregation step. These times were computed on one core of a dual Xeon 14-core E5-2690, with 8 GB of system RAM and a single NVIDIA Tesla P100. Table 1 demonstrates that MANDERA achieves a faster speed than single Krum (taking less than half the time)[2] and Bulyan (by an order of magnitude).

[2] The use of multi-Krum would have yielded better protection (cf. Section 3) at the expense of speed.

Defense (Detection) Mean ± SD (ms) Defense (Aggregation) Mean ± SD (ms)
MANDERA  643    ±  8.646 Trimmed Mean 3.96 ± 0.41
Krum (Single) 1352   ±  10.09 Median 9.81 ± 3.88
Bulyan 27209 ±  233.4
Table 1: Mean and standard deviation of computation times for each defense function, given the same set of gradients from 100 nodes, of which 30 were malicious. Each function was repeated 100 times.

4 Discussion and Conclusion

If attackers create more adaptive attacks unlike Definitions 1, 2 and 3, they may evade MANDERA and achieve model poisoning. In this work, we have configured our federated learner to use all 100 nodes in the learning process at every round; we acknowledge that an FL framework may learn the global model using only a subset of nodes at each round. In these settings MANDERA would still function, as we would rank and cluster the parameters of the participating nodes without assuming any number of poisoned nodes. The performance of Algorithm 1 could be improved by incorporating higher-order moments. There also exists the possibility of performing MANDERA in differentially private or secure FL with the use of private ranking algorithms. The effectiveness of MANDERA against more advanced poisoning techniques, such as adversarial poisoning or evasion attacks, remains to be seen.

In conclusion, we have provided theoretical guarantees and experimentally shown the efficacy of ranking algorithms for the detection of malicious nodes performing poisoning attacks against federated learning. Our proposed method, MANDERA, is able to achieve high detection accuracy and maintain a model accuracy on par with other seminal, high performing defense mechanisms, with three notable advantages. First, it offers provable guarantees for the use of ranking to detect the Gaussian, zero gradient and sign flipping attacks. Second, detection is faster thanks to the use of ranking algorithms. Finally, the MANDERA defense does not need a prior estimate of the number of poisoned nodes. In this work we demonstrate how the rank domain can be useful in defending against malicious actors.

Ethics Statement

The core objective of our research is to provide an additional means of defense against poisoning nodes that target federated learning. To test our defense, we have implemented different attacks against the federated learning framework. Attackers may adopt our defense strategy to design new poisoning attacks. Fortunately, these poisoning attacks cannot be leveraged to leak private information from federated learning models; they only impact model performance.

Reproducibility Statement

To ensure reproducible research, we have supplemented our proposal for MANDERA by supplying both the R and Python implementations of MANDERA used in this paper, uploaded with the remainder of the experiment code. The two datasets featured in this paper are CIFAR-10 (Krizhevsky et al., 2009) and Fashion-MNIST (Xiao et al., 2017); we have used each of these datasets unaltered from their respective sources. We have stated the assumptions in our theorems, and their proofs can be found in the Appendix. To explain our assumptions in simple terms: (1) the data samples on each local node are independently drawn from the same distribution; (2) the gradient values of different parameters are independent of each other.

References

  • E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov (2020). How to backdoor federated learning. In International Conference on Artificial Intelligence and Statistics, pp. 2938–2948.
  • P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer (2017). Machine learning with adversaries: Byzantine tolerant gradient descent. In Advances in Neural Information Processing Systems, Vol. 30.
  • X. Cao, J. Jia, and N. Z. Gong (2021). Provably secure federated learning against malicious clients. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, pp. 6885–6893.
  • Y. Chen, L. Su, and J. Xu (2017). Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Proc. ACM Meas. Anal. Comput. Syst. 1(2).
  • Z. Chen, P. Tian, W. Liao, and W. Yu (2021). Zero knowledge clustering based adversarial mitigation in heterogeneous federated learning. IEEE Transactions on Network Science and Engineering 8(2), pp. 1070–1083.
  • M. Fang, X. Cao, J. Jia, and N. Gong (2020). Local model poisoning attacks to Byzantine-robust federated learning. In 29th USENIX Security Symposium (USENIX Security 20), pp. 1605–1622.
  • R. Guerraoui, S. Rouault, et al. (2018). The hidden vulnerability of distributed learning in Byzantium. In International Conference on Machine Learning, pp. 3521–3530.
  • A. Krizhevsky, G. Hinton, et al. (2009). Learning multiple layers of features from tiny images.
  • S. Li, Y. Cheng, W. Wang, Y. Liu, and T. Chen (2020). Learning to detect malicious clients for robust federated learning. arXiv preprint arXiv:2002.00211.
  • J. So, B. Güler, and A. S. Avestimehr (2021). Byzantine-resilient secure federated learning. IEEE Journal on Selected Areas in Communications 39(7), pp. 2168–2181.
  • V. Tolpegin, S. Truex, M. E. Gursoy, and L. Liu (2020a). Data poisoning attacks against federated learning systems (GitHub repository): https://github.com/git-disl/DataPoisoning_FL
  • V. Tolpegin, S. Truex, M. E. Gursoy, and L. Liu (2020b). Data poisoning attacks against federated learning systems. In European Symposium on Research in Computer Security, pp. 480–501.
  • Z. Wu, Q. Ling, T. Chen, and G. B. Giannakis (2020a). Byrd-SAGA (GitHub repository): https://github.com/MrFive5555/Byrd-SAGA
  • Z. Wu, Q. Ling, T. Chen, and G. B. Giannakis (2020b). Federated variance-reduced stochastic gradient descent with robustness to Byzantine attacks. IEEE Transactions on Signal Processing 68, pp. 4583–4596.
  • H. Xiao, K. Rasul, and R. Vollgraf (2017). Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.
  • C. Xie, S. Koyejo, and I. Gupta (2019). Zeno: distributed stochastic gradient descent with suspicion-based fault-tolerance. In International Conference on Machine Learning, pp. 6893–6901.
  • C. Xie, S. Koyejo, and I. Gupta (2020). Zeno++: robust fully asynchronous SGD. In International Conference on Machine Learning, pp. 10495–10503.
  • D. Yin, Y. Chen, R. Kannan, and P. Bartlett (2018). Byzantine-robust distributed learning: towards optimal statistical rates. In International Conference on Machine Learning, pp. 5650–5659.

Appendix A Neural Network configurations

We train these models with a batch size of 10, an SGD optimizer with a learning rate of 0.01 and a momentum of 0.5, for 25 epochs. The accuracy of the model is evaluated on a holdout set of 1,000 samples. A code sketch of the two architectures is given after the layer lists below.

a.1 Fashion-MNIST

  • Layer 1: 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.

  • Layer 2: 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.

  • Output: 10 Classes, Linear.

a.2 CIFAR-10

  • Layer 1: 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.

  • Layer 2: 2D Convolution, Batch Normalization, ReLU Activation, Max pooling.

  • Output: 10 Classes, Linear.
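A minimal PyTorch sketch of the two-block architecture described above, with illustrative channel and kernel sizes (the exact values used in the paper are not reproduced here); it assumes 1x28x28 inputs for Fashion-MNIST and 3x32x32 inputs for CIFAR-10:

```python
import torch.nn as nn

def two_layer_cnn(in_channels, num_classes=10, c1=32, c2=64, feat_dim=7):
    """Two convolutional blocks (conv -> BN -> ReLU -> max pool) plus a linear output.

    c1, c2 and feat_dim are illustrative; feat_dim is the spatial size after the
    two 2x2 poolings (7 for 28x28 inputs, 8 for 32x32 inputs).
    """
    return nn.Sequential(
        nn.Conv2d(in_channels, c1, kernel_size=3, padding=1),
        nn.BatchNorm2d(c1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(c1, c2, kernel_size=3, padding=1),
        nn.BatchNorm2d(c2),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(c2 * feat_dim * feat_dim, num_classes),
    )

# Usage (assumed input shapes): two_layer_cnn(1, feat_dim=7) for Fashion-MNIST,
# two_layer_cnn(3, feat_dim=8) for CIFAR-10.
```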

Appendix B Metrics

The metrics observed in Section 3 to evaluate the performance of the defense mechanisms are defined as follows:
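We take these to be the standard classification metrics, treating the malicious nodes as the positive class, so that TP, FP, TN and FN respectively count malicious nodes correctly flagged, benign nodes incorrectly flagged, benign nodes correctly passed through, and malicious nodes missed:

Precision = TP / (TP + FP),    Recall = TP / (TP + FN),
Accuracy = (TP + TN) / (TP + TN + FP + FN),    F1 = 2 · Precision · Recall / (Precision + Recall).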

Appendix C Proof of Theorem 1

Proof.

Because the ranks R_i1, …, R_ip are independent random variables with a finite upper bound (since n is fixed) as assumed, direct application of the KSLLN leads to

(10)
(11)

To prove Theorem 1 based on Equations 10 and 11, we need to derive the concrete forms of the expectation and variance of the ranks R_ij.

Fortunately, because the benign messages and the malicious messages are (approximately) Gaussian with the common mean μ_j when the benign sample sizes are large, it is straightforward to see, due to the symmetry of the Gaussian distribution, that

(12)

Moreover, assuming that the sample sizes of the different benign nodes approach each other as they go to infinity, i.e.,

(13)

for each parameter dimension j, the messages of the benign nodes would converge to the same Gaussian distribution as the sample sizes increase. Thus, due to the exchangeability of the benign nodes and of the malicious nodes, it is easy to see that there exist two positive constants c_b and c_m such that

(14)

where c_b and c_m are both complex functions of n_b, n_m and the variance parameters of the benign and malicious messages, and c_b = c_m only for degenerate specifications by the attacker.

Combining Equations 10 and 12, we obtain Equation 4, which further indicates that the benign nodes and the malicious nodes share the same limit of e_i when both the sample sizes and p go to infinity. Thus, we have

(15)

Combining Equations 11, 14 and 15, we obtain Equation 5. Thus, the proof is complete. ∎

Appendix D Proof of Theorem 2

Proof.

It is straightforward to see that Equation 10 also holds for the sign flipping attack under the assumptions of Theorem 2. However, we need to re-calculate the expectation and variance of the ranks for the benign and the malicious nodes under the new setting.

Under the sign flipping attack, because the benign messages concentrate around μ_j and the malicious messages concentrate around −ε μ_j when the benign sample sizes are large,

it is straightforward to see that

which further indicates that

(16)
(17)

where .

Combining Equations 10 and 16, we obtain Equation 8.

Based on the KSLLN, we further have:

As we have proved in Equation 8 that

we have

which implies that

Considering that

where

we have

This completes the proof of Equation 9. ∎

Appendix E Model Performance

Figure 6 presents the model loss to accompany the model prediction performance of Figure 5 previously seen in Section 3.

(a) CIFAR-10
(b) Fashion-MNIST
Figure 6: Model loss at each epoch of training; each curve represents a different defense against the attacks (GA: Gaussian attack; ZG: zero-gradient attack; SF: sign-flipping; LF: label-flipping).