# Decentralized Bayesian Learning over Graphs

We propose a decentralized learning algorithm over a general social network. The algorithm leaves the training data distributed on the mobile devices while utilizing a peer-to-peer model aggregation method. It allows agents with local data to learn a shared model explaining the global training data in a decentralized fashion. The proposed algorithm can be viewed as a Bayesian and peer-to-peer variant of federated learning, in which each agent keeps a "posterior probability distribution" over the global model parameters. Each agent updates its "posterior" based on 1) its local training data and 2) asynchronous communication and model aggregation with its 1-hop neighbors. This Bayesian formulation allows for a systematic treatment of model aggregation over any arbitrary connected graph. Furthermore, it provides strong analytic guarantees on convergence in the realizable case, as well as a closed-form characterization of the rate of convergence. We also show that our methodology can be combined with efficient Bayesian inference techniques to train Bayesian neural networks in a decentralized manner. Through empirical studies we show that our theoretical analysis can guide the design of network/social interactions and data partitioning to achieve convergence.



## 1 Introduction

Personal edge devices can often use their locally observed data to learn machine learning models that improve the user experience on the device as well as on other devices. However, the use of local data for learning globally rich machine learning models has to address two important challenges. Firstly, this type of localized data, in isolation from the data collected by other devices, is unlikely to be statistically sufficient to learn a global model. Secondly, there might be severe restrictions on sharing raw forms of personal/local data due to privacy and communication cost concerns. In light of these challenges and restrictions, an alternative approach has emerged which leaves the training data distributed on the edge devices while enabling the decentralized learning of a shared model. This alternative, known as *Federated Learning*, is based on edge devices' periodic communication with a central (cloud-based) server responsible for iterative model aggregation. While addressing the privacy constraints on raw data sharing, and significantly reducing the communication overhead as compared to synchronized stochastic gradient descent (SGD), this approach falls short of fully decentralizing the training procedure. Many practical peer-to-peer networks are dynamic, and regular access to a fixed central server, which coordinates the learning across devices, is not always possible. Existing methods based on federated learning cannot handle such general networks where a central server is absent. To summarize, some of the major challenges encountered in a fully decentralized learning paradigm are:

(i) Statistical Insufficiency: The local and individually observed data distributions are likely to be less rich than the global training set. For example, a subset of features associated with the global model may be missing locally. (ii) Restriction on Data Exchange: Due to privacy concerns, agents do not share their raw training data with their neighbors. Furthermore, model parameter sharing has been shown to reduce the communication requirements significantly. (iii) Lack of Synchronization: There may not be a single agent with whom every agent communicates and which can synchronize the learning periodically. (iv) Localized Information Exchange: Agents are likely to limit their interactions and information exchange to a small group of their peers, which can be viewed as the 1-hop neighbors on the social network graph. Furthermore, information obtained from different peers might be viewed differently, requiring a heterogeneous model aggregation strategy.

Contributions: We consider a fully decentralized learning paradigm where agents iteratively update their models using local data and aggregate information from their neighbors into their local models. In particular, we consider a learning rule where agents take a Bayesian-like approach via the introduction of a posterior distribution over a parameter space characterizing the unknown global model. Our theoretical and conceptual contributions are as follows: (i) Our decentralized learning rule generalizes a learning rule considered in the social learning literature 7172262 ; 7349151 ; 8359193 by restricting the posterior distribution to a predetermined family of distributions for computational tractability. (ii) We provide a theoretical guarantee that each agent will eventually learn the true parameters associated with the global model under mild assumptions. (iii) We provide an analytical characterization of the rate of convergence of the posterior probability at each agent in the network as a function of network structure and local learning capacity. (iv) Unlike prior work, we allow a fully general network structure as long as it is strongly connected. As a consequence, our work provides the first known theoretical guarantees on convergence for a Bayesian variant of federated learning.

In addition to our theoretical results, we show that our methodology can be combined with efficient Bayesian inference techniques to train Bayesian neural networks in a decentralized manner. Through empirical studies we show that our theoretical analysis can guide the design of network/social interactions and data partitioning to achieve convergence. We also demonstrate the scalability of our method by training over 100 neural networks on asynchronous time-varying networks. Our Bayesian approach has the added advantage of providing confidence values over agents' predictions, and can directly benefit from the Bayesian learning literature, which shows that these models offer robustness to over-fitting, regularization of the weights, uncertainty/confidence estimation, and can easily learn from small datasets gal2016uncertainty ; local_repara_trick_kingma . In this regard, our work bridges the gap between decentralized training methodologies and Bayesian neural networks.

Related Work: Our fully decentralized training methodology extends federated learning DBLP:journals/corr/KonecnyMRR16 ; konevcny2016federated ; mcmahan2017communication to general graphs in a Bayesian setting and does away with the need for a centralized controller. In particular, our learning rule also generalizes various Bayesian inference techniques such as vcl_iclr ; weight_uncert_NN ; streaming_var_bayes ; local_repara_trick_kingma and variational continual learning techniques such as vcl_iclr ; streaming_var_bayes . Lastly, our work can be viewed as a Bayesian variant of communication-efficient methods based on SGD JMLR:v18:16-512 ; DBLP:journals/corr/ChaudhariBZST17 ; DBLP:journals/corr/abs-1808-07217 , which also allow the agents to make several local computations and then periodically average the local models. This is unlike decentralized optimization and SGD-based methods Duchi2012DualAF ; 6425904 ; pmlr-v80-tang18a ; doi:10.1137/16M1080173 ; NIPS2017_7117 ; NIPS2017_7172 ; DBLP:journals/corr/JinYIK16 , where local (stochastic) gradients are computed for each instance of data and communication happens at a rate comparable to the number of local updates. For a detailed overview of communication-efficient SGD methods contrasted with decentralized optimization methods, refer to DBLP:journals/corr/abs-1808-07576 .

Notation: We use boldface for vectors and denote the $k$-th element of a vector $\mathbf{v}$ by $v_k$. Let $[N] := \{1, \ldots, N\}$. Let $P(\Theta)$ and $|\Theta|$ denote the set of all probability distributions and the number of elements, respectively, on a set $\Theta$. Let $G(\cdot, \mu, \sigma^2)$ denote the pdf of a Gaussian random variable with mean $\mu$ and variance $\sigma^2$. Let $D_{KL}(P \| Q)$ be the Kullback–Leibler (KL) divergence between two probability distributions $P$ and $Q$.

## 2 Problem Formulation

##### The Model:

Let $\mathcal{X}$ denote the global input space and let $\mathcal{Y}$ denote the set of all possible labels. The global dataset has input-label pairs belonging to $\mathcal{X} \times \mathcal{Y}$ which are distributed as $P_{XY} = P_X P_{Y|X}$. Consider a group of $N$ individual agents, where each agent $i$ has access to input-label pairs taken from a subset $\mathcal{X}_i \times \mathcal{Y}_i$ such that $\bigcup_{i=1}^N \mathcal{X}_i \times \mathcal{Y}_i = \mathcal{X} \times \mathcal{Y}$. The samples are independent and identically distributed (i.i.d.), and are generated according to the distribution $P_{XY}$. Furthermore, we assume that each agent $i$ has a set of candidate local likelihood functions over the label space which are parametrized by $\theta \in \Theta$ and given by $\ell_i(\cdot \mid \theta, x)$. Each agent is aiming to learn a distribution over $\Theta$ which achieves the following:

$$\inf_{\pi \in P(\Theta)} \mathbb{E}_{P_X}\left[ D_{KL}\left( P_{Y|X}(\cdot \mid X) \,\Big\|\, \int_\Theta \ell_i(\cdot \mid \theta, X)\, \pi(\theta)\, d\theta \right) \right]. \tag{1}$$

Note that for any input $x$, the distribution $\int_\Theta \ell_i(\cdot \mid \theta, x)\, \pi(\theta)\, d\theta$ denotes a predictive distribution over the label space $\mathcal{Y}$. Minimizing the objective in equation (1) ensures that each agent makes statistically similar predictions as the true labelling function over the global dataset.
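To make the objective concrete, here is a minimal numeric sketch of equation (1) for finite input, label and parameter spaces; the arrays and toy distributions below are illustrative assumptions, not part of the paper's setup.

```python
import numpy as np

def objective(P_X, P_Y_given_X, likelihoods, pi):
    """Evaluate the objective in equation (1) for finite spaces.

    P_X:          (|X|,) input marginal
    P_Y_given_X:  (|X|, |Y|) true labelling distribution
    likelihoods:  (|Theta|, |X|, |Y|) local likelihoods l_i(y | theta, x)
    pi:           (|Theta|,) distribution over parameters
    """
    # Predictive distribution: sum over Theta of l_i(. | theta, x) pi(theta)
    pred = np.einsum("t,txy->xy", pi, likelihoods)
    # KL( P_{Y|X}(.|x) || pred(.|x) ), then average over P_X
    kl = np.sum(P_Y_given_X * np.log(P_Y_given_X / pred), axis=1)
    return float(P_X @ kl)

# Toy realizable example: theta = 0 reproduces the true labelling function,
# so the point mass on it drives the objective to zero.
P_X = np.array([0.5, 0.5])
P_Y_given_X = np.array([[0.9, 0.1], [0.2, 0.8]])
likelihoods = np.stack([P_Y_given_X,                          # theta = 0 (true)
                        np.array([[0.5, 0.5], [0.5, 0.5]])])  # theta = 1
point_mass = np.array([1.0, 0.0])
print(objective(P_X, P_Y_given_X, likelihoods, point_mass))  # → 0.0
```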

###### Definition 1.

A social learning model is said to be *realizable* if there exists a $\theta^* \in \Theta$ such that $\ell_i(\cdot \mid \theta^*, x) = P_{Y|X}(\cdot \mid x)$ for all $i \in [N]$.

We note that, in the realizable case, the minimizer of equation (1) is the trivial distribution which takes value one at $\theta^*$ and zero elsewhere. In other words, in the realizable case, each agent's goal is to learn the true model parameter $\theta^*$.

###### Definition 2.

If $P_{X_i} = P_{X_j}$ for all agents $i, j \in [N]$, then all agents have identically distributed observations across the network. We refer to this as the *IID data distribution* setting. In contrast, we say the local data has a *non-IID data distribution* when there exist agents $i, j$ for which $P_{X_i} \neq P_{X_j}$.

###### Example 1 (Decentralized Linear Regression with non-IID Data Distribution).

Let $\mathcal{X} = \mathbb{R}^d$ and $\mathcal{Y} = \mathbb{R}$. Consider a linear realizable model: there exists a $\theta^* \in \mathbb{R}^d$ such that, for a data input $\mathbf{x}$, the label is given as $y = \theta^{*T} \phi(\mathbf{x}) + \varepsilon$, where the basis function $\phi$ provides the feature vector and $\varepsilon$ denotes additive Gaussian noise $\mathcal{N}(0, \sigma^2)$. This implies that the true probabilistic model generating the labels, as well as the local likelihood function at any agent $i$ given an input $\mathbf{x}$, is given by $\ell_i(y \mid \theta, \mathbf{x}) = G(y, \theta^T \phi(\mathbf{x}), \sigma^2)$. Now we consider a non-IID data distribution. Fix some $d_1 \in [d]$ and partition the coordinates into $S_1 = \{1, \ldots, d_1\}$ and $S_2 = \{d_1+1, \ldots, d\}$. Suppose that agent 1 makes observations that excite only the features in $S_1$, i.e., it can access only those features locally. Similarly, agent 2's observations excite only the features in $S_2$, i.e., the remaining features. It is clear that the local features at each agent are such that the true parameter $\theta^*$ cannot be locally learned, and there is a need for communication and model aggregation.
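The feature-split construction above can be sketched as follows; the dimension, noise level, seed and coordinate assignments are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, noise_std = 4, 0.1
theta_star = rng.normal(size=d)          # true global parameter

def observe(agent_coords, n):
    """Draw n samples whose inputs excite only the coordinates the agent
    can access locally; all other features are zero."""
    X = np.zeros((n, d))
    X[:, agent_coords] = rng.normal(size=(n, len(agent_coords)))
    y = X @ theta_star + noise_std * rng.normal(size=n)
    return X, y

# Agent 1 sees only the first two features, agent 2 only the last two:
X1, y1 = observe([0, 1], n=100)
X2, y2 = observe([2, 3], n=100)
# Neither agent's local design matrix has full column rank, so theta_star
# is not locally identifiable: communication is necessary.
print(np.linalg.matrix_rank(X1), np.linalg.matrix_rank(X2))  # → 2 2
```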

###### Example 2 (Decentralized Image Classification using Deep Neural Networks).

Consider the problem of learning a neural network which can approximate the input-label probabilistic model, with distribution $P_{Y|X}(\cdot \mid \mathbf{x})$ over the label space for each input image $\mathbf{x}$. In this setting, the local likelihood function at any agent $i$, given that an image $\mathbf{x}$ was observed, conditioned on the DNN weights $\theta$, is obtained as $\ell_i(y \mid \theta, \mathbf{x}) = f_\theta(\mathbf{x})[y]$, where $f_\theta(\mathbf{x})[y]$ denotes the value of the output layer of the neural network at label $y$.

##### The Communication Network:

We model the communication network between agents via a directed graph with vertex set $[N]$. We define the neighborhood of agent $i$, denoted by $\mathcal{N}(i)$, as the set of all agents $j$ who have an edge going from $j$ to $i$. We assume $i \in \mathcal{N}(i)$. Furthermore, if agent $j \in \mathcal{N}(i)$, agent $i$ receives information from agent $j$. The social interaction of the agents is characterized by a stochastic matrix $W$. The weight $W_{ij}$ is strictly positive if and only if $j \in \mathcal{N}(i)$, and $\sum_{j=1}^N W_{ij} = 1$. The weight $W_{ij}$ denotes the confidence agent $i$ has in the information it receives from agent $j$.
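As a small illustration of such a matrix, the sketch below builds a row-stochastic $W$ from a directed adjacency structure; the equal-split self-weight rule is an assumption of this sketch, not the paper's prescription.

```python
import numpy as np

def interaction_matrix(adj, self_weight=0.5):
    """Build a row-stochastic social interaction matrix W from a directed
    adjacency matrix: each agent keeps `self_weight` confidence on itself
    and splits the rest equally among its in-neighbors."""
    N = adj.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        nbrs = np.flatnonzero(adj[i])
        W[i, i] = self_weight if len(nbrs) else 1.0
        if len(nbrs):
            W[i, nbrs] = (1.0 - self_weight) / len(nbrs)
    return W

# 3-agent directed ring: each agent hears from exactly one neighbor.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [1, 0, 0]])
W = interaction_matrix(adj)
print(W.sum(axis=1))  # rows sum to one → [1. 1. 1.]
```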

### 2.1 Decentralized Learning Rule

We introduce a decentralized learning rule which generalizes a learning rule considered in the social learning literature 7172262 ; 7349151 ; 8359193 . However, we restrict local posterior distributions to a predetermined family of distributions. This allows us to implement the decentralized algorithm in a computationally tractable manner. Let $Q$ denote a family of posterior distributions. Start with priors $q_i^{(0)} \in Q$ with $q_i^{(0)}(\theta) > 0$ for all $\theta \in \Theta$ and $i \in [N]$. At each time step $n \geq 1$, the following events happen at every agent $i \in [N]$:

1. Draw a batch of i.i.d. samples $(X_i^{(n)}, Y_i^{(n)})$.

2. Local Bayesian Update of Posterior: Perform a local Bayesian update on $q_i^{(n-1)}$ to form the public posterior $b_i^{(n)}$ using the following rule. For each $\theta \in \Theta$,

$$b_i^{(n)}(\theta) = \frac{\ell_i(Y_i^{(n)} \mid \theta, X_i^{(n)})\, q_i^{(n-1)}(\theta)}{\int_\Theta \ell_i(Y_i^{(n)} \mid \phi, X_i^{(n)})\, q_i^{(n-1)}(\phi)\, d\phi}. \tag{2}$$
3. Projection onto Allowed Family of Posteriors: Project $b_i^{(n)}$ onto the allowed family of posterior distributions $Q$ by employing KL-divergence minimization,

$$\Pi_Q(b_i^{(n)}) = \underset{\pi \in Q}{\arg\min}\; D_{KL}\left(\pi \,\big\|\, b_i^{(n)}\right). \tag{3}$$
4. Communication Step: Agent $i$ sends $\Pi_Q(b_i^{(n)})$ to each agent $j$ with $i \in \mathcal{N}(j)$ and receives $\Pi_Q(b_j^{(n)})$ from its neighbors $j \in \mathcal{N}(i)$.

5. Consensus Step: Update the private posterior distribution $q_i^{(n)}$ by averaging the log posterior distributions received from neighbors, i.e., for each $\theta \in \Theta$,

$$q_i^{(n)}(\theta) = \frac{\exp\left(\sum_{j=1}^N W_{ij} \log \Pi_Q(b_j^{(n)})(\theta)\right)}{\int_\Theta \exp\left(\sum_{j=1}^N W_{ij} \log \Pi_Q(b_j^{(n)})(\phi)\right) d\phi}. \tag{4}$$
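The five steps above can be sketched for a finite parameter set as follows, taking $Q$ to be the set of all distributions on $\Theta$ so that the projection step (3) is the identity; the two-agent setup and likelihood values are illustrative assumptions.

```python
import numpy as np

def decentralized_step(q, W, log_lik):
    """One round of the learning rule over a finite parameter set Theta.

    q:       (N, |Theta|) current private posteriors q_i^{(n-1)}
    W:       (N, N) row-stochastic interaction matrix
    log_lik: (N, |Theta|) log-likelihood of each agent's fresh batch
    """
    # Step 2: local Bayesian update (2), in log space for stability.
    log_b = np.log(q) + log_lik
    log_b -= np.logaddexp.reduce(log_b, axis=1, keepdims=True)
    # Step 5: consensus (4), a weighted geometric average of posteriors.
    log_q = W @ log_b
    log_q -= np.logaddexp.reduce(log_q, axis=1, keepdims=True)
    return np.exp(log_q)

# Two agents, two parameters; agent 0's data favours theta = 0, agent 1's
# data is uninformative. Under consensus, both posteriors concentrate.
W = np.array([[0.5, 0.5], [0.5, 0.5]])
q = np.full((2, 2), 0.5)
for _ in range(50):
    log_lik = np.array([[np.log(0.9), np.log(0.1)], [0.0, 0.0]])
    q = decentralized_step(q, W, log_lik)
print(q[:, 0])  # both agents' mass on theta = 0 approaches 1
```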
###### Remark 1 (Variational Inference).

In the above learning rule, the local Bayesian update of the posterior, step (2), can be combined with the projection onto the allowed family of distributions, step (3), as follows:

$$b_i^{(n)} = \underset{\pi \in Q}{\arg\min}\; D_{KL}\left(\pi \,\Big\|\, \frac{1}{Z_i^{(n)}}\, \ell_i(Y_i^{(n)} \mid \cdot, X_i^{(n)})\, q_i^{(n-1)}(\cdot)\right) \tag{5}$$

$$= \underset{\pi \in Q}{\arg\min}\; D_{KL}\left(\pi \,\big\|\, q_i^{(n-1)}\right) + \mathbb{E}_\pi\left[-\log \ell_i(Y_i^{(n)} \mid \cdot, X_i^{(n)})\right], \tag{6}$$

where $Z_i^{(n)}$ is the possibly intractable normalization constant. The minimization performed in equation (6) is referred to as Variational Inference (VI), and the objective being minimized is referred to as the variational free energy murphy_ml_book ; weight_uncert_NN ; local_repara_trick_kingma ; gal2016uncertainty .

###### Remark 2 (Gaussian Approximate Posterior).

A Gaussian approximate posterior can be obtained in a computationally efficient manner via VI techniques weight_uncert_NN ; local_repara_trick_kingma . More specifically, let $Q$ denote the family of Gaussian posterior distributions with pdf given by $G(\theta, \mu, \Sigma)$. Let $\mu_i^{(n)}$ and $\Sigma_i^{(n)}$ denote the mean and the covariance matrix of $\Pi_Q(b_i^{(n)})$ at agent $i$ at time $n$, obtained using equation (6). Then we can show that the posterior distribution $q_i^{(n)}$ obtained after the consensus step also belongs to $Q$ for all $i \in [N]$. Furthermore, the mean $\tilde{\mu}_i^{(n)}$ and covariance matrix $\tilde{\Sigma}_i^{(n)}$ of $q_i^{(n)}$ are given as follows:

$$\tilde{\Sigma}_i^{(n)-1} = \sum_{j=1}^N W_{ij}\, \Sigma_j^{(n)-1}, \qquad \tilde{\mu}_i^{(n)} = \tilde{\Sigma}_i^{(n)} \sum_{j=1}^N W_{ij}\, \Sigma_j^{(n)-1} \mu_j^{(n)}. \tag{7}$$

Hence, the family of Gaussian distributions not only makes the algorithm tractable, it also simplifies the consensus step by eliminating the normalization involved in equation (4), reducing it to updates on the mean and covariance matrix. The derivation is provided in the supplementary.
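A minimal sketch of the consensus update (7), assuming the per-agent means and covariances are already available:

```python
import numpy as np

def gaussian_consensus(W, mus, Sigmas):
    """Consensus step (7): precision-weighted averaging of Gaussian
    posteriors. mus: (N, k), Sigmas: (N, k, k), W: (N, N) row-stochastic."""
    precisions = np.linalg.inv(Sigmas)                  # Sigma_j^{-1}
    new_prec = np.einsum("ij,jkl->ikl", W, precisions)  # sum_j W_ij Sigma_j^{-1}
    new_Sigmas = np.linalg.inv(new_prec)
    weighted = np.einsum("ij,jkl,jl->ik", W, precisions, mus)
    new_mus = np.einsum("ikl,il->ik", new_Sigmas, weighted)
    return new_mus, new_Sigmas

# Two agents with identical covariances: the consensus mean is then just
# the W-weighted average of the means.
W = np.array([[0.5, 0.5], [0.25, 0.75]])
mus = np.array([[0.0, 0.0], [4.0, 2.0]])
Sigmas = np.stack([np.eye(2), np.eye(2)])
new_mus, new_Sigmas = gaussian_consensus(W, mus, Sigmas)
print(new_mus[0])  # → [2. 1.]
```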

## 3 Analytic Results: Rate of Convergence

###### Assumption 1.

The network is a connected aperiodic graph. Specifically, $W$ is an aperiodic and irreducible stochastic matrix.

###### Assumption 2.

Let $\Theta$ be a finite set. For each agent $i$, let $\Theta_i^*$ denote the set of parameters whose likelihood functions best fit the local observations, and let $\Theta^* := \bigcap_{i=1}^N \Theta_i^*$. There exists a parameter $\theta^*$ that is globally learnable, i.e., $\Theta^* \neq \emptyset$.

###### Assumption 3.

For all agents $i \in [N]$, assume: (i) The prior satisfies $q_i^{(0)}(\theta^*) > 0$ for all $\theta^* \in \Theta^*$. (ii) There exists an $L > 0$ such that $|\log \ell_i(y \mid \theta, x)| \leq L$ for all $\theta \in \Theta$, $x \in \mathcal{X}_i$, and $y \in \mathcal{Y}_i$.

These assumptions are natural. Assumption 1 states that one can always restrict attention to the connected components of the social network, where the information gathered locally by the agents can be disseminated within the component. Assumption 2 ensures the combined observations of the agents across the network are statistically sufficient to learn the global model. Finally, Assumption 3 prevents the degenerate case where a zero Bayesian prior prohibits learning.

###### Theorem 1.

Let $\Theta$ be a finite set and let $\Theta^*$ denote the set of globally learnable parameters from Assumption 2. Under Assumptions 1, 2 and 3, using the decentralized learning algorithm described in Sec. 2.1, for any given confidence parameter $\delta \in (0,1)$ and any arbitrarily small $\epsilon > 0$, we have

$$\max_{i \in [N]}\; \max_{\theta \notin \Theta^*}\; b_i^{(n)}(\theta) < \epsilon \quad \text{with probability at least } 1 - \delta, \tag{8}$$

when the number of samples $n$ exceeds a threshold depending on $\epsilon$, $\delta$, and the network (given explicitly in the supplementary), where we define the rate of convergence of the posterior distribution as follows:

$$K(\Theta) := \min_{\theta^* \in \Theta^*,\; \theta \in \Theta \setminus \Theta^*}\; \sum_{j=1}^N v_j\, I_j(\theta^*, \theta), \tag{9}$$

where $I_j(\theta^*, \theta) := \mathbb{E}_{P_{X_j}}\!\left[ D_{KL}\!\left( \ell_j(\cdot \mid \theta^*, X) \,\|\, \ell_j(\cdot \mid \theta, X) \right) \right]$, and the eigenvector centrality $\mathbf{v} = [v_1, \ldots, v_N]^T$ is the unique stationary distribution of $W$ with strictly positive components. The sample threshold further depends on the spectrum of $W$: letting $\lambda_i(W)$ denote the $i$-th eigenvalue of $W$ counted with algebraic multiplicity, the constants in the bound are governed by the second-largest eigenvalue modulus $\max_{i \geq 2} |\lambda_i(W)|$.

Proof of the theorem and additional comments on the rate of convergence are provided in the supplementary material.
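The rate (9) can be evaluated numerically once $W$ and the per-agent KL-divergences are known. The sketch below computes the eigenvector centrality as the stationary distribution of $W$ and then $K(\Theta)$; the 3-agent chain and divergence values are hypothetical.

```python
import numpy as np

def eigenvector_centrality(W):
    """Stationary distribution v of the stochastic matrix W (v W = v),
    taken as the left eigenvector for eigenvalue 1, normalized to sum 1."""
    vals, vecs = np.linalg.eig(W.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def rate_of_convergence(v, I):
    """K(Theta) from equation (9). I: (N, M) array whose entry (j, m) is
    I_j(theta*, theta_m) for the m-th wrong parameter (single theta*)."""
    return np.min(v @ I)

# 3-agent chain: the middle agent is the most central. Only agent 0 can
# distinguish the first wrong parameter, only agent 2 the second.
W = np.array([[0.50, 0.5, 0.00],
              [0.25, 0.5, 0.25],
              [0.00, 0.5, 0.50]])
v = eigenvector_centrality(W)
I = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
print(v, rate_of_convergence(v, I))  # v = [0.25 0.5 0.25], K = 0.25
```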

###### Remark 3.

The rate of convergence characterized by (9) is a function of each agent's ability to distinguish between the parameters, given by the KL-divergences, and the structure of the weighted network, which is captured by the eigenvector centrality of the agents. Hence, every agent influences the rate in two ways. Firstly, if the agent has higher eigenvector centrality (i.e., the agent is centrally located), it has a larger influence over the posterior distributions of other agents, and as a result has a greater influence over the rate of exponential decay as well. Secondly, if the agent has high KL-divergence (i.e., highly informative local observations that can distinguish between parameters), then again it increases the rate. If an influential agent has highly informative observations, then it boosts the rate of convergence. We illustrate this through extensive simulations in Sec. 4.

## 4 Experiments

### 4.1 Decentralized Bayesian Linear Regression

To illustrate our approach, we construct an example of Bayesian linear regression (Example 1) in the realizable setting over a network with 4 agents. We show that our proposed social learning framework enables fully decentralized and fast learning of a global model even when the local data is severely deficient. More specifically, we assume that each agent makes observations along only one coordinate of the input space, even though the global test set consists of observations along any coordinate (further details of the experimental setup are provided in the supplementary). Note that this is a case of extreme non-IID data partition across the agents. Fig. 0(c) shows that the MSE of all agents, when trained using the learning rule, matches that of a central agent, implying that the agents converge to the true $\theta^*$ as our theory predicts.

###### Remark 4.

Note that the Gaussian likelihood functions considered in Example 1 violate the bounded likelihood functions assumption. Furthermore, the parameters belong to a continuous parameter set $\Theta$. This example and those that follow demonstrate that our analytical assumptions on the likelihood functions and the parameter set are sufficient but not necessary for convergence of our decentralized learning rule.

### 4.2 Decentralized Image Classification

To illustrate the performance of our learning rule on real datasets, we consider the problem of decentralized training of Bayesian neural networks for an image classification task on the MNIST digits dataset lecun-mnisthandwrittendigit-2010 and the Fashion-MNIST (FMNIST) dataset xiao2017_online . For all our experiments we consider a fully connected NN with the same architecture considered in the context of federated learning in mcmahan2017communication . Additional details regarding the implementation are provided in the supplementary. At each time step $n$, we sample weights from the local posterior $q_i^{(n)}$, and for each test set image we employ Monte Carlo sampling to obtain the prediction and the confidence in that prediction. The posterior probability, in the Bayesian Deep Learning literature gal2016uncertainty ; pmlr-v48-gal16 ; bayesian_DNN_CV_yarin , is interpreted as the confidence of agent $i$ in predicting a given label as the true label. In our experiments, we divide the training dataset into subsets with non-overlapping label subsets. Hence, agents must learn a model whose resulting predictive distribution performs well over the global dataset without sharing the local data, and hence without having seen input examples associated with the labels that are missing locally. In other words, our agents, at test time, will produce labels of items that they might have never encountered during the training phase. To make the distinction, we refer to a label an agent produces as an in-domain (ID) label if training data corresponding to that label is available locally; otherwise it is referred to as an out-of-domain (OOD) label. We now describe our empirical studies.
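The Monte Carlo prediction step can be sketched as follows; `sample_weights` and `forward` are placeholders standing in for posterior sampling and the network's softmax forward pass, and the toy "posterior" below is purely illustrative.

```python
import numpy as np

def mc_predict(sample_weights, forward, x, num_samples=50):
    """Monte Carlo prediction with a Bayesian model: average the softmax
    outputs over `num_samples` draws from the (approximate) posterior.
    `sample_weights()` draws one weight sample; `forward(w, x)` returns
    class probabilities. Both are placeholders for the reader's model."""
    probs = np.mean([forward(sample_weights(), x)
                     for _ in range(num_samples)], axis=0)
    label = int(np.argmax(probs))
    confidence = float(probs[label])
    return label, confidence

# Toy stand-in: the "posterior" jitters a fixed logit vector, mimicking
# weight uncertainty in a 3-class problem.
rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0])
sample = lambda: logits + 0.1 * rng.normal(size=3)
def fwd(w, x):
    e = np.exp(w - w.max())
    return e / e.sum()

label, conf = mc_predict(sample, fwd, x=None)
print(label, conf)  # highest-logit class wins; conf is its averaged prob
```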

#### 4.2.1 Design of Social Interaction Matrix W

In this section, we investigate how the social interaction matrix $W$ should be designed for a given network structure and a given data partition such that we maximize the rate of convergence in decentralized training. We examine this on a network with a star topology, where a central agent is connected to 8 other edge agents. The social interaction weights for the central agent are held fixed. For $w \in (0,1)$, we assume that an edge agent puts a confidence $w$ on the central agent, $1-w$ on itself, and zero on the others. Note that as the confidence $w$ which the edge agents put on the central agent increases, the eigenvector centrality of the central agent increases, i.e., the central agent becomes more influential over the network. For both MNIST and FMNIST, we partition the dataset such that the central agent has more informative local observations. Hence, using equation (9) we know that placing more confidence on the central agent increases the rate of convergence to the true parameter, and increases the rate of convergence of the test dataset accuracy. This is demonstrated in Fig. 1(a) and Fig. 1(b), where both accuracy and the rate of convergence improve as $w$ increases. In other words, the rate of convergence and the average accuracy are highest when the agent with the most informative local observations has the most influence on the network. Furthermore, on the star topology we also demonstrate the scalability of our method through an asynchronous implementation over time-varying networks with 25 and 100 agents, where we report the achieved accuracies (Sec. 1.4.3 in the supplementary).
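The interplay between $w$ and the central agent's eigenvector centrality can be checked directly; the central agent's own weights (not specified above) are an assumption of this sketch.

```python
import numpy as np

def star_W(num_edge, w, center_self=0.2):
    """Row-stochastic W for a star network: agent 0 is the center. Each
    edge agent puts confidence w on the center and 1-w on itself; the
    center's weights (center_self on itself, the rest split equally over
    the edge agents) are an assumed choice for this sketch."""
    N = num_edge + 1
    W = np.zeros((N, N))
    W[0, 0] = center_self
    W[0, 1:] = (1.0 - center_self) / num_edge
    for i in range(1, N):
        W[i, 0], W[i, i] = w, 1.0 - w
    return W

def centrality(W):
    vals, vecs = np.linalg.eig(W.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# As the edge agents' confidence w in the center grows, so does the
# center's eigenvector centrality, and with it the rate in (9).
for w in (0.1, 0.5, 0.9):
    print(w, centrality(star_W(8, w))[0])
```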

We focus on the star topology since federated learning methods DBLP:journals/corr/KonecnyMRR16 ; mcmahan2017communication ; konevcny2016federated (only) consider networks with this structure. We compare the performance of our learning rule to the best reported results. On MNIST, the average accuracy we obtain is comparable to that of the federated learning method FedAvg mcmahan2017communication for the same architecture and data partition. Similarly, on FMNIST, the average accuracy we obtain is slightly inferior to that of FedAvg mcmahan2017communication in a similar setting. For asynchronous time-varying networks, when we increase the number of agents in the network from 25 to 100, we again see a drop in the accuracy (Sec. 1.4.3 in the supplementary). We believe the lack of periodic global synchronization results in this difference; for a detailed discussion, refer to Remark 7 in the supplementary. An important area of future work is to overcome this challenge.

Effect on confidence over predictions: In addition to accuracy, Bayesian neural networks provide confidence estimates for each agent's predictions. Hence, we investigate the effect of network structure on confidence. Fig. 3 shows the confidence on digits 0 and 2 at both the central and edge agents as $w$ is varied. In all cases, we observe that both the central agent and the edge agents learn to predict their ID labels with higher confidence than the OOD labels. Furthermore, Fig. 2(a), Fig. 2(b) and Fig. 2(c) show that as the eigenvector centrality of the central agent (the most informative agent) increases, the confidence on OOD labels at the edge agents increases, as expected.

#### 4.2.2 Effect of Data Partition Over the Network

Effect of the agent placements: In this section, we investigate the appropriate placement of a locally informative agent in the network in a manner that maximizes the rate of convergence. We examine this on a grid network obtained by connecting every agent to its adjacent agents, as shown in Fig. 3(a). The social interaction weights are defined as $W_{ij} = 1/d_i$ if $j \in \mathcal{N}(i)$, where $d_i$ denotes the degree of agent $i$, and zero otherwise. In this network, the eigenvector centrality of agent $i$ is proportional to its degree $d_i$; hence, a greater number of neighbors implies higher social influence. We divide the data such that the local training set for one of our agents (the Type-1 agent) is statistically more informative than the local training sets for all other (Type-2) agents. Now, we consider two possible placements of the Type-1 agent in the network (shown in Fig. 3(a)): (i) Setting 1: the Type-1 agent is placed at the center (position 5) of the network, and (ii) Setting 2: the Type-1 agent is placed in a corner location (position 1) of the network. Using equation (9) we can predict that Setting 1 has a higher rate of convergence to the true parameter and a higher rate of convergence of the test dataset accuracy compared to Setting 2, which is demonstrated in Fig. 8(a). In other words, the rate of convergence is highest when the most influential agent in the network has access to an informative training dataset.
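The claim that centrality is proportional to degree for this choice of $W$ can be verified numerically; the 3x3 grid below is a small stand-in for the network in Fig. 3(a).

```python
import numpy as np

def grid_walk_W(rows, cols):
    """Random-walk interaction matrix on a grid: W_ij = 1/d_i for each of
    agent i's d_i adjacent (up/down/left/right) agents, zero otherwise."""
    N = rows * cols
    W = np.zeros((N, N))
    deg = np.zeros(N, dtype=int)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    W[i, rr * cols + cc] = 1.0
                    deg[i] += 1
            W[i] /= deg[i]
    return W, deg

W, deg = grid_walk_W(3, 3)
vals, vecs = np.linalg.eig(W.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
v /= v.sum()
# For this random-walk choice, the stationary distribution is proportional
# to degree: the center of the 3x3 grid (degree 4) is the most central.
print(np.allclose(v, deg / deg.sum()), int(np.argmax(v)))  # → True 4
```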

Effect of the type of data partition: Theorem 1 establishes the convergence of our learning rule under Assumption 2. The theoretical implication of this result is that all agents eventually learn the labeling function that best fits the global data if every wrong labeling function can be eliminated by some agent in the network. In the case where the agents use neural networks, local learning can only learn features discriminative of in-domain labels. Our theoretical result suggests that agents are guaranteed to converge to the correct labeling function only when every pair of OOD labels is distinguished by some agent in the network. This also suggests that some non-IID data partitionings of the labels can lead to convergence to an ambiguous set of labeling functions. This has also been shown to lead to poor accuracy empirically in the federated learning literature fed_learning_noniid . Unlike federated learning, our analytic Bayesian framework allows us to theoretically predict the issue.

In order to understand the practical implications of Assumption 2, we construct an example where violating the assumption leads to poor accuracy. Consider a star network where the central agent has access to one subset of labels and the edge agents have access to the complementary subset. Given that some of these labels share many common features, and since no agent in the network has access to both, our analytic results fall short of ensuring that features are learned that can directly distinguish them. Indeed, in Fig. 4(a) the confidence on the OOD digit at the central agent and on the OOD digit at an edge agent remains low. The effect of the data partition described above is more pronounced in the case of the FMNIST dataset. Let the central agent have access to labels including trouser, dress, coat and shirt, and the edge agents have access to labels including pullover, sandal and sneaker. Agents do not learn to distinguish the label pullover at the edge agents from the labels at the central agent. Fig. 4(b) shows that the confidence on the OOD label coat at the edge agents is significantly low for this data partition, and the average accuracy drops. Contrast this with the less ambiguous and less severe data partition of the FMNIST data considered for Fig. 1(b), where all the labels with shirt-like features are assigned to a single type; both accuracy and confidence improve, as seen in Fig. 4(c).

## 5 Conclusion

In this paper, we considered the problem of decentralized learning over a network of agents with no central server. We considered a peer-to-peer learning algorithm in which agents iteratively aggregate the beliefs of their one-hop neighbors and collaboratively estimate the globally optimal parameter. We obtained high-probability bounds on convergence and a full characterization of the rate of convergence across the network. We illustrated the effectiveness of the algorithm for learning neural networks in a computationally tractable manner while achieving high accuracies. Our experiments illustrate the predictive power of the analysis of the algorithm. An important area of future work includes extensive empirical studies on various deep neural network architectures.

## Supplementary Material for Decentralized Bayesian Learning over Graphs

### 1.1 Comments on Rate of Convergence

###### Remark 5 (Positivity of K(Θ)).

We make a few comments on the quantity $K(\Theta)$. Note that in the realizable setting, for any $\theta^* \in \Theta^*$ and $\theta \notin \Theta^*$ we get $I_j(\theta^*, \theta) = \mathbb{E}_{P_{X_j}}[D_{KL}(\ell_j(\cdot \mid \theta^*, X) \,\|\, \ell_j(\cdot \mid \theta, X))]$, which is non-negative. The KL-divergence between the likelihood functions conditioned on the input captures the extent of distinguishability of parameter $\theta$ from $\theta^*$. For a wrong parameter $\theta$, if $I_j(\theta^*, \theta)$ is very small, then we say that the local observations at agent $j$ are not informative enough to distinguish between $\theta$ and $\theta^*$. Similarly, in the non-realizable setting, for $\theta^* \in \Theta^*$ and $\theta \in \Theta \setminus \Theta^*$, by definition we have $\sum_{j=1}^N v_j I_j(\theta^*, \theta) > 0$. Hence, $K(\Theta)$ is always positive. In the social learning literature, eigenvector centrality is a measure of the social influence of an agent in the network, since each $v_j$ determines the contribution of agent $j$ to the collective network learning rate $K(\Theta)$.

### 1.2 Consensus Step on Gaussian distributions

Let $\mu_i^{(n)}$ and $\Sigma_i^{(n)}$ denote the mean and the covariance matrix of $\Pi_Q(b_i^{(n)})$ at agent $i$ at time $n$, obtained using equation (6). Using equation (4), we have

$$\sum_{j=1}^N W_{ij} \ln G(\theta, \mu_j^{(n)}, \Sigma_j^{(n)}) \tag{10}$$

$$= -\frac{1}{2} \sum_{j=1}^N W_{ij} (\theta - \mu_j^{(n)})^T \Sigma_j^{(n)-1} (\theta - \mu_j^{(n)}) - \frac{1}{2} \sum_{j=1}^N W_{ij} \ln\left( (2\pi)^k |\Sigma_j^{(n)}| \right) \tag{11}$$

$$= -\frac{1}{2} \left( \theta^T \sum_{j=1}^N W_{ij} \Sigma_j^{(n)-1} \theta + \sum_{j=1}^N \mu_j^{(n)T} W_{ij} \Sigma_j^{(n)-1} \mu_j^{(n)} \right) \tag{12}$$

$$\quad + \frac{1}{2} \left( \sum_{j=1}^N \mu_j^{(n)T} W_{ij} \Sigma_j^{(n)-1} \theta + \theta^T \sum_{j=1}^N W_{ij} \Sigma_j^{(n)-1} \mu_j^{(n)} \right) - \frac{1}{2} \sum_{j=1}^N W_{ij} \ln\left( (2\pi)^k |\Sigma_j^{(n)}| \right). \tag{13}$$

By completing the squares, we see that $q_i^{(n)}$ is a Gaussian distribution, and we have

$$\tilde{\Sigma}_i^{(n)-1} = \sum_{j=1}^N W_{ij}\, \Sigma_j^{(n)-1}, \tag{14}$$

and

$$\tilde{\Sigma}_i^{(n)-1} \tilde{\mu}_i^{(n)} = \sum_{j=1}^N W_{ij}\, \Sigma_j^{(n)-1} \mu_j^{(n)} \implies \tilde{\mu}_i^{(n)} = \tilde{\Sigma}_i^{(n)} \sum_{j=1}^N W_{ij}\, \Sigma_j^{(n)-1} \mu_j^{(n)}. \tag{15}$$
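Equations (14)-(15) can be checked numerically in one dimension by geometrically averaging two Gaussian pdfs on a grid, as in the consensus step (4); the means, variances and weights below are arbitrary.

```python
import numpy as np

# Numerical check of (14)-(15) in one dimension: geometrically averaging
# two Gaussian pdfs (the consensus step (4)) yields a Gaussian whose
# precision and precision-weighted mean are the W-averages of the inputs.
theta = np.linspace(-10.0, 10.0, 200001)
dtheta = theta[1] - theta[0]
mus = np.array([1.0, 3.0])
sigmas = np.array([1.0, 2.0])
w = np.array([0.3, 0.7])               # one row of a stochastic matrix W

log_pdfs = (-0.5 * ((theta[None, :] - mus[:, None]) / sigmas[:, None]) ** 2
            - np.log(sigmas[:, None] * np.sqrt(2.0 * np.pi)))
g = np.exp(w @ log_pdfs)
g /= g.sum() * dtheta                  # normalize as in equation (4)

mean = (theta * g).sum() * dtheta      # empirical consensus mean
var = ((theta - mean) ** 2 * g).sum() * dtheta

prec = w @ (1.0 / sigmas**2)           # closed form (14): precision average
mu = (w @ (mus / sigmas**2)) / prec    # closed form (15)
print(abs(mean - mu) < 1e-5, abs(var - 1.0 / prec) < 1e-5)  # → True True
```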

### 1.3 Details on Bayesian Linear Regression Experiment

Let $\theta^*$ denote the true parameter and let the noise be distributed as $\mathcal{N}(0, \sigma^2)$. Each agent makes observations that excite only one coordinate of the input, with the four agents covering the four coordinates between them. The social interaction weights are fixed a priori. We assume each agent starts with a Gaussian prior over $\theta$ with zero mean vector and a diagonal covariance matrix (diag denotes a diagonal matrix with diagonal elements given by a vector); hence the posterior distribution after a Bayesian update remains Gaussian. This implies $Q$ remains fixed as the family of Gaussian distributions, and the consensus step reduces to equation (7).

### 1.4 Details on Bayesian Deep Learning Experiments on Image Classification

We consider two datasets: (i) the MNIST digits dataset [24], where each image is assigned a label in $\{0, 1, \ldots, 9\}$, and (ii) the Fashion-MNIST (FMNIST) dataset [25], where each image is assigned a label in {t-shirt, trouser, pullover, dress, coat, sandal, shirt, sneaker, bag, ankle boot}. Both datasets consist of 60,000 training images and 10,000 testing images of size 28 by 28. For all our experiments we consider a fully connected NN with 2 hidden layers of 200 units each using ReLU activations, which is the same architecture considered in the context of federated learning in [8].

For all the experiments, we choose $Q$ to be the family of Gaussian mean-field approximate posterior distributions with pdf given by $G(\theta, \mu, \Sigma)$, where $\Sigma$ is a strictly diagonal matrix [10, 9]. As discussed in Remark 1, this corresponds to performing variational inference to obtain a Gaussian approximation of the local posterior distribution, i.e., minimizing the variational free energy given in equation (6) over $Q$. While we compute the KL divergence in (6) in closed form, we employ simple Monte Carlo to compute the gradients using Bayes by Backprop [10, 5].
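A one-parameter sketch of the free-energy objective (6) with a mean-field Gaussian and the reparameterization trick; the softplus parameterization, toy data term and sample count are assumptions of this sketch, not the paper's exact training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def free_energy_mc(mu, rho, prior_mu, prior_var, neg_log_lik, S=200):
    """Monte Carlo estimate of the variational free energy (6) for a 1-D
    mean-field Gaussian q = N(mu, sigma^2) with sigma = log(1 + e^rho),
    using the reparameterization theta = mu + sigma * eps (Bayes by
    Backprop style). `neg_log_lik(theta)` is the local data term."""
    sigma = np.log1p(np.exp(rho))
    eps = rng.normal(size=S)
    theta = mu + sigma * eps
    # KL( q || prior ) between two Gaussians, in closed form:
    kl = (np.log(np.sqrt(prior_var) / sigma)
          + (sigma**2 + (mu - prior_mu) ** 2) / (2.0 * prior_var) - 0.5)
    return kl + np.mean(neg_log_lik(theta))

# Toy data term: a single Gaussian observation y = 2.0 with unit noise.
nll = lambda th: 0.5 * (th - 2.0) ** 2
# With a N(0, 1) prior, the exact posterior mean is 1.0, so the free
# energy is lower near 1.0 than far away from it.
f_good = free_energy_mc(1.0, -1.0, 0.0, 1.0, nll)
f_bad = free_energy_mc(5.0, -1.0, 0.0, 1.0, nll)
print(f_good < f_bad)  # → True
```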

###### Remark 6 (Prediction on Test Dataset).

In the absence of cooperation among the agents, each agent, using Bayes' rule, learns only the local posterior distribution and makes predictions on a test input using the predictive distribution . However, at any time step , using the decentralized learning rule, each agent learns a posterior distribution and makes predictions on a test input using the predictive distribution . Applying Thm. 1, we see that as the local posterior converges to for each agent , each agent can locally predict as if it was trained on the global dataset.
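A posterior predictive of this form is typically approximated by simple Monte Carlo over posterior samples. A toy sketch (the two-class likelihood and the stand-in posterior are invented for illustration):

```python
import numpy as np

def predictive_probs(x, posterior_samples, likelihood):
    """Monte Carlo posterior predictive: average the class likelihoods
    over samples theta drawn from the learned posterior."""
    probs = np.mean([likelihood(x, theta) for theta in posterior_samples],
                    axis=0)
    return probs / probs.sum()

def toy_likelihood(x, theta):
    """Invented two-class softmax model used only for illustration."""
    logits = np.array([theta * x, -theta * x])
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(1)
samples = rng.normal(loc=1.0, scale=0.1, size=100)   # stand-in posterior
p = predictive_probs(2.0, samples, toy_likelihood)
```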

###### Remark 7.

The federated learning paradigm, unlike our fully decentralized setup, requires a centralized controller to aggregate the local models from the agents. Furthermore, after each round of communication with the central controller, every agent initializes its local model with the global model obtained from the central controller before training. This periodic shared initialization across the network, while a stringent constraint, is required to prevent the averaging performed at the central controller from producing an arbitrarily bad model [8]. Without modeling the correlation between the weights and biases of the agents across the network, different random initializations at each agent can lead to different local minima and result in diverging local models at the agents [29]. However, modeling the correlation between the weights and biases of the agents across the network is computationally prohibitive. We overcome this challenge by using a shared initialization when the local models are trained for the first time at each agent; however, we do not repeat this after each communication round. Our method removes the need for shared initialization after each communication round by incorporating the global information (on the weights and biases across the agents) into the local training via the prior , obtained locally through the consensus step (4) at each agent , in the minimization of the variational free energy (6). It would be interesting to investigate other shared initialization schemes suitable for decentralized training that address the gap in performance.

#### 1.4.1 Design of Social Interaction Matrix W

For the experiments in Sec. 4.2.1, we use a network with a star topology, with one central agent and 8 edge agents. We vary the confidence that the edge agents place on the central agent over ; the eigenvector centrality of the central agent increases as . We partition the MNIST dataset into two subsets so that the central agent's dataset has all images with labels and the edge agents have all images with labels . To ensure that all the edge agents have an equal number of images, we shuffle the images with labels and partition them into 8 non-overlapping subsets. We call this partition MNIST-Setup1.
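One way to realize such a star-topology interaction matrix, and to check the centrality claim numerically, is sketched below (`p` is a hypothetical name for the edge agents' confidence, and the central agent's uniform row is an assumption, not necessarily the paper's choice):

```python
import numpy as np

def star_weights(n_edge, p):
    """Row-stochastic interaction matrix for a star network. Agent 0 is
    central and splits its confidence uniformly (an assumption); each edge
    agent puts confidence p on the center and 1 - p on itself."""
    N = n_edge + 1
    W = np.zeros((N, N))
    W[0, :] = 1.0 / N
    for i in range(1, N):
        W[i, 0], W[i, i] = p, 1.0 - p
    return W

def eigen_centrality(W):
    """Stationary distribution v with v @ W = v (left Perron eigenvector)."""
    vals, vecs = np.linalg.eig(W.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# The central agent's centrality grows with the edge agents' confidence p.
c_low = eigen_centrality(star_weights(8, 0.1))[0]
c_high = eigen_centrality(star_weights(8, 0.5))[0]
```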

Similarly, for the Fashion-MNIST (FMNIST) dataset, we first partition it into two subsets so that the central agent has access to labels , pullover, dress, coat, shirt, and the edge agents have access to labels , sandal, sneaker, . We shuffle the images with labels , sandal, sneaker, and partition them into 8 non-overlapping subsets. We call this partition FMNIST-Setup1.

We ensure that all agents have the same number of local updates per communication round, which is equal to . For the central agent, this means that in each local epoch it is trained on a random subset of its local dataset, whereas the edge agents use their entire local datasets. For all agents, we use the Adam optimizer [30] with an initial learning rate of 0.001 and a learning rate decay of 0.99 per communication round.

#### 1.4.2 Effect of Data Partition over the Network

Effect of agent placement: We use the 3 by 3 grid network illustrated in Fig. 3(a) in Sec. 4.2.2. We assign MNIST images with labels to an agent of Type-1 and divide images with labels among 8 agents of Type-2. In the Center setting, we place the Type-1 agent at the central location. In the Corner setting, we place the Type-1 agent at a corner location. As in Sec. 1.4.1, we ensure that all agents have the same number of local updates per communication round, which is equal to . Again, we use the Adam optimizer for all agents.

Effect of the type of data partition: In the ablation study, we again use a star network and consider two other ways of partitioning the MNIST dataset: (1) the central agent's dataset has all images with labels and the edge agents have all images with labels ; we call this MNIST-Setup2; and (2) the edge agents have all images with labels [4,9] and the central agent the other labels; we call this MNIST-Setup3. For the FMNIST dataset, the central agent has access to images with labels , trouser, dress, coat, shirt, and the edge agents have access to images with labels , sandal, sneaker ; we call this FMNIST-Setup2.

#### 1.4.3 Asynchronous Decentralized Learning on Time-varying Networks Experiment

We now implement our learning rule on time-varying networks, which model practical peer-to-peer networks where synchronous updates are not easy or are very costly to implement. We consider a time-varying network of agents numbered . At any given time, only agents are connected to agent in a star topology. For , let denote a graph with a star topology in which the central agent 0 is connected to the edge agents whose indices belong to . This implies that at any given time only a small fraction of the agents are training on their local data. Note that is a strongly connected network over all agents. The social interaction weights for the central agent are . Let . An edge agent puts confidence on the central agent, on itself, and zero on the others. The MNIST dataset is divided in an i.i.d. manner, i.e., the data is shuffled and each agent is randomly assigned approximately samples. For , we obtain an average accuracy of over all agents and accuracy at the central agent, and for , we obtain an average accuracy of over all agents and accuracy at the central agent. This also demonstrates that decentralized learning can be achieved with as few as 600 samples locally.
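A sketch of such a time-varying star schedule (confidences and group size are illustrative, not the experiment's values): in each round only a small group of edge agents exchanges weights with the center, each round's matrix is row-stochastic, and over two full cycles information from every agent reaches every other agent, i.e., the product of the round matrices becomes entrywise positive.

```python
import numpy as np

def star_round(N, active, p_center, p_edge):
    """Weight matrix for one communication round: the center (agent 0) is
    connected only to the agents in `active`; all other agents keep full
    self-confidence this round. Confidences are illustrative."""
    W = np.eye(N)
    W[0, 0] = 1.0 - len(active) * p_center
    for i in active:
        W[0, i] = p_center
        W[i, 0], W[i, i] = p_edge, 1.0 - p_edge
    return W

# Cycle through disjoint pairs of edge agents; the product over one cycle
# is row-stochastic, and two cycles make it entrywise positive.
N, group = 9, 2
cycle = [star_round(N, list(range(s, s + group)), 0.1, 0.3)
         for s in range(1, N, group)]
P = np.linalg.multi_dot(cycle)            # product over one full cycle
```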

### 1.6 Proof of Theorem 1

The proof of Theorem 1 is based on the proofs provided in [1, 2, 3]. For ease of exposition, let for all . Fix a . We begin with the following recursion for each node and for any ,

$$\frac{1}{n}\log\frac{b_i^{(n)}(\theta^*)}{b_i^{(n)}(\theta)} = \frac{1}{n}\sum_{j=1}^{N}\sum_{k=1}^{n} [W^k]_{ij}\, z_j^{(n-k+1)}(\theta^*,\theta), \tag{16}$$

where

$$z_j^{(k)}(\theta^*,\theta) = \log\frac{\ell_j\!\left(Y_j^{(k)} \mid \theta^*, X_j^{(k)}\right)}{\ell_j\!\left(Y_j^{(k)} \mid \theta, X_j^{(k)}\right)}. \tag{17}$$

From the above recursion we have

$$\frac{1}{n}\log\frac{b_i^{(n)}(\theta^*)}{b_i^{(n)}(\theta)} \ge \frac{1}{n}\sum_{j=1}^{N} v_j\left(\sum_{k=1}^{n} z_j^{(k)}(\theta^*,\theta)\right) - \frac{1}{n}\sum_{j=1}^{N}\sum_{k=1}^{n}\left|[W^k]_{ij} - v_j\right|\left|z_j^{(k)}(\theta^*,\theta)\right| \tag{18}$$

$$\stackrel{(a)}{\ge} \frac{1}{n}\sum_{j=1}^{N} v_j\left(\sum_{k=1}^{n} z_j^{(k)}(\theta^*,\theta)\right) - \frac{4C\log N}{n\left(1-\lambda_{\max}(W)\right)}, \tag{19}$$

where (a) follows from Lemma 1 and the boundedness assumption on the log-likelihood ratios. Now fix ; since , we have

$$-\frac{1}{n}\log b_i^{(n)}(\theta) \ge -\frac{\epsilon}{2} + \frac{1}{n}\sum_{j=1}^{N} v_j\left(\sum_{k=1}^{n} z_j^{(k)}(\theta^*,\theta)\right).$$

Furthermore, we have

$$\mathbb{P}\left(-\frac{1}{n}\log b_i^{(n)}(\theta) \le \sum_{j=1}^{N} v_j I_j(\theta^*,\theta) - \epsilon\right) \le \mathbb{P}\left(\frac{1}{n}\sum_{j=1}^{N} v_j \sum_{k=1}^{n} z_j^{(k)}(\theta^*,\theta) \le \sum_{j=1}^{N} v_j I_j(\theta^*,\theta) - \frac{\epsilon}{2}\right).$$

Now, for any , note that

$$\sum_{j=1}^{N} v_j \sum_{k=1}^{n} z_j^{(k)}(\theta^*,\theta) - n\sum_{j=1}^{N} v_j I_j(\theta^*,\theta) = \sum_{k=1}^{n}\left(\sum_{j=1}^{N} v_j z_j^{(k)}(\theta^*,\theta) - \sum_{j=1}^{N} v_j\, \mathbb{E}\!\left[z_j^{(k)}(\theta^*,\theta)\right]\right).$$

For any , applying McDiarmid’s inequality for all and for all we have

$$\mathbb{P}\left(\sum_{k=1}^{n}\left(\sum_{j=1}^{N} v_j z_j^{(k)}(\theta^*,\theta) - \sum_{j=1}^{N} v_j\, \mathbb{E}\!\left[z_j^{(k)}(\theta^*,\theta)\right]\right) \le -\frac{\epsilon n}{2}\right) \le e^{-\frac{\epsilon^2 n}{2C}}.$$

Hence, for all , for we have

$$\mathbb{P}\left(-\frac{1}{n}\log b_i^{(n)}(\theta) \le \sum_{j=1}^{N} v_j I_j(\theta^*,\theta) - \epsilon\right) \le e^{-\frac{\epsilon^2 n}{4C}}, \tag{20}$$

which implies

$$\mathbb{P}\left(b_i^{(n)}(\theta) \ge e^{-n\left(\sum_{j=1}^{N} v_j I_j(\theta^*,\theta) - \epsilon\right)}\right) \le e^{-\frac{\epsilon^2 n}{4C}}. \tag{21}$$

Using this, we obtain a bound on the worst-case error over all and across the entire network as follows:

$$\mathbb{P}\left(\max_{i\in[N]}\,\max_{\theta\notin\Theta^*} b_i^{(n)}(\theta) \ge e^{-n\left(K(\Theta)-\epsilon\right)}\right) \le N|\Theta|\, e^{-\frac{\epsilon^2 n}{4C}}, \tag{22}$$

where . From Assumption 2 and Lemma 1, we have that . Then, with probability , we have

$$\max_{i\in[N]}\,\max_{\theta\notin\Theta^*} b_i^{(n)}(\theta) \le e^{-n\left(K(\Theta)-\epsilon\right)} \tag{23}$$

when the number of samples satisfies

$$n \ge \frac{8C \log\frac{N|\Theta|}{\delta}}{\epsilon^2\left(1-\lambda_{\max}(W)\right)}. \tag{24}$$
###### Lemma 1 ([2]).

For an irreducible and aperiodic stochastic matrix , the stationary distribution is unique, has strictly positive components, and satisfies . Furthermore, for any , the weight matrix satisfies
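The geometric convergence of $W^k$ to the rank-one matrix $\mathbf{1}v^\top$ asserted here can be checked numerically; a sketch with an arbitrary irreducible, aperiodic stochastic matrix (chosen for illustration, not from the paper):

```python
import numpy as np

# An arbitrary irreducible, aperiodic row-stochastic matrix.
W = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.4, 0.6]])

# Stationary distribution: left eigenvector for eigenvalue 1, normalized.
vals, vecs = np.linalg.eig(W.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
v = v / v.sum()

# W^k approaches the rank-one matrix 1 v^T; the residual decays
# geometrically at the second-largest eigenvalue magnitude.
Wk = np.linalg.matrix_power(W, 50)
gap = np.abs(Wk - np.ones((3, 1)) @ v.reshape(1, 3)).max()
```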