
Walk for Learning: A Random Walk Approach for Federated Learning from Heterogeneous Data

06/01/2022
by   Ghadir Ayache, et al.

We consider the problem of a Parameter Server (PS) that wishes to learn a model that fits data distributed on the nodes of a graph. We focus on Federated Learning (FL) as a canonical application. One of the main challenges of FL is the communication bottleneck between the nodes and the parameter server. A popular solution in the literature is to allow each node to do several local updates on the model in each iteration before sending it back to the PS. While this mitigates the communication bottleneck, the statistical heterogeneity of the data owned by the different nodes has proven to delay convergence and bias the model. In this work, we study random walk (RW) learning algorithms for tackling the communication and data heterogeneity problems. The main idea is to leverage available direct connections among the nodes themselves, which are typically "cheaper" than the communication to the PS. In a random walk, the model is thought of as a "baton" that is passed from a node to one of its neighbors after being updated in each iteration. The challenge in designing the RW is the data heterogeneity and the uncertainty about the data distributions. Ideally, we would want to visit more often nodes that hold more informative data. We cast this problem as a sleeping multi-armed bandit (MAB) to design a near-optimal node sampling strategy that achieves variance-reduced gradient estimates and approaches sub-linearly the optimal sampling strategy. Based on this framework, we present an adaptive random walk learning algorithm. We provide theoretical guarantees on its convergence. Our numerical results validate our theoretical findings and show that our algorithm outperforms existing random walk algorithms.


I Introduction

I-A Overview and Motivation

Distributed Machine Learning has proven to be an important framework for training machine learning models without moving the available data from its local devices, which ensures privacy and scalability. Federated Learning (FL) has risen to be one of the main applications [1, 2, 3]; it has been attracting significant research attention and has been deployed in real-world systems with millions of users [2]. Other applications include learning in IoT networks [4], smart cities and healthcare [5, 6]. To see how a typical learning algorithm works in this setting, consider the FL setting in Fig. 1. There is a Parameter Server (PS) (typically sitting in the cloud) and a number of nodes (phones, IoT devices, smart sensors, etc.), each having its own local data. The PS wishes to learn a global model on all the data without moving the data away from its original owner. The algorithm works in a batch SGD fashion. In each iteration, the PS samples a batch of nodes and sends the current model to them. Each node in this batch updates the model based on its local data and sends its updated model back to the PS. The PS then aggregates all the received models and starts over again.
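To make the round structure above concrete, here is a minimal sketch of one PS round with local updates, assuming a linear model with squared loss and simple averaging at the PS; the function names (local_sgd, fl_round) and the hyperparameters are illustrative, not the paper's implementation.

import numpy as np

def local_sgd(model, data, labels, lr=0.01, local_steps=5):
    """Run a few local SGD steps on one node's data (linear model, squared loss)."""
    w = model.copy()
    for _ in range(local_steps):
        idx = np.random.randint(len(labels))
        x, y = data[idx], labels[idx]
        grad = 2.0 * (w @ x - y) * x          # gradient of (w.x - y)^2
        w -= lr * grad
    return w

def fl_round(global_model, node_data, node_labels, batch_size=10):
    """One PS round: sample a batch of nodes, collect their local updates, average them."""
    sampled = np.random.choice(len(node_data), size=batch_size, replace=False)
    updates = [local_sgd(global_model, node_data[i], node_labels[i]) for i in sampled]
    return np.mean(updates, axis=0)           # PS aggregation step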

Fig. 1: Distributed Learning on Graph through local computations at the graph’s nodes and through local communication between the connected nodes.

Locality vs. Heterogeneity. One of the main bottlenecks here is the communication with the parameter server (PS) needed in each iteration to aggregate the updates sent by the nodes and to coordinate the learning process. A popular solution to reduce the communication cost with the PS is to let each node perform several local model updates on its data before reporting back to the PS [7]. However, the local computations may introduce local biases into the model and slow the convergence of the learning algorithm. This is because the data is heterogeneous across the different nodes, which creates an inconsistency between the local and the global objectives [8, 9].

Random Walks.

Fig. 2: Random Walk (RW) on the graph: the updated model is transmitted from the current node to the next at each time step. The self-loops indicate that the RW algorithm decided to make a second update at the same node. The model is reported back to the PS on a regular basis.

We propose Random Walks (RW) as a way to simultaneously achieve two seemingly opposing goals: extending the benefits of locality and mitigating the drawbacks of data heterogeneity. The idea is that instead of restricting local computations to the node itself, they can be extended to its neighboring nodes. This is achieved by leveraging existing local connections among the nodes themselves, which are typically cheaper than communication to the PS. We represent the local connections by a graph structure where each node is connected to a subset of other neighboring nodes. Thus, each node can exchange information with its neighbors and, through these local communications, information can propagate through the whole network. This setting can arise in mobile and edge networks, IoT applications and ad-hoc networks, to name a few.

In our proposed framework, the learning algorithm runs as a random walk where, in each iteration, the model gets updated at one of the nodes and is then passed to one of its neighbors to take over the next update, as shown in Figure 2. The model is passed to the PS on a regular basis and/or depending on the available network resources, in order to mitigate the communication cost to the PS.

Random walk learning algorithms have been well studied in the literature on optimization [10, 11], wireless networks [12] and signal processing [13]. What distinguishes this work is that it tackles the problem of designing random walk learning algorithms in the presence of arbitrary data heterogeneity across the nodes. Our main contribution is a random walk algorithm that, along with updating the model, learns and adapts to the different nodes' distributions by carefully combining exploration and exploitation. Our main tool is the theory of the Multi-Armed Bandit (MAB).

Random Walk via Multi-Armed Bandit. Typically, the distributions of the local data at each node are not known a priori. Therefore, we want to devise a random walk strategy that overcomes data heterogeneity by learning about the local data distributions along the way. The goal is to minimize the variance of the locally computed estimates of the global objective's gradient by adjusting the nodes' sampling strategy. More specifically, at each iteration $t$, one has to design the probabilities with which the next node in the RW is chosen among the neighboring nodes. Note that these probabilities depend on $t$, to adapt to the information learned so far, leading to a time-varying RW. Therefore, the random walk will start with an exploration phase before gradually transitioning into an exploitation phase, once more robust estimates about the nodes' data are obtained.

We design the RW by casting our problem as a Multi-Armed Bandit. The multi-armed bandit (MAB) is a learning framework for deciding optimally under uncertainty [14, 15, 16]. It features arms with unknown random costs (negative rewards). At each iteration, one arm is pulled and its cost is observed. The problem is to decide which arm to pull each time so that the accumulated cost, captured by the regret, is minimized after playing $T$ rounds. Our solution is an algorithm that explores the different arms and, in parallel, exploits the information collected so far: it observes the outcome of playing an arm and uses it to tune its expected cost estimate and adjust future selections [14, 15]. In our distributed learning setting, we model the node selection in the RW as arm pulling in the MAB framework. At each iteration, the random walk picks a node in the graph to activate for the next update, observes the update, and receives the local gradient as a cost.

Under this analogy, the performance of the learning algorithm is measured by the regret, which is the difference between the cost of a random walk with optimal transition matrix when all the distributions are known, and the cost of the nodes visited by the algorithm.

I-B Contribution

We summarize our contributions as follows:

  • In this work, we propose a distributed learning algorithm to learn a model on the distributed data over the nodes in a graph. Our algorithm selects the nodes to update the model by an adaptive random walk on the network to address the statistical heterogeneity of distributed data.

  • We model the random walk transition design as a sleeping multi-armed bandit problem to compete with the optimal transition probabilities that mitigate the high variance in the local gradient estimates.

  • We provide a theoretical guarantee on the rate of convergence of our proposed algorithm, which asymptotically approaches the optimal rate. The rate depends on the graph's spectral properties and the minimal gradient variance.

  • Finally, we simulate our algorithm on real and synthetic data, for different graph settings and heterogeneity levels and show that it outperforms existing baseline random walk designs.

I-C Prior Work

Random Walk Learning. Several works have studied random walk learning algorithms, focusing on convergence under different sets of assumptions. The works of [17, 11, 18] established theoretical convergence guarantees for uniform random walks in different convex problem settings using first-order methods. Later work [19] employs more advanced stochastic updates based on a gradient tracking technique that uses Hessian information to accelerate convergence. The work of [20] proposed to speed up convergence by using non-reversible random walks. In [13], the authors studied the convergence of random walk learning for the alternating direction method of multipliers (ADMM). In [21], the paper proposes to improve the convergence guarantees by designing a weighted random walk that accounts for the importance of the local data. An asymptotic fundamental bound on the convergence rate of these algorithms was proven in [22] under convexity and bounded-gradient assumptions. From our MAB perspective, a common aspect of these algorithms is that they are purely exploitative. They require a priori information about the local data (e.g., gradient-Lipschitz constants, bounds on the gradients) to design a non-adaptive (time-invariant) random walk.

Random walk algorithms belong to a more general class of decentralized learning algorithms in which no central entity, such as a PS, handles the learning process. Gossip algorithms are another class of decentralized algorithms that are not based on random walks (see, e.g., [10, 23, 24, 25]). In (synchronous) gossip algorithms, at each round, each node updates and exchanges its local model with its neighboring nodes. Hence, in each iteration, all the nodes and all the links in the graph are activated. The goal of a gossip algorithm is to ensure that all nodes, and not just the PS, learn the global model, and convergence is assumed once a consensus is reached. Hence, it is less efficient in terms of computation and communication costs [21].

Data Heterogeneity. Recently, there has been a lot of work addressing the problem of data heterogeneity, especially in the FL literature. The data across the nodes is typically not generated in an iid fashion. A local dataset tends to be personalized and biased towards its specific owner's profile. Therefore, multiple local updates on the global model can drift the optimization of the global objective towards the local one. This may slow down the convergence and can lead to converging to a suboptimal model [26, 9]. Several measures have been proposed in the literature to quantify statistical heterogeneity, which refers to this local vs. global objective inconsistency in the distributed data. The focus has been on quantifying the gap between the local update direction and the global one [8, 26, 27, 28, 29]. The proposed solutions vary between controlling the update direction [28] and controlling the learning objective [26].

Multi-Armed Bandit Sampler. The multi-armed bandit (MAB) problem aims to devise optimal sampling strategies by balancing exploration and exploitation [14, 15, 30, 16]. Results from MAB have been used in problems related to standard SGD training in non-distributed settings [31, 32, 33, 34, 35]. The idea there is to use an MAB sampler to select more often the data points that can better guide the learning algorithm.

In the classical MAB setting, there is no constraint on which node to sample (visit) at a given time (which arm to pull, in the MAB language). However, in our case, we are restricted by the graph topology, so only neighboring nodes can be visited. To account for this constraint, we cast our problem as a Sleeping Multi-Armed Bandit, in which nodes that are not neighbors of the current node are assumed to be sleeping (not available) at the time of the sampling. The Sleeping MAB literature has studied various assumptions on arm availability: independent availabilities, general availabilities, and adversarial availabilities. The lower bound on the regret is smaller if we consider stochastic independent availabilities; for the harder sleeping-MAB setting with adversarial availabilities, the lower bound grows with the total number of nodes and the total number of rounds [36, 30, 37]. In our work, we model the RW design problem as a sleeping MAB with dependent availabilities [37]. For the technicality of the proof, we use a harder upper bound on the performance that assumes oblivious adversarial availabilities.

A related line of work is importance sampling, which can be thought of as a pure exploitation scheme with no exploration. The literature has studied different aspects of importance sampling using prior information on the local datasets (e.g., [38, 39, 40, 41]). For instance, [42] proposes to sample proportionally to the smoothness bounds of the local objectives, while the scheme in [38] selects the data points based on bounds on the gradients of the local objectives.

I-D Organization

The rest of the paper is organized as follows. We present the problem setup in Section II. In Section III, we present the detailed random walk learning algorithm. In Section IV, we provide the optimal sampling scheme and its theoretical motivation. In Section V, we outline the analogy between the RW design problem and the sleeping MAB problem. In Section VI, we present the MAB RW learning algorithm and the main theorem on its convergence. Moreover, we provide the technical definitions and assumptions used in the proof of the main theorem in Section VII. Finally, we provide numerical results on the convergence of our proposed algorithm in Section VIII. The full proofs of the technical results are deferred to the appendices.

II Setup

II-A Network Model.

We represent a network of $N$ nodes by an undirected graph $G = (V, E)$, with $V$ being the set of nodes and $E$ the set of edges. Since the graph is undirected, $(i, j) \in E$ implies $(j, i) \in E$. Any two connected nodes $i$ and $j$ are called neighbor nodes, which we denote by $i \sim j$. Moreover, we assume that all the nodes have self-loops, thus $(i, i) \in E$ for every node $i$.

II-B Data Model.

We assume that every node $i$ owns a local dataset $\mathcal{D}_i$ of size $n_i$, which is sampled from an unknown local distribution.

II-C Learning Objective.

Our goal is to minimize a global objective function $F(x)$ over $x \in \mathcal{X}$, where $\mathcal{X}$ is the feasible set, assumed to be closed and bounded. The objective function represents the empirical mean of the local losses on the data distributed over the graph of $N$ nodes. Therefore, we are looking to solve the following problem:

\min_{x \in \mathcal{X}} F(x) := \frac{1}{N} \sum_{i=1}^{N} f_i(x),    (1)

where the function $f_i$ is the local objective at node $i$ and is defined as

f_i(x) := \frac{1}{n_i} \sum_{\xi \in \mathcal{D}_i} \ell(x; \xi).    (2)

The optimal model is denoted by $x^*$ and defined as $x^* \in \arg\min_{x \in \mathcal{X}} F(x)$.

II-D Data Heterogeneity.

The data distributions across the nodes of the network are assumed to be arbitrary. Therefore, when a node performs a local update on the global model, it may bias it towards its local dataset, which may not be a good representative of the global learning objective. Multiple definitions have been recently proposed to quantify the degree of local heterogeneity in distributed systems [26, 29, 9, 27]. These definitions focus on the variance of the local gradients with respect to the global gradient at a given model $x$. In our work, we adopt the definition used by [28], as stated below.

Definition 1 (Data Heterogeneity).

The local objectives $f_i$'s are $(G_0, B)$-locally dissimilar at $x$ if

\mathbb{E}_i\left[\|\nabla f_i(x)\|^2\right] \le G_0^2 + B^2 \|\nabla F(x)\|^2,

for $G_0 \ge 0$ and $B \ge 1$, where $\mathbb{E}_i$ is the expectation over the nodes and $\nabla f_i(x)$ is the gradient at node $i$. For $G_0 = 0$ and $B = 1$, we recover the homogeneous case.

This definition is a generalization of other definitions [26, 29].
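To make the dissimilarity measure concrete, the snippet below computes the ratio between the mean squared norm of the local gradients and the squared norm of the global gradient at a model x; the least-squares loss, the uniform averaging over nodes, and the function names are illustrative assumptions, not the paper's exact setup.

import numpy as np

def local_grad(x, A, b):
    """Gradient of the mean squared loss (1/n) * ||A x - b||^2 on one node's data."""
    return 2.0 * A.T @ (A @ x - b) / len(b)

def dissimilarity(x, node_data):
    """Ratio E_i ||grad f_i(x)||^2 / ||grad F(x)||^2: equal to 1 in the homogeneous case,
    and larger when the local objectives disagree with the global one."""
    grads = [local_grad(x, A, b) for A, b in node_data]
    global_grad = np.mean(grads, axis=0)
    second_moment = np.mean([np.linalg.norm(g) ** 2 for g in grads])
    return second_moment / (np.linalg.norm(global_grad) ** 2 + 1e-12)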

II-E Model Update.

We focus in our analysis on first-order methods using stochastic gradient descent (SGD); our work is applicable to any iterative algorithm that uses an unbiased descent update [43, 44, 45]. Given the limited compute power of the nodes, the local gradient is estimated on a single data point sampled uniformly at random from the local dataset. Thus, the model update at round $t$ is as follows:

x^{(t+1)} = \mathcal{P}_{\mathcal{X}}\left( x^{(t)} - \gamma_t \, \frac{\hat{\nabla} f_{i_t}(x^{(t)})}{N \, p^{(t)}_{i_t}} \right),    (3)

where $\gamma_t$ is the step size, $\hat{\nabla} f_{i_t}(x^{(t)})$ is an unbiased estimate of the local gradient at node $i_t$ computed on a uniformly sampled data point from $\mathcal{D}_{i_t}$, $p^{(t)}_{i_t}$ is the probability of picking node $i_t$ at round $t$, and $\mathcal{P}_{\mathcal{X}}$ is the projection operator onto the feasible set $\mathcal{X}$.
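A minimal sketch of the update in (3), under the importance-weighted form written above (the gradient estimate is divided by N times the selection probability to keep the step unbiased) and assuming the feasible set is an L2 ball; the function name and the choice of feasible set are illustrative.

import numpy as np

def projected_sgd_step(x, grad_estimate, prob, step_size, num_nodes, radius=10.0):
    """One update of (3): importance-weighted stochastic gradient step, followed by
    projection onto the feasible set, taken here to be an L2 ball of the given radius."""
    x_new = x - step_size * grad_estimate / (num_nodes * prob)
    norm = np.linalg.norm(x_new)
    return x_new if norm <= radius else x_new * (radius / norm)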

For our convergence analysis, we will need the following technical assumptions.

Assumption 1.

For every node $i$, the local loss function $f_i$ is a differentiable and convex function on the closed bounded domain $\mathcal{X}$.

Assumption 2.

The step size $\gamma_t$ is decreasing and satisfies the following:

\sum_{t=1}^{\infty} \gamma_t = \infty, \qquad \sum_{t=1}^{\infty} \gamma_t^2 < \infty.    (4)
Assumption 3 (Bounded Gradient).

There exists a constant $D > 0$ such that, for every node $i$ and every $x \in \mathcal{X}$, we have $\|\nabla f_i(x)\| \le D$.

The last assumption is actually a result that follows from the functions $f_i$'s being convex on the closed bounded subset $\mathcal{X}$. A complementary proof can be found in [21].

III Random Walk Algorithm

Our objective is to design a random walk algorithm on the graph to learn the optimal model $x^*$. The algorithm starts at an initial node chosen uniformly at random in the graph, say $i_0$, with an initial model also sampled uniformly at random from the feasible set $\mathcal{X}$. Let $i_t$ be the node visited (active) at the $t$-th round of the algorithm. At each round $t$, the active node $i_t$ receives the latest model update from a neighbor node $i_{t-1}$ that was active at the previous round. Then, the model is updated via a gradient descent update using data sampled from the local dataset of node $i_t$.

The main question we are after is how to design the transition probabilities defining the random walk, which govern how the RW samples the nodes in the graph. In addition to the explicit objective of learning the model, the random walk will simultaneously learn information about the heterogeneity of each node's data. Therefore, as the random walk progresses, it can adapt with the information gained on the importance of a given node's data to speed up the convergence. For this reason, we allow the transition probabilities of the random walk to adapt over time (algorithm rounds). We denote by $p^{(t)}_i$ the probability of selecting node $i$ to be active at round $t$, and by $P^{(t)}$ the transition matrix at time $t$. Therefore, we have $P^{(t)}_{ij} > 0$ if $(i, j) \in E$ and $P^{(t)}_{ij} = 0$ otherwise. Moreover, $P^{(t)}$ is row stochastic, i.e., $\sum_j P^{(t)}_{ij} = 1$ for every node $i$.

IV Node Sampling Strategy

We aim to design a sampling strategy that mitigates the effect of heterogeneity on the performance of the learning algorithm. Such a strategy is constrained by an environment with two essential properties: 1) the node sampling is restricted by the topology of the graph, so the next node to pick has to be connected to the currently active node; 2) no full information about the distributed heterogeneous data is available beyond what has been learned in the rounds so far and what can be shared among neighbor nodes.

In our algorithm, each node $i$ is sampled (visited) with probability $p^{(t)}_i$ at round $t$, and the gradient is computed on one data point sampled uniformly among the $n_i$ local data points at the visited node. A crucial quantity for our analysis is the second moment of the unbiased gradient estimate at round $t$, which is

\mathbb{E}\left[\left\| \frac{\hat{\nabla} f_{i_t}(x^{(t)})}{N \, p^{(t)}_{i_t}} \right\|^2\right] = \sum_{i=1}^{N} \frac{\mathbb{E}\left[\|\hat{\nabla} f_i(x^{(t)})\|^2\right]}{N^2 \, p^{(t)}_i}.    (5)

This quantity affects the convergence rate of the Random Walk SGD algorithm, as we show in equation (6) (see the Appendix for the details):

(6)

Thus, the second moment of the gradient updates imposes a burden on the convergence, especially in heterogeneous data settings where the diversity of the gradients is high. This dependence on the second moment is a common property of SGD-based algorithms and has been well studied in the literature. Variance reduction techniques via importance sampling have been proposed to improve the convergence guarantees [42, 38, 39, 46, 41].

Our goal is to design the node sampling strategy to approach the optimal probability $p^*$ that minimizes the convergence bound [38, 42, 39], given by

p_i^{*} \propto \sqrt{\mathbb{E}\left[\|\hat{\nabla} f_i(x)\|^2\right]}.

Note that computing the $p_i^*$'s is very costly since it requires computing the gradients for all the data points owned by the node and its neighbors. Moreover, the $p_i^*$'s need to be re-computed at the new model in each iteration. Instead, we propose to estimate the $p_i^*$'s in each iteration using the gradients already computed for the update step in (3). Therefore, at each iteration, the random walk has a two-fold objective: (i) update the model and (ii) refine the estimates of the $p_i^*$'s by adjusting the RW's level of exploitation vs. exploration using tools from the theory of the sleeping multi-armed bandit.
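As an illustration of the target distribution described above, the snippet below forms the full-information probabilities from per-node gradient norms and restricts them to the neighbors of the current node; normalizing over the neighbors only is an assumption made here for the graph-constrained setting, and the function name is illustrative.

import numpy as np

def neighbor_optimal_probs(grad_norms, neighbor_ids):
    """Full-information sampling target: probability proportional to the local gradient
    norm, restricted to the currently reachable (neighbor) nodes."""
    p = np.zeros(len(grad_norms))
    p[neighbor_ids] = grad_norms[neighbor_ids]
    if p.sum() == 0.0:                 # degenerate case: fall back to uniform over neighbors
        p[neighbor_ids] = 1.0
    return p / p.sum()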

V Sleeping Bandit for RW Node Sampling

The multi-armed bandit (MAB) problem [47, 16] is a decision framework that features a set of $N$ arms, where each arm $i$ has an unknown cost at round $t$. A player selects a sequence of arms up to the final round $T$ (also called the horizon) of the algorithm. The goal is to design an arm selection strategy to minimize the accumulated regret over the total number of rounds $T$:

(7)

The first term in the regret is the expected cost of the arms selected by the player. The second term is the cost of the “best” arm, i.e., the one with the minimum cost that could have been selected had the costs been known. The expectation is taken over the selection strategy and the cost randomness.

Our work is based on establishing an analogy between MAB and RW. This allows us to use results from the vast literature on MAB to design the RW in order to speed up the learning process. To that end, we think of each node as an arm, and of visiting a node in the RW as selecting an arm in the MAB problem. What is not clear in this analogy is what the cost of visiting a node and updating the model should be. Based on the discussion in Section IV and the upper bound in (6), minimizing the accumulated variance of the gradient serves to tighten the convergence guarantees. Thus, we propose the cost of visiting a node $i$ at a given round $t$ of our Random Walk algorithm to be:

(8)
MAB | RW Learning on Graph
Arm | Node
Action | Select the next node in the RW
Cost | Variance of the local gradient at the selected node, as shown in (8)
Regret | Gap between the accumulated variance and the minimal variance under the optimal transition probabilities in the full-information setting, as shown in (9)
TABLE I: Our proposed analogy between the sleeping MAB and the Random Walk design problems.

However, this analogy between “standard” MAB and RW cannot be fully established here. That is because in MAB any arm can be selected at any time, whereas in RW only neighboring nodes can be visited in each iteration. To take the graph topology into account, we consider a variant of the standard MAB called the sleeping MAB, where in each iteration only a subset of arms is available (the rest are sleeping) [36, 30, 37]. Moreover, the available nodes to select from are the ones that are connected to the currently visited node. In Table I, we summarize this analogy between MAB and RW.

Within this sleeping multi-armed bandit framework, our goal is to minimize the regret given the available arms and to approach the best node sampling strategy, which is a mapping from a set of available arms to a selected arm. The goal is to minimize the following regret:

(9)

where the cost is defined in (8), the expectation is taken w.r.t. the availabilities and the randomness of the player's strategy, and the set of available nodes at time $t$ consists of the neighbors of the currently visited node. Therefore, the regret is defined as the gap between the variance of the local gradient estimates induced by our selection strategy and the minimal variance, which requires full information about the local datasets.

In [36, 30, 16], it was shown that one can achieve a sublinear regret for the sleeping MAB, which is asymptotically optimal. This is achieved by applying the EXP3 algorithm. Initially, the algorithm assigns equal importance to all arms. Then, at every round, the player receives the subset of non-sleeping arms, selects one among them, and observes the outcome of the chosen arm. The player then updates its cost estimate and keeps track of the empirical probability of a given arm appearing in the non-sleeping set. The goal of the player is to balance exploration and exploitation, gradually shifting to exploitation as the cost estimates become more robust after playing enough rounds.

The multi-armed bandit modeling implies a random walk design that guarantees a sublinear decay of the regret in (9); thus, the walk asymptotically approaches the optimal transition scheme of the random walk.

VI Main Results

1:   Input: Exploration parameter, learning parameter, horizon $T$, graph $G$.
2:   Initialization: Initial control weights set uniformly, initial model chosen uniformly at random from $\mathcal{X}$, starting node chosen uniformly at random from $V$.
3:  for $t = 1$ to $T$ do
4:     Compute the selection probabilities of the neighboring nodes proportionally to their control weights, and set them to zero for all other nodes.
5:     Choose a neighbor node $i_t$ according to these probabilities.
6:     Choose a data point uniformly at random from the local dataset of $i_t$ and compute the local stochastic gradient.
7:     Compute the cost estimate.
8:     Update the model using the SGD update in (3).
9:     Update the control weight using (10).
10:  end for
Algorithm 1 Sleeping MAB Random Walk SGD

In this section, we summarize our main technical results. First, we present the details of our Sleeping Multi-Armed Bandit Random Walk SGD algorithm in Algorithm 1. Second, we prove in Theorem 1 that the proposed algorithm has an asymptotically optimal convergence rate.

VI-A Algorithm

Algorithm 1 leverages the analogy between Sleeping MAB and RW that we established in the previous section to design the RW learning algorithm. In the literature of Sleeping MAB [37], there are two versions of the EXP3 algorithm based on the availabilities of the arms: dependent versus independent availabilities. The case with dependent availabilities fits our RW model since the graph structure dictates the joint availability of any set of nodes.

In Algorithm 1, each node keeps an accumulated control (importance) value of the observed costs up to round $t$. The nodes with higher values will be favored in the selection.

In each round of the algorithm, the active node has to select a neighboring node to carry the next update. To do so, the active node receives the control value from each neighboring node and makes the selection proportionally to these control values.

The selection starts as pure exploration using uniform control values and is refined over time given the observed average cost. The selected node becomes active, samples one of its local data points uniformly at random, performs the update in (3), and computes the cost estimate based on the local gradient.

Each node keeps track of the empirical estimate of how often it appears among the available nodes. The exploration is implicitly adjusted by a decreasing exploration parameter which, at the early stages of training, gives less importance to the observed cost contribution. Lastly, the control value is updated as follows:

(10)
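Putting the pieces of this section together, here is a self-contained sketch of Algorithm 1 in Python, assuming a linear model with squared loss, an L2-ball feasible set, a 1/sqrt(t) step size, and an EXP3-style multiplicative control-weight update with a bounded cost proxy in place of the update rule (10), whose exact form is given in the paper. The function sleeping_mab_rw_sgd and its parameters are illustrative, not the authors' implementation.

import numpy as np
import networkx as nx

def sleeping_mab_rw_sgd(G, node_data, dim, T=1000, eta=0.1, radius=10.0):
    """Bandit random walk SGD on graph G: node_data maps each node to (X, y)."""
    nodes = list(G.nodes())
    N = len(nodes)
    w = {v: 1.0 for v in nodes}                     # control (importance) weights
    x = np.random.uniform(-1.0, 1.0, dim)           # initial model
    current = nodes[np.random.randint(N)]           # starting node
    for t in range(1, T + 1):
        nbrs = list(G.neighbors(current))           # available (non-sleeping) arms
        weights = np.array([w[v] for v in nbrs])
        probs = weights / weights.sum()             # select next node proportionally to weights
        k = np.random.choice(len(nbrs), p=probs)
        current, prob = nbrs[k], probs[k]           # the model ("baton") moves to this node
        X, y = node_data[current]
        j = np.random.randint(len(y))               # one uniformly sampled local data point
        grad = 2.0 * (X[j] @ x - y[j]) * X[j]       # stochastic local gradient
        # SGD update as in (3): importance-weighted step, then projection onto an L2 ball
        x = x - (1.0 / np.sqrt(t)) * grad / (N * prob)
        if np.linalg.norm(x) > radius:
            x *= radius / np.linalg.norm(x)
        # cost: squared gradient norm (a variance proxy), rescaled to [0, 1] for stability
        cost = np.linalg.norm(grad) ** 2 / (1.0 + np.linalg.norm(grad) ** 2)
        w[current] *= np.exp(-eta * cost)           # EXP3-style multiplicative update
    return x

# Example usage on a small cycle graph with synthetic node data (illustrative).
G_example = nx.cycle_graph(8)
data_example = {v: (np.random.randn(20, 5), np.random.randn(20)) for v in G_example.nodes()}
model = sleeping_mab_rw_sgd(G_example, data_example, dim=5, T=200)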

VI-B Convergence Guarantees

Theorem 1.

Under Assumptions 1, 2 and 3, for a connected graph $G$ and a suitable choice of the decreasing step size $\gamma_t$, the convergence rate of Algorithm 1 is as follows:

where,

Moreover, the first constant is a function of the convexity constants and the step size, and the second is the spectral norm of the transition matrix defined in Section VII.

The first-order contribution of Theorem 1 is to characterize the convergence rate of our proposed bandit random walk SGD algorithm. This proves its asymptotic optimality, given the lower bound in [48]. To better understand its significance, one must compare it with other first-order random walk algorithms, such as uniform node sampling. For all these algorithms, one can get a similar bound as in Theorem 1 with the same constants (depending on the data, graph topology, etc.). The only difference would be in the constant corresponding to the accumulated variance of the gradients under the optimal sampling strategy. Any other node sampling strategy will lead to a higher constant and a looser bound. Of course, here we are optimizing the upper bound, which we take as a proxy for the actual performance. Our numerical results in Section VIII substantiate our theoretical conclusions and show that our proposed algorithm outperforms other baselines.

VII Proof Outline

We outline here the different steps needed to establish the result in Theorem 1. The details can be found in the Appendix. First, we state the results on the regret rate of the sleeping multi-armed bandit selection scheme used in Algorithm 1. Furthermore, we show that the multi-armed bandit random walk is strongly ergodic which is an essential assumption for the convergence of our algorithm.

Lemma 2 (Regret Rate of Sleeping Multi-Armed Bandit Sampler).

Let the minimal cost be the one attained when the optimal transition scheme is known. Under Algorithm 1, the sleeping multi-armed bandit algorithm approaches this optimal cost asymptotically, as follows:

The proof follows the multiplicative weight approach for EXP3 algorithms introduced in [15].

Next, we state the definition of a strongly ergodic non-homogeneous random walk [49]. The sleeping multi-armed bandit algorithm guarantees that this property holds for the sequence of employed transition matrices, which in turn guarantees the convergence stated in Theorem 1.

Definition 2 (Strongly Ergodic Non-Homogeneous Random Walk [49]).

A non-homogeneous random walk, with a uniform starting distribution, is called strongly ergodic if there exists a probability vector to which the distribution of the walk converges, regardless of the round at which the walk is started.

Lastly, we present the result on the rate of convergence of the transition probability distribution.

Proposition 3 (Convergence of the Non-Homogeneous Strongly Ergodic Random Walk).

The non-homogeneous random walk in Algorithm 1 is strongly ergodic. Thus, there exists a stochastic matrix to which the sequence of transition matrices converges, and the rate of this convergence is governed by the eigenvalues of that limiting matrix. Moreover, there exists a function that quantifies this rate of convergence.

VIII Simulations

In this section, we present the numerical performance of our proposed Multi-Armed Bandit Random Walk (RW) SGD algorithm described in Algorithm 1.

VIII-A Baseline Algorithms

We compare the performance of our algorithm to three baselines, namely: (1) Uniform Random Walk, (2) Static Weighted Random Walk, and (3) Adaptive Weighted Random Walk.

The Uniform Random Walk

This algorithm assigns equal importance to all nodes in the network [17], imitating uniform sampling in centralized SGD. We implement the Metropolis-Hastings (MH) decision rule to design the transition probabilities, so that the random walk converges to a uniform stationary distribution. The MH rule can be described as follows (a code sketch is given after the list):

  1. At the $t$-th step of the random walk, the active node $i_t$ selects uniformly at random one of its neighbors, say $j$, as a candidate to be the next active node. This selection is accepted with probability

\min\left(1, \frac{\deg(i_t)}{\deg(j)}\right).

    Upon acceptance, we have $i_{t+1} = j$.

  2. Otherwise, if the candidate node is rejected, the random walk stays at the same node, i.e., $i_{t+1} = i_t$.
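A minimal sketch of this Metropolis-Hastings step with a uniform target, using the standard degree-ratio acceptance probability; the function name and the example graph are illustrative.

import random
import networkx as nx

def mh_uniform_step(G, current):
    """One MH transition targeting the uniform distribution over nodes: propose a uniform
    neighbor and accept with probability min(1, deg(current) / deg(candidate))."""
    candidate = random.choice(list(G.neighbors(current)))
    accept = min(1.0, G.degree(current) / G.degree(candidate))
    return candidate if random.random() < accept else current

# Example: a short uniform random walk on a small cycle graph.
G = nx.cycle_graph(10)
node = 0
for _ in range(5):
    node = mh_uniform_step(G, node)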

Static Weighted Random Walk

This algorithm assigns a static importance metric to each node that is proportional to the gradient-Lipschitz constant of the local loss function [50, 42]. The random walk is again designed by MH, with a stationary distribution proportional to the local gradient-Lipschitz constants. In order to achieve that stationary distribution, the acceptance probability is as follows:

\min\left(1, \frac{L_j \deg(i_t)}{L_{i_t} \deg(j)}\right).    (11)

Adaptive Weighted Random Walk

In this algorithm, we adapt the importance sampling scheme used in [38, 40] for centralized settings. The importance of each node is computed as the average norm of the gradients observed so far at that node, i.e., at time $t$, the average over the rounds in which that node has been active up to $t$. We call it the pure exploitation scheme.
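A sketch of the running estimate used by this baseline: each node keeps the average norm of the gradients it has computed so far; the class and attribute names are illustrative.

import numpy as np

class ExploitationImportance:
    """Pure-exploitation importance: per-node running average of observed gradient norms."""
    def __init__(self, num_nodes):
        self.sums = np.zeros(num_nodes)
        self.counts = np.zeros(num_nodes)

    def update(self, node, grad_norm):
        self.sums[node] += grad_norm
        self.counts[node] += 1

    def importance(self, node):
        # Unvisited nodes keep a default weight of 1 so they can still be explored.
        return self.sums[node] / self.counts[node] if self.counts[node] else 1.0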

Viii-B Datasets and Comparison

Our simulations are run on both synthetic datasets and real benchmarks to confirm our theoretical results. They show that a bandit-based random walk in decentralized learning consistently outperforms existing random walk baselines that use static or exploitation-based importance estimation.

Synthetic Data. For each node, we sample the dataset from a normal distribution. We manually assign a label to each node's dataset such that half the nodes carry one label and the other half carry the other. We run our simulations on an expander graph, which is known to be sparse (the Margulis-Gabber-Galil graph); we call the generator function of the Python library NetworkX v2.8 [51].

MNIST dataset and Fashion-MNIST.

We run experiments on the MNIST and Fashion-MNIST datasets to train a multi-class logistic regression model. We divide the data among the nodes as follows: for a given level of similarity, each client has that fraction of its local dataset drawn i.i.d. from a shared pool of data. Another non-public pool of the data is sorted with respect to the labels and partitioned into non-overlapping chunks. Each node is assigned a different chunk that constitutes its remaining data [28].
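A sketch of this similarity-based partition, assuming a NumPy feature matrix X and label vector y; the function name and the per-node sample count are illustrative.

import numpy as np

def partition_by_similarity(X, y, num_nodes, similarity=0.5, per_node=100, seed=0):
    """Give each node a `similarity` fraction of i.i.d. data from a shared pool and
    fill the rest with a label-sorted (hence heterogeneous) chunk."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))
    n_iid = int(similarity * per_node)
    n_sort = per_node - n_iid
    iid_pool = order[: num_nodes * n_iid]              # shared i.i.d. pool
    rest = order[num_nodes * n_iid:]
    sorted_pool = rest[np.argsort(y[rest])]            # sorted by label, then chunked
    parts = []
    for k in range(num_nodes):
        idx = np.concatenate([iid_pool[k * n_iid:(k + 1) * n_iid],
                              sorted_pool[k * n_sort:(k + 1) * n_sort]])
        parts.append((X[idx], y[idx]))
    return parts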

Table II and Table III report the results for different levels of similarity, from fully homogeneous to fully heterogeneous, to highlight how the similarity level affects the convergence of our algorithm vs. the oblivious uniform algorithm. Table II is on the MNIST dataset; the gap between the two algorithms becomes wider as the system becomes more heterogeneous. Table III is on the Fashion-MNIST dataset; there, the gap between the two algorithms becomes more significant as the system becomes more heterogeneous. (The fact that the ratio between the number of rounds of the two algorithms on MNIST is roughly constant, about 2, is an artifact of the data and is not reproducible for other datasets; see for example Table III on the Fashion-MNIST dataset, which shows an increasing ratio with decreasing similarity.)

Fig. 3: Classification model trained on a synthetic dataset distributed over an expander graph.
Fig. 4: Classification model trained on the synthetic dataset. The same dataset is distributed over expander graphs of two different sizes.
Fig. 5: Classification model trained on the synthetic dataset. The local dataset size has been augmented in an expander graph of fixed size.
Fig. 6: Multi-class MNIST dataset of 10 classes distributed over an expander graph at a given similarity level.
TABLE II: Number of rounds to reach 0.45 test accuracy for logistic regression on MNIST as we vary the level of similarity (rows: Uniform RW SGD, Bandit RW SGD). Bandit RW SGD is consistently faster than Uniform RW SGD.
Similarity: 10% | Uniform RW SGD: 90 | Bandit RW SGD: 69
TABLE III: Number of rounds to reach 0.45 test accuracy for logistic regression on FMNIST as we vary the level of similarity. Bandit RW SGD is consistently faster than Uniform RW SGD.
TABLE IV: Number of rounds to reach 0.45 test accuracy for logistic regression on MNIST as we vary the graph connectivity parameter (rows: Uniform RW SGD, Bandit RW SGD). Bandit RW SGD is consistently faster than Uniform RW SGD. As we decrease the probability of connectivity, the increase in the number of rounds is less significant for the bandit algorithm.

To elaborate on how our algorithm performs for a given graph structure, we consider multiple simulation scenarios using Erdos-Renyi graphs with different probabilities of connectivity, going from sparser to denser. Table IV reports the performance as a function of the probability of connectivity in the graph.

Optimal Sampling. Here is a sketch of the proof of the optimal sampling strategy. Consider the optimization problem at hand, which can be formulated as follows:

The optimality conditions of the Lagrangian of the problem above give the following:

where $\lambda$ is the Lagrange multiplier. Thus, by a simple algebraic manipulation, we get the optimal sampling probabilities.
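For concreteness, one way the computation goes is sketched below in LaTeX, assuming the objective is to minimize the variance proxy \sum_i v_i^2 / p_i (with v_i the local gradient norm) over the probability simplex; the exact constants in the paper's formulation may differ.

\min_{p}\ \sum_{i=1}^{N} \frac{v_i^2}{p_i}
\quad \text{s.t.} \quad \sum_{i=1}^{N} p_i = 1,\; p_i \ge 0.
% Lagrangian and stationarity condition:
\mathcal{L}(p,\lambda) = \sum_{i=1}^{N} \frac{v_i^2}{p_i} + \lambda\Big(\sum_{i=1}^{N} p_i - 1\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i} = -\frac{v_i^2}{p_i^2} + \lambda = 0
\ \Rightarrow\ p_i = \frac{v_i}{\sqrt{\lambda}}.
% Enforcing the normalization constraint:
\sum_i p_i = 1 \ \Rightarrow\ \sqrt{\lambda} = \sum_{j=1}^{N} v_j,
\qquad
p_i^{*} = \frac{v_i}{\sum_{j=1}^{N} v_j}.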

Definition 3.

The local loss function $f_i$ for each node $i$ has an $L_i$-Lipschitz continuous gradient; that is, there exists a constant $L_i$ such that, for any $x, y \in \mathcal{X}$,

\|\nabla f_i(x) - \nabla f_i(y)\| \le L_i \|x - y\|.

Lemma 4.

Under Assumptions 1, 2 and 3, the Random Walk SGD algorithm that uses the update in equation (3) with transition matrix $P$ has the following rate of convergence.

In order to prove Lemma 4 and Theorem 1, we present some technical results that we use in the proof. The proof techniques are essentially inspired by the work of [21] and are adapted to the assumptions and setting of this work.

Lemma 5 (Convexity and Lipschitzness).

If $f$ is a convex function on an open subset, then for a closed bounded subset $\mathcal{X}$ of it, there exists a constant $K$ such that, for any $x, y \in \mathcal{X}$, $|f(x) - f(y)| \le K \|x - y\|$.

We define . Therefore,

A proof for Lemma 5 can be found in [52].

Corollary 1 (Boundedness of the gradient).

If $f$ is a convex function, then for a closed bounded subset $\mathcal{X}$, the gradient of $f$ is bounded on $\mathcal{X}$.

Proof.

Taking ,

(a) follows from Lemma 5 and (b) follows from $f$ being convex. ∎

Now, we present the steps of the proof:

The first step follows from $\mathcal{X}$ being a convex closed set, so one can apply the nonexpansivity theorem in [52].

For the next step, we use convexity,

(12)

Re-arranging the above equation gives

(13)

Summing (13) over $t$ and using the step-size assumption and the boundedness of the feasible set,

(14)

Next, we give some results we need on the Markov chain. We denote by $\pi$ the stationary distribution, by $P$ the transition matrix, and by $P^k$ the $k$-th power of the matrix $P$. We refer to row $i$ of a matrix $A$ by $A_i$.

Lemma 6 (Convergence of Markov Chain [53]).

Assume the graph is connected with self-loops; therefore, the random walk is aperiodic and irreducible, and we have

for all $k$, where one constant depends on the transition matrix and its stationary distribution, and the other is a constant that depends on the Jordan canonical form of $P$.

Corollary 2.

Using the previous lemma, we get

for

Here, we state the next corollary on the convergence of the random walk.

(a) follows from Lemma 5, (b) uses the triangle inequality, (c) uses the linearity of expectation, and (d) follows from the Cauchy–Schwarz inequality.

Now, taking the summation over $t$:

(15)

By simply using the assumption on the step size summability, the result is as follows:

(16)

Now, we compute the following lower bound:

(17)

(a) uses the Markov property and (b) uses Lemma 1 in [20].

Therefore,

Next, we get a bound on

(18)

(a) follows from Lemma 4, (b) uses the triangle inequality, (c) uses the linearity of expectation, (d) follows from Corollary 1, and (e) follows from the Cauchy–Schwarz inequality. The summability of the upper bound over $t$ follows from the previous discussion leading to (16).

Combining with the results in (14) and the bound above, we get

With this step, we have proved Lemma 4. Next, we present the essential technical results used in the proof of Theorem 1.

Proposition 7.

[Sleeping multi-armed bandit convergence [30]] The sleeping multi-armed bandit sampling scheme under adversarial availabilities guarantees a sublinear regret. Therefore, using Definition 2, the random walk with the resulting transition matrices is strongly ergodic.

We next state a lemma of [49] about the convergence of strongly ergodic random walks.

Lemma 8 (Theorem II.7 in [49]).

Given strongly ergodic non-homogeneous transition matrices with a stochastic matrix to which they converge, then