We consider the problem of training a Neural Network (NN) model when the training data is distributed over different agents that are connected by a sparse, possibly time-varying, communication network. To grasp the main motivation, consider a ‘smart’ environment, wherein thousands of low-power sensors (e.g., cameras, wearables, etc.) are embedded to provide context-aware assistance, security provisioning, and so forth (pottie2000wireless; boric2002wireless). If the amount of produced data is small and we can count on a very reliable communication network, we may adopt a centralized approach, where all the data are transmitted to one (or more) fusion centers that perform the learning task. However, in big data applications, sharing local information with a central processor may be unfeasible or inefficient, owing to the large size of the network and volume of data, time-varying network topology, energy constraints, and robustness and/or privacy concerns. Performing the computation in a centralized fashion also raises robustness concerns, since the central processor represents a bottleneck and a single point of failure. For these reasons, effective learning methods must exploit distributed computation/learning architectures (with possibly parallelized multi-core processors), while taking into account the distributed large-scale storage of data over the network and the communication constraints. Very often, the implementation of such learning schemes requires the training of a shared
predictive function, i.e., a common model accessible independently by each agent. Considering the previous example, suppose that a set of embedded cameras is taking multiple high-resolution photos of a possible security threat. If the threat must be recognized quickly in the near future, the sensors have to train a shared classifier that leverages all the currently acquired photos, in order to obtain a sufficiently high accuracy. These problems are ubiquitous in the real world, and appear in many practical systems, e.g., wireless sensor networks (predd2006distributed), smart grids, distributed databases (lazarevic2002boosting), and robotic swarms, just to name a few.
If a predictive behavior is needed, however, the designer of the distributed system has to answer a necessary question: what kind of model should be chosen as a classifier/regressor? Since deep NNs are currently obtaining state-of-the-art results in several fields (schmidhuber2015deep; lecun2015deep), employing them appears a reasonable choice. Nevertheless, somewhat surprisingly, the literature on distributed training algorithms for NNs satisfying all the above requirements is extremely scarce. Most authors resort either to an ensemble of models trained independently by the agents (lazarevic2002boosting; zhang2013privacy), or to strategies requiring the sum of the gradients' contributions of all agents at every single iteration (samet2012privacy; georgopoulos2014distributed), exploiting the additivity of the gradient updates. Both approaches can easily be shown to be unsatisfactory in general. In the former case, we have no guarantee that the ensemble of models will perform as well as a single model trained on the collection of all local datasets. In the latter case, a global sum at every iteration may be infeasible due to an excessive amount of communication, particularly for large models comprising several hundred thousand parameters. It is also worth mentioning that a lot of research has recently been devoted to the design of parallel, asynchronous versions of stochastic gradient descent for training NNs on large clusters of commodity hardware (dean2012large; sak2014squence; abadi2016tensorflow). However, all these methods require the presence of at least one central server node coordinating the learning process; thus, they are not applicable in our context.
One of the reasons for the lack of distributed training methods for NNs is that, in principle, such methods require the solution of a distributed non-convex optimization problem, which has been tackled only in a few papers even in the optimization literature (bianchi2013convergence; di2016next). On the other hand, if we turn our attention to convex learning problems, the literature on their distributed training is vast, including algorithms for the decentralized optimization of linear predictors (xiao2007distributed; sayed2014adaptive; sayed2014adaptation), sparse linear models (mateos2010distributed; di2013sparse), kernel ridge regression (predd2006distributed; predd2009collaborative), random-weights networks (huang2015distributed; scardapane2015distributed; scardapane2016decentralized), support vector machines (navia2006distributed; lu2008distributed; forero2010consensus; scardapane2016distributed), and kernel filtering (perez2010robust; gao2015diffusion).
In this paper, we propose an algorithmic framework for training general NN models in a fully distributed scenario, which encompasses several common loss functions and regularization terms. (A preliminary version of this work, focusing only on the squared loss function, was presented in (di2016neuralnetworks).) In particular, we build upon the in-network nonconvex optimization (NEXT) algorithm proposed in (di2016next), and recently extended in (sun2016distributed) to handle general time-varying topologies. NEXT is one of the first methods to solve distributed non-convex optimization problems over networks of agents. The algorithm, which leverages the so-called successive convex approximation (SCA) family of methods (facchinei2015parallel), is built upon two foundational ideas. First, at every iteration, the original non-convex problem is replaced with a strongly convex approximation, which is solved locally at every agent. As we will illustrate throughout the paper, several kinds of convexification are possible, resulting in different trade-offs in terms of computational complexity and speed of convergence. Second, the framework exploits a dynamic consensus procedure (zhu2010discrete), so that each agent can recover the information relative to all the other agents, which is typically not available locally. The resulting algorithms are shown to converge to a stationary solution of the social non-convex problem under loose requirements on the agents' communication topology, the choice of the algorithm's parameters, and the structure of the optimization problem. A further interesting aspect of the framework presented here is that the local optimization problems can be easily parallelized in a principled way (up to one NN parameter per available processor), without losing the convergence properties of the framework.
Consider, for example, the case of multiple medical institutions requiring the training of a common NN (e.g., for diagnosis purposes) leveraging all historical clinical information (vieira2006secure). In this case, a decentralized algorithm is required due to strong privacy concerns on the release of sensitive medical information about the patients. Nonetheless, each institution may have access to an internal private cloud infrastructure. Using the framework outlined in this paper, privacy is guaranteed via the use of a distributed protocol, while each institution can parallelize its optimization steps using local cloud computing hardware. In this way, the resulting algorithms are both distributed (across the nodes) and parallel (inside each node) in nature. At the end of the (distributed) training process, each agent has access to the optimal set of NN parameters, and it can apply the resulting model to newly arriving data (e.g., new photos taken from the camera) independently of the other agents. A comprehensive set of experimental results validates the proposed approach.
Outline of the paper: The rest of the paper is organized as follows. In Section 2, we formalize the problem of distributed NN training. In Section 3, we describe the general framework for distributed NN training built upon the NEXT algorithm. Then, in Section 4, we consider the customization of the framework to different loss functions (squared loss, cross-entropy, etc.) and regularization terms ($\ell_2$ norm, sparsity-inducing penalties, etc.). Section 5 describes a principled way to parallelize the optimization phase. In Section 6, we perform a large set of experiments aimed at assessing the performance of the proposed framework. Finally, Section 7 draws some conclusions and outlines future lines of research.
Notation: We denote vectors using boldface lowercase letters, e.g., $\mathbf{a}$; matrices are denoted by boldface uppercase letters, e.g., $\mathbf{A}$. All vectors are assumed to be column vectors. The operator $\|\cdot\|_p$ is the standard $\ell_p$ norm on a Euclidean space. For $p = 2$, it coincides with the Euclidean norm, while for $p = 1$ we obtain the Manhattan (or taxicab) norm, defined for a generic vector $\mathbf{a} \in \mathbb{R}^Q$ as $\|\mathbf{a}\|_1 = \sum_{q=1}^{Q} |a_q|$. The notation $\mathbf{a}[n]$ denotes the dependence of $\mathbf{a}$ on the time index $n$. Other notation is introduced along the paper when required.
2 Problem Formulation
Let us consider the problem of training a generic NN model $f(\mathbf{x}; \mathbf{w})$, where $\mathbf{x}$ denotes the input vector of the network, whereas $\mathbf{w}$ is the vector collecting all the adaptable parameters that we aim to optimize. Note that we are considering the NN as a function of its parameters, as this will make the following derivation simpler. We are not concerned with the specific structure of the NN
(i.e., number of hidden layers, choice of the activation functions, etc.), as long as the following assumptions are satisfied for any possible input vector.
Assumption A [On the NN model]:
(A1) $f(\mathbf{x}; \cdot)$ is in $C^1$, i.e., it is continuously differentiable with respect to $\mathbf{w}$;
(A2) $f(\mathbf{x}; \cdot)$ has a Lipschitz continuous gradient with respect to $\mathbf{w}$, for some Lipschitz constant $L > 0$, i.e.:
We associate a weight $c_{kj} \geq 0$ to each link of the communication graph, with $c_{kj} > 0$ whenever agent $j$ is a neighbor of agent $k$, and define the matrix $\mathbf{C} = [c_{kj}]$. These weights are used in the definition of the proposed algorithm in order to locally combine the information diffused over every neighborhood, i.e., $c_{kj}$ represents the weight given by agent $k$ to the information coming from agent $j$. The weights are given, and are required to respect some properties listed later on in (9). Many choices are possible, and a brief overview can be found in (di2016next). Clearly, different setups of the weights may influence the convergence speed. Roughly speaking, simple choices like the one we detail in Section 6.1 can be implemented immediately, with no knowledge of the graph topology beyond each agent's neighborhood. On the contrary, more sophisticated weights can speed up convergence, while requiring global knowledge of the network and/or the solution of some optimization problem, e.g., see the strategies detailed in (xiao2004fast).
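As an illustration of a simple, topology-local rule, the following sketch builds a doubly stochastic weight matrix via the Metropolis-Hastings criterion mentioned later in the paper (the function name and the example ring graph are our own illustrative choices):

```python
import numpy as np

def metropolis_hastings_weights(adjacency):
    """Build a doubly stochastic weight matrix C from an undirected adjacency
    matrix: each pair of neighbors (k, j) gets c_kj = 1 / (1 + max(deg_k, deg_j)),
    and the residual mass is placed on the diagonal."""
    A = np.asarray(adjacency, dtype=float)
    deg = A.sum(axis=1)
    C = np.zeros_like(A)
    for k in range(A.shape[0]):
        for j in range(A.shape[0]):
            if k != j and A[k, j] > 0:
                C[k, j] = 1.0 / (1.0 + max(deg[k], deg[j]))
        C[k, k] = 1.0 - C[k].sum()
    return C

# Example: a 4-agent ring topology.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
C = metropolis_hastings_weights(ring)
```

Note that each agent can compute its own row knowing only its degree and its neighbors' degrees, which is why this rule requires no knowledge of the topology beyond the local neighborhood.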
For the purpose of training the NN, we assume that the $k$th agent has access to a local training dataset of $N_k$ examples, denoted as $\mathcal{S}_k = \left\{ \mathbf{x}_{k,i}, d_{k,i} \right\}_{i=1}^{N_k}$, where we consider a single-output problem for simplicity of notation. The desired output is an integer or a real value, depending on whether we are facing a classification task or a regression task, respectively. Given all the previous definitions, a general formulation for the distributed training of NNs can be cast as the minimization of a social cost function plus a regularization term $r(\mathbf{w})$, which writes as:
where $f_k(\mathbf{w})$ is the error term relative to the $k$th local dataset:
with $l(\cdot, \cdot)$ denoting a generic (convex) loss function, while $r(\mathbf{w})$ is a regularization term. Due to the nonlinearity of the NN model $f(\mathbf{x}; \mathbf{w})$, problem (3) is typically non-convex. In this work, we consider the following assumptions on the functions involved in (3)-(4).
Assumption B [On Problem (3)]:
(B1) $l(\cdot, \cdot)$ is convex and $C^1$, with a Lipschitz continuous gradient;
(B2) $r(\mathbf{w})$ satisfies (B1), or it is a nondifferentiable convex function with bounded subgradients;
(B3) the overall cost in (3) is coercive, i.e., it diverges to $+\infty$ whenever $\|\mathbf{w}\| \rightarrow \infty$.
The structure of the loss function in (4) depends on the learning task (i.e., regression, classification, etc.). Typical choices are the squared loss for regression problems, and the cross-entropy for classification tasks (haykin2009neural). The regularization function $r(\mathbf{w})$ in (3) is commonly chosen to avoid overfitted solutions and/or to impose a specific structure on the solution, e.g., sparsity or group sparsity. Typical choices are the $\ell_2$ and $\ell_1$ norms. All these functions satisfy Assumption B, and will be discussed in detail in the sequel. In view of the distributed nature of the problem, the $k$th agent knows its own cost function $f_k$ and the common regularization term $r$, but it does not have access to the cost functions of the other agents, nor can it freely exchange its own dataset, due to a variety of reasons including privacy, data volume, and communication constraints. This aspect, combined with the non-convexity of the problem, makes optimizing (3) in a distributed fashion a challenging task, which has no ready-to-use solution available in the literature. The design of such an algorithmic framework is the topic of the next three sections.
3 NEXT: In-Network Successive Convex Approximation
In this section, we review the basics of the NEXT framework proposed in (di2016next), which was designed to solve general nonconvex distributed problems of the form (3). The next section will then focus on how to customize the framework to the NN distributed training problem considered in this paper. Due to lack of space, we provide only a very brief introduction to the NEXT framework, and we refer the interested readers to (di2016next; sun2016distributed) for a full treatment, which also includes a proof of the convergence results. NEXT combines SCA techniques (Step 1) with dynamic consensus mechanisms (Steps 2 and 3), as described next.
Step 1 (local SCA optimization): Each agent $k$ maintains a local estimate $\mathbf{w}_k[n]$ of the optimization variable, which is iteratively updated. Directly solving Problem (3) may be too costly (due to the nonconvexity of the local costs) and is not even feasible in a distributed setting. One may then prefer to approximate Problem (3), in some suitable sense, in order to permit each agent to compute the new iterate locally and efficiently. In particular, we consider a convexification having the following form: i) at every iteration $n$, the (possibly) nonconvex $f_k$ is replaced by a strongly convex surrogate, say $\tilde{f}_k$, which may depend on the current iterate $\mathbf{w}_k[n]$; and ii) the sum of the remaining costs $\sum_{j \neq k} f_j$ is linearized around $\mathbf{w}_k[n]$. More formally, the proposed updating scheme reads: at every iteration $n$, given the local estimate $\mathbf{w}_k[n]$, each agent $k$ solves the strongly convex optimization problem:
The evaluation of (6) would require the knowledge of the gradients $\nabla f_j(\mathbf{w}_k[n])$, $j \neq k$, at node $k$. This information is not directly available at node $k$; we will cope with this local lack of global knowledge later on in Step 3. Once the surrogate problem (5) is solved, each agent computes an auxiliary variable as the convex combination:
where $\alpha[n]$ is a possibly time-varying step-size sequence. This concludes the optimization phase of the algorithm. An appropriate choice of the surrogate function guarantees that the fixed points of the resulting mapping coincide with the stationary solutions of Problem (3). The main results are given in the following proposition (facchinei2015parallel):
Conditions F1-F3 state that the surrogate $\tilde{f}_k$ should be regarded as a strongly convex approximation of $f_k$ at the point $\mathbf{w}_k[n]$, which preserves the first-order properties of $f_k$. Several feasible choices are possible for a given $f_k$; the appropriate one depends on the computational and communication requirements. The goal of the next section is to illustrate some possible choices of the local surrogate cost, properly customized to our distributed NN training problem.
Step 2 (agreement update): To force the asymptotic agreement among the local estimates, a consensus-based step is employed on the auxiliary variables computed in Step 1. Each agent $k$ updates its local variable as:
where the weight matrix $\mathbf{C} = [c_{kj}]$ is defined in (2), and satisfies:
Since the weights are constrained by the network topology, (8) can be implemented via local message exchanges: each agent updates its estimate by averaging over the current solutions received from its neighbors. The double stochasticity condition in (9) can be achieved according to a variety of predefined strategies, including the Metropolis-Hastings criterion (xiao2007distributed), or by optimizing a cost function related to the spectral properties of the graph (xiao2004fast).
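To make the agreement update concrete, the following toy sketch (our own example, using a hypothetical 3-agent path graph with Metropolis-Hastings weights) iterates the consensus step (8) and drives all local variables to the network-wide average:

```python
import numpy as np

# Doubly stochastic Metropolis-Hastings weights for a 3-agent path graph
# (agent 2 is connected to both agents 1 and 3); a toy stand-in for the
# matrix C in (8)-(9).
C = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

def consensus_step(C, Z):
    """One round of the agreement update (8): each agent replaces its local
    variable with a weighted average of its neighbors' variables.
    Z stores one row per agent."""
    return C @ Z

# Repeated averaging drives all agents to the network-wide mean.
Z = np.array([[0.0, 10.0],
              [3.0, -2.0],
              [6.0, 1.0]])
for _ in range(200):
    Z = consensus_step(C, Z)
```

Since the matrix is doubly stochastic and the path graph is connected, all rows converge geometrically to the average of the initial rows.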
Step 3 (diffusion of information over the network): The computation of the solution in (5) is not fully distributed yet, because the evaluation of the linear term in (6) would require the knowledge of the gradients of all the other agents, which is global information not available locally at node $k$. To cope with this issue, as proposed in (di2016next), we replace the sum of gradients in (5) with a local estimate that asymptotically converges to it. Thus, we can update the local estimate in a fully distributed manner as:
where the auxiliary variable (controlled by agent $k$) aims to asymptotically track the average of the local gradients. This can be done by updating it according to the following dynamic consensus recursion:
where all the involved quantities can be computed locally by every agent. Note that these updates can now be performed locally, with message exchanges limited to the agents in the neighborhood.
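The key property of the dynamic consensus recursion is that, with a doubly stochastic weight matrix, the network-wide average of the tracking variables equals the average of the local gradients at every iteration. A self-contained sketch (the quadratic local costs are our own toy choice, and plain mixing of the estimates stands in for the full NEXT update):

```python
import numpy as np

rng = np.random.default_rng(0)
K, Q = 4, 3  # number of agents, number of parameters

# Doubly stochastic mixing matrix for a 4-agent ring (Metropolis-Hastings).
C = np.array([[1/3, 1/3, 0.0, 1/3],
              [1/3, 1/3, 1/3, 0.0],
              [0.0, 1/3, 1/3, 1/3],
              [1/3, 0.0, 1/3, 1/3]])

b = rng.normal(size=(K, Q))      # local data: f_k(w) = 0.5 * ||w - b_k||^2
grad = lambda W: W - b           # row k is agent k's gradient at its estimate

W = rng.normal(size=(K, Q))      # local estimates w_k[0]
Y = grad(W)                      # trackers initialized at the local gradients

for n in range(50):
    W_new = C @ W                           # stand-in for the full update
    Y = C @ Y + grad(W_new) - grad(W)       # dynamic consensus recursion
    W = W_new
```

The mean of the rows of `Y` matches the mean of the local gradients at every step, and the trackers themselves converge to that common average.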
The overall procedure is summarized in Algorithm 1, where $\nabla f_k[n]$ is used as a simplified notation for $\nabla f_k(\mathbf{w}_k[n])$. Its convergence properties are reported in the following proposition.
Let $\{\mathbf{w}_k[n]\}_n$ be the sequence generated at agent $k$ by Algorithm 1, and let $\bar{\mathbf{w}}[n]$ be its average over the agents. Suppose that i) Assumptions A and B hold; ii) the sequence of graphs describing the network is $B$-strongly connected (formally, there exists an integer $B > 0$ such that the graph obtained as the union of the edge sets over every window of $B$ consecutive time instants is strongly connected); iii) condition (9) holds; and iv) the step-size sequence $\alpha[n]$ is chosen so that $\alpha[n] \in (0, 1]$ for all $n$, $\sum_n \alpha[n] = \infty$, and $\sum_n \alpha[n]^2 < \infty$. Then, (a) all the limit points of the sequence $\bar{\mathbf{w}}[n]$ are stationary solutions of (3); and (b) all the sequences asymptotically agree, i.e., $\|\mathbf{w}_k[n] - \bar{\mathbf{w}}[n]\| \rightarrow 0$, for all $k$.
Algorithm 1 is a special case of an extension of the NEXT framework proposed in (sun2016distributed) (i.e., the SONATA algorithm). Then, under the above assumptions on the NN model in (3), the network among agents, and the algorithm’s parameters, all conditions of Theorem 1 in (sun2016distributed) are satisfied, and the convergence result follows. ∎
Data: $\mathbf{w}_k[0]$, the step-size sequence $\{\alpha[n]\}_n$, and the weight matrix $\mathbf{C}$, for all agents $k$. Set $n = 0$.
(S.1) If the local estimates satisfy a global termination criterion: STOP;
(S.2) Local Optimization: Each agent $k$:
(a) computes the solution of its local surrogate problem as:
(b) updates its local variable via the convex combination:
(S.3) Consensus update: Each agent $k$:
(a) collects the auxiliary variables and the gradient-tracking variables from its neighbors,
(b) updates its local estimate as:
(c) updates its gradient-tracking variable as:
(d) updates its local estimate of the other agents' gradients as:
(S.4) $n \leftarrow n + 1$, and go to (S.1).
It is interesting to note that the convergence conditions are particularly loose. With respect to the network connecting the agents, it is enough to ensure connectivity over a finite (but arbitrary) union of time instants.
Step-size sequences satisfying the conditions can be derived easily, either fixed (and sufficiently small), as remarked in (sun2016distributed), or diminishing, e.g., using the following quadratically decreasing rule, which was found particularly effective in our experiments:
where the free constants must be chosen by the user.
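As a sketch, one standard diminishing rule satisfying the summability conditions of Proposition 1 is the recursion $\alpha[n] = \alpha[n-1](1 - \mu\,\alpha[n-1])$; the rule and the constants below are illustrative assumptions, not necessarily the exact choice used in the paper:

```python
def diminishing_stepsizes(alpha0=0.5, mu=0.01, n_iter=1000):
    """Generate a diminishing step-size sequence via the recursion
    alpha[n] = alpha[n-1] * (1 - mu * alpha[n-1]), which is positive,
    monotonically decreasing, and satisfies sum(alpha) = inf while
    sum(alpha^2) < inf. (alpha0 and mu are illustrative constants.)"""
    seq = [alpha0]
    for _ in range(n_iter - 1):
        a = seq[-1]
        seq.append(a * (1.0 - mu * a))
    return seq

alphas = diminishing_stepsizes()
```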
The per-iteration cost of the algorithm is clearly dominated by the solution of the surrogate optimization problem in (12). As we will see in the next section, the flexibility of the framework allows for different choices of the surrogate function, typically impacting the complexity/performance trade-off of the algorithm. The framework can be accelerated in two ways. First, we can parallelize the surrogate optimization in (12); this point is discussed in Section 5. Second, at each iteration $n$, we can consider an inexact solution of the surrogate problem in (12), within a user-specified error bound $\varepsilon_k[n]$. In this case, it can be shown that convergence is still guaranteed, as long as the error sequence satisfies $\varepsilon_k[n] \rightarrow 0$ and $\sum_n \alpha[n]\,\varepsilon_k[n] < \infty$, which establishes a decaying rate of the error over time. For further details, we refer to (di2016next, Theorem 4).
4 Strategies for Distributed NN Training
In this section, we customize the NEXT framework for the solution of several distributed NN training problems. In particular, we focus on the choice of the surrogate functions in (5). From Proposition 1, we know that they must be chosen to satisfy F1-F3. We thus explore two general-purpose strategies that can be used to this end, before analyzing some practical algorithms resulting from the combination of these strategies with common choices of the loss function and the regularization term. Essentially, the aim of the surrogate is to provide a strongly convex approximation of the (non-convex) local cost around the current point, preserving (at least) the first-order information of the original function. The most basic idea is then to linearize the entire $f_k$, irrespective of the actual choice of the loss function, as:
where the last term in (14) is a proximal regularization term (with $\tau > 0$) used to ensure strong convexity; in what follows, we refer to (14) as the full linearization (FL) strategy. In general, the use of the FL strategy leads to surrogate problems in (12) that admit a simple, closed-form solution for most choices of regularization. At the same time, this strategy discards most of the information of $f_k$, keeping only first-order information on its gradient. For this reason, the resulting family of algorithms may exhibit a slow convergence speed, similarly to what happens with (centralized) steepest descent procedures.
To implement a more sophisticated approximation, aimed at preserving the hidden convexity in the problem, we start by noticing that the loss function in (4) is a summation of terms, each one given by the composition of an outer convex function (i.e., the loss $l$) and an inner nonlinear function (i.e., the NN model $f$). A possible choice is then to preserve the convexity of the outer loss, while linearizing the NN model around the current estimate $\mathbf{w}_k[n]$ and a generic input point, as:
Then, the surrogate, which in what follows we will refer to as the partial linearization (PL) strategy, is obtained as:
In the remainder of the section, we consider a set of practical examples resulting from the use of our general framework.
4.1 Case 1: ridge regression cost
As a first example, we consider the use of a squared loss function combined with a classical $\ell_2$ norm regularization on the weights (also known as weight decay in the NN literature (moody1995simple)):
where $\lambda > 0$ is a regularization parameter. Historically, this is the most common training criterion for NNs, and it is still widely used today for regression problems. Being equivalent to a nonlinear ridge regression, we borrow that terminology here.
Let us begin with the FL strategy in (14). Note that, thanks to the specific form of the regularizer, the resulting optimization problem in (12) is already strongly convex, so that we can set $\tau = 0$. Then, using (14) and (17), the surrogate problem in (12) reduces to the minimization of a positive definite quadratic function, which admits a simple closed-form solution, given by:
where, as before, $\nabla f_k[n]$ is used as a simplified notation for $\nabla f_k(\mathbf{w}_k[n])$. Eq. (18) represents the first practical implementation of the framework in Algorithm 1 for distributed NN training. As we can see from (18), the FL strategy discards all information on the global cost function in (3), except for a first-order approximation. Thus, the descent direction in (18) is proportional to the opposite of the gradient of the global cost, reconstructed through the current local estimate of (6) available at node $k$. As we will see in the numerical results, the performance of the resulting distributed scheme is similar to that of a centralized gradient method, sharing its advantages (low computational complexity) and drawbacks (possibly slow convergence).
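A minimal sketch of the resulting FL update (symbol names are ours; we keep a generic proximal weight `tau`, which, as noted above, can be set to zero here since the $\ell_2$ term already ensures strong convexity):

```python
import numpy as np

def fl_ridge_update(w_k, g, tau, lam):
    """Closed-form minimizer of a fully linearized (FL) surrogate with an l2
    regularizer (a sketch consistent with the structure of (18)):
        g^T (w - w_k) + (tau/2) ||w - w_k||^2 + (lam/2) ||w||^2,
    where g stacks the local gradient plus the tracked estimate of the other
    agents' gradients. Setting the gradient to zero gives the update below."""
    return (tau * w_k - g) / (tau + lam)

rng = np.random.default_rng(1)
w_k = rng.normal(size=5)
g = rng.normal(size=5)
tau, lam = 0.1, 0.01
w_star = fl_ridge_update(w_k, g, tau, lam)
```

The optimality of the closed form can be checked by verifying that the gradient of the surrogate vanishes at the returned point.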
We now proceed by considering the PL strategy in (16). To this end, let us introduce the following ‘residual’ terms:
where the weight Jacobian is a vector, of the same size as $\mathbf{w}$, containing the derivatives of the NN output with respect to each single weight parameter (blackwell2012neural); in the general case of multiple outputs, it becomes a matrix with one column per NN output. This quantity measures the influence of a small parameter change on the output of the neural network. (Note that a single back-propagation step per iteration is needed to build the weight Jacobian, as discussed in (bishop2006pattern).) Now, using (16) in (12), it is easy to show that the surrogate problem can again be written as the minimization of a positive definite quadratic form, given by:
As an interesting side note, in the NN literature the matrix in (21) is known as an outer product approximation of the Hessian matrix of $f_k$ (i.e., the error function local to agent $k$), which is obtained by assuming that the error is uncorrelated with the second derivative of the network's output (bishop2006pattern, Section 5.4.2). Finally, solving the resulting minimization problem in (20), the solution of the surrogate problem, to be used in (12), is given by:
Unlike the FL strategy, whose computational complexity is linear in the number of parameters, solving the surrogate problem in this case has cost of order $\mathcal{O}(Q^3)$, where $Q$ is the number of adaptable NN parameters, due to the matrix inversion step. Nevertheless, as we will see in the numerical results, the resulting descent direction provides a very large improvement in terms of convergence speed. Additionally, this strategy benefits from a larger relative speedup when employing the parallelization strategy described in Section 5.
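A sketch of the PL update under the stated surrogate (names and exact constants are our own illustrative choices; the point is the $\mathcal{O}(Q^3)$ linear solve against the outer-product matrix):

```python
import numpy as np

def pl_ridge_update(w_k, J, r, pi, tau, lam):
    """Minimizer of a partially linearized (PL) surrogate for the squared
    loss with l2 regularization (a sketch): J is the weight Jacobian, r the
    residual vector, and pi the tracked sum of the other agents' gradients:
        0.5 ||r - J (w - w_k)||^2 + pi^T (w - w_k)
            + (tau/2) ||w - w_k||^2 + (lam/2) ||w||^2.
    The outer product J^T J plays the role of the approximate Hessian (21)."""
    Q = w_k.size
    A = J.T @ J + (tau + lam) * np.eye(Q)   # the O(Q^3) bottleneck
    b = J.T @ r + (J.T @ J) @ w_k + tau * w_k - pi
    return np.linalg.solve(A, b)

rng = np.random.default_rng(2)
N, Q = 20, 6
J = rng.normal(size=(N, Q))
w_k, pi = rng.normal(size=Q), rng.normal(size=Q)
r = rng.normal(size=N)
w_star = pl_ridge_update(w_k, J, r, pi, tau=0.1, lam=0.01)
```

Again, optimality can be verified by checking that the gradient of the stated surrogate vanishes at the solution.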
4.2 Case 2a: squared error with weight sparsity
As a second example, let us consider again the use of a squared loss term as in (17), combined this time with a sparsity-promoting term given by the $\ell_1$ norm of the weight vector, i.e.,
The $\ell_1$ norm promotes sparsity of the weight vector, acting as a convex approximation of the non-convex, non-differentiable $\ell_0$ norm (tibshirani1996regression). While there exist customized algorithms to solve non-convex regularized problems (ochs2015iteratively), it is common in the NN literature to apply first-order procedures (e.g., stochastic descent with momentum), followed by a thresholding step to obtain sparse solutions (bengio2012practical). In what follows, we illustrate the customization of the NEXT framework to this use case, using both the FL and the PL strategies in (14) and (16), respectively. In the FL case, using (14) and (24), with $\tau > 0$ to ensure strong convexity, the problem in (12) can be written as the minimization of a sum of independent scalar functions, as follows:
After some simple calculations, the solution of the optimization problem in (25) is given by the closed-form expression:
where $\mathcal{S}_{\lambda}(\cdot)$ in (27) is the (component-wise) soft thresholding function.
In the PL case, the surrogate problem takes the form in (28), where the quadratic term is defined by the quantities given in (21) and (22), respectively. This is the first case we encounter where the solution of the optimization step cannot be expressed immediately in closed form. Nevertheless, problem (28) is given by the sum of a strongly convex function and an $\ell_1$ term, and many efficient strategies can be used for its approximate solution, including FISTA (beck2009fast), coordinate descent methods (cevher2014convex), and several others.
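For concreteness, the soft thresholding operator and the resulting FL update with $\ell_1$ regularization can be sketched as follows (symbol names are our own):

```python
import numpy as np

def soft_threshold(x, lam):
    """Component-wise soft thresholding S_lam(x) = sign(x) * max(|x| - lam, 0),
    the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def fl_l1_update(w_k, g, tau, lam):
    """FL update with l1 regularization (a sketch): minimize
        g^T (w - w_k) + (tau/2) ||w - w_k||^2 + lam * ||w||_1,
    whose separable solution is a gradient step followed by soft
    thresholding with threshold lam / tau."""
    return soft_threshold(w_k - g / tau, lam / tau)
```

Components whose magnitude falls below the threshold are set exactly to zero, which is how the sparse structure of the solution emerges.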
4.3 Case 2b: group sparse penalization
The formulation introduced in Sec. 4.2 can be easily extended to handle a group sparse penalization, which allows the selective removal of entire neurons during the training process; see, e.g., (scardapane2017group). The basic idea is to force all the outgoing weights of a neuron to be simultaneously either non-zero or zero, the latter resulting in the direct removal of the neuron itself. Note that a neuron here can correspond to an input neuron, to a neuron in a hidden layer, or to a bias term, thus allowing the removal of input features, hidden neurons, and bias terms from the trained network (see (scardapane2017group) for details). To this end, let us suppose that the neurons are ordered and consecutively indexed. Also, let us denote by $\mathbf{w}_i$ the subset of weights of $\mathbf{w}$ collecting all connections between the $i$th neuron and the neurons in the following layer. Group sparsity can then be imposed by choosing in (3) the following regularization term:
where the weights of the sum are positive constants, typically chosen proportional to $\sqrt{d_i}$, with $d_i$ denoting the dimensionality of $\mathbf{w}_i$.
Let us now analyze the customization of our framework when the FL strategy in (14) is applied. To this end, denote by the subscript $i$ the restriction of a vector to the indexes associated with the $i$th group. Thus, the surrogate problem in (12) writes as:
As we can see from (30), the cost function is a summation of per-neuron costs. Also in this case, even if problem (30) cannot be solved in closed form, it is possible to implement very fast and efficient algorithms for its solution; see, e.g., (schmidt2010graphical; cevher2014convex). Furthermore, in case each agent has a multi-core architecture, the structure of (30) makes the parallelization of the computation straightforward, where each local processor takes care of only a subset of the neurons. Finally, considering the PL strategy, the resulting formulation is equivalent to (28), with the only difference that (29) replaces the $\ell_1$ norm. Again, many of the techniques mentioned before can be used to solve the resulting (group sparse) strongly convex problem.
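The per-neuron structure of (30) leads, for each group, to a block shrinkage (group soft thresholding) step; a minimal sketch of the operator (our own notation):

```python
import numpy as np

def group_soft_threshold(v, lam):
    """Block shrinkage operator, i.e., the proximal operator of lam * ||v||_2:
    the whole block is set to zero when its Euclidean norm is below lam, and
    shrunk radially otherwise. Applied per neuron, it removes all of a
    neuron's outgoing weights at once."""
    norm = np.linalg.norm(v)
    if norm <= lam:
        return np.zeros_like(v)
    return (1.0 - lam / norm) * v
```

Unlike component-wise soft thresholding, the decision to zero out is taken jointly over the whole block, which is what enables the removal of entire neurons.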
4.4 Case 3: cross-entropy loss
As an additional example, let us consider the case of binary classification, where the desired outputs take values in $\{0, 1\}$. Then, assuming the output of the NN is limited between $0$ and $1$, a standard optimization criterion involves the use of the cross-entropy loss function in (3), i.e.:
In this case, using the FL strategy in (14), we obtain the same closed-form solution as in (18) (or (25)) when using the $\ell_2$ (or $\ell_1$) regularization, with the only difference being that each function in (4) now depends on the cross-entropy loss in (31). The PL case, instead, requires some additional care. In particular, although the NN output is bounded, the same is not true for its linear approximation in (15). Simply substituting (15) in (31) might result in undefined values, since the argument of the logarithm must be positive. To tackle this issue, let us notice that in this case the NN model can be written as:
where $\sigma(\cdot)$ is a squashing function (without loss of generality, we assume it to be a sigmoid), and the inner term is the NN output up to (but not including) the activation function of the output neuron. The sigmoid itself is non-convex, but its composition with the cross-entropy loss in (31) is convex; see, e.g., (boyd2004convex). Exploiting such hidden convexity, we can write the surrogate problem in (12), while satisfying conditions F1-F3 of Proposition 1, as follows:
where the linear term is the first-order linearization of the inner NN output, defined as in (15). Also in this case we cannot make any further simplifications; once again, however, the strong convexity of the problem makes it relatively easy to solve (being roughly equivalent to a standard logistic regression).
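The hidden convexity exploited above can be checked directly: composing the cross-entropy with the sigmoid of an affine function of the weights yields the standard (convex) logistic loss. A sketch, with the linearization coefficients chosen arbitrarily for illustration:

```python
import numpy as np

def ce_linearized(w, a, b, d):
    """Cross-entropy loss of a sigmoid applied to a linearized pre-activation
    y(w) = a^T w + b (a and b stand in for the coefficients produced by the
    first-order expansion of the NN output before the output sigmoid).
    Written in the numerically stable form log(1 + e^y) - d * y, this is the
    logistic loss, convex in w since y(w) is affine."""
    y = a @ w + b
    return np.logaddexp(0.0, y) - d * y

rng = np.random.default_rng(3)
a, b, d = rng.normal(size=4), 0.3, 1.0
w1, w2 = rng.normal(size=4), rng.normal(size=4)
```

Convexity can be probed numerically via the midpoint inequality between arbitrary pairs of weight vectors.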
5 Parallelizing the Local Optimization
In this section, we explore how each agent can parallelize the local optimization in (12) when it has access to separate computing machines (e.g., cores, or computers in a cloud). As stated in the introduction, this effectively gives rise to algorithms that are both distributed (across agents) and parallel (inside each agent) in nature. To this end, suppose that the weight vector $\mathbf{w}$ is partitioned into $P$ non-overlapping blocks $\mathbf{w}_p$, $p = 1, \ldots, P$, whose union recovers $\mathbf{w}$ (keeping the original ordering). Note that we use a notation similar to Section 4.3 to identify a single group, i.e., an additional subscript under the variable. For convenience, we also define the tuple of all blocks except the $p$th one, and similarly for all other variables. Additionally, we assume that the regularization term is block separable, i.e., $r(\mathbf{w}) = \sum_{p=1}^{P} r_p(\mathbf{w}_p)$, for some functions $r_p(\cdot)$. This is true for the $\ell_1$ and $\ell_2$ norms, and it holds also for the group sparse norm in (29), provided that the groups are chosen in a consistent way. Then, the key idea is to decompose (12) on a per-core basis, and to solve a sequence of (strongly) convex low-complexity subproblems, whereby all processors of agent $k$ update their blocks in parallel. To this end, we build a surrogate function that additively decomposes over the different cores, i.e.:
where each is any surrogate function satisfying conditions F1-F3 on the variable . It is easy to check that the surrogate in (34) satisfies F1-F3 on the variable . Given (34), each core can then minimize its corresponding term independently of the others, and their solutions can be aggregated to form the final solution vector. In the case of the FL strategy, parallelization is not particularly effective. In fact, the final solution is given by simple aggregation of vectors as in (18), whose computation has linear complexity with respect to the size of , eventually with a point-wise application of the thresholding operator in (27
). However, the (linear) cost of solving the surrogate problems at each core is easily overshadowed by the need to compute gradients via a backpropagation step. On the other hand, parallelization can largely reduce the computational complexity when using the PL strategy. To exemplify the proposed methodology, in the sequel we illustrate how to parallelize the local optimization in the case of a ridge regression cost as in Sec. 4.1. In particular, let us consider the surrogate function in (20). To obtain the surrogate function associated with each core , we fix in (20) all the variables , so that the resulting function depends only on . The surrogate associated with core is then given by:
where is the block (rows and columns) of the matrix in (21) corresponding to the -th partition, whereas takes the rows corresponding to the -th partition and all the columns not associated with . The minimum of (35) is:
, and the overall solution is given by . As we can see from (36), the effect of parallelization is evident: at each iteration , each core has to invert a matrix having (approximately) size of the original one in (23), thus substantially reducing the overall computational burden. Similar arguments can be used to parallelize the formulations in (28), (30), and (33).
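As an illustration, the per-block ridge update described above can be sketched as follows. This is a minimal NumPy sketch under simplifying assumptions (a single agent, Jacobi-style rounds in which every block is refreshed from the previous iterate); the function name and the proximal weight `tau` are illustrative, not taken from the paper's code.

```python
import numpy as np

def parallel_ridge_block_update(H, y, w, lam, tau, blocks):
    """One round of per-block ridge updates.

    Each block c solves a small, strongly convex subproblem with the
    other blocks frozen at their current values; the per-block solves
    are mutually independent, so they could run on separate cores.
    """
    w_new = w.copy()
    for idx in blocks:  # independent: parallelizable across cores
        rest = np.setdiff1d(np.arange(H.shape[1]), idx)
        r = y - H[:, rest] @ w[rest]               # residual with other blocks fixed
        A = H[:, idx].T @ H[:, idx] + (lam + tau) * np.eye(len(idx))
        b = H[:, idx].T @ r + tau * w[idx]         # proximal term stabilizes the rounds
        w_new[idx] = np.linalg.solve(A, b)         # block-sized (small) linear system
    return w_new
```

Iterating this round converges to the full ridge solution, while each core only factorizes a block-sized matrix rather than the full one, in line with the discussion above.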
6 Experimental Validation
In this section, we assess the performance of the proposed method via numerical simulations. We begin by analyzing the test error of the solutions obtained by the algorithms for some representative regression and classification datasets in Sections 6.2 and 6.3, respectively. Then, we consider the convergence behavior of the proposed framework, comparing it with centralized and distributed counterparts, in Section 6.4. In Section 6.5, we describe the speed-up achieved by the parallelization strategy outlined above. Finally, we consider large-scale inference in Section 6.6. Python code to reproduce the experiments is available under an open-source license on the web (https://bitbucket.org/ispamm/parallel-and-distributed-neural-networks).
The code is built upon the Theano (bergstra2010theano) and Lasagne (https://github.com/Lasagne/Lasagne) libraries.
6.1 Experimental setup
In all experiments, the original dataset is normalized so that both inputs and outputs lie in the range. Then, the dataset is partitioned as follows. First, of the dataset is kept separate to test the algorithms. The remaining is partitioned evenly among a randomly generated network of agents. For simplicity, we consider networks with time-invariant, symmetric connectivity, such that every pair of agents has a probability of being connected, with the sole requirement that the overall network is connected. An example of such connectivity is shown in Fig. 1.
We have selected the weight coefficients in (2) using the Metropolis-Hastings strategy (lopes2008diffusion):
where is the degree of node . It is easy to check that this choice of the weight matrix satisfies the convergence conditions of the framework. Missing data is handled by removing the corresponding example. All experiments are repeated times by varying the data partitioning and the NN initialization.
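For concreteness, the combination weights can be built with the standard Metropolis-Hastings rule referenced above. The sketch below is a minimal construction; the function name and the specific variant (assigning 1/(1+max(d_i, d_j)) to each edge) are illustrative assumptions, not reproductions of the paper's code.

```python
import numpy as np

def metropolis_hastings_weights(adj):
    """Build a doubly stochastic weight matrix from an undirected
    adjacency matrix (zero diagonal), using a common
    Metropolis-Hastings variant."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)                      # node degrees
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j] and i != j:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()             # fill diagonal: rows sum to one
    return W
```

Since the edge weights are symmetric and each row sums to one, the resulting matrix is doubly stochastic, which is exactly the property required by the convergence conditions mentioned above.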
Regarding the NN structure, we use hyperbolic tangent nonlinearities in all neurons, except for classification problems, where we use a sigmoid nonlinearity in the output neuron. The weights of the NN are initialized independently at every agent using the normalized strategy described by glorot2010understanding. All algorithms run for a maximum of epochs. In all the figures illustrating the results of the distributed strategies, whenever not explicitly stated, we consider the evolution of the average weight vector as defined in Proposition .
6.2 Results with regression datasets
We start by considering three representative regression datasets, whose characteristics are summarized in Table 1. Boston (also known as the Housing dataset) is the task of predicting the median value of a house based on a set of features describing it (quinlan1993combining). Kin8nm is a member of the kinematics family of datasets (http://www.cs.toronto.edu/~delve/data/kin/desc.html), characterized by high non-linearity and a medium amount of additive noise. Finally, Wine concerns predicting the subjective quality of a (white) wine based on a wide set of chemical features (cortez2009modeling). The fourth and fifth columns in Table 1 describe the parameters of the NN in terms of hidden neurons and regularization coefficients. These parameters are chosen based on an analysis of the previous literature in order to obtain state-of-the-art results. However, we underline that our aim is to compare different solvers for the same NN optimization problem, and for this reason only relative differences in accuracy are of concern.
In particular, we compare our algorithms against five state-of-the-art centralized solvers, in terms of mean-squared error (MSE) over the test data, when solving the global optimization problem with the ridge regression cost in (17). Note that these solvers would not be available in a distributed scenario, and are used only for comparison purposes as optimal benchmarks. Specifically, we consider the following algorithms:
- Gradient descent (GD): a simple first-order steepest descent procedure with a fixed step-size.
- AdaGrad (duchi2011adaptive): differently from GD, this algorithm employs a different adaptive step-size per weight, which evolves according to the relative values of the gradients' updates.
- RMSProp: equivalent to an AdaDelta variant (zeiler2012adadelta), it also considers adaptive, independent step-sizes; however, they are adapted based on shorter time windows in order to avoid exponentially decreasing schedules.
- Conjugate gradient (CG): the Polak-Ribière variant of the nonlinear conjugate gradient algorithm (nocedal2006numerical), implemented in the SciPy library (https://scipy.org/).
- L-BFGS: a limited-memory version of the second-order BFGS algorithm (byrd1995limited), keeping track of a compact approximation to the full Hessian matrix, also implemented in the SciPy library.
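To make the adaptive step-size rules above concrete, here is a minimal sketch of one AdaGrad-style and one RMSProp-style update. These are simplified textbook forms under illustrative defaults, not the exact library implementations used in the experiments.

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: each weight gets its own step-size,
    shrinking with the accumulated sum of squared gradients."""
    accum = accum + grad ** 2
    w = w - lr * grad / (np.sqrt(accum) + eps)
    return w, accum

def rmsprop_step(w, grad, accum, lr=0.01, rho=0.9, eps=1e-8):
    """One RMSProp update: the full sum is replaced by an exponential
    moving average, so step-sizes do not decay indefinitely."""
    accum = rho * accum + (1 - rho) * grad ** 2
    w = w - lr * grad / (np.sqrt(accum) + eps)
    return w, accum
```

The only structural difference is the accumulator: AdaGrad's grows monotonically (yielding an exponentially decreasing schedule), while RMSProp's forgets old gradients over a short time window, as described above.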
In addition, we consider the behavior of a centralized implementation of the PL strategy in (23), denoted as PL-SCA, resulting in a novel centralized algorithm. In particular, assuming all data is available at a single location, we can consider a centralized equivalent of (21) and (22) as:
The centralized implementation of the FL strategy is almost equivalent to GD, so we do not consider it separately. For the distributed algorithms, we consider both the PL strategy in (23), denoted as PL-NEXT, and the FL strategy in (18), denoted as FL-NEXT. For PL-SCA, PL-NEXT and FL-NEXT we use the quadratically decreasing step-size sequence defined in (13). To have a fair comparison, the parameters of the step-size sequence in (13) were tuned by hand in order to select the fastest convergence behavior for all algorithms.
The results on this set of experiments are provided in Table 2
, both in terms of the mean and the standard deviation. Several conclusions can be drawn from the table. Among the centralized algorithms, L-BFGS, being a second-order algorithm, obtains the best accuracies, and it is matched only by CG in the Boston case (in the next section we show some plots of the convergence behavior of the different algorithms). Interestingly, PL-SCA is able to match L-BFGS in all cases, complementing our previous observation that the matrix in (38) acts as an approximation of the Hessian matrix. For the distributed algorithms, we see a similar distinction between FL-NEXT and PL-NEXT. Specifically, PL-NEXT achieves accuracies comparable to L-BFGS, while FL-NEXT obtains errors comparable to GD and AdaGrad. Clearly, the improved convergence comes at the cost of a higher computational burden (due to the need to invert a matrix in (23)), in line with the equivalent difference in the centralized case. Summarizing, FL-NEXT and PL-NEXT represent viable algorithms for distributed scenarios, providing a trade-off between convergence speed and computational requirements, and matching the respective centralized implementations that are not viable in the distributed setting treated in this paper. Importantly, this is achieved with a minimal (or non-existent) increase in terms of variance. We defer a statistical analysis of the results to the next section, in order to also consider the classification datasets.
6.3 Results with classification datasets
In this section, we analyze the performance of the distributed algorithms when applied to two classification problems, whose characteristics are briefly summarized in Table 3. Wisconsin is a medical classification task, aimed at separating cancerous cells from non-cancerous ones based on several features describing the cell nucleus (https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)). The Cardiotocography (CTG) dataset is another clinical problem, where we wish to detect suspect/pathological fetal states from several biometric signals (https://archive.ics.uci.edu/ml/datasets/Cardiotocography). In this case, we solve the global optimization problem with the cross-entropy loss in (31) and a squared regularization term. We analyze the behavior of both the FL strategy and the PL strategy when compared to the state-of-the-art solvers described in the previous section. For PL-NEXT, the local surrogate problem in (33) is solved with AdaGrad, run for a maximum of iterations (with an initial step-size of ), or until the gradient norm falls below a fixed threshold of . For the local optimization at each agent, we perform a ‘warm start’ from the current estimate .
The overall results are given in Table 4 in terms of misclassification rate. We see that, in this case, first-order algorithms are generally competitive, with the GD solver obtaining the best accuracy among the centralized solvers for the Wisconsin dataset, and CG/L-BFGS obtaining slightly better results in the CTG case. Nevertheless, the distributed strategies are again able to obtain state-of-the-art results, with PL-NEXT consistently obtaining the lowest misclassification rate, and FL-NEXT ranging close to AdaGrad and RMSProp. In order to formalize the intuition that PL-NEXT generally converges to a better minimum than FL-NEXT, we perform a Wilcoxon signed-rank test (demsar2006statistical) on the results over both the regression and classification datasets. The difference is found to be significant with a confidence value (although the number of datasets under consideration is relatively small). We can reasonably conclude that PL-NEXT is the better choice in terms of accuracy, whenever the agents can cope with the increased computational cost.
6.4 Analysis of convergence
In a distributed setting, the final accuracy is not the only parameter of interest. We are also concerned with how fast this accuracy is obtained, because the convergence speed has a direct impact on the communication burden over the network of agents. As mentioned in the introduction, in the case of general non-differentiable regularizers , there is no ready-to-use alternative for comparison with our proposed algorithms. However, in the specific case where the regularization function satisfies assumption B1, we can easily adapt the framework introduced in bianchi2013convergence, resulting in a simple method that we denote as the distributed gradient (DistGrad) algorithm. Similarly to the NEXT framework, DistGrad alternates between a local optimization phase and a communication phase. In the optimization phase, each agent iteratively updates its own estimate according to a local gradient descent step as follows:
where is the step-size sequence. In the communication phase, the local estimates are combined similarly to (8). DistGrad can be seen as a simplified version of the FL strategy, where we do not consider the dynamic consensus step (i.e., Step 3 of NEXT). For fairness of comparison, we use the step-size rule in (13), and the same strategy for selecting the combination coefficients in (2).
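A minimal sketch of the DistGrad iteration on a toy problem is given below: each agent takes a local gradient step and the estimates are then combined with the neighbors' ones through a doubly stochastic weight matrix. The function name, the step-size schedule, and the toy quadratic losses are illustrative assumptions.

```python
import numpy as np

def distgrad(local_grads, W, w0, steps=500):
    """DistGrad sketch: each agent takes a local gradient step, then
    combines its estimate with its neighbors' via the weight matrix W."""
    n = W.shape[0]
    X = np.tile(w0, (n, 1))              # one estimate per agent (rows)
    for t in range(steps):
        alpha = 1.0 / (t + 10)           # diminishing step-size sequence
        G = np.stack([g(X[i]) for i, g in enumerate(local_grads)])
        X = W @ (X - alpha * G)          # local step + consensus combination
    return X
```

On agents holding quadratic losses f_i(w) = ||w - c_i||^2, all local estimates converge to the minimizer of the sum, i.e., the mean of the c_i, illustrating the consensus-plus-optimization behavior discussed above.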
In Figs. (a)-(b) we plot the evolution of the global cost function in (3) for FL-NEXT, PL-NEXT, DistGrad, and a few representative centralized solvers on two different datasets. For improved readability, the behavior of the centralized solvers is depicted using dashed lines, while the distributed algorithms are shown with solid lines. The results mirror what we have already discussed for the final test error: PL-NEXT is able to consistently track the convergence rate of L-BFGS, while FL-NEXT achieves results comparable to (centralized) first-order procedures. By contrast, the DistGrad algorithm is slower and, for a given number of epochs, exhibits a very large gap compared to the other methods.
Another performance metric of interest is the transient behavior of the test error in terms of the amount of scalar values that are exchanged among agents in the network. We plot this metric for the three distributed algorithms in Figs. (c)-(d), where the horizontal axis is shown with a logarithmic scale. We notice that DistGrad requires exactly half as many scalars to be exchanged at every iteration (since it does not rely on the dynamic consensus to track the average gradient). Nevertheless, from Figs. (c)-(d), we can see that both PL-NEXT and FL-NEXT reach lower errors than DistGrad for any given amount of exchanged scalars, showing their superior efficiency in terms of overall communication burden. PL-NEXT performs particularly well, requiring only a very small amount of communication to achieve an error close to the optimal one.
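The factor-of-two difference noted above can be made explicit with a small accounting helper (a hypothetical function for illustration): NEXT-style schemes exchange both the local estimate and the gradient-tracking variable with each neighbor, while DistGrad exchanges the estimate only.

```python
def scalars_per_iteration(d, n_neighbors, tracks_gradient):
    """Scalars an agent transmits per iteration: the d-dimensional
    estimate, plus (for NEXT-style schemes with dynamic consensus)
    the d-dimensional gradient-tracking variable, per neighbor."""
    per_neighbor = d * (2 if tracks_gradient else 1)
    return per_neighbor * n_neighbors
```

For example, with d = 100 weights and 3 neighbors, a NEXT-style agent sends 600 scalars per iteration versus 300 for DistGrad, matching the "exactly half" observation above.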
6.5 Exploiting parallelization
Next, we investigate the speed-up obtained by parallelizing the local optimization at each agent. We consider again the Boston and Kin8nm datasets, but we vary the number of (local) processors available at every agent in the range , with . The relative speedup with respect to the baseline is shown in Figs. (a)-(b). We see that the speedup is roughly linear in the number of available processors, so that in the case we only need of the time for Boston, and for Kin8nm. Additionally, in Figs. (c)-(d), we visualize the evolution of the overall cost function for , and . From the figures, we notice that the improvement in training time is obtained with only a limited effect on the convergence behavior, where in the worst case we observe only a very small (or no) degradation.
6.6 Experiment on a large-scale dataset
Before concluding this experimental section, we briefly discuss the important point of large-scale distributed learning, i.e., performing distributed inference whenever is very large for the majority of the agents. To this end, we consider the YearPredictionMSD dataset (bertin2011million), which is one of the largest regression datasets available in the UCI repository. The task is to predict the year of release of a song starting from audio features. There are examples for training, and examples for testing (of different authors). As before, we preprocess the input and output values in , and we consider a NN with two hidden layers having, respectively, and neurons. We partition the training data among different agents, and we compare PL-NEXT with AdaGrad. We choose AdaGrad for two main reasons: it was found to be extremely fast in the previous section, and it can be used together with stochastic updates over small batches of data in order to handle the large-scale dataset. Specifically, at every iteration AdaGrad is updated with mini-batches of elements, and accuracy is computed after a complete pass over the training dataset. The regularization is chosen as . Step-sizes are chosen in order to guarantee a smooth convergence behavior.
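A minimal sketch of a mini-batch AdaGrad loop of the kind used as a baseline here is given below, on a least-squares loss as a simplified stand-in for the NN objective; the function name and default hyperparameters are illustrative assumptions.

```python
import numpy as np

def minibatch_adagrad(X, y, w, lr=0.5, batch=32, epochs=5, eps=1e-8):
    """Mini-batch AdaGrad on a least-squares loss: stochastic gradients
    over small batches combined with per-weight adaptive step-sizes."""
    accum = np.zeros_like(w)
    n = X.shape[0]
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        order = rng.permutation(n)                 # reshuffle each epoch
        for start in range(0, n, batch):
            idx = order[start:start + batch]
            g = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            accum += g ** 2                        # accumulated squared gradients
            w -= lr * g / (np.sqrt(accum) + eps)   # per-weight adaptive step
    return w
```

Each update touches only a small batch, which is what makes this style of solver practical for large-scale datasets such as YearPredictionMSD.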
The evolution of the global loss function in (3) is shown in Fig. 4. Despite AdaGrad making several stochastic update steps at every iteration, PL-NEXT achieves a comparable convergence behavior, with a minimum loss value which is slightly better due to the noise-free gradient evaluations. Both algorithms also achieve a similar mean squared error on the independent test set, which is around . For comparison, the average MSE of a support vector algorithm is around (ho2012large).
This example highlights two important aspects of large-scale inference over networks. First, what is considered a challenging benchmark in a centralized environment may be comparatively simple in a distributed experiment, since the training data must be partitioned over several agents. In this case, for example, the original half a million training points must be partitioned over agents, so that each agent only has to deal with
training points. Thus, there is a need to design more elaborate benchmarks to test the capabilities of the algorithms in larger settings. Secondly, properly handling such datasets will require the development of stochastic updates at every agent, paralleling the stochastic algorithms commonly used in the centralized case and in the deep learning literature. Devising such stochastic algorithms for distributed, non-convex problems remains an open problem in the literature, and we highlight it here as the main line of research for future investigations.
7 Conclusions and Future Works
In this paper, we have investigated the problem of training a NN model in a distributed scenario, where multiple agents have only partial knowledge of the training dataset. To this end, we have proposed a provably convergent procedure, which builds exclusively on local optimization steps and one-hop communication steps. The method can be customized to several typical error functions and regularization terms. We have also described an immediate way to parallelize the local optimization phase across multiple processors/machines available at each agent, with a limited impact on the convergence behavior.
One immediate extension of the framework presented here is to handle non-convex regularization terms, which are generally challenging to treat in practice. One example is the sample variance penalization (maurer2009empirical), which is defined in terms of the NN output. Additional extensions may consider the presence of non-differentiable points in the NN model (e.g., when using ReLU activation functions), stochastic updates of the surrogate functions, and online formulations where new data arrives in a streaming fashion, as in distributed filtering (sayed2014adaptive). Further interesting results may derive from the literature on distributed constraint optimization problems (DCOP), which deals with distributed decision-making problems where the decision variables are separated among the different agents (modi2005adopt; rogers2011bounded). Finally, we are interested in testing our framework on real-world applications such as, e.g., multimedia classification and chaotic prediction tasks.