Convergence Analysis of a Cooperative Diffusion Gauss-Newton Strategy

11/29/2018
by   Mou Wu, et al.
Central China Normal University
NetEase, Inc

In this paper, we investigate the convergence performance of a cooperative diffusion Gauss-Newton (GN) method, which is widely used to solve nonlinear least squares (NLLS) problems due to its low computation cost compared with Newton's method. This diffusion GN collects the diversity of temporal-spatial information over the network, which is used in the local updates. To address the challenges in convergence analysis, we first form a global recursion relation over the spatial and temporal scales, since the traditional GN is a time-iterative method and the network-wide NLLS problem needs to be solved. Second, the derived recursion, which relates the network-wide deviations of two successive iterations, is ambiguous due to the uncertainty of the descent discrepancy in the GN update step between the cooperative and non-cooperative versions. Thus, an important part of this work is to derive boundedness conditions for this discrepancy. Finally, based on the temporal-spatial recursion relation and the steady-state equilibria theory for discrete dynamical systems, we obtain sufficient conditions for algorithm convergence, which require good initial guesses, reasonable step-size values and network connectivity. This analysis provides a guideline for applications based on the diffusion GN method.

I Introduction

The Gauss-Newton (GN) method has found wide application, for example in deep learning and neural networks [1, 2], and in parameter estimation for networked systems [3, 4, 5]. Derived from Newton's method, the GN algorithm discards the second-order terms in the computation of the Hessian for small-residual NLLS problems, thereby saving computation. The amount of computation can be reduced further through additional mathematical treatment. To simplify the computation of the first derivative of the objective function, the perturbed GN method was proposed in [6], where a perturbed derivative substitutes for the original one. The truncated GN method [7] implements an inexact update instead of an exact one. The truncated-perturbed GN method [7] integrates both advantages into the update step.

Many scenarios can be modeled as NLLS problems whose solution depends on the performance of GN, such as computer vision [8], image alignment and reconstruction [9, 10], network-based localization [11, 12], signal processing for direction-of-arrival and frequency estimation [13], logistic regression [14] and power system state estimation [4, 15, 16]. Despite this widespread utility, the original GN method is difficult to exploit as a fully cooperative scheme for a distributed network, since its iteration rule involves a matrix inverse, which is ideally suited to a centralized implementation. However, for well-known advantages such as load balancing and robustness, a distributed algorithm with improved performance is preferred.

The purpose of this work is to analyze the convergence of a cooperative diffusion GN strategy over a distributed network, where every node senses temporal data that varies over the spatial domain. Several diffusion GN methods [17, 18] have been proposed for solving the localization problem in wireless sensor networks. However, they are centralized in nature and implemented in a non-cooperative way, in which the local intermediate estimates are not shared over the diffusion network.

Notation: The operator (·)^T denotes the transpose of a matrix or vector, and (·)^{-1} denotes the inverse of a non-singular matrix. Capital letters denote matrices, while small letters denote vectors or scalars. The Euclidean norm of a vector is written ‖·‖; the 2-norm and Frobenius norm of a matrix are denoted ‖·‖_2 and ‖·‖_F, respectively. I and 1 denote the identity matrix and the all-ones vector, respectively. Subscripts index nodes, and superscripts index time.

II Description of the cooperative diffusion Gauss-Newton solution

II-A Centralized solution

For an adaptive network represented by a set of nodes, we would like to estimate an unknown parameter vector belonging to a closed convex set. The global cost function, assumed continuous and differentiable throughout the network, is formed from the individual cost functions, where the individual cost function associated with each node is built by collecting the measurements from the related events. The estimation problem can be formulated as

(1)

After rewriting the cost in residual form, the objective of each node in the network is to seek a vector that solves the following nonlinear least squares (NLLS) problem of the form

(2)

The GN method is well recognized for solving NLLS problems. Let us consider a fusion center (FC) that can communicate with all nodes in the network. Given a good initial guess, a centralized scheme can be implemented on the FC based on the GN update rule in an iterative way

(3)

where the update moves the current estimate along a GN descent direction, with a step-size parameter chosen to ensure that the new iterate is nearer to a stationary point than the previous one.
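For concreteness, the classical GN recursion has the following well-known form; the symbols x, d, α, J and r here are standard notation assumed for illustration, not necessarily the paper's own:

```latex
x^{i+1} = x^{i} + \alpha^{i} d^{i},
\qquad
d^{i} = -\left( J(x^{i})^{T} J(x^{i}) \right)^{-1} J(x^{i})^{T} r(x^{i}),
```

where r is the stacked residual vector, J its Jacobian, and α^{i} the step size. Replacing the exact Hessian J^T J + Σ_j r_j ∇² r_j by the approximation J^T J is precisely the computational saving over Newton's method noted in the introduction.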

In this paper, we adopt the following assumption for the above optimization problem.

Assumption 1.

(1) The stationary points that satisfy the first-order condition

always exist, where the matrix in that condition is the Jacobian of the residual function, with the corresponding size and entries.

(2) Denote the minimum and maximum eigenvalues of a matrix by the corresponding notations. For all nodes and all admissible points, let

where the stated uniform eigenvalue bounds hold.

Under Assumption 1, the approximate Hessian is positive definite. Thereby, a local minimizer belonging to the set of stationary points always exists [19, 20]. Thus, the descent direction of the GN update is written as

(4)

By rewriting

(5)

and defining

(6)

we get

(7)

Therefore, we have the following GN iteration update

(8)

To successfully implement (8) in a centralized way, we assume that the FC can communicate with all nodes over the network and that the same initial estimate is given to all. In the centralized GN algorithm, the local computation results from each node are aggregated by the FC to obtain the new estimate based on (8). The estimate is then returned to all nodes, and the process repeats until an appropriate termination condition is satisfied, for example a predefined minimum norm decline of successive estimates or a maximum number of iterations. Thus, the centralized GN actually includes a step of diffusing the new estimate from the FC to the individual nodes.
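The centralized flow described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the closed-form Jacobian and the toy exponential-fit problem are all assumptions.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, alpha=1.0, tol=1e-8, max_iter=50):
    """Minimal centralized Gauss-Newton sketch (names are illustrative).

    residual: callable x -> r(x), the stacked residuals from all nodes
    jacobian: callable x -> J(x), the Jacobian of r at x
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        # GN descent direction: solve (J^T J) d = -J^T r
        d = np.linalg.solve(J.T @ J, -J.T @ r)
        x_new = x + alpha * d
        # terminate on a minimum norm decline between successive estimates
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy NLLS: fit y = exp(a * t) to noiseless data with true a = 0.5
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.5 * t)
res = lambda x: np.exp(x[0] * t) - y
jac = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
a_hat = gauss_newton(res, jac, x0=[0.0])
```

Solving the normal equations with np.linalg.solve avoids forming the explicit matrix inverse that appears in the analytic update rule.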

In this paper, we adopt a constant step size for the subsequent development and analysis.

II-B Diffusion Gauss-Newton

Consider the adaptive network, where at each time any node receives a set of estimates from all of its 1-hop neighbors, including itself. The local estimates are then combined in a weighted way, denoted by

(9)

where the weighting coefficient couples each pair of neighboring nodes, and the conditions

(10)

are satisfied.

Once the aggregate estimate is obtained as the local weighted estimate, any node in the network can implement the GN update step as follows:

(11)

where we define

(12)

and

(13)

By removing the aggregation step of the diffusion GN algorithm, we obtain a non-cooperative diffusion GN algorithm, in which each node in the network acts as an FC and implements the centralized GN by communicating with all of its immediate neighbors. Its GN update step is given by

(14)

where we define

(15)

and

(16)

Note that the arguments in (12), (13), (15) and (16) show the main difference between the cooperative and non-cooperative algorithms.
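The combine-then-update structure of (9) and (11) can be sketched as below; all names are illustrative assumptions, and the neighborhood sums stand in for the aggregated quantities (12) and (13).

```python
import numpy as np

def diffusion_gn_step(estimates, neighbors, C, residuals, jacobians, alpha):
    """One combine-then-update round of a diffusion GN sketch (names illustrative).

    estimates:  dict node -> current local estimate
    neighbors:  dict node -> list of 1-hop neighbors (including the node itself)
    C:          combination weights with rows summing to one, as in (10)
    residuals:  dict node -> callable r_l(x)
    jacobians:  dict node -> callable J_l(x)
    """
    new_estimates = {}
    for k in neighbors:
        # Combine step (9): weighted aggregate of the neighbors' estimates
        phi = sum(C[k][l] * estimates[l] for l in neighbors[k])
        # Update step (11): GN step using neighborhood data evaluated at phi
        JtJ = sum(jacobians[l](phi).T @ jacobians[l](phi) for l in neighbors[k])
        Jtr = sum(jacobians[l](phi).T @ residuals[l](phi) for l in neighbors[k])
        new_estimates[k] = phi - alpha * np.linalg.solve(JtJ, Jtr)
    return new_estimates

# Toy scalar example: two fully connected nodes with linear residuals
# r_0(x) = x - 1 and r_1(x) = x - 3 (targets chosen for illustration).
nbrs = {0: [0, 1], 1: [0, 1]}
C = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}
est = {0: np.array([0.0]), 1: np.array([2.0])}
res = {0: lambda x: np.array([x[0] - 1.0]), 1: lambda x: np.array([x[0] - 3.0])}
jac = {k: (lambda x: np.array([[1.0]])) for k in (0, 1)}
est = diffusion_gn_step(est, nbrs, C, res, jac, alpha=1.0)
```

The non-cooperative update (14)-(16) would evaluate the same neighborhood sums at the node's own previous estimate instead of the weighted aggregate, which is exactly the difference in arguments noted above.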

The question that remains is how well the diffusion GN algorithm performs in terms of its expected convergence behavior. First, what are sufficient conditions for the convergence of the diffusion GN algorithm? Second, does the diffusion GN algorithm converge better than its non-cooperative counterpart? In other words, what are the benefits of cooperation? The following analysis and simulations answer these questions.

III Convergence analysis

III-A Assumptions and data model

To proceed with the analysis, several reasonable assumptions need to be given, as is commonly done in the literature [4, 21].

Assumption 2.

(1) is bounded for all near , and satisfies

and

where denotes the minimum value of when evaluated at .

(2) For all and , let

and

where

(3) Both and are Lipschitz continuous on with Lipschitz constant such that

and

for all . Furthermore, we have the following results [22]

and

where and are the corresponding Lipschitz constants.

In addition, the study of the local convergence behavior needs to take the global view of the network, since the performance of an individual node depends on the whole network, including the cooperation rule and the network topology. Thus, we introduce the global quantities

where

and

where is a block diagonal matrix whose entries are those of the column vector .

An aggregate matrix with non-negative real entries can be given, redefined by the following conditions

(17)

Conditions (17) indicate that the sum of all entries in each row of the matrix is one, while each entry shows the degree of closeness between the corresponding pair of nodes. We will see the influence of the choice of this matrix on the performance of the resulting algorithms in later simulations.
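One common way to construct a combination matrix satisfying row-sum-one conditions such as (17) is the Metropolis rule sketched below; this is a standard choice from the diffusion-adaptation literature, not necessarily the rule used in this paper.

```python
import numpy as np

def metropolis_weights(A):
    """Build a row-stochastic combination matrix from a symmetric 0/1
    adjacency matrix A (no self-loops). Function name is illustrative."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    C = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            if k != l and A[k, l]:
                # off-diagonal weight depends on the larger of the two degrees
                C[k, l] = 1.0 / (1.0 + max(deg[k], deg[l]))
        C[k, k] = 1.0 - C[k].sum()  # make each row sum to one
    return C

# Example: a 3-node path graph 0 - 1 - 2
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
C = metropolis_weights(A)
```

An extended aggregate matrix of the kind introduced below can then be formed as np.kron(C, np.eye(M)) for an M-dimensional parameter vector.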

Similarly, we introduce an adjacency matrix whose element is 1 if the corresponding two nodes are linked, and 0 otherwise.

We also introduce the extended aggregate matrix

where is the Kronecker product operation and is the identity matrix.

III-B Temporal-spatial recursion relation

The temporal-spatial relation across the network needs to be considered as a starting point for the convergence analysis. First, the diffusion strategy leads to frequent spatial interaction between neighborhoods, so each node is influenced both by its local information and by spatial information from its neighbors. Second, the iterative nature of the method means that the estimates and the locally collected information at each node are time-variant.

To begin with (9), we have

(18)

Using (18), we rewrite the local diffusion GN update step (11) as a global representation

(19)

Accordingly, we get the global non-cooperative GN update step

(20)

Subtracting on both sides of the equation (19) and embedding the equation (20), we get

(21)

Using the triangle inequality for vectors, we get the following recursion

(22)

The inequality (22) can be regarded as a temporal-spatial recursion relation, where the superscript and the subscript reflect the evolution of the diffusion GN algorithm along the temporal and spatial dimensions, respectively. It establishes the relation between the diffusion GN and non-cooperative algorithms from a global perspective.

For the first term of the right side of (22), we have

(23)

where we use the bound that follows from the property of the aggregate matrix.

For the second term of the right side of (22), we have the following conclusion.

Lemma 1. Let Assumptions 1 and 2 hold. The norm of global vector satisfies the following recursion

(24)

where

(25)

Proof: See Appendix A.

Given (23) and (24), we rewrite the temporal-spatial recursion relation (22) as

(26)

Given the above, the left side of (26) is the network deviation at the current time, while the right side can be related to the network deviation at the previous time if we can confirm that the remaining term shares the same character or is bounded by a given constant. Then we can establish the relation of the network deviation between two successive times in diffusion GN.

III-C Boundedness of the descent discrepancy

The global quantity of interest denotes the GN descent discrepancy over the network between the cooperative and non-cooperative modes. To determine the boundedness of this discrepancy, we first evaluate its individual entries.

To begin the process, we write the entry as

(27)

Because of the matrix inverse operator, we introduce two quantities

(28)

To reduce the impact of the inverse operator on our analysis, the well-known matrix expansion formula [21] will be used frequently, namely

(29)

which holds for any square matrix whose norm is less than one.
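The expansion formula (29) is the matrix Neumann series. Written for (I - X)^{-1} (the paper may state it with the opposite sign on X, which is equivalent), it can be checked numerically:

```python
import numpy as np

# Neumann series: (I - X)^{-1} = sum_{k>=0} X^k  whenever ||X||_2 < 1.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
X *= 0.4 / np.linalg.norm(X, 2)  # rescale so the spectral norm is 0.4 < 1

exact = np.linalg.inv(np.eye(4) - X)
# truncate the series; the tail decays like ||X||^k, so 60 terms suffice here
approx = sum(np.linalg.matrix_power(X, k) for k in range(60))
err = np.linalg.norm(exact - approx, 2)
```

Truncating the series after a few terms is what lets the analysis replace the matrix inverse by a tractable polynomial expression.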

From (9), the aggregate estimate is a convex combination of the neighbors' estimates. Thus, Assumptions 1 and 2 hold for it as well.

Then we have

(30)

and

(31)

From (30) and (31), both quantities depend on the same term, whose boundedness we now study. Before that, we define a vector

which is the corresponding row of the aggregate matrix.

Evaluating the norm of , we get

(32)

The block quantity represents the estimate differences across the network at the given time and is written as

whose individual entry is a vector.

For the norms of the quantities above, we have the following lemmas.

Lemma 2. Let Assumptions 1 and 2 hold. The estimate difference between two nodes under the non-cooperative GN update (14) is bounded by

(33)

where

(34)
(35)
(36)

where the first count denotes the number of nodes that lie in both neighborhoods, and the second the number of nodes that lie in one neighborhood but not in the other.

Proof: See Appendix B.

Lemma 3. Let Assumptions 1 and 2 hold. The estimate difference between two nodes under the diffusion GN update (11) is bounded by

(37)

and

(38)

always holds under the sufficient condition

(39)

where the constants are given by (35), (36), (15) and (28), respectively.

Proof: See Appendix D.

Condition (39) means that any two nodes in the network have at least one common neighboring node, which is more likely to be satisfied in a small and dense network. However, the condition can be relaxed in practice by allowing all nodes to be linked over single or multiple hops, so that it also holds for large-scale networks. Thus, it is reasonable that the sufficient condition for applying the expansion formula in (27) always holds under Lemma 2.
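The common-neighbor requirement behind condition (39) can be checked directly from the adjacency matrix: counting each node as its own neighbor, the (k, l) entry of the squared adjacency matrix counts the common neighbors of nodes k and l. A minimal sketch (function name assumed):

```python
import numpy as np

def every_pair_shares_neighbor(A):
    """Check that any two nodes have at least one common neighbor,
    counting a node as its own neighbor (self-loops added)."""
    A = A + np.eye(A.shape[0])
    common = (A @ A) > 0  # entry (k, l) counts common neighbors of k and l
    return bool(common.all())

# A 3-node path satisfies the condition; a 4-node path does not,
# since its two end nodes share no common neighbor.
path3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
path4 = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
                  [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
```

The failure on the longer path matches the remark that small, dense networks satisfy the condition more easily.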

Thus, applying the expansion formula (29) and the norm operator to (27), we get

(40)

where the last equality comes from the obtained results, including (105), (107), (112) and (113), together with the definitions (34) and (114) (see Appendices C and D). From (96), we know that the relevant quantity is bounded and depends on the network topology.

Finally, we obtain the boundedness conclusion as follows

(41)

III-D Sufficient conditions for system convergence

Given the constant that satisfies (41), we rewrite the global recursion relation (26) as

(42)

which can be regarded as a nonlinear discrete dynamical system. With a change of variable, we simplify the notation of (42) to the general form

(43)

whose steady-state equilibrium is a level [23] that solves

(44)

With this expression it is easy to see that the global error is determined by the dynamical system. Thus, guaranteeing the stability of the system is needed.

Solving (44), we get two steady-state equilibrium points as follows

(45)

and

(46)

with the condition

(47)

An equilibrium point of the dynamical system (43) is stable if and only if [23]

(48)

where is the first order derivative of .

Thus, we know that the first equilibrium point is unstable, since

(49)

while the second can be stable if

(50)

holds.
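The stability test (48), which requires the magnitude of the map's derivative at an equilibrium to be below one, can be illustrated with a toy quadratic map of the same flavor as (43); the coefficients below are illustrative assumptions, not the paper's constants.

```python
def g(y, a=1.0, b=0.5):
    """Illustrative quadratic map y -> a*y^2 + b*y (example coefficients,
    not values from the paper). Fixed points: 0 and (1 - b)/a = 0.5."""
    return a * y * y + b * y

def iterate(y0, n):
    """Apply the map n times starting from y0."""
    y = y0
    for _ in range(n):
        y = g(y)
    return y

# Since g'(y) = 2*a*y + b, we have |g'(0)| = 0.5 < 1 (stable equilibrium)
# and |g'(0.5)| = 1.5 > 1 (unstable equilibrium): trajectories started
# below the unstable point contract to 0, those started beyond it diverge.
```

The basin-of-attraction behavior mirrors the paper's conclusion that convergence requires good initial guesses: only trajectories whose starting deviation lies below the unstable equilibrium converge.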

Because of

(51)

from (50), we get the following constraints

(52)

and