# Expectation Consistent Approximate Inference: Generalizations and Convergence

Approximations of loopy belief propagation, including expectation propagation and approximate message passing, have attracted considerable attention for probabilistic inference problems. This paper proposes and analyzes a generalization of Opper and Winther's expectation consistent (EC) approximate inference method. The proposed method, called Generalized Expectation Consistency (GEC), can be applied to both maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimation. Here we characterize its fixed points, convergence, and performance relative to the replica prediction of optimality.


## I Introduction

Consider the problem of estimating a random vector x ∈ R^N from observations y under the posterior density

 p(x|y) = Z^{−1} exp[−f_1(x) − f_2(x)],   (1)

where Z is a normalization constant (sometimes called the partition function) and f_1, f_2 are penalty functions. Although both Z and the penalties may depend on y, our notation suppresses this dependence. We are interested in two problems:

• MAP estimation: In maximum a posteriori (MAP) estimation, we wish to find the point estimate x̂ = argmax_x p(x|y), equivalently stated as

 x̂ = argmin_x [f_1(x) + f_2(x)].   (2)

• MMSE estimation and approximate inference: In minimum mean-squared error (MMSE) estimation, we wish to compute the posterior mean x̂ = E[x|y], and perhaps also approximations of the posterior covariance or of the marginal posterior densities p(x_n|y).

For the MAP estimation problem (2), the separable structure of the objective function can be exploited by one of several optimization methods, including variants of the iterative shrinkage and thresholding algorithm (ISTA) [1, 2, 3, 4, 5, 6] and the alternating direction method of multipliers (ADMM) [7, 8, 9, 10].

The MMSE and inference problems, however, are more difficult [11], even in the case of convex penalties [12, 13]. In recent years, there has been considerable interest in approximations of loopy belief propagation [14, 15] for both MMSE estimation and approximate inference. These methods include variants of expectation propagation (EP) [16, 17, 18] and, more recently, approximate message passing (AMP) [19, 20, 21, 22, 23]. For a posterior of the form (1), both EP and AMP reduce the inference problem to a sequence of problems involving only one penalty at a time. These “local” problems are computationally tractable under suitable penalties. Moreover, in certain large random instances, these methods are provably optimal [24], [20, 25]. Due to their generality, these methods have been successfully applied to a wide range of problems, e.g., [26, 27, 28, 29, 30, 31, 32, 33].

Despite their computational simplicity, the convergence and accuracy of these methods are not fully understood. This work analyzes one promising EP-type method known as expectation consistent approximate inference (EC), originally proposed by Opper and Winther in [17]. As shown in [18], EC can be interpreted as a parallel form of the EP method from [16], and it is closely related to the adaptive TAP method from [34, 35].

As we now describe, our work contributes to the extension and understanding of Opper and Winther’s EC method.

• Generalization: We propose and analyze a generalization of the EC algorithm that we call Generalized EC (GEC). The proposed method can be applied to arbitrary penalties f_1 and f_2, and can be used for either MAP or MMSE inference through an appropriate choice of estimation functions. Standard EC typically applies only to MMSE inference, often with one penalty being quadratic. GEC also supports a generalization of the covariance diagonalization step, which is one of the key computational bottlenecks in standard EC [12].

• Fixed points: It is well known that, when the standard EC algorithm converges, its fixed points can be interpreted as saddle points of an energy function [17, 18] similar to the Bethe Free Energy (BFE) that arises in the analysis of loopy BP [15]. We provide a similar energy-function interpretation of the MMSE-GEC algorithm (Theorem 3). Our analysis shows that the so-called first- and second-order terms output by MMSE-GEC can be interpreted as estimates of the posterior mean and variance. Regarding the fixed points of MAP-GEC, we show that the first-order terms are critical points of the objective function (2) and the second-order terms can be interpreted as estimates of the local curvature of the objective function.

• Convergence: A critical concern for both EP and AMP is convergence [12, 36, 37, 38]. This situation is perhaps not surprising, given that they derive from loopy BP, which may itself diverge. Most provably convergent alternatives are based on variants of double-loop methods such as [17, 13]. Other modifications to improve stability include damping and fractional updates [39, 36, 40] and sequential updating [41], which increase the likelihood of convergence at the cost of convergence speed. Our analysis of GEC convergence considers the first- and second-order terms separately, a decoupling technique also used in [18, 42]. We show that, for strictly convex, smooth penalties, the standard updates for the first-order terms are provably convergent. For MAP-GEC, the second-order terms converge as well.

• Relation to the replica prediction of optimality: In [43], Tulino et al. used a replica analysis from statistical physics to predict the MMSE when estimating a random vector x from noisy measurements of the linear transformation Ax under large, unitarily invariant, random A. This work extended the replica analyses in [44, 45, 46], which applied to i.i.d. A. (See also [47].) In [48, 49], Çakmak et al. proposed a variant of AMP (called S-AMP) using closely related methods. In this work, we show that, when GEC is applied to linear regression, a prediction of the posterior MSE satisfies a fixed-point equation that exactly matches the replica prediction from [43].

• Relation to ADMM: ADMM [7] is a popular approach to optimization problems of the form (2) with convex f_1, f_2. Essentially, ADMM iterates individual optimizations of f_1 and f_2 together with a “dual update” that (asymptotically) enforces consistency between the individual optimizations. The dual update involves a fixed step size, whose choice affects the convergence speed of the algorithm. In this work, we show that GEC can be interpreted as a variant of ADMM with two dual updates, each with a step size that is adapted according to the local curvature of the corresponding penalty f_i.

## II The Generalized EC Algorithm

### II-A Estimation and Diagonalization

The proposed GEC algorithm involves two key operations: i) estimation, which computes an estimate of x using one penalty at a time; and ii) diagonalization of a sensitivity term.

#### Estimation

The estimation function is constructed differently for the MAP and MMSE cases. In the MAP case, the estimation function is given by

 g_i(r_i, γ_i) := argmin_x [ f_i(x) + (1/2)∥x − r_i∥²_{γ_i} ],   (3)

where r_i ∈ R^N and γ_i ∈ R^N with γ_i > 0 (componentwise), and where

 ∥v∥²_γ := ∑_{n=1}^N γ_n |v_n|²

for any v and positive γ. The estimation function (3) is a scaled version of what is often called the proximal operator.
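As a concrete illustration, the estimation function (3) has a closed form for some simple penalties. The following numpy sketch is our own example (function names are ours, not the paper's): it implements (3) for a quadratic penalty f(x) = (λ/2)∥x∥² and for an ℓ1 penalty f(x) = λ∥x∥₁, the latter giving the familiar componentwise soft-thresholding rule.

```python
import numpy as np

def g_map_quadratic(r, gamma, lam):
    """MAP estimation function (3) for f(x) = (lam/2)*||x||^2.
    Setting the gradient lam*x + gamma.(x - r) to zero gives
    x_n = gamma_n * r_n / (lam + gamma_n)."""
    r, gamma = np.asarray(r, float), np.asarray(gamma, float)
    return gamma * r / (lam + gamma)

def g_map_l1(r, gamma, lam):
    """MAP estimation function (3) for f(x) = lam*||x||_1:
    componentwise soft thresholding at level lam / gamma_n."""
    r, gamma = np.asarray(r, float), np.asarray(gamma, float)
    return np.sign(r) * np.maximum(np.abs(r) - lam / gamma, 0.0)
```

Both operators act componentwise, which is what makes the per-penalty subproblems in EP/AMP-style methods computationally cheap.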

For the MMSE problem, the estimation function is

 g_i(r_i, γ_i) := E[x | r_i, γ_i],   (4)

where the expectation is with respect to the conditional density

 p_i(x | r_i, γ_i) = (1/Z_i) exp[ −f_i(x) − (1/2)∥x − r_i∥²_{γ_i} ].   (5)
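For intuition, the MMSE estimation function (4) also has a closed form for simple priors. As an illustrative example of ours (not taken from the paper), suppose f_i encodes a uniform prior on x_n ∈ {−1, +1}; the posterior mean under (5) then reduces to tanh(γ_n r_n):

```python
import numpy as np

def g_mmse_binary(r, gamma):
    """MMSE estimation function (4) for an assumed equiprobable binary
    prior x_n in {-1, +1}. Under the tilted density (5), the two point
    masses have exponents differing by 2*gamma_n*r_n, so the posterior
    mean is E[x_n | r_n, gamma_n] = tanh(gamma_n * r_n)."""
    r, gamma = np.asarray(r, float), np.asarray(gamma, float)
    return np.tanh(gamma * r)
```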

#### Diagonalization

In its most general form, the diagonalization operator d(·) is an affine linear map from R^{N×N} to R^N. Several instances of diagonalization are relevant to our work. For example, vector-valued diagonalization,

 d(Q) := diag(Q),   (6)

which simply returns the N-dimensional vector containing the diagonal elements of Q, and uniform diagonalization,

 d(Q) := N^{−1} tr(Q) 1_N,   (7)

which returns a constant vector containing the average diagonal element of Q. Here, 1_N denotes the N-dimensional vector with all elements equal to one.

For the separable GLM, it will be useful to consider a block uniform diagonalization. In this case, we partition

 x = (x_1; ⋯; x_L),   x_ℓ ∈ R^{N_ℓ},   (8)

with N = ∑_ℓ N_ℓ. Conformal to the partition, we define the block uniform diagonalization

 d(Q) := (d_1 1_{N_1}; ⋯; d_L 1_{N_L}),   d_ℓ = (1/N_ℓ) tr(Q_{ℓℓ}),   (9)

where Q_{ℓℓ} is the ℓ-th diagonal block of Q.

We note that any of these diagonalization operators can be used with either the MAP or MMSE estimation functions.
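The three diagonalization operators above are straightforward to implement; a minimal numpy sketch (function names are ours):

```python
import numpy as np

def d_vector(Q):
    """Vector-valued diagonalization (6): the diagonal of Q."""
    return np.diag(Q).copy()

def d_uniform(Q):
    """Uniform diagonalization (7): a constant vector holding the
    average diagonal element, N^{-1} tr(Q)."""
    N = Q.shape[0]
    return np.full(N, np.trace(Q) / N)

def d_block_uniform(Q, block_sizes):
    """Block-uniform diagonalization (9): per-block average diagonal,
    for a partition x = (x_1; ...; x_L) with x_l of size block_sizes[l]."""
    out, start = [], 0
    for Nl in block_sizes:
        Qll = Q[start:start + Nl, start:start + Nl]  # l-th diagonal block
        out.append(np.full(Nl, np.trace(Qll) / Nl))
        start += Nl
    return np.concatenate(out)
```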

### II-B Algorithm Description

The generalized EC (GEC) algorithm is specified in Algorithm 1. There, ∂g_i(r_i, γ_i)/∂r_i is the Jacobian matrix of g_i evaluated at r_i, Γ_i := Diag(γ_i) is the diagonal matrix whose diagonal equals γ_i, “./” is componentwise vector division, and “.” is componentwise vector multiplication. Note that it is not necessary to compute the full Jacobian matrix in line 5; it suffices to compute only the diagonalization d(Q_i).

It will sometimes be useful to rewrite Algorithm 1 in a scaled form. Define β_i := γ_i.r_i and g̃_i(β_i, γ_i) := g_i(β_i./γ_i, γ_i). Then GEC can be rewritten as

 η_i ← 1./d(Q̃_i),   Q̃_i := ∂g̃_i(β_i, γ_i)/∂β_i   (10a)
 γ_j ← η_i − γ_i   (10b)
 β_j ← η_i . g̃_i(β_i, γ_i) − β_i.   (10c)

Note that, in line 5 of Algorithm 1, we are required to compute the (scaled) Jacobian of the estimation function. For the MAP estimation function (3), this quantity becomes [20]

 [∂g_i(r_i, γ_i)/∂r_i] Γ_i^{−1} = [H f_i(x̂_i) + Γ_i]^{−1},   (11)

where x̂_i is the minimizer in (3) and H f_i(x̂_i) is the Hessian of f_i at that minimizer. For the MMSE estimation function, this scaled Jacobian becomes the covariance matrix

 [∂g_i(r_i, γ_i)/∂r_i] Γ_i^{−1} = Cov(x_i | r_i, γ_i),   (12)

where the covariance is taken with respect to the density (5).
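To make the update order concrete, the following is a minimal sketch of the GEC iteration of Section II-B (our own rendering of Algorithm 1, with caller-supplied estimators; it assumes the precisions produced by line 7 stay positive). Each estimator g_i(r, γ) is assumed to return both the estimate x̂_i and the diagonalized scaled Jacobian d(Q_i):

```python
import numpy as np

def gec(g1, g2, r1, gamma1, n_iter=50):
    """Sketch of the GEC iteration (Algorithm 1). Each g_i(r, gamma)
    returns (x_hat, dQ), where dQ is the diagonalization of the scaled
    Jacobian Q_i; the diagonalization d(.) is thus chosen by the caller."""
    r, gamma = {1: r1, 2: None}, {1: gamma1, 2: None}
    for _ in range(n_iter):
        for i, j, g in ((1, 2, g1), (2, 1, g2)):
            x_hat, dQ = g(r[i], gamma[i])       # estimation + sensitivity
            eta = 1.0 / dQ                      # eta_i = 1 ./ d(Q_i)
            gamma[j] = eta - gamma[i]           # line 7: gamma_j update
            r[j] = (eta * x_hat - gamma[i] * r[i]) / gamma[j]  # line 8
    return x_hat, gamma
```

With two strictly convex quadratic penalties, the iteration settles at the minimizer of f_1 + f_2 within a few passes, consistent with the fixed-point results of Section III.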

### II-C Examples

#### SLR with Separable Prior

Suppose that we aim to estimate x given noisy linear measurements of the form

 y = Ax + w,   w ∼ N(0, γ_w^{−1} I),   (13)

where A is a known matrix and w is independent of x. Statisticians often refer to this problem as standard linear regression (SLR). Suppose also that x has independent elements x_n with marginal densities p(x_n):

 p(x) = ∏_{n=1}^N p(x_n).   (14)

Then, the posterior takes the form of (1) when

 f_1(x) = −∑_{n=1}^N log p(x_n),   f_2(x) = (γ_w/2)∥y − Ax∥².   (15)

The separable nature of f_1 implies that, in both the MAP and MMSE cases, the output of the estimator g_1 (recall (3) and (4)) can be computed in a componentwise manner, as can the diagonal terms of its Jacobian. Likewise, the quadratic nature of f_2 implies that the output of g_2 can be computed by solving a linear system.
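For the quadratic penalty f_2 in (15), the MAP and MMSE estimators coincide and reduce to a linear solve. A sketch under our own naming, with the scaled Jacobian taken from (11)-(12):

```python
import numpy as np

def g2_slr(r2, gamma2, A, y, gamma_w):
    """Estimator g_2 for the quadratic SLR penalty
    f_2(x) = (gamma_w/2)*||y - A x||^2. The minimizer solves
    (gamma_w A^T A + Gamma_2) x = gamma_w A^T y + Gamma_2 r_2,
    and the scaled Jacobian is Q_2 = (gamma_w A^T A + Gamma_2)^{-1};
    we return x_hat together with diag(Q_2)."""
    G = gamma_w * (A.T @ A) + np.diag(gamma2)
    x_hat = np.linalg.solve(G, gamma_w * (A.T @ y) + gamma2 * r2)
    dQ = np.diag(np.linalg.inv(G))
    return x_hat, dQ
```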

#### GLM with Separable Prior

Now suppose that, instead of (13), we have a more general likelihood with the form

 p(y|x) = ∏_{m=1}^M p(y_m | z_m),   z := Ax.   (16)

Statisticians often refer to (16) as the generalized linear model (GLM) [50, 51]. To pose the GLM in a format convenient for GEC, we define the new vector u := (x; z). Then, the posterior can be placed in the form of (1) using the penalties

 f_1(u) = f_1(x, z) = −∑_{n=1}^N log p(x_n) − ∑_{m=1}^M log p(y_m | z_m),
 f_2(u) = f_2(x, z) = { 0 if z = Ax;  ∞ if z ≠ Ax },

where f_2 constrains u to the subspace {(x, z) : z = Ax}. Because the first penalty remains separable, the MAP and MMSE estimation functions can be evaluated componentwise, as in separable SLR. For the second penalty f_2, MAP or MMSE estimation simply becomes projection onto a linear subspace.

## III Fixed Points of GEC

### III-A Consistency

We now characterize the fixed points of GEC for both MAP and MMSE estimation functions. For both scenarios, we will need the following simple consistency result.

###### Lemma 1.

Consider GEC (Algorithm 1) with arbitrary estimation functions g_i and arbitrary diagonalization operator d. For any fixed point with γ_1, γ_2 > 0, we have

 η_1 = η_2 = η := γ_1 + γ_2   (18a)
 x̂_1 = x̂_2 = x̂ := (γ_1.r_1 + γ_2.r_2)./(γ_1 + γ_2).   (18b)
###### Proof.

From line 7 of Algorithm 1, η_i = γ_1 + γ_2 for i = 1, 2, which proves (18a). Also, since γ_1, γ_2 > 0, the elements of η_i are nonzero and hence invertible. In addition, from line 8,

 x̂_i = (γ_1.r_1 + γ_2.r_2)./η_i   for i = 1, 2,

which proves (18b).

### III-B MAP Estimation

We first examine GEC’s fixed points for MAP estimation.

###### Theorem 1.

Consider GEC (Algorithm 1) with the MAP estimation functions from (3) and an arbitrary diagonalization d. For any fixed point with γ_1, γ_2 > 0, let x̂ be the common value of the two estimates as defined in Lemma 1. Then x̂ is a stationary point of the minimization (2).

###### Proof.

See Appendix A.

### III-C MAP Estimation and Curvature

Note that Theorem 1 applies to an arbitrary diagonalization operator d. This raises two questions: i) what is the role of the diagonalization operator d, and ii) how can the fixed-point value η be interpreted as a result of that diagonalization? We now show that, under certain additional conditions and certain choices of d, η can be related to the curvature of the optimization objective in (2).

Let x̂ be a stationary point of (2) and let P_i := H f_i(x̂) be the Hessian of f_i at x̂. Then, the Hessian of the objective function in (2) is P_1 + P_2. Furthermore, let

 η̂ := 1./d((P_1 + P_2)^{−1}),   (19)

so that 1./η̂ is the diagonalization of the inverse Hessian. Geometrically speaking, this inverse Hessian measures the curvature of the objective function at the critical point x̂.
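Computing (19) is direct once the Hessians are available; a small sketch (names ours). For diagonal Hessians P_i = Diag(p_i), the diagonal of (P_1 + P_2)^{−1} is 1./(p_1 + p_2) elementwise, so η̂ = p_1 + p_2:

```python
import numpy as np

def eta_hat(P1, P2, d):
    """Curvature quantity (19): 1 ./ d((P1 + P2)^{-1}), where the P_i
    are the penalty Hessians at the stationary point and d is a
    caller-supplied diagonalization operator."""
    return 1.0 / d(np.linalg.inv(P1 + P2))
```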

We now identify two cases where η = η̂: i) when P_1, P_2 are diagonal, and ii) when P_1, P_2 are free. To define “free,” consider the Stieltjes transform of any real symmetric matrix P:

 S_P(ω) = (1/N) tr[(P − ωI)^{−1}] = (1/N) ∑_{n=1}^N 1/(λ_n − ω),   (20)

where λ_1, …, λ_N are the eigenvalues of P. Also, let R_P denote the so-called R-transform of P, given by

 R_P(ω) = S_P^{−1}(−ω) − 1/ω,   (21)

where the inverse is in terms of composition of functions. The Stieltjes and R-transforms are discussed in detail in [52]. We will say that P_1 and P_2 are “free” if

 R_{P_1+P_2}(ω) = R_{P_1}(ω) + R_{P_2}(ω).   (22)

An important example of freeness is the following. Suppose that the penalty functions are given by f_i(x) = h_i(A_i x) for some matrices A_i and functions h_i. Then

 P_i = H f_i(x̂) = A_i^T H h_i(ẑ_i) A_i,   ẑ_i = A_i x̂.

It is shown in [52] that, if the Hessians H h_i(ẑ_i) are fixed and the A_i are unitarily invariant random matrices, then P_1 and P_2 are asymptotically free in certain limits as N → ∞. Freeness will thus occur in the large-problem limit with unitarily invariant random matrices.
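The Stieltjes transform (20) is immediate from the eigenvalues, and the R-transform (21) can be evaluated numerically by inverting S_P on the branch s < λ_min(P), which covers the negative arguments ω < 0 that appear later in the replica discussion. A sketch under our own naming, with arbitrary bisection tolerances:

```python
import numpy as np

def stieltjes(P, omega):
    """Stieltjes transform (20): (1/N) tr[(P - omega I)^{-1}]."""
    return np.mean(1.0 / (np.linalg.eigvalsh(P) - omega))

def r_transform(P, omega, lo=1e-6, hi=1e6, iters=200):
    """R-transform (21) for omega < 0: R_P(w) = S_P^{-1}(-w) - 1/w.
    On s < lambda_min, S_P is positive and increasing, so the functional
    inverse can be found by bisection."""
    lam_min = np.linalg.eigvalsh(P).min()
    a, b = lam_min - hi, lam_min - lo
    for _ in range(iters):
        m = 0.5 * (a + b)
        if stieltjes(P, m) < -omega:
            a = m          # root lies above the midpoint
        else:
            b = m
    return 0.5 * (a + b) - 1.0 / omega
```

For scaled identities P = cI, the R-transform is the constant c, so the additivity (22) holds trivially; the test below checks this numerically.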

###### Theorem 2.

Consider GEC (Algorithm 1) with the MAP estimation functions (3). Consider any fixed point with γ_1, γ_2 > 0, and let x̂ and η be the common values of x̂_i and η_i from Lemma 1. Recall that x̂ is a stationary point of the minimization (2) via Theorem 1. Then η = η̂ from (19) under either

1. vector-valued d from (6) and diagonal P_1, P_2; or

2. uniform d from (7) and free P_1, P_2.

###### Proof.

See Appendix B.

### III-D MMSE Estimation

We now consider the fixed points of GEC under MMSE estimation functions. It is well-known that the fixed points of the standard EC algorithm are critical points of a certain free-energy optimization for approximate inference [17, 18]. We derive a similar characterization of the fixed points of GEC.

Let p be the density (1) for some fixed y. Then, given any density b(x), it is straightforward to show that the KL divergence between b and p can be expressed as

 D(b∥p) = D(b∥e^{−f_1}) + D(b∥e^{−f_2}) + H(b) + const,   (23)

where H(b) is the differential entropy of b and the constant term does not depend on b. Thus, in principle, we could compute p by minimizing (23) over all densities b. Of course, this minimization is generally intractable since it involves a search over an N-dimensional density.

To approximate the minimization, define

 J(b_1, b_2, q) := D(b_1∥e^{−f_1}) + D(b_2∥e^{−f_2}) + H(q),   (24)

where b_1, b_2, and q are densities on the variable x. Note that minimization of (23) over b is equivalent to the optimization

 min_{b_1,b_2} max_q J(b_1, b_2, q)   (25)

under the constraint

 b_1 = b_2 = q.   (26)

The energy function (24) is known as the Bethe Free Energy (BFE). Under the constraint (26), the BFE matches the original energy function (23). However, BFE minimization under the constraint (26) is equally intractable.

As with EC, the GEC algorithm can be derived as a relaxation of the above BFE optimization, wherein (26) is replaced by the so-called moment matching constraints:

 E(x|b_1) = E(x|b_2) = E(x|q)   (27a)
 d(E(xx^T|b_1)) = d(E(xx^T|b_2)) = d(E(xx^T|q)).   (27b)

Thus, instead of requiring a perfect match in the densities as in (26), GEC requires only a match in their first moments and certain diagonal components of their second moments. Note that, for the vector-valued diagonalization (6), (27b) is equivalent to

 E[x_n² | b_i] = E[x_n² | q]   ∀ n, i,

which requires only that the marginal second moments match. Under the uniform diagonalization (7), (27b) is equivalent to

 (1/N) ∑_{n=1}^N E[x_n² | b_i] = (1/N) ∑_{n=1}^N E[x_n² | q],   i = 1, 2,

requiring only that the average marginal second moments match.

###### Theorem 3.

Consider GEC (Algorithm 1) with the MMSE estimation functions (4) and either vector-valued (6) or uniform (7) diagonalization. For any fixed point with γ_1, γ_2 > 0, let x̂ and η be the common values of x̂_i and η_i from Lemma 1. Also let

 b_i(x) = p_i(x | r_i, γ_i)   (28)

for p_i from (5), and let q be the Gaussian density

 q(x) ∝ exp[−(1/2)∥x − x̂∥²_η].   (29)

Then, (b_1, b_2, q) are stationary points of the optimization (25) subject to the moment-matching constraints (27). In addition, x̂ is the mean, and η the marginal precision, of these densities:

 x̂ = E(x|q) = E(x|b_i),   i = 1, 2   (30)
 1./η = d(Cov(x|q)) = d(Cov(x|b_i)),   i = 1, 2.   (31)

###### Proof.

See Appendix C.

### III-E An Unbiased Estimate of x

As described in Section II-C, a popular application of GEC is to approximate the marginals of the posterior density (1) in the case that the first penalty describes the prior and the second penalty describes the likelihood. That is,

 p(x) ∝ e^{−f_1(x)}   and   p(y|x) ∝ e^{−f_2(x; y)}.

Here, we have made the dependence of f_2 on y explicit. The GEC algorithm produces three estimates of the posterior density: b_1, b_2, and q. Consider the first of these, b_1. From (28) and (5), this belief estimate is given by

 b_1(x; r_1, γ_1) = Z(r_1)^{−1} p(x) exp[−(1/2)∥x − r_1∥²_{γ_1}],   (32)

where Z(r_1) is a normalization constant.

If we model r_1 as a random vector, then (32) implies that

 p(x|r_1) = b_1(x; r_1, γ_1).

From Bayes rule, we know p(r_1|x) = p(x|r_1) p(r_1)/p(x), which together with (32) implies

 p(r_1|x) = [p(r_1)/Z(r_1)] exp[−(1/2)∥r_1 − x∥²_{γ_1}].

For p(r_1) to be an admissible prior density on r_1, it must be nonnegative, integrate to one, and induce a conditional p(r_1|x) that is itself a valid density. It is straightforward to show that one admissible choice is

 p(r_1) = c Z(r_1),   c² = (2π)^{−N} ∏_{n=1}^N γ_{1n}.

Under this choice, we get

 p(r_1|x) = N(x, Γ_1^{−1}),   (33)

in which case r_1 can be interpreted as an unbiased estimate of x with Gaussian estimation error of covariance Γ_1^{−1}.

The situation above is reminiscent of AMP algorithms [19, 20]. Specifically, their state-evolution analyses [24] show that, under large i.i.d. A, they recursively produce a sequence of vectors r_1 that can be modeled as realizations of the true vector x plus zero-mean white Gaussian noise. They then compute a sequence of estimates of x by “denoising” each r_1.

## IV Convergence of the First-Order Terms for Strictly Convex Penalties

We first analyze the convergence of GEC with fixed “second-order terms” γ_1 and γ_2. To this end, fix γ_1, γ_2 at arbitrary positive values and assume that η_1, η_2 are fixed points of (10b). Then Lemma 1 implies that η_1 = η_2 = γ_1 + γ_2. With γ_i and η_i fixed, the (scaled) GEC algorithm (10) updates only β_1, β_2. In particular, (10c) implies that this update is

 β_j ← (Γ_1 + Γ_2) g̃_i(β_i, γ_i) − β_i,   (i, j) ∈ {(1, 2), (2, 1)}.   (34)

We analyze the recursion (34) under the following assumption.

###### Assumption 1.

For i = 1, 2, fix γ_i > 0 and suppose that g̃_i(β_i, γ_i) is differentiable in β_i. Also define

 Q̃_i(β_i) := ∂g̃_i(β_i, γ_i)/∂β_i,   (35)

and assume that Q̃_i(β_i) is symmetric and that there exist constants 0 < c_{i1} ≤ c_{i2} < ∞ such that, for all β_i,

 c_{i1} I + Γ_i ≤ Q̃_i(β_i)^{−1} ≤ c_{i2} I + Γ_i.   (36)

This assumption is valid under strictly convex penalties:

###### Lemma 2.

Suppose that f_i is strictly convex in the sense that its Hessian satisfies

 c_{i1} I ≤ H f_i(x) ≤ c_{i2} I,   (37)

for constants 0 < c_{i1} ≤ c_{i2} < ∞ and all x. Then, the MAP and MMSE estimation functions (3) and (4) satisfy Assumption 1.

###### Proof.

See [53].

We then have the following convergence result.

###### Theorem 4.

Consider the recursion (34) with functions g̃_i satisfying Assumption 1 and arbitrary fixed values of γ_i > 0, for i = 1, 2. Then, from any initialization of β_1, the recursion (34) converges to a unique fixed point that is invariant to the choice of γ_1, γ_2.

###### Proof.

See Appendix D.

## V Convergence of the Second-Order Terms for MAP Estimation

### V-A Convergence

We now examine the convergence of the second-order terms γ_1 and γ_2. The convergence results that we present here apply only to the case of MAP estimation (3) under strictly convex penalties that satisfy the conditions in Lemma 2. Furthermore, they assume that Algorithm 1 is initialized using a pair (r_1, γ_1) yielding x̂_1 = x̂, where x̂ is a local minimizer of (2).

###### Theorem 5.

Consider GEC (Algorithm 1) under the MAP estimation functions (3) with penalties f_i that are strictly convex functions satisfying the assumptions in Lemma 2. Construct an initialization (r_1, γ_1) as follows: choose arbitrary γ_1, γ_2 > 0 and run Algorithm 1 with γ_i and η_i held fixed (for i = 1, 2) until convergence (as guaranteed by Theorem 4), recording the final value of r_1. Finally, run Algorithm 1 from the initialization (r_1, γ_1) without keeping γ_i and η_i fixed. Then:

1. For all subsequent iterations, we will have x̂_1 = x̂_2 = x̂, where x̂ is the unique global minimizer of (2).

2. If d is either the vector-valued or uniform diagonalization operator, then the second-order terms γ_1, γ_2 will converge to unique fixed points.

###### Proof.

See Appendix E.

## VI Relation to the Replica Prediction

Consider the separable SLR problem described in Section II-C for any matrix A and noise precision γ_w > 0. Consider GEC under the penalty functions (15), MMSE estimation (4), and uniform diagonalization (7). Thus, γ_1, γ_2, and η will each have identical components, whose common scalar values we also denote (with slight abuse of notation) by γ_1, γ_2, and η.

Suppose that b_1 is the belief estimate generated at a fixed point of GEC. Since p(x) in (14) is separable, (32) implies

 b_1(x; r_1, γ_1) ∝ ∏_{n=1}^N p(x_n) e^{−γ_1(x_n − r_{1n})²/2}.

In the sequel, let E[x_n | r_{1n}, γ_1] and var(x_n | r_{1n}, γ_1) denote the mean and variance with respect to the marginal density

 b_1(x_n | r_{1n}, γ_1) ∝ p(x_n) e^{−γ_1(x_n − r_{1n})²/2}.   (38)

From (27a), the GEC estimate satisfies x̂_n = E[x_n | r_{1n}, γ_1], which is the posterior mean under the estimated density (38). Also, from (27b) and the definition of the uniform diagonalization operator (7), the components of η are identical and satisfy

 η^{−1} = (1/N) tr(Cov(x | r_1, γ_1)) = (1/N) ∑_{n=1}^N var(x_n | r_{1n}, γ_1),   (39)

which is the average of the marginal posterior variances. Equivalently, η^{−1} is the average estimation MSE,

 η^{−1} = (1/N) ∑_{n=1}^N E[(x_n − x̂_n)² | r_{1n}, γ_1].

We will show that the value of η^{−1} can be characterized in terms of the singular values of A. Let Y := γ_w A^T A, and let S_Y denote its Stieltjes transform (20) and R_Y its R-transform (21). We then have the following.

###### Theorem 6.

For the above problem, at any fixed point of GEC (Algorithm 1), γ_1 and η satisfy the fixed-point equations

 γ_1 = R_Y(−η^{−1}),   η^{−1} = (1/N) ∑_{n=1}^N var(x_n | r_{1n}, γ_1),   (40)

where var(x_n | r_{1n}, γ_1) is the posterior variance under the density in (38).

###### Proof.

See Appendix F.

It is interesting to compare this result with that in [43], which considers exactly this estimation problem in the limit of large N with certain unitarily invariant random matrices A and i.i.d. x. That work uses a replica symmetric (RS) analysis to predict that the asymptotic MSE satisfies the fixed-point equations

 γ_1 = R_Y(−η^{−1}),   η^{−1} = E[var(x_n | r_{1n}, γ_1)],   (41)

where the expectation is over x_n ∼ p(x_n) and r_{1n} | x_n ∼ N(x_n, 1/γ_1). This Gaussian distribution is exactly the predicted likelihood p(r_1|x) in (33). Hence, if x is i.i.d. and r_1 follows the likelihood in (33), then the MSE predicted from the GEC-estimated posterior must satisfy the same fixed-point equation as the minimum MSE predicted from the replica method in the limit as N → ∞. In particular, if this equation has a unique fixed point, then the GEC-predicted MSE will match the minimum MSE as given by the replica method.

Of course, these arguments are not a rigorous proof of optimality. The analysis relies on the GEC model with a particular choice of prior on r_1. Also, the replica method itself is not rigorous. Nevertheless, the arguments do provide some hope that GEC is optimal in certain asymptotic and random regimes.

## VII Relation to ADMM

We conclude by relating GEC to the well-known alternating direction method of multipliers (ADMM) [7, 8, 9, 10]. Consider the MAP minimization problem (2). To solve it via ADMM, we rewrite the minimization as the constrained optimization

 min_{x_1, x_2} f_1(x_1) + f_2(x_2)   s.t.   x_1 = x_2.   (42)

The division of the variable x into two variables x_1 and x_2 is often called variable splitting. Corresponding to the constrained optimization (42), define the augmented Lagrangian

 L_γ(x_1, x_2, s) = f_1(x_1) + f_2(x_2) + s^T(x_1 − x_2) + (γ/2)∥x_1 − x_2∥²,   (43)

where s is a dual vector and γ > 0 is an adjustable weight. The ADMM algorithm for this problem iterates the steps

 x̂_1 ← argmin_{x_1} L_γ(x_1, x̂_2, s)   (44a)
 x̂_2 ← argmin_{x_2} L_γ(x̂_1, x_2, s)   (44b)
 s ← s + γ(x̂_1 − x̂_2),   (44c)

where it becomes evident that γ can also be interpreted as a step size. The benefit of the ADMM method is that the minimizations involve only one penalty, f_1 or f_2, at a time. A classic result [7] shows that if the penalties are convex (not necessarily smooth) and (2) has a unique minimizer, then the ADMM algorithm will converge to that minimizer. Our next result relates MAP-GEC to ADMM.
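For reference, the ADMM iteration (44) reduces to alternating proximal steps after completing the square in (43); a minimal numpy sketch with caller-supplied proximal operators (names ours):

```python
import numpy as np

def admm(prox1, prox2, gamma, n_iter=100, N=1):
    """ADMM steps (44) for min f1(x1) + f2(x2) s.t. x1 = x2.
    Here prox_i(v, gamma) = argmin_x [ f_i(x) + (gamma/2)||x - v||^2 ];
    completing the square in the augmented Lagrangian (43) turns
    (44a)-(44b) into exactly these proximal evaluations."""
    x1 = x2 = s = np.zeros(N)
    for _ in range(n_iter):
        x1 = prox1(x2 - s / gamma, gamma)   # (44a)
        x2 = prox2(x1 + s / gamma, gamma)   # (44b)
        s = s + gamma * (x1 - x2)           # (44c)
    return x1, x2
```

With strictly convex quadratic penalties, both splits converge to the common minimizer of f_1 + f_2, illustrating the classic convergence result cited above.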

###### Theorem 7.

Consider GEC (Algorithm 1) with the MAP estimation functions (3), but with fixed second-order terms,

 γ_1 = γ_2 = γ,   η = γ_1 + γ_2 = 2γ   (45)

for some fixed scalar value γ > 0. Define

 s_i^k = γ(x̂_i^k − r_i^k).   (46)

Then, the outputs of GEC satisfy

 x̂_1^k = argmin_{x_1} L_γ(x_1, x̂_2^k, s_1^k)   (47a)
 s_2^k = s_1^k + γ(x̂_1^k − x̂_2^k)   (47b)
 x̂_2^{k+1} = argmin_{x_2} L_γ(x̂_1^k, x_2, s_2^k)   (47c)
 s_1^{k+1} = s_2^k + γ(x̂_1^k − x̂_2^{k+1}).   (47d)

Note that, in the above description, we have been explicit about the iteration number k to be precise about the timing of the updates. We see that a variant of ADMM can be interpreted as a special case of GEC with particular fixed step sizes. This variant differs from the standard ADMM updates in that it performs two dual updates per iteration. Alternatively, we can think of GEC as a particular variant of ADMM that uses an adaptive step size. From our discussion above, we know that the GEC algorithm can be interpreted as adapting the step-size values to match the local “curvature” of the objective function.

## Appendix A Proof of Theorem 1

Since x̂ = x̂_1 = x̂_2, and g_i is the MAP estimation function (3), we have

 x̂ = argmin_x [ f_i(x) + (1/2)∥x − r_i∥²_{γ_i} ].

Hence,

 ∇f_i(x̂) + γ_i.(x̂ − r_i) = 0,

where ∇f_i(x̂) denotes the gradient of f_i at x̂. Summing over i = 1, 2 and applying (18b),

 ∇f_1(x̂) + ∇f_2(x̂) = (γ_1.r_1 + γ_2.r_2) − (γ_1 + γ_2).x̂ = 0,

which shows that x̂ is a critical point of (2).

## Appendix B Proof of Theorem 2

Using (11), the fixed points of line 7 of Algorithm 1 must satisfy

 γ_j = 1./d(Q_i) − γ_i,   Q_i = (P_i + Γ_i)^{−1}.   (48)

Now, to prove part (a) of the theorem, suppose P_i = Diag(p_i) for some vector p_i. Using (48) with the vector-valued diagonalization (6),

 γ_j = 1./diag[(P_i + Γ_i)^{−1}] − γ_i = p_i + γ_i − γ_i = p_i.

Hence,

 η = γ_1 + γ_2 = p_1 + p_2 = 1./diag[(P_1 + P_2)^{−1}] = η̂.

In part (b) of the theorem, we use uniform diagonalization (7). Recall that η has identical components, which we shall also call η. Likewise, γ_1 and γ_2 are vectors with identical components γ_1 and γ_2. Then, from line 6 of Algorithm 1,

which shows that