# Recovering a Single Community with Side Information

We study the effect of the quality and quantity of side information on the recovery of a hidden community of size K=o(n) in a graph of size n. Side information for each node in the graph is modeled by a random vector with the following features: either the dimension of the vector is allowed to vary with n, while log-likelihood ratio (LLR) of each component with respect to the node label is fixed, or the LLR is allowed to vary and the vector dimension is fixed. These two models represent the variation in quality and quantity of side information. Under maximum likelihood detection, we calculate tight necessary and sufficient conditions for exact recovery of the labels. We demonstrate how side information needs to evolve with n in terms of either its quantity, or quality, to improve the exact recovery threshold. A similar set of results are obtained for weak recovery. Under belief propagation, tight necessary and sufficient conditions for weak recovery are calculated when the LLRs are constant, and sufficient conditions when the LLRs vary with n. Moreover, we design and analyze a local voting procedure using side information that can achieve exact recovery when applied after belief propagation. The results for belief propagation are validated via simulations on finite synthetic data-sets, showing that the asymptotic results of this paper can also shed light on the performance at finite n.


## I Introduction

Detecting communities (or clusters) in graphs is a fundamental problem that has been studied in various fields, including statistics [3, 4, 5, 6, 7], computer science [8, 9, 10, 11, 12], and theoretical statistical physics [13, 14]. It has many applications: finding like-minded people in social networks [15], improving recommendation systems [16], and detecting protein complexes [17]. In this paper, we consider the problem of finding a single sub-graph (community) hidden in a large graph, where the community size is much smaller than the graph size. Applications of finding a hidden community include fraud activity detection [18, 19] and correlation mining [20].

Several models have been studied for random graphs that exhibit a community structure [21]. A widely used model in the context of community detection is the stochastic block model (SBM) [22]. In this paper, the stochastic block model with a single community is considered [23, 24, 25, 26]. It consists of a graph of size n with a community of size K, where any two nodes are connected with probability p if they are both within the community, and with probability q otherwise.

The problem of finding a hidden community upon observing only the graph has been studied in [23, 24, 25]. The information limits111The extremal phase transition threshold is also known as the information theoretic limit [22] or information limit [24]. We use the latter term throughout this paper. of weak recovery and exact recovery have been studied in [24]. Weak recovery is achieved when the expected number of misclassified nodes is o(K), and exact recovery when all labels are recovered with probability approaching one. The limits of belief propagation for weak recovery have been characterized [25, 23] in terms of a signal-to-noise ratio parameter λ. The utility of a voting procedure after belief propagation to achieve exact recovery was pointed out in [25].

Graphical models are popular because they represent many large data sets and give insight into the performance of inference algorithms, but in many inference problems they do not capture all data that is both relevant and available. In many practical applications, relevant non-graphical information is available that can aid the inference. For example, social networks such as Facebook and Twitter have access to information other than the graph edges, such as date of birth, nationality, and schooling. A citation network has the authors' names and keywords, and therefore may provide significant additional information beyond the co-authoring relationships. This paper characterizes the utility of side information in single-community detection, in particular exploring when and by how much side information can improve the information limit, as well as the phase transition of belief propagation.

We model a varying quantity and quality of side information by associating with each node a vector (i.e., non-graphical) observation whose dimension represents the quantity of side information and whose (element-wise) log-likelihood ratios (LLRs) with respect to node labels represents the quality of side information. The contributions of this paper can be summarized as follows:

• The information limits in the presence of side information are characterized. When the dimension of side information for each node varies but its LLR is fixed across n, tight necessary and sufficient conditions are calculated for both weak and exact recovery. Also, it is shown that under the same sufficient conditions, weak recovery is achievable even when the size of the community is random and unknown. We also find conditions on the graph and side information under which achievability of weak recovery implies achievability of exact recovery. Subject to some mild conditions on the exponential moments of the LLR, the results apply to both discrete and continuous-valued side information.

When the side information for each node has fixed dimension but varying LLR, we find tight necessary and sufficient conditions for exact recovery, and necessary conditions for weak recovery. Under varying LLR, our results apply to side information with finite alphabet.

• The phase transition of belief propagation in the presence of side information is characterized, where we assume the side information per node has a fixed dimension. When the LLRs are fixed across n, tight necessary and sufficient conditions are calculated for weak recovery. Furthermore, it is shown that when belief propagation fails, no local algorithm can achieve weak recovery. It is also shown that belief propagation is strictly inferior to the maximum likelihood detector. Numerical results on finite synthetic data-sets validate our asymptotic analysis and show the relevance of our asymptotic results even to graphs of moderate size. We also calculate conditions under which belief propagation followed by a local voting procedure achieves exact recovery.

When the side information has variable LLR across n, the belief propagation misclassification rate is calculated using density evolution. Our results generalize [26], where it was shown that belief propagation achieves weak recovery only for binary side information consisting of noisy labels with vanishing noise.

We now present a brief review of the literature in the area of side information for community detection and highlight the distinctions of the present work. In the context of detecting two or more communities: Mossel and Xu [27] showed that, under certain conditions, belief propagation with noisy label information has the same residual error as the maximum a-posteriori estimator for two symmetric communities. Cai et al. [28] studied weak recovery of two symmetric communities under belief propagation upon observing a vanishing fraction of labels. Neither [27] nor [28] establishes a converse. For two symmetric communities, Saad and Nosratinia [29, 30] studied exact recovery under side information. Asadi [31] studied the effect of i.i.d. vectors of side information on the phase transition of exact recovery for more than two communities. Kanade et al. [32] showed that observation of a vanishing number of labels does not affect the correlated recovery222Correlated recovery denotes probability of error that is strictly better than a random guess, and is not a subject of this paper. phase transition. For single community detection, Kadavankandy et al. [26] studied belief propagation with noisy label information with vanishing noise (unbounded LLRs).

The issue of side information in the context of single-community detection has not been addressed in the literature except in [26], whose results are generalized in this paper. Analyzing the effect of side information on the information limit of weak recovery is a novel contribution of this work. A converse for local algorithms such as belief propagation with side information has not been available prior to this work. The study of side information whose LLRs vary with n is largely novel. And finally, while this work (inevitably) shares many tools and techniques with other works in the area of stochastic block models and community detection, the treatment of side information with variable LLR (as a function of n) presents new challenges for the bounding of errors via the Chernoff bound and large deviations, which are addressed in this work.

## Ii System Model and Definitions

Let G be a realization from a random ensemble of graphs, where each graph has n nodes and contains a hidden community C* with size |C*| = K. The underlying distribution of the graph is as follows: an edge connects a pair of nodes with probability p if both nodes are in C* and with probability q otherwise. G_ij is the indicator of an edge between nodes i and j. For each node i, a vector of dimension M is observed consisting of side information, whose distribution depends on the label of the node. By convention, x_i = 1 if i ∈ C* and x_i = −1 if i ∉ C*. For node i, the entries of the side information vector are each denoted y_i,m and can be interpreted as different features of the side information. The side information for the entire graph is collected into the matrix Y. The column vector y_m collects the side information feature m for all nodes.

The vector of true labels is denoted x. P and Q are Bernoulli distributions with parameters p and q, respectively, and

 L_G(i,j) ≜ log( P(G_ij) / Q(G_ij) )

is the log-likelihood ratio of edge (i,j) with respect to P and Q.

In this paper, we address the problem of single-community detection, i.e., recovering C* from G and Y, in the regime where, as n → ∞, K = o(n) and p > q.

An estimator Ĉ is said to achieve exact recovery of C* if, as n → ∞, P(Ĉ = C*) → 1. An estimator is said to achieve weak recovery if, as n → ∞, d_H(x̂, x)/K → 0 in probability, where d_H denotes the Hamming distance. It was shown in [24] that the latter definition is equivalent to the existence of an estimator Ĉ such that |Ĉ △ C*| = o(K) with probability converging to one. This equivalence will be used throughout our paper.
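These two criteria are easy to state operationally; a minimal sketch (the function names are ours, not the paper's):

```python
def exact_recovery(x_hat, x):
    # exact recovery: every label is correct
    return list(x_hat) == list(x)

def weak_recovery_ratio(x_hat, x, K):
    # Hamming distance between label vectors, normalized by community size K;
    # weak recovery requires this ratio to vanish as n grows
    d_H = sum(a != b for a, b in zip(x_hat, x))
    return d_H / K
```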

## Iii Information Limits

### Iii-a Fixed-Quality Features

In this subsection, the side information for each node is allowed to evolve with n by having a varying number of independent and identically distributed scalar observations, each of which has a finite (imperfect) amount of information about the node label. By allowing the dimension of the side information per node to vary while its scalar components are identically distributed, the side information is represented with fixed-quality quanta. The results of this section demonstrate that as n grows, the number of these side-information quanta per node must increase in a prescribed fashion in order to have a positive effect on the threshold for recovery.

For all i ∈ {1, …, n} and all m ∈ {1, …, M}, define the distributions:

 V(υ) ≜ P( y_i,m = υ | x_i = 1 )
 U(υ) ≜ P( y_i,m = υ | x_i = −1 )

Thus the components of the side information for each node (features) are identically distributed for all nodes and all graph sizes n; we also assume all features are independent conditioned on the node labels x. The dimension M of the side information per node is allowed to vary as the size of the graph changes.

V and U are such that the resulting LLR random variable, defined below, has bounded support:

 L_S(i,m) = log( V(y_i,m) / U(y_i,m) )

Throughout the paper, L_S will continue to denote the LLR random variable of one side information feature, and L_G denotes the random variable of the LLR of a graph edge.

###### Definition 1.

 ψ_QU(t, m1, m2) ≜ m1 log( E_Q[ e^{t L_G} ] ) + m2 log( E_U[ e^{t L_S} ] )   (1)
 ψ_PV(t, m1, m2) ≜ m1 log( E_P[ e^{t L_G} ] ) + m2 log( E_V[ e^{t L_S} ] )   (2)
 E_QU(θ, m1, m2) ≜ sup_{t∈[0,1]} { tθ − ψ_QU(t, m1, m2) }   (3)
 E_PV(θ, m1, m2) ≜ sup_{t∈[−1,0]} { tθ − ψ_PV(t, m1, m2) }   (4)

where θ ∈ ℝ and m1, m2 are nonnegative integers.
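To make the exponent concrete, the following sketch evaluates E_QU numerically for an assumed instance: Bernoulli edge distributions P = Bern(p), Q = Bern(q), and binary noisy-label side information with flip probability alpha. All parameter values are hypothetical, chosen only for illustration:

```python
import numpy as np

p, q, alpha = 0.05, 0.01, 0.2  # assumed, for illustration only

def mgf_edge_Q(t):
    # E_Q[e^{t L_G}] for Bernoulli P, Q, where L_G = log(P(G)/Q(G))
    return q * (p / q) ** t + (1 - q) * ((1 - p) / (1 - q)) ** t

def mgf_side_U(t):
    # E_U[e^{t L_S}] for noisy-label side information with flip prob. alpha
    return alpha * ((1 - alpha) / alpha) ** t \
        + (1 - alpha) * (alpha / (1 - alpha)) ** t

def psi_QU(t, m1, m2):
    return m1 * np.log(mgf_edge_Q(t)) + m2 * np.log(mgf_side_U(t))

def E_QU(theta, m1, m2, grid=10_001):
    # sup over t in [0, 1] of t*theta - psi_QU(t, m1, m2), via grid search
    t = np.linspace(0.0, 1.0, grid)
    return np.max(t * theta - psi_QU(t, m1, m2))
```

Note that psi_QU vanishes at both t = 0 and t = 1 (the moment generating functions of an LLR equal one there), so E_QU(θ, ·, ·) ≥ max(0, θ).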

#### Iii-A1 Weak Recovery

###### Theorem 1.

For single community detection under bounded-LLR side information, weak recovery is achieved if and only if:

 (K−1) D(P||Q) + M D(V||U) → ∞ ,
 liminf_{n→∞} [ (K−1) D(P||Q) + 2M D(V||U) ] > 2 log(n/K)   (5)
###### Proof.

For necessity please see Appendix B. For sufficiency, please see Appendix C. ∎
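For intuition about condition (5), the two divergences can be evaluated at finite n for an assumed instance with Bernoulli edges and noisy-label side information (a sketch; the parameter values and function names are ours):

```python
import math

def kl_bernoulli(p, q):
    # D(Bern(p) || Bern(q))
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def condition_5(n, K, p, q, M, alpha):
    # Evaluates the two quantities in (5) at a fixed n: asymptotically the
    # first must diverge and the second must stay above 2*log(n/K).
    d_pq = kl_bernoulli(p, q)               # edge divergence D(P||Q)
    d_vu = kl_bernoulli(1 - alpha, alpha)   # noisy-label divergence D(V||U)
    first = (K - 1) * d_pq + M * d_vu
    second = (K - 1) * d_pq + 2 * M * d_vu - 2 * math.log(n / K)
    return first, second
```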

###### Remark 1.

The condition of bounded support for the LLRs can be somewhat weakened to Eqs. (65) and (68). There exist distributions V and U with unbounded LLR that satisfy (65) and (68), and the theorem continues to hold for them even though the LLR is not bounded.

###### Remark 2.

Theorem 1 shows that if M grows with n slowly enough, e.g., if M is fixed and independent of n, or if the product M D(V||U) grows sufficiently slowly, side information does not affect the information limits.

###### Remark 3.

If the features are conditionally independent but not identically distributed, it is easy to show the necessary and sufficient conditions are:

 (K−1) D(P||Q) + Σ_{m=1}^{M} D(V_m||U_m) → ∞ ,
 liminf_{n→∞} [ (K−1) D(P||Q) + 2 Σ_{m=1}^{M} D(V_m||U_m) ] > 2 log(n/K)

where V_m and U_m are analogous to V and U earlier, except specialized to each feature.

The assumption that the size of the community is known a priori is not always reasonable: we might need to detect a small community whose size is not known in advance. In that case, the performance is characterized by the following lemma.

###### Lemma 1.

For single-community detection under bounded-LLR side information, suppose the size of the community is not known in advance but obeys a probability distribution satisfying:

 P( | |C*| − K | ≤ K/log(K) ) ≥ 1 − o(1)   (6)

for some known K. If conditions (5) hold, then:

 P( |Ĉ △ C*| / K ≤ 2ε + 1/log(K) ) ≥ 1 − o(1)   (7)

where

 ε = ( min( log(K), (K−1) D(P||Q) + M D(V||U) ) )^{−1/2} = o(1).

#### Iii-A2 Exact Recovery

The sufficient conditions for exact recovery are derived using a two-step algorithm (see Table I). Its first step consists of any algorithm achieving weak recovery, e.g. maximum likelihood (see Lemma 1). The second step applies a local voting procedure.

###### Lemma 2.

Assume the first-step estimators Ĉ_k achieve weak recovery for some δ > 0, i.e.

 P( |Ĉ_k △ C*_k| ≤ δK for 1 ≤ k ≤ 1/δ ) → 1.   (8)

If

 liminf_{n→∞} E_QU( log(n/K), K, M ) > log(n)   (9)

then exact recovery is achieved.

###### Proof.

The main result of this section then follows:

###### Theorem 2.

In single community detection under bounded-LLR side information, assume (5) holds. Then exact recovery is achieved if and only if:

 liminf_{n→∞} E_QU( log(n/K), K, M ) > log(n)   (10)
###### Proof.

For sufficiency, please see Appendix F. For necessity see Appendix G. ∎

###### Remark 4.

The assumption that (5) holds is necessary because otherwise weak recovery is not achievable, and by extension, exact recovery.

###### Remark 5.

Theorem 2 shows that if M grows with n slowly enough, e.g., if M is fixed and independent of n or grows sufficiently slowly, side information will not affect the information limits of exact recovery.

To illustrate the effect of side information on information limits, consider the following example:

 K = c·n/log(n) ,  q = b·log²(n)/n ,  p = a·log²(n)/n   (11)

for positive constants a, b, c with a > b. Then (K−1) D(P||Q) → ∞, and hence, weak recovery is achieved without side information, and by extension, with side information. Moreover, exact recovery without side information is achieved if and only if:

 sup_{t∈[0,1]} { tc(a−b) + bc − bc(a/b)^t } > 1   (12)

Assume noisy label side information with error probability α ∈ (0, 1/2). By Theorem 2, exact recovery is achieved if and only if:

 sup_{t∈[0,1]} { tc(a−b) + bc − bc(a/b)^t − (M/log(n)) log( (1−α)^t α^{1−t} + (1−α)^{1−t} α^t ) } > 1   (13)

If M = o(log(n)), then (13) reduces to (12), and side information does not improve the information limits of exact recovery. If M = Ω(log(n)), then the additional term does not vanish, since log( (1−α)^t α^{1−t} + (1−α)^{1−t} α^t ) < 0 for t ∈ (0, 1). It follows that (13) is less restrictive than (12), thus improving the information limit.

Let ψ denote the left hand side of (13) with M = log(n), i.e.,

 ψ = sup_{t∈[0,1]} { tc(a−b) + bc − bc(a/b)^t − log( (1−α)^t α^{1−t} + (1−α)^{1−t} α^t ) }   (14)

The behavior of ψ against α describes the influence of side information on exact recovery and is depicted in Fig. 1.
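The left-hand side of (14) can be evaluated by a simple grid search over t. The sketch below uses assumed constants a, b, c (not taken from the paper's figure) and illustrates that a smaller α, i.e., cleaner labels, yields a larger exponent:

```python
import numpy as np

a, b, c = 3.0, 1.0, 0.5  # assumed constants, for illustration only

def psi(alpha, grid=10_001):
    # sup over t in [0,1] of the graph term plus the noisy-label term in (14)
    t = np.linspace(0.0, 1.0, grid)
    graph_term = t * c * (a - b) + b * c - b * c * (a / b) ** t
    side_term = -np.log((1 - alpha) ** t * alpha ** (1 - t)
                        + (1 - alpha) ** (1 - t) * alpha ** t)
    return np.max(graph_term + side_term)

def psi_no_side(grid=10_001):
    # the exponent of (12), i.e. without the side-information term
    t = np.linspace(0.0, 1.0, grid)
    return np.max(t * c * (a - b) + b * c - b * c * (a / b) ** t)
```

Since the side-information term is nonnegative on [0, 1], psi(alpha) is never below psi_no_side(), and it grows as alpha decreases toward zero.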

### Iii-B Variable-Quality Features

In this section, the number of features, M, is assumed to be constant, but the LLR of each feature is allowed to vary with n.

#### Iii-B1 Weak Recovery

Recall that the probability distribution of side information feature m is V_m when the node is inside the community and U_m when the node is outside the community.

###### Theorem 3 (Necessary Conditions for Weak Recovery).

For single community detection under bounded-LLR side information, weak recovery is achieved only if:

 (K−1) D(P||Q) + Σ_{m=1}^{M} ( D(V_m||U_m) + D(U_m||V_m) ) → ∞
 liminf_{n→∞} [ (K−1) D(P||Q) + 2 Σ_{m=1}^{M} D(V_m||U_m) ] ≥ 2 log(n/K)   (15)
###### Proof.

The proof follows similarly to that of Theorem 1. ∎

#### Iii-B2 Exact Recovery

We begin by concentrating on the following regime, and will subsequently show its relation to the set of problems that are both feasible and interesting.

 K = ρ·n/log(n) ,  p = a·log²(n)/n ,  q = b·log²(n)/n   (16)

with constants ρ, a, b > 0 and a > b.

The alphabet for each feature m is denoted with {u_m1, …, u_mL_m}, where L_m is the cardinality of feature m, which, in this section, is assumed to be bounded and constant across n. The likelihoods of the features are defined as follows:

 α_{m+,ℓ_m} ≜ P( y_i,m = u_mℓ_m | x_i = 1 )   (17)
 α_{m−,ℓ_m} ≜ P( y_i,m = u_mℓ_m | x_i = 0 )   (18)

Recall that in our side information model, all features are independent conditioned on the labels. To ensure that the quality of the side information evolves regularly with n, both α_{m+,ℓ_m} and α_{m−,ℓ_m} are assumed to be either constant or monotonic in n.

To better understand the behavior of information limits, we categorize side information outcomes based on the trends of their LLRs and likelihoods. For simplicity we speak of trends for one feature; the extension to multiple features is straightforward. An outcome is called informative if its LLR grows without bound with n, and non-informative if its LLR remains bounded. An outcome is called rare if its likelihood vanishes as n grows, and not rare if its likelihood is bounded away from zero. Among the four different combinations, the worst case is when the outcome is both non-informative and not rare for nodes inside and outside the community. We will show that if such an outcome exists, then side information will not improve the information limit. The best case is when the outcome is informative and rare for the nodes inside the community, or for the nodes outside the community, but not both. Two cases are in between: (1) an outcome that is non-informative and rare for nodes inside and outside the community and (2) an outcome that is informative and not rare for nodes inside and outside the community. It will be shown that the last three cases can affect the information limit under certain conditions.

For convenience we define:

 T ≜ log(a/b)   (19)

We introduce the following functions whose values, as shown in the sequel, characterize the exact recovery threshold:

 η1(ρ, a, b) ≜ ρ( b + ((a−b)/T) log( (a−b)/(ebT) ) )   (20)
 η2(ρ, a, b, β) ≜ ρb + ((ρ(a−b)−β)/T) log( (ρ(a−b)−β)/(ρebT) ) + β   (21)
 η3(ρ, a, b, β) ≜ ρb + ((ρ(a−b)+β)/T) log( (ρ(a−b)+β)/(ρebT) )   (22)

For example, in the regime (16), one can conclude using (10) that exact recovery without side information is achieved if and only if η1(ρ, a, b) > 1.

The LLR of each feature is denoted:

 h_mℓ_m ≜ log( α_{m+,ℓ_m} / α_{m−,ℓ_m} )   (23)

We also define the following functions of the likelihood and LLR of side information, whose evolution with n is critical to the phase transition of exact recovery [30].

 f1(n) ≜ Σ_{m=1}^{M} h_mℓ_m ,   (24)
 f2(n) ≜ Σ_{m=1}^{M} log( α_{m+,ℓ_m} ) ,   (25)
 f3(n) ≜ Σ_{m=1}^{M} log( α_{m−,ℓ_m} )   (26)

In the following, the side information outcomes are represented by their indices without loss of generality. Throughout, the dependence on n of outcomes and their likelihoods is implicit.

###### Theorem 4.

In the regime characterized by (16), assume M is constant and α_{m+,ℓ_m} and α_{m−,ℓ_m} are either constant or monotonic in n. Then, necessary and sufficient conditions for exact recovery depend on the side information statistics in the following manner:

1. If there exists any sequence (over ) of side information outcomes such that , , are all , then must hold.

2. If there exists any sequence (over ) of side information outcomes such that and evolve according to with , then must hold.

3. If there exists any sequence (over ) of side information outcomes such that with and furthermore , then must hold.

4. If there exists any sequence (over ) of side information outcomes such that with and furthermore , then must hold.

5. If there exists any sequence (over ) of side information outcomes such that with and furthermore , then must hold.

6. If there exists any sequence (over ) of side information outcomes such that with and furthermore , then must hold.

###### Proof.

For necessity, see Appendix H. For sufficiency, see Appendix I. ∎

###### Remark 6.

The six items in Theorem 4 are concurrent. For example, if some side information outcome sequences fall under Item 2 and some fall under Item 3, then the necessary and sufficient condition for exact recovery is that the conditions corresponding to both items hold.

###### Remark 7.

Theorem 4 does not address the regime of even higher-quality side information because it leads to a trivial problem. For example, for noisy label side information, if the noise parameter α decays sufficiently quickly, then side information alone is sufficient for exact recovery. Also, in the remaining boundary case a necessary condition is easily obtained, but a matching sufficient condition for this case remains unavailable.

In the following, we specialize the results of Theorem 4 to noisy-labels and partially-revealed-label side information.

###### Corollary 1.

For side information consisting of noisy labels with error probability α, Theorem 4 combined with Lemma 17 states that exact recovery is achieved if and only if:

 η1(ρ, a, b) > 1 ,  when log((1−α)/α) = o(log(n))
 η2(ρ, a, b, β) > 1 ,  when log((1−α)/α) = (β + o(1)) log(n) , 0 < β < ρ(a − b − bT)

Figure 2 shows the error exponent for the noisy label side information as a function of β.

###### Corollary 2.

For side information consisting of a fraction ϵ of the labels revealed, Theorem 4 states that exact recovery is achieved if and only if:

 η1(ρ, a, b) > 1 ,  when log(ϵ) = o(log(n))
 η1(ρ, a, b) + β > 1 ,  when log(ϵ) = (−β + o(1)) log(n) , β > 0

Figure 3 shows the error exponent for partially revealed labels, as a function of β.

We now comment on the coverage of the regime (16). If the average degree of a node grows too slowly, then the graph will have isolated community nodes and exact recovery is impossible. If the average degree of a node grows too quickly, then the problem is trivial. Therefore the regime of interest lies in between; this restricts p and q in a manner that is reflected in (16). Beyond that, in the system model of this paper K = o(n), so the ratio K log(n)/n either approaches a positive constant ρ, as in (16), or vanishes; the proofs are easily modified to cover the vanishing case. For the convenience of the reader, we highlight the places in the proof where a modification is necessary to cover that case.

## Iv Belief Propagation

Belief propagation for recovering a single community was studied without side information in [25, 23] in terms of a signal-to-noise ratio parameter λ, showing that weak recovery is achieved if and only if λ > 1/e. Moreover, belief propagation followed by a local voting procedure was shown to achieve exact recovery if λ > 1/e, as long as the information limits allow exact recovery.

In this section M = 1, i.e., we consider scalar side information random variables that are discrete and take values from an alphabet of size L. Extension to vector side information is straightforward as long as the dimensionality is constant across n; the extension is outlined in Corollary 3.

Denote the expectation of the likelihood ratio of the side information conditioned on x_i = 1 by:

 Λ ≜ Σ_{ℓ=1}^{L} α²_{+,ℓ} / α_{−,ℓ}   (27)

By definition, Λ = 1 + χ²(V, U), where χ²(V, U) is the chi-squared divergence between the conditional distributions of side information. Thus, Λ ≥ 1.
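Λ is straightforward to compute for any finite-alphabet side information; a small sketch (the names are ours):

```python
# Lambda = sum_l alpha_{+,l}^2 / alpha_{-,l}; equals 1 + chi^2(V, U), hence >= 1.
def Lambda(alpha_plus, alpha_minus):
    return sum(ap ** 2 / am for ap, am in zip(alpha_plus, alpha_minus))

def chi_squared(alpha_plus, alpha_minus):
    # chi-squared divergence between the two conditional distributions
    return sum((ap - am) ** 2 / am for ap, am in zip(alpha_plus, alpha_minus))

# Noisy labels with flip probability 0.2 (an assumed example):
print(Lambda([0.8, 0.2], [0.2, 0.8]))           # 3.25
print(1 + chi_squared([0.8, 0.2], [0.2, 0.8]))  # 3.25
```

Uninformative side information (identical conditional distributions) gives Λ = 1.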

### Iv-a Bounded LLR

We begin by demonstrating the performance of the belief propagation algorithm on a random tree with side information. Then, we show that the same performance is possible on a random graph drawn from the ensemble of Section II, using a coupling lemma [25] expressing the local approximation of random graphs by trees.

#### Iv-A1 Belief Propagation on a Random Tree with Side Information

We model random trees with side information in a manner roughly parallel to random graphs. Let T be an infinite tree with nodes i ∈ {0, 1, 2, …}, each of them possessing a label τ_i ∈ {0, 1}. The root is node 0. The subtree of depth t rooted at node i is denoted T_i^t. For brevity, the subtree rooted at node 0 with depth t is denoted T^t. Unlike the random graph counterpart, the tree and its node labels are generated together as follows: τ_0 is a Bernoulli-(K/n) random variable. For any node i, the number of its children with label 1 is a random variable that is Poisson with parameter Kp if τ_i = 1, and Poisson with parameter Kq if τ_i = 0. The number of children of node i with label 0 is a random variable which is Poisson with parameter (n−K)q, regardless of the label of node i. The side information ~τ_i takes values in a finite alphabet. The set of all labels in T^t is denoted with τ^t, all side information with ~τ^t, and the labels and side information of T_i^t with τ_i^t and ~τ_i^t, respectively. The likelihood of side information continues to be denoted by α_{+,ℓ} and α_{−,ℓ}, as earlier.

The problem of interest is to infer the label τ_0 given the observations T^t and ~τ^t. The error probability of an estimator τ̂_0 can be written as:

 p_e^t ≜ (K/n) P( τ̂_0 = 0 | τ_0 = 1 ) + ((n−K)/n) P( τ̂_0 = 1 | τ_0 = 0 )   (28)

The maximum a posteriori (MAP) detector minimizes p_e^t and can be written in terms of the log-likelihood ratio as τ̂_0 = 1{ Γ_0^t ≥ ν }, where ν ≜ log((n−K)/K) and:

 Γ_0^t = log( P(T^t, ~τ^t | τ_0 = 1) / P(T^t, ~τ^t | τ_0 = 0) )   (29)

The probability of error of the MAP estimator can be bounded as follows [33]:

 (K(n−K)/n²) ρ² ≤ p_e^t ≤ (√(K(n−K))/n) ρ   (30)

where ρ is the corresponding Bhattacharyya coefficient.

###### Lemma 3.

Let N_i denote the children of node i, and ν ≜ log((n−K)/K). Then,

 Γ_i^{t+1} = −K(p−q) + h_i + Σ_{k∈N_i} log( ((p/q) e^{Γ_k^t − ν} + 1) / (e^{Γ_k^t − ν} + 1) )   (31)

###### Proof.

See Appendix L. ∎

##### Lower and Upper Bounds on ρ

Define, for t ≥ 1 and any node i:

 ψ_i^t = −K(p−q) + Σ_{j∈N_i} M( h_j + ψ_j^{t−1} )   (32)

where

 M(x) ≜ log( ((p/q) e^{x−ν} + 1) / (e^{x−ν} + 1) ) = log( 1 + (p/q − 1) / (1 + e^{−(x−ν)}) ).

Then, Γ_i^t = h_i + ψ_i^t. Let Z_1^t and Z_0^t denote random variables drawn according to the distribution of ψ_i^t conditioned on τ_i = 1 and τ_i = 0, respectively. Similarly, let U_1 and U_0 denote random variables drawn according to the distribution of h_i conditioned on τ_i = 1 and τ_i = 0, respectively. Thus, Γ_i^t conditioned on τ_i = 1 is distributed as Z_1^t + U_1. Define:

 b_t ≜ E[ e^{Z_1^t + U_1} / ( 1 + e^{Z_1^t + U_1 − ν} ) ]   (33)
 a_t ≜ E[ e^{Z_1^t + U_1} ]   (34)
###### Lemma 4.

Let λ ≜ K²(p−q)²/((n−K)q) and let B ≥ 1 be a constant. Then:

 E[e^{U_0/2}] e^{−(λ/8) b_t} ≤ ρ ≤ E[e^{U_0/2}] e^{−(λ/(8B)) b_t}   (35)
###### Proof.

See Appendix M. ∎

Thus, to bound ρ, lower and upper bounds on b_t are needed.

###### Lemma 5.

For all , if , then .

###### Proof.

See Appendix N. ∎

###### Lemma 6.

Define and . Assume that . Then,

 b_{t+1} ≥ Λ e^{λ b_t} ( 1 − (Λ′/Λ) e^{−ν/2} )   (36)
###### Proof.

See Appendix O. ∎

###### Lemma 7.

The sequences a_t and b_t are non-decreasing in t.

###### Proof.

The proof follows directly from [25, Lemma 5]. ∎

###### Lemma 8.

Define log*(·) to be the number of times the logarithm function must be iteratively applied to its argument to get a result less than or equal to one. Then there are constants t̄_0 and C, depending only on λ and Λ, such that:

 b_{t̄_0 + log*(ν) + 2} ≥ Λ e^{λν/(2(C−λ))} ( 1 − (Λ′/Λ) e^{−ν/2} )   (37)

whenever and .

###### Proof.

See Appendix P. ∎

##### Achievability and Converse for the MAP Detector
###### Lemma 9.

Let λ, Λ, B, and C be as defined above. If λΛ < 1/e, then:

 p_e^t ≥ (K(n−K)/n²) E²[e^{U_0/2}] e^{−λΛe/4}   (38)

If λΛ > 1/e, then:

 p_e^t ≤ (√(K(n−K))/n) 2E[e^{U_0/2}] exp( −(λΛ/(8B)) e^{λν/(2(C−λ))} ( 1 − (Λ′/Λ) e^{−ν/2} ) )   (39)

Moreover, since the exponent in (39) grows rapidly with ν:

 p_e^t ≤ (√(K(n−K))/n) 2E[e^{U_0/2}] e^{−ν(r+1/2)} = (K/n) e^{−ν(r+o(1))}   (40)

for some constant r > 0.

###### Proof.

The proof follows directly from (30) and Lemmas 5 and 8. ∎

#### Iv-A2 Belief Propagation Algorithm for Community Recovery with Side Information

In this section, the inference problem defined on the random tree is coupled to the problem of recovering a hidden community with side information. This can be done via a coupling lemma [25] that shows that under certain conditions, the neighborhood of a fixed node in the graph is locally a tree with probability converging to one, and hence, the belief propagation algorithm defined for random trees in Section IV-A1 can be used on the graph as well. The proof of the coupling lemma depends only on the tree structure, implying that it also holds for our system model, where the side information is independent of the tree structure given the labels.

Define G_u^t to be the subgraph containing all nodes that are at a distance at most t from node u, and define x_u^t and Y_u^t to be the set of labels and side information of all nodes in G_u^t, respectively.

###### Lemma 10 (Coupling Lemma [25]).

Suppose that are positive integers such that . Then:

• If the size of the community is deterministic and known, i.e., |C*| = K, then for any node u in the graph, there exists a coupling between (G_u^t, x_u^t, Y_u^t) and (T^t, τ^t, ~τ^t) such that:

 P( (G_u^t, x_u^t, Y_u^t) = (T^t, τ^t, ~τ^t) ) ≥ 1 − n^{−1+o(1)}   (41)

where, for convenience of notation, the dependence of these quantities on n is made implicit.

• If |C*| obeys a probability distribution such that (6) holds, then for any node u, there exists a coupling between (G_u^t, x_u^t, Y_u^t) and (T^t, τ^t, ~τ^t) such that:

 P( (G_u^t, x_u^t, Y_u^t) = (T^t, τ^t, ~τ^t) ) ≥ 1 − n^{−1/2+o(1)}   (42)

Now, we are ready to present the belief propagation algorithm for community recovery with bounded-LLR side information. Define the message transmitted from node i to its neighboring node j at iteration t+1 as:

 R_{i→j}^{t+1} = h_i − K(p−q) + Σ_{k∈N_i∖j} M( R_{k→i}^t )   (43)

where N_i is the set of neighbors of node i in G. The messages are initialized to zero, i.e., R_{i→j}^0 = 0 for all i and all j ∈ N_i. Define the belief of node i at iteration t+1 as:

 R_i^{t+1} = h_i − K(p−q) + Σ_{k∈N_i} M( R_{k→i}^t )   (44)

Algorithm II presents the proposed belief propagation algorithm for community recovery with side information.
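A minimal sketch of the message recursion (43) and beliefs (44) on an adjacency-list graph follows. All names are ours, and ν is passed in as a parameter (in the tree analysis it plays the role of the prior log-odds threshold); this is an illustration of the update equations, not the paper's implementation:

```python
import math

def M(x, p, q, nu):
    # M(x) = log( ((p/q) e^{x-nu} + 1) / (e^{x-nu} + 1) )
    return math.log(1 + (p / q - 1) / (1 + math.exp(-(x - nu))))

def belief_propagation(nbrs, h, K, p, q, nu, t_max):
    # nbrs: dict node -> list of neighbors; h: dict node -> side-information LLR
    msg = {(i, j): 0.0 for i in nbrs for j in nbrs[i]}  # R^0_{i->j} = 0
    for _ in range(t_max):
        # messages (43): exclude the destination j from the sum
        msg = {(i, j): h[i] - K * (p - q)
                       + sum(M(msg[(k, i)], p, q, nu)
                             for k in nbrs[i] if k != j)
               for i in nbrs for j in nbrs[i]}
    # beliefs (44): aggregate over all neighbors
    return {i: h[i] - K * (p - q)
               + sum(M(msg[(k, i)], p, q, nu) for k in nbrs[i])
            for i in nbrs}
```

The nodes with the K largest beliefs can then be declared the community estimate.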

If in Algorithm II the number of iterations satisfies the condition of Lemma 10, then with probability converging to one R_u^t = Γ_u^t, where Γ_u^t is the log-likelihood ratio defined for the random tree. Hence, the performance of Algorithm II is expected to be the same as the MAP estimator defined as τ̂_u = 1{ Γ_u^t ≥ ν }, where ν = log((n−K)/K). The only difference is that the MAP estimator decides based on a threshold, while Algorithm II selects the K nodes with the largest beliefs.