DeGroot-Friedkin Map in Opinion Dynamics is Mirror Descent

12/29/2018 · by Abhishek Halder, et al. · University of California, Santa Cruz

We provide a variational interpretation of the DeGroot-Friedkin map in opinion dynamics. Specifically, we show that the nonlinear dynamics for the DeGroot-Friedkin map can be viewed as mirror descent on the standard simplex with the associated Bregman divergence being equal to the generalized Kullback-Leibler divergence, i.e., an entropic mirror descent. Our results reveal that the DeGroot-Friedkin map elicits an individual's social power to be close to her social influence while minimizing the so-called "extropy" -- the entropy of the complementary opinion.


I Introduction

The DeGroot-Friedkin map [1] in opinion dynamics is a nonlinear recursion of the form

x(s+1) = F(x(s); c),  s = 0, 1, 2, …,

where x(s) ∈ Δ^{n−1}, the standard simplex in ℝ^n, i.e., the convex hull of the standard basis vectors e_1, …, e_n in ℝ^n; int(Δ^{n−1}) denotes its interior. The state vector x(s) models the opinions of n individuals in a social network on a particular issue. The recursion index s codifies a sequence of issues. The parameter vector c ∈ int(Δ^{n−1}) is the Perron-Frobenius left eigenvector of an n × n row stochastic, zero-diagonal, irreducible matrix C, typically referred to as the “relative influence” or “relative interaction” matrix. (A nonnegative matrix is irreducible if its associated digraph is strongly connected. The digraph associated with an n × n nonnegative matrix is constructed by adding a directed edge from node i to node j, where i, j ∈ {1, …, n}, provided the (i, j)-th element of the matrix is positive.) As in the original DeGroot-Friedkin model, we will assume that the matrix C is constant. Under the stated structural assumptions on the matrix C, the vector c satisfies c_i ∈ (0, 1) for all i ∈ {1, …, n} (see e.g., [1, Lemma 2.3, part (i)]). Intuitively, the elements of the matrix C model the relative influence of an individual’s social network on her opinion, and they affect the opinion dynamics via the vector c. Thus, the DeGroot-Friedkin map describes how the opinions of a group of individuals evolve over a sequence of issues, accounting for the fact that social interactions influence opinion.

To ease notation, let x ≡ x(s) and x⁺ ≡ x(s+1). In the DeGroot-Friedkin model, the map F is explicitly given by

x⁺ = F(x; c) = ( c ⊘ (1_n − x) ) / ⟨1_n, c ⊘ (1_n − x)⟩    (1)

for s = 0, 1, 2, …, where the symbol ⊘ denotes element-wise division, and 1_n denotes the column vector of ones. The explicit form of the recursion appeared first in [1, Lemma 2.2], and was proposed as a combination of the DeGroot model [2] and Friedkin’s model of reflected appraisal [3] in the evolution of social power (hence the name “DeGroot-Friedkin model”). Various extensions of the basic DeGroot-Friedkin model have appeared in [4, 5, 6].
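For readers who wish to experiment, the recursion (1) is straightforward to simulate. The following Python sketch is ours, not part of the model specification: it assumes a user-supplied row stochastic, zero-diagonal, irreducible matrix C (the particular C below is an arbitrary example whose digraph is not a star), and the function names are illustrative.

    import numpy as np

    def perron_left_eigenvector(C):
        """Left eigenvector of C for the eigenvalue 1, normalized to the simplex."""
        eigvals, eigvecs = np.linalg.eig(C.T)
        v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
        v = np.abs(v)                        # the Perron-Frobenius vector is positive
        return v / v.sum()

    def degroot_friedkin(x, c):
        """One step of (1): element-wise division c/(1-x), then normalization."""
        y = c / (1.0 - x)
        return y / y.sum()

    # A row stochastic, zero-diagonal, irreducible C whose digraph is not a star.
    C = np.array([[0.0, 0.4, 0.6],
                  [0.5, 0.0, 0.5],
                  [0.3, 0.7, 0.0]])
    c = perron_left_eigenvector(C)
    x = np.array([0.2, 0.3, 0.5])            # initial condition in the interior
    for _ in range(100):
        x = degroot_friedkin(x, c)
    print(c, x)                              # x approximates the fixed point x*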

The convergence properties for the DeGroot-Friedkin map depend on whether the digraph associated with C has star topology or not. An n-vertex digraph has star topology if there exists a node i, referred to as the “center node”, such that all directed edges of the digraph share the i-th vertex. In the opinion dynamics context, interpreting the vertices of the digraph as individuals, the existence of star topology means that a single individual holds the social power to influence the opinion of the group.

From (1), it is evident that the map F leaves the vertices e_1, …, e_n of the simplex invariant, and hence they are fixed points. For n ≥ 3 (the DeGroot-Friedkin dynamics for the map (1) is degenerate for n = 2 since F(x) = x in that case, and all points on the simplex are fixed points; in this paper, we thus consider n ≥ 3), it is known [1, Theorem 4.1] that in addition to the simplex vertices, there exists a unique fixed point x* ∈ int(Δ^{n−1}), provided the digraph associated with C does not have star topology. In that case, for all initial conditions x(0) ∈ Δ^{n−1} \ {e_1, …, e_n}, the iterates x(s) → x* as s → ∞. On the other hand, if the digraph associated with C has star topology, then the simplex vertices are the only fixed points [1, Lemma 3.2], and for all initial conditions x(0) ∈ Δ^{n−1} \ {e_1, …, e_n}, the iterates x(s) → e_i as s → ∞, where i is the index of the center node. In the rest of this paper, we will tacitly assume that the digraph associated with C does not have star topology, i.e., the map (1) admits a fixed point x* ∈ int(Δ^{n−1}).

We can interpret the vertices of the simplex as “autocratic” fixed points. The fixed point x* in the interior of the simplex is purely “democratic” when it is equal to 1_n/n, which happens if and only if C is doubly stochastic. In general, the location of x* depends on the parameter vector c (or equivalently, on the matrix C).

The results mentioned in the preceding two paragraphs were derived in [1] through Lyapunov analysis. The purpose of this paper is to present a variational interpretation of the opinion dynamics for the DeGroot-Friedkin map. Specifically, we show that the DeGroot-Friedkin map can be viewed as mirror descent of a convex function on the standard simplex with the associated Bregman divergence being equal to the generalized Kullback-Leibler divergence. On one hand, our development provides novel geometric insight into the opinion dynamics on the standard simplex. On the other hand, it answers the natural question: what is the collective utility (i.e., “social welfare”) that the DeGroot-Friedkin map elicits over a given influence network?

This paper is organized as follows. Section II provides an expository overview of mirror descent. Our main results are collected in Section III. Several implications of our variational interpretation are provided in Section IV. Section V concludes the paper.

Notations and preliminaries

We denote the entropy of a vector x ∈ Δ^{n−1} as H(x) := −⟨x, log x⟩, and the Kullback-Leibler divergence between x, y ∈ Δ^{n−1} as D_KL(x ∥ y) := ⟨x, log(x ⊘ y)⟩. As is well-known, both H(·) and D_KL(· ∥ ·) are nonnegative. In this paper, the operators log(·) and exp(·) are to be understood element-wise. We use the symbols ⊙ and ⊘ to denote element-wise multiplication and division, respectively. Given a vector v ∈ ℝ^n, by diag(v) we mean a diagonal matrix with diagonal elements being equal to the entries of the vector v. The notations dom(·) and range(·) stand for the domain and range of a function, respectively; cl(·) stands for the closure of an open set; ∂(·) stands for the boundary of a closed set. By closure of a function, we mean that its epigraph is a closed set. We use ⟨·, ·⟩ to denote the standard Euclidean inner product. The Legendre-Fenchel conjugate [7, Section 12] of a function f : ℝ^n → ℝ ∪ {+∞} is given by

f*(y) := sup_{x ∈ ℝ^n} { ⟨y, x⟩ − f(x) }.

We clarify here the notation that a function with superscript * denotes the Legendre-Fenchel conjugate, while a vector with superscript * denotes a fixed point. The following property of the Legendre-Fenchel conjugate will be useful in this paper. Let g(x) = f(Ax + b), where A is nonsingular, and b ∈ ℝ^n. Then

g*(y) = f*(A^{−⊤} y) − ⟨b, A^{−⊤} y⟩.    (2)
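As a quick sanity check, the identity (2) can be verified numerically in one dimension. The sketch below is ours; it takes f(x) = x²/2, whose conjugate f*(y) = y²/2 is classical, and approximates the supremum defining g* on a finite grid, so the agreement is up to grid resolution.

    import numpy as np

    a, b = 2.0, 1.0                       # g(x) = f(a*x + b), with f(x) = x**2 / 2
    f_conj = lambda y: 0.5 * y ** 2       # closed-form conjugate of f
    xs = np.linspace(-50.0, 50.0, 200001) # grid for the sup defining g*

    def g_conj(y):
        """Numerical sup over the grid in the definition of g*."""
        return np.max(y * xs - 0.5 * (a * xs + b) ** 2)

    for y in (-3.0, 0.5, 2.0):
        lhs = g_conj(y)
        rhs = f_conj(y / a) - b * (y / a)  # right-hand side of (2), scalar case
        print(y, lhs, rhs)                 # the two columns agree to grid accuracy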

II Mirror Descent

The mirror descent [8] is a generalization of the well-known projected gradient descent algorithm that accounts for the pertinent geometry of the optimization problem. Recall that for solving a convex optimization problem of the form

minimize_{x ∈ X} f(x)    (3)

(i.e., f : ℝ^n → ℝ is a convex function and X ⊆ ℝ^n is a convex set), the projected gradient descent with constant step-size η > 0 is a two-step algorithm, given by

y(s+1) = x(s) − η ∇f(x(s)),    (4a)
x(s+1) = Π_X(y(s+1)),    (4b)

where the Euclidean projection operator Π_X(y) := arg min_{x ∈ X} ‖x − y‖₂, the subgradient ∇f(x(s)) ∈ ∂f(x(s)) (the subdifferential), and s ∈ ℕ ∪ {0}. The mirror descent generalizes (4) by introducing the so-called mirror map φ and its associated Bregman divergence D_φ [9].

Definition 1

(Mirror map) Given the convex optimization problem (3), suppose φ : D → ℝ is a differentiable, strictly convex function on an open convex set D ⊆ ℝ^n, such that the constraint set X ⊆ cl(D), range(∇φ) = ℝ^n, and ‖∇φ(x)‖ → ∞ as x → ∂D. Then φ is called a mirror map.

Definition 2

(Bregman divergence) Let φ be a mirror map as in Definition 1. The associated Bregman divergence D_φ : D × D → ℝ_{≥0} is given by

D_φ(x, y) := φ(x) − φ(y) − ⟨∇φ(y), x − y⟩,    (5)

and can be interpreted as the error at x due to the first order Taylor approximation of φ about y. In general, D_φ(x, y) ≠ D_φ(y, x), hence D_φ is non-symmetric and therefore not a metric.

With Definitions 1 and 2 in place, the mirror descent algorithm associated with the mirror map φ is a modified version of (4), given by

y(s+1) = (∇φ)^{−1} ( ∇φ(x(s)) − η ∇f(x(s)) ),    (6a)
x(s+1) = Π^φ_X(y(s+1)),    (6b)

where the Bregman projection operator Π^φ_X(y) := arg min_{x ∈ X ∩ cl(D)} D_φ(x, y), and η > 0.

The main insight behind (6) is the following. As the subgradient is an element of the dual space, the subtraction in (4a) does not make sense unless the decision variable in (3) belongs to a Hilbert space (since the dual space of a Hilbert space is isometrically isomorphic to the Hilbert space itself, thanks to the Riesz representation theorem [13, Ch. 4]). To circumvent this issue, (6a) takes an element from the primal space to the dual space via ∇φ, performs the gradient update in the dual space, and maps the updated value back to the primal space via (∇φ)^{−1} = ∇φ*. To ensure that the updated vector lies in the constraint set X, the Bregman projection is performed in (6b). The choice of the mirror map is usually guided by the geometry of the set X.

We note that (6) reduces to (4) by setting φ(x) = ½‖x‖₂² and D = ℝ^n (in this case, D_φ(x, y) = ½‖x − y‖₂²).

Of particular importance to us is the choice φ(x) = ⟨x, log x⟩ (the negative entropy), D = ℝ^n_{>0}, resulting in

D_φ(x, y) = ⟨x, log(x ⊘ y)⟩ − ⟨1_n, x − y⟩,    (7)

the generalized Kullback-Leibler divergence, named so because it equals D_KL(x ∥ y) when x, y ∈ Δ^{n−1}. In the opinion dynamics context, we set X = Δ^{n−1}, and seek an equivalence between (1) and (3). Per Definition 1, notice that φ(x) = ⟨x, log x⟩ is a valid mirror map since it is strictly convex and differentiable; furthermore, Δ^{n−1} ⊆ cl(ℝ^n_{>0}), range(∇φ) = ℝ^n, and ‖∇φ(x)‖ → ∞ as x → ∂ℝ^n_{>0}. Using (7), direct computation gives

Π^φ_{Δ^{n−1}}(y) = y / ⟨1_n, y⟩,  y ∈ ℝ^n_{>0}.    (8)

Therefore, for the mirror map φ(x) = ⟨x, log x⟩, the mirror descent algorithm (6) becomes

y(s+1) = x(s) ⊙ exp(−η ∇f(x(s))),    (9a)
x(s+1) = y(s+1) / ⟨1_n, y(s+1)⟩,    (9b)

where η > 0. Notice that for x(s) ∈ int(Δ^{n−1}), the map (1) is indeed in the form of a generalized Kullback-Leibler projection of a positive vector onto the standard simplex. We next develop this correspondence between (1) and (9).
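In code, the update (9) is two lines. The sketch below is ours: it applies entropic mirror descent to the illustrative objective f(x) = ½‖x − p‖₂², whose minimizer over the simplex is the interior point p itself; the step size, iteration count, and the vector p are arbitrary choices.

    import numpy as np

    def entropic_md_step(x, grad, eta):
        """One pass of (9): multiplicative update (9a), then normalization (9b)."""
        y = x * np.exp(-eta * grad)   # (9a): gradient step taken in the dual space
        return y / y.sum()            # (9b): generalized KL projection onto the simplex

    p = np.array([0.5, 0.3, 0.2])     # interior target; f(x) = 0.5*||x - p||^2
    x = np.ones(3) / 3.0              # start at the uniform distribution
    for _ in range(2000):
        x = entropic_md_step(x, x - p, eta=0.1)   # grad f(x) = x - p
    print(x)                          # approx. p, the minimizer over the simplex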

III Main Results

In order to associate a variational problem of the form (3) with the DeGroot-Friedkin map, we transcribe (1) in the form (9) by setting

x(s) ⊙ exp(−η ∇f(x(s))) = c ⊘ (1_n − x(s)),    (10)

where η > 0 and s = 0, 1, 2, …. Rearranging (10), we get

η ∇f(x) = log ( x ⊙ (1_n − x) ⊘ c ),    (11)

which implies

f(x) = (1/η) ( ⟨x, log(x ⊘ c)⟩ − ⟨1_n − x, log(1_n − x)⟩ − 2⟨1_n, x⟩ ) + constant    (12a)
     = (1/η) ( D_KL(x ∥ c) − ⟨1_n − x, log(1_n − x)⟩ ),    (12b)

where we used ⟨1_n, x⟩ = 1, and fixed the (immaterial) constant of integration to 2/η. Since both D_KL(x ∥ c) and −⟨1_n − x, log(1_n − x)⟩ are nonnegative functions, hence from (12b) it follows that f(x) ≥ 0 for all x ∈ Δ^{n−1}. Furthermore, we have the following.

Theorem 1

The function f in (12b) is convex over Δ^{n−1}.

Notice that

η f(x) = ⟨x, log x⟩ − ⟨1_n − x, log(1_n − x)⟩ − ⟨x, log c⟩.    (13)

The following non-trivial result was proved in [10, Theorem 20]: the function ⟨x, log x⟩ − ⟨1_n − x, log(1_n − x)⟩ is convex for x ∈ Δ^{n−1}. (Notice that ⟨x, log x⟩ − ⟨1_n − x, log(1_n − x)⟩ is not convex on (0, 1)^n; yet, the function is “simplex-convex”.) Therefore, (13), being the sum of a convex and a linear function, is also convex in x ∈ Δ^{n−1}. From (12b), the statement follows.
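The “simplex-convexity” can also be probed numerically. The sketch below is ours: it tests midpoint convexity of ⟨x, log x⟩ − ⟨1_n − x, log(1_n − x)⟩ along random segments inside the simplex (no violations, consistent with [10, Theorem 20]), and exhibits a violation along a segment in the cube (0, 1)^n off the simplex.

    import numpy as np

    def g(x):
        """<x, log x> - <1-x, log(1-x)>: the function in (13) minus its linear term."""
        return np.sum(x * np.log(x) - (1.0 - x) * np.log(1.0 - x))

    rng = np.random.default_rng(0)
    violations = 0                   # midpoint convexity along simplex segments
    for _ in range(10000):
        x, y = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))
        if g(0.5 * (x + y)) > 0.5 * (g(x) + g(y)) + 1e-12:
            violations += 1
    print("violations on the simplex:", violations)   # expect 0

    # Off the simplex, midpoint convexity fails once coordinates exceed 1/2:
    x = np.full(4, 0.80)
    y = np.full(4, 0.99)
    print(g(0.5 * (x + y)) <= 0.5 * (g(x) + g(y)))    # False: not convex on (0,1)^n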

Remark 1

In [11], the quantity J(x) := −⟨1_n − x, log(1_n − x)⟩, x ∈ Δ^{n−1}, was referred to as the “extropy”, and was argued to be a complementary concept of the entropy H(x) = −⟨x, log x⟩. Like entropy, the extropy is permutation invariant, achieves its maximum at the uniform distribution 1_n/n, and its minimum at the simplex vertices e_i, i ∈ {1, …, n}. The quantities entropy and extropy coincide for n = 2, but are different for n > 2 (see e.g., [11, Section 2]).

Theorem 1 and its preceding discussion reveal that computing the fixed point x* for the DeGroot-Friedkin map is equivalent to solving a convex optimization problem over Δ^{n−1}. We summarize this in the following proposition.

Proposition 1

For n ≥ 3, and for a given c ∈ int(Δ^{n−1}), let x* ∈ int(Δ^{n−1}) be the non-autocratic fixed point of the DeGroot-Friedkin map (1). Then x* equals

arg min_{x ∈ Δ^{n−1}} f(x)    (14a)
= arg min_{x ∈ Δ^{n−1}} { D_KL(x ∥ c) − ⟨1_n − x, log(1_n − x)⟩ }.    (14b)

The equivalence between the mirror descent (9) for problem (3) with X = Δ^{n−1}, and the DeGroot-Friedkin map, is due to (10), (11), (12). The convexity of the objective follows from Theorem 1. What remains to prove is that we must have x* ∈ int(Δ^{n−1}), i.e., x* cannot be on the boundary of the simplex. One way to show this is to observe from (1) that x* = F(x*; c), i.e.,

x* = ( c ⊘ (1_n − x*) ) / ⟨1_n, c ⊘ (1_n − x*)⟩.    (15)

Since c_i > 0 for all i ∈ {1, …, n}, from (15) it follows that x*_i > 0 for all i, i.e., x* ∉ ∂Δ^{n−1}. We will see below that (15) can also be derived from the conditions of optimality for (14b).

An immediate corollary of the above is that the fixed point x* is unique and its basin of attraction is Δ^{n−1} \ {e_1, …, e_n}. These facts were established in [1] via non-smooth Lyapunov analysis.

Problem (14) minimizes the extropy (i.e., the entropy of the complementary opinion) while staying close to the vector c in the Kullback-Leibler sense. This can be interpreted as follows. The entries of c, termed “eigenvector centrality scores”, reveal the social influence of an individual. The entries of the argmin x* reveal the individual’s social power. The Kullback-Leibler term in the objective in (14) implies that an individual’s social power tends to be close to her social influence. The extropy term promotes collective non-uniformity in the complementary opinion, i.e., penalizes the “spread” of the complementary opinion for the group. The overall objective in (14) encapsulates the combined effect of these two tendencies.
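Proposition 1 invites a direct numerical check. The sketch below is ours: it iterates (1) to convergence for an arbitrary interior vector c, and then confirms that the objective in (14) (written with η = 1) is not smaller at any of a large number of random simplex samples.

    import numpy as np

    def objective(x, c):
        """D_KL(x||c) - <1-x, log(1-x)>: the objective in (14), eta = 1."""
        return np.sum(x * np.log(x / c)) - np.sum((1.0 - x) * np.log(1.0 - x))

    c = np.array([0.45, 0.35, 0.20])    # an arbitrary c in the simplex interior
    x = np.ones(3) / 3.0
    for _ in range(300):                 # iterate (1) to the fixed point x*
        y = c / (1.0 - x)
        x = y / y.sum()

    rng = np.random.default_rng(1)
    fstar = objective(x, c)
    samples = (rng.dirichlet(np.ones(3)) for _ in range(10000))
    print(all(objective(s, c) >= fstar - 1e-9 for s in samples))   # True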

For n ≥ 3, the fixed point x* is known [1, p. 380] to have the same ordering as the vector c, i.e., x*_i ≥ x*_j if and only if c_i ≥ c_j, for all i, j ∈ {1, …, n}. We can recover this fact from (14) as follows.

Theorem 2

For a given c ∈ int(Δ^{n−1}), let x* be the argmin for the convex problem (14). For any n × n permutation matrix P, let x̃* be the argmin for (14) with c replaced by c̃ := Pc.

Then x̃* = P x*.

We start by noting that

x̃* = arg min_{x ∈ Δ^{n−1}} { D_KL(x ∥ Pc) − ⟨1_n − x, log(1_n − x)⟩ },    (16)

and that PΔ^{n−1} = Δ^{n−1}. Since P^{−1} = P^⊤, hence letting x = Pz, we can rewrite the right-hand-side of (16) as

P arg min_{z ∈ Δ^{n−1}} { D_KL(z ∥ c) − ⟨1_n − z, log(1_n − z)⟩ },    (17)

where we have used that D_KL(Pz ∥ Pc) − ⟨1_n − Pz, log(1_n − Pz)⟩ = D_KL(z ∥ c) − ⟨1_n − z, log(1_n − z)⟩, as both entropy and extropy are permutation invariant. Therefore,

x̃* = P x*.    (18)

This completes the proof.
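Theorem 2 is easy to confirm numerically; the sketch below (ours) compares the fixed point computed for the permuted vector Pc against the permuted fixed point Px*, for an arbitrary interior c and a cyclic permutation P.

    import numpy as np

    def fixed_point(c, iters=300):
        """Fixed point of (1) by direct iteration from the uniform vector."""
        x = np.ones(len(c)) / len(c)
        for _ in range(iters):
            y = c / (1.0 - x)
            x = y / y.sum()
        return x

    c = np.array([0.45, 0.35, 0.20])
    P = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)     # a cyclic permutation matrix
    print(np.allclose(fixed_point(P @ c), P @ fixed_point(c)))   # True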

For problem (14), since the objective is convex, and the constraint is linear, strong duality holds. Let λ ∈ ℝ be the Lagrange multiplier associated with the constraint ⟨1_n, x⟩ = 1. The corresponding Lagrangian

L(x, λ) = D_KL(x ∥ c) − ⟨1_n − x, log(1_n − x)⟩ + λ ( ⟨1_n, x⟩ − 1 )    (19)

yields the following Karush-Kuhn-Tucker (KKT) conditions for the optimal pair (x*, λ*):

log ( x*_i / c_i ) + log ( 1 − x*_i ) + 2 + λ* = 0,  i ∈ {1, …, n},    (20a)
⟨1_n, x*⟩ = 1.    (20b)

Exponentiating (20a) gives x*_i (1 − x*_i) = c_i e^{−(2+λ*)}; summing over i ∈ {1, …, n}, then using (20b) and ⟨1_n, c⟩ = 1, reveals that

e^{−(2+λ*)} = 1 − ⟨x*, x*⟩.    (21)

Using (21) to substitute for λ* in (20a) results in the map

x* = ( c ⊘ (1_n − x*) ) / ⟨1_n, c ⊘ (1_n − x*)⟩,    (22)

which is what we obtained in (15).
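At the fixed point, the conditions (20)-(21) can be verified directly: by (20a), the quantity log(x*_i/c_i) + log(1 − x*_i) must be the same for every i, and by (21) it must equal log(1 − ⟨x*, x*⟩). A sketch (ours), with an arbitrary interior c:

    import numpy as np

    c = np.array([0.45, 0.35, 0.20])
    x = np.ones(3) / 3.0
    for _ in range(300):                             # iterate (1) to the fixed point x*
        y = c / (1.0 - x)
        x = y / y.sum()

    stationarity = np.log(x / c) + np.log(1.0 - x)   # from (20a): equals -(2 + lambda*)
    lam = -2.0 - np.log(1.0 - x @ x)                 # multiplier recovered via (21)
    print(stationarity)                              # all entries (nearly) equal
    print(np.allclose(stationarity, -(2.0 + lam)))   # True: consistent with (20a), (21)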

At this point, recall that the matrix C being doubly stochastic is equivalent to c = 1_n/n. We next use (22) to further prove that x* = 1_n/n if and only if c = 1_n/n.

Theorem 3

Let x* be the argmin for the convex problem (14). Then, x* = 1_n/n if and only if c = 1_n/n.

For any i ≠ j, using c = 1_n/n in (22), we obtain x*_i (1 − x*_i) = x*_j (1 − x*_j), since c_i = c_j = 1/n, and x*_i (1 − x*_i)/c_i = e^{−(2+λ*)} = constant (from (21)). This gives x*_i = x*_j or x*_i = 1 − x*_j, for all i ≠ j. Notice that x*_i ≠ 1 − x*_j since otherwise the remaining entries of the vector x* would be zero, which contradicts the premise x* ∈ int(Δ^{n−1}). Hence x*_i = x*_j for all i ≠ j. The condition ⟨1_n, x*⟩ = 1 then yields x*_i = 1/n for all i ∈ {1, …, n}.

On the other hand, directly substituting x* = 1_n/n in (22) results in c_i = 1/n for all i ∈ {1, …, n}.

Fig. 1: The colormap of the convex objective function in (14) for c = 1_3/3 on the simplex for n = 3. In this figure, we also plot the fixed point x* (black diamond) and the first four iterates of six randomly chosen initial conditions (indicated by six differently colored circles), showing they converge to x*, which is the minimizer of the objective function in (14).
Fig. 2: The colormap of the convex objective function in (14) for a non-uniform c on the simplex for n = 3. In this figure, we also plot the fixed point x* (black diamond) and the first four iterates of six randomly chosen initial conditions (indicated by six differently colored circles), showing they converge to x*, which is the minimizer of the objective function in (14).
Remark 2

Notice that for c = 1_n/n, the problem (14) reduces to computing the argmin of ⟨x, log x⟩ − ⟨1_n − x, log(1_n − x)⟩ over Δ^{n−1} (due to (13)). Therefore, a corollary of Theorem 3 is that 1_n/n is the minimizer of the convex function ⟨x, log x⟩ − ⟨1_n − x, log(1_n − x)⟩ over Δ^{n−1}.

We now provide some numerical evidence to help visualize the development so far. In Fig. 1, we plot the colormap of the objective function in (14) for c = 1_3/3 on the simplex for n = 3. This colormap suggests that the objective function achieves its minimum at 1_3/3, which is in accordance with Theorem 3. In the same figure, we overlay the fixed point x* = 1_3/3 (black diamond) and the first few iterates of six randomly chosen initial conditions (indicated by six differently colored circles) for the recursion (1), showing that all the iterates converge to x*, which is indeed the minimizer of the objective over the simplex.

Likewise, in Fig. 2, we plot the colormap of the objective function in (14) on the simplex for n = 3 with a non-uniform c. In this case, the fixed point x* (black diamond) lies away from 1_3/3, which can be verified by direct substitution in (1). Again, in Fig. 2, we overlay the first few iterates of six randomly chosen initial conditions (indicated by six differently colored circles) for the recursion (1), showing that all the iterates converge to x*, which is indeed the minimizer of the objective over the simplex.
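These experiments are straightforward to replicate in spirit; since the exact matrix C behind the figures is not reproduced here, the sketch below (ours) takes an arbitrary interior c (set c = np.ones(3)/3 to mimic Fig. 1) and draws the simplex in the (x₁, x₂) coordinates.

    import numpy as np
    import matplotlib.pyplot as plt

    def objective(x, c):
        """D_KL(x||c) - <1-x, log(1-x)>: the objective in (14), eta = 1."""
        return np.sum(x * np.log(x / c)) - np.sum((1.0 - x) * np.log(1.0 - x))

    c = np.array([0.45, 0.35, 0.20])        # use np.ones(3)/3 to mimic Fig. 1
    pts, vals = [], []
    for a in np.linspace(0.01, 0.98, 120):  # grid over the simplex, shown in (x1, x2)
        for b in np.linspace(0.01, 0.99 - a, 120):
            x = np.array([a, b, 1.0 - a - b])
            pts.append((a, b))
            vals.append(objective(x, c))
    pts = np.array(pts)
    plt.scatter(pts[:, 0], pts[:, 1], c=vals, s=2)

    rng = np.random.default_rng(7)
    for _ in range(6):                      # overlay a few iterates of (1)
        x = rng.dirichlet(np.ones(3))
        traj = [x[:2].copy()]
        for _ in range(4):
            y = c / (1.0 - x)
            x = y / y.sum()
            traj.append(x[:2].copy())
        traj = np.array(traj)
        plt.plot(traj[:, 0], traj[:, 1], marker="o")
    plt.colorbar(label="objective in (14)")
    plt.show()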

IV Ramifications

Next, we collect some consequences which follow from our variational interpretation.

IV-A Proximal Recursion

A consequence of identifying the DeGroot-Friedkin map with the mirror descent is that we can express the (transient) DeGroot-Friedkin iterates via the proximal recursion

x(s+1) = arg min_{x ∈ Δ^{n−1}} { η ⟨∇f(x(s)), x⟩ + D_φ(x, x(s)) },    (23)

where s = 0, 1, 2, …, and ∇f is given by (11). This proximal recursion perspective of mirror descent is due to [12]. One can view (23) as minimizing the local linearization of f while not moving too far (in the Kullback-Leibler sense) from the previous iterate.
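The minimization in (23) has the closed form already given by (9): the optimizer is the normalized multiplicative update. The sketch below (ours, with η = 1 and an arbitrary interior c and iterate) checks this by brute force over a dense grid on the simplex.

    import numpy as np

    c = np.array([0.45, 0.35, 0.20])
    xs = np.array([0.2, 0.3, 0.5])                 # current iterate x(s)
    grad = np.log(xs * (1.0 - xs) / c)             # eta*grad f(x(s)) from (11), eta = 1

    def prox_objective(x):
        """<eta grad f(x(s)), x> + D_phi(x, x(s)), with D_phi from (7)."""
        return grad @ x + np.sum(x * np.log(x / xs)) - x.sum() + xs.sum()

    y = xs * np.exp(-grad)                         # closed form via (9a) ...
    closed = y / y.sum()                           # ... and (9b)

    best, best_val = None, np.inf                  # brute force over a simplex grid
    for a in np.linspace(0.001, 0.998, 300):
        for b in np.linspace(0.001, 0.999 - a, 300):
            x = np.array([a, b, 1.0 - a - b])
            v = prox_objective(x)
            if v < best_val:
                best, best_val = x, v
    print(closed, best)                            # agree up to the grid resolution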

IV-B Lagrange Dual Problem

Since the constraint in the primal problem (14b) is linear, we can derive the Lagrange dual problem associated with it using the Legendre-Fenchel conjugate (see e.g., [14, p. 221, Section 5.1.6]). Specifically, let θ(x) := ⟨x, log x⟩, A := −I_n, and b := 1_n, so that ⟨1_n − x, log(1_n − x)⟩ = θ(Ax + b). From (13), ηf(x) = θ(x) − θ(Ax + b) − ⟨x, log c⟩; its Legendre-Fenchel conjugate (ηf)* can be computed using (2). Thus, the Lagrange dual function associated with the primal problem (14b) is

g(λ) = −λ − (ηf)* (−λ 1_n),    (24)

where as before, λ is the Lagrange multiplier associated with the constraint ⟨1_n, x⟩ = 1.

The Legendre-Fenchel conjugate in (24) can be written as an infimal convolution of the conjugates of the summands in (13), i.e.,

(25a)
(25b)
(25c)

where we used (2) to derive (25b). Performing the unconstrained minimization in (25c), we obtain

(26)

where

(27)

Thus, the dual problem associated with the primal problem (14b) is max_{λ ∈ ℝ} g(λ), where g is given by (24), and (ηf)* is given by (26).

IV-C Equivalent Natural Gradient Descent

Natural gradient descent [15] generalizes the standard gradient descent to a Riemannian manifold. Specifically, let (M, G) be an n-dimensional Riemannian manifold with metric tensor G. For an optimization problem of the form

minimize_{θ ∈ M} h(θ),    (28)

the natural gradient descent on M with fixed step size η is given by

θ(s+1) = θ(s) − η G^{−1}(θ(s)) ∇h(θ(s)),    (29)

where G^{−1}(θ) ∇h(θ) is the natural gradient, i.e., (29) steps in the steepest descent direction of h along the manifold M. We now exploit an equivalence established in [16] between the mirror descent with twice differentiable mirror map φ, and the natural gradient descent along the dual Riemannian manifold, as follows. Since φ is strictly convex and twice differentiable, the Hessian ∇²φ is positive definite. Thus, the Bregman divergence D_φ induces the Riemannian manifold (D, ∇²φ). Let Y be the image of D under the map ∇φ, i.e., Y := ∇φ(D), and let φ* be the Legendre-Fenchel conjugate of φ. Then, the dual Bregman divergence D_{φ*} induces the Riemannian manifold (Y, ∇²φ*). In [16], (Y, ∇²φ*) was interpreted as the dual Riemannian manifold of the primal Riemannian manifold (D, ∇²φ). For the unconstrained case (see [16, Theorem 1]), i.e., for X = ℝ^n in (6), the mirror descent with mirror map φ is equivalent to the natural gradient descent (29) along the dual manifold (Y, ∇²φ*). For the constrained case, i.e., for X ⊂ ℝ^n in (6), we modify (29) as projected natural gradient descent, i.e.,

y(s+1) = Π^{φ*}_{∇φ(X)} ( y(s) − η ( ∇²φ*(y(s)) )^{−1} ∇h(y(s)) ),    (30)

where h := f ∘ ∇φ*, and Π^{φ*}_{∇φ(X)}(·) denotes the Bregman projection, with respect to D_{φ*}, onto ∇φ(X). Thus, in general, the mirror descent (6) is equivalent to the projected natural gradient descent (30).

For our particular instance (9), φ(x) = ⟨x, log x⟩ is twice differentiable. Furthermore, φ*(y) = ⟨1_n, exp(y − 1_n)⟩, ∇φ*(y) = exp(y − 1_n), ∇²φ*(y) = diag(exp(y − 1_n)), and (30) becomes

y(s+1) = Π^{φ*}_{∇φ(Δ^{n−1})} ( y(s) − η ( diag(exp(y(s) − 1_n)) )^{−1} ∇h(y(s)) ),    (31)

where f is given by (12b). Substituting y(s) = ∇φ(x(s)) = 1_n + log x(s) in (31), and noting that (∇²φ*(y))^{−1} ∇h(y) = ∇f(∇φ*(y)) by the chain rule, gives

y(s+1) = Π^{φ*}_{∇φ(Δ^{n−1})} ( 1_n + log x(s) − η ∇f(x(s)) ).    (32)

The Lemma below helps in computing the projection in (32).

Lemma 1

Let x, x̂ ∈ D, and let y := ∇φ(x), ŷ := ∇φ(x̂). Then D_{φ*}(y, ŷ) = D_φ(x̂, x).

The proof follows from the definitions of the Bregman divergence and the Legendre-Fenchel conjugate. In (32), let ŷ := 1_n + log x(s) − η ∇f(x(s)), and x̂ := ∇φ*(ŷ) = x(s) ⊙ exp(−η ∇f(x(s))). Thanks to Lemma 1,

Π^{φ*}_{∇φ(Δ^{n−1})} (ŷ) = ∇φ ( arg min_{x ∈ Δ^{n−1}} D_φ(x̂, x) ),    (33)

where ∇φ(x) = 1_n + log x, and D_φ is given by (7). Direct calculation yields arg min_{x ∈ Δ^{n−1}} D_φ(x̂, x) = x̂ / ⟨1_n, x̂⟩. Therefore, (32) gives

y(s+1) = 1_n + log ( x̂ / ⟨1_n, x̂⟩ ).    (34)

Substituting back x(s+1) = ∇φ*(y(s+1)) and η ∇f from (11) in (34), followed by algebraic simplification, results in

x(s+1) = ( c ⊘ (1_n − x(s)) ) / ⟨1_n, c ⊘ (1_n − x(s))⟩.    (35)

Since the right-hand-side of (35) is F(x(s); c), the natural gradient recursion (35) is exactly the DeGroot-Friedkin map (1).
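In code, the chain (31)-(35) collapses to a one-line dual-space update. The sketch below (ours, with arbitrary interior c and x(s)) starts from y(s) = 1_n + log x(s), takes the dual step with η∇f from (11), maps back through ∇φ*, normalizes, and recovers exactly one application of (1).

    import numpy as np

    c = np.array([0.45, 0.35, 0.20])
    x = np.array([0.2, 0.3, 0.5])              # current iterate x(s) in the interior

    y = 1.0 + np.log(x)                        # dual variable y(s) = grad phi(x(s))
    y_next = y - np.log(x * (1.0 - x) / c)     # dual-space step; eta*grad f from (11)
    x_next = np.exp(y_next - 1.0)              # back to the primal via grad phi*
    x_next = x_next / x_next.sum()             # Bregman projection = normalization

    z = c / (1.0 - x)
    print(np.allclose(x_next, z / z.sum()))    # True: one step of (1) recovered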

Remark 3

The equivalence between the mirror descent (9) and the natural gradient descent (31) allows us to interpret the DeGroot-Friedkin map as steepest descent of h = f ∘ ∇φ* along the manifold (Y, ∇²φ*), where Y is the image of ℝ^n_{>0} under the map ∇φ(x) = 1_n + log x. In other words, the steepest descent occurs on the space of (shifted) log-likelihood.

Remark 4

In the information geometry literature [17, 18], a projection of the form (33) is called the moment or M-projection, while (8) is called the information or I-projection. For an arbitrary constraint set, (8) and (33) are not equal in general.

V Conclusions

The DeGroot-Friedkin model for opinion dynamics describes the evolution of social power as a group of individuals discuss a sequence of issues over a network. We show that the DeGroot-Friedkin dynamics has a variational interpretation, i.e., the group of individuals collectively minimize a convex function of the opinions or self-weights. In particular, we prove that the nonlinear recursion associated with the DeGroot-Friedkin map can be viewed as entropic mirror descent over the standard simplex. Our variational formulation recovers known properties of the DeGroot-Friedkin map which were proved earlier via non-smooth Lyapunov analysis. Furthermore, the mirror descent framework reveals new interpretations of the DeGroot-Friedkin dynamics – as a proximal recursion, and as a steepest descent on the space of log-likelihood. We hope that our results will motivate further investigations of opinion dynamics models from a variational perspective.

References

  • [1] P. Jia, A. Mirtabatabaei, N.E. Friedkin, and F. Bullo, “Opinion Dynamics and the Evolution of Social Power in Influence Networks”, SIAM Review, Vol. 57, pp. 367–397, 2015.
  • [2] M.H. DeGroot, “Reaching a Consensus”, Journal of the American Statistical Association, Vol. 69, No. 345, pp. 118–121, 1974.
  • [3] N.E. Friedkin, “A Formal Theory of Reflected Appraisals in the Evolution of Power”, Administrative Science Quarterly, Vol. 56, No. 4, pp. 501–529, 2011.
  • [4] G. Chen, X. Duan, N.E. Friedkin, F. Bullo, “Social Power Dynamics over Switching and Stochastic Influence Networks”, IEEE Transactions on Automatic Control, 2018.
  • [5] Z. Xu, J. Liu, T. Başar, “On A Modified DeGroot-Friedkin Model of Opinion Dynamics”, 2015 American Control Conference (ACC), pp. 1047–1052, 2015.
  • [6] Z. Askarzadeh, R. Fu, A. Halder, Y. Chen, and T.T. Georgiou, “Stability Theory of Stochastic Models in Opinion Dynamics”, IEEE Transactions on Automatic Control, 2019.
    preprint: https://arxiv.org/pdf/1706.03158.pdf.
  • [7] R.T. Rockafellar, Convex Analysis, Princeton University Press, 1970.
  • [8] A.S. Nemirovsky, and D.B. Yudin, Problem Complexity and Method Efficiency in Optimization. Wiley, 1983.
  • [9] L.M. Bregman, “The Relaxation Method of Finding the Common Point of Convex Sets and Its Application to the Solution of Problems in Convex Programming”, USSR Computational Mathematics and Mathematical Physics, Vol. 7, No. 3, pp. 200–217, 1967.
  • [10] P.O. Vontobel, “The Bethe Permanent of a Nonnegative Matrix”, IEEE Transactions on Information Theory, Vol. 59, No. 3, pp. 1866–1901, 2013.
  • [11] F. Lad, G. Sanfilippo, and G. Agro, “Extropy: Complementary Dual of Entropy”, Statistical Science, Vol. 30, No. 1, pp. 40–58, 2015.
  • [12] A. Beck, and M. Teboulle, “Mirror Descent and Nonlinear Projected Subgradient Methods for Convex Optimization”, Operations Research Letters, Vol. 31, No. 3, pp. 167–175, 2003.
  • [13] W. Rudin, Real and Complex Analysis, 3rd ed., McGraw-Hill Book Company, 1987.
  • [14] S. Boyd, and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
  • [15] S. Amari, “Natural Gradient Works Efficiently in Learning”, Neural Computation, Vol. 10, No. 2, pp. 251–276, 1998.
  • [16] G. Raskutti, and S. Mukherjee, “The Information Geometry of Mirror Descent”, IEEE Transactions on Information Theory, Vol. 61, No. 3, pp. 1451–1457, 2015.
  • [17] I. Csiszár, “I-divergence Geometry of Probability Distributions and Minimization Problems”, The Annals of Probability, Vol. 3, No. 1, pp. 146–158, 1975.
  • [18] S. Amari, and H. Nagaoka, Methods of Information Geometry, American Mathematical Society, Vol. 191, 2007.