Belief Control Strategies for Interactions over Weak Graphs

01/16/2018 · by Hawraa Salami, et al. · EPFL

In diffusion social learning over weakly-connected graphs, it has been shown recently that influential agents end up shaping the beliefs of non-influential agents. This paper analyzes this control mechanism more closely and addresses two main questions. First, the article examines how much freedom influential agents have in controlling the beliefs of the receiving agents. That is, the derivations clarify whether receiving agents can be driven to arbitrary beliefs and whether the network structure limits the scope of control by the influential agents. Second, even if there is a limit to what influential agents can accomplish, this article develops mechanisms by which these agents can lead receiving agents to adopt certain beliefs. These questions raise interesting possibilities about belief control over networked agents. Once addressed, one ends up with design procedures that allow influential agents to drive other agents to endorse particular beliefs regardless of their local observations or convictions. The theoretical findings are illustrated by means of several examples.


I Introduction and Motivation

Several studies have examined the propagation of information over social networks and the influence of the graph topology on these dynamics [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. In recent works [27, 28, 29], an intriguing phenomenon was revealed: weakly-connected graphs enable certain agents to control the opinions of other agents to a great degree, irrespective of the observations sensed by these latter agents. For example, agents can be made to believe that it is “raining” while they happen to be observing “sunny conditions”. Weak graphs arise in many contexts, including in popular social platforms like Twitter and similar online tools. In these graphs, the topology consists of multiple sub-networks where at least one sub-network (called a sending sub-network) feeds information in one direction to other network components without receiving back (or being interested in) any information from them. For example, a celebrity user on Twitter may have a large number of followers (running into the millions), while following only a small fraction of these users (or none at all) in return. For such weak graphs, it was shown in [28, 29] that, irrespective of the local observations sensed by the receiving agents, a sending sub-network plays a domineering role and influences the beliefs of the other groups in a significant manner. In particular, receiving agents can be made to arrive at incorrect inference decisions; they can also be made to disagree on their inferences among themselves.

The purpose of this article is to examine these dynamics more closely and to reveal new critical properties, including the development of control mechanisms. We have three main contributions. First, we show that the internal graph structure connecting the receiving agents imposes a form of resistance to manipulation, but only to a certain degree. Second, we characterize the set of states that can be imposed on receiving networks; while this set is large, it turns out that it is not unlimited. And, third, for any attainable state, we develop a control mechanism that allows sending agents to force the receiving agents to reach that state and behave in that manner.

I-A Weakly-Connected Graphs

We start the exposition by reviewing the structure of weak graphs from [27, 28, 29] and by introducing the relevant notation. As explained in [27], a weakly-connected network consists of two types of sub-networks: sending sub-networks and receiving sub-networks. Each individual sub-network is a connected graph where any two agents are connected by a path. In addition, every sending sub-network is strongly-connected, meaning that at least one of its agents has a self-loop. The flow of information between sending and receiving sub-networks is asymmetric, as it happens in one direction only, from the sending sub-networks to the receiving sub-networks. Figure 1 shows one example of a weakly-connected network. The two top sub-networks are sending sub-networks and the two bottom sub-networks are receiving sub-networks. The weights on the connections from sending to receiving sub-networks are positive but can be arbitrarily small. Observe how links from sending sub-networks to receiving sub-networks flow in one direction only, while all other links can be bi-directional.

Fig. 1: An example of a weakly-connected network. The two sub-networks on top are of the sending type, while the two sub-networks at the bottom are of the receiving type. Observe how links from sending to receiving sub-networks flow in one direction only, while all other links can be bi-directional.

We index the strongly-connected (sending) sub-networks by s = 1, 2, …, S, and the receiving sub-networks by r = S+1, …, S+R. Each sending sub-network s has N_s agents, and the total number of agents across the sending sub-networks is denoted by N_{gS}. Similarly, each receiving sub-network r has N_r agents, and the total number of agents across the receiving sub-networks is denoted by N_{gR}. We let N = N_{gS} + N_{gR} denote the total number of agents across all sub-networks, and use k = 1, 2, …, N to refer to the indexes of all agents. We assign a pair of non-negative weights, {a_{kℓ}, a_{ℓk}}, to the edge connecting any two agents k and ℓ. The scalar a_{ℓk} represents the weight with which agent k scales data arriving from agent ℓ and, similarly, for a_{kℓ}. We let 𝒩_k denote the neighborhood of agent k, which consists of all agents connected to k. Each agent scales data arriving from its neighbors in a convex manner, i.e., the weights satisfy:

a_{ℓk} ≥ 0,   Σ_{ℓ ∈ 𝒩_k} a_{ℓk} = 1,   a_{ℓk} = 0 if ℓ ∉ 𝒩_k        (1)

Following [27, 29], and without loss of generality, we assume that the agents are numbered such that the indexes 1, 2, …, N represent first the agents from the sending sub-networks, followed by those from the receiving sub-networks. In this way, if we collect the weights {a_{ℓk}} into a large N × N combination matrix A, then this matrix will have an upper block-triangular structure of the following form:

      [ A_1   ⋯    0      A_{1,S+1}     ⋯   A_{1,S+R}   ]
      [  ⋮    ⋱    ⋮          ⋮                ⋮         ]
      [  0    ⋯   A_S     A_{S,S+1}     ⋯   A_{S,S+R}   ]
A  =  [  0    ⋯    0      A_{S+1,S+1}   ⋯   A_{S+1,S+R} ]        (2)
      [  ⋮         ⋮          ⋮          ⋱       ⋮       ]
      [  0    ⋯    0      A_{S+R,S+1}   ⋯   A_{S+R,S+R} ]

The matrices {A_1, A_2, …, A_S} on the upper-left corner are left-stochastic primitive matrices corresponding to the S strongly-connected sending sub-networks. Likewise, the matrices in the lower right-most block correspond to the internal weights of the receiving sub-networks. We denote the block structure of A in (2) by:

A  =  [ T_SS   T_SR ]
      [  0     T_RR ]        (3)

where T_SS = blockdiag{A_1, …, A_S} collects the internal weights of the sending sub-networks, T_RR collects the internal weights of the receiving sub-networks, and T_SR collects the weights on the links from sending agents to receiving agents.
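
To make this block structure concrete, the following sketch (in Python/NumPy, with an arbitrary toy topology that does not come from the paper) builds a weak graph with two sending sub-networks and one receiving sub-network, and verifies that the combination matrix A is left-stochastic with a zero lower-left block:

import numpy as np

# Toy weak graph (hypothetical): two sending sub-networks of 2 agents each,
# and one receiving sub-network of 3 agents.  Agents 0-3 are sending,
# agents 4-6 are receiving.
n_send, n_recv = 4, 3
N = n_send + n_recv

A = np.zeros((N, N))
# Internal weights of the two strongly-connected sending sub-networks
# (each column sums to one; self-loops are present).
A[np.ix_([0, 1], [0, 1])] = [[0.6, 0.3],
                             [0.4, 0.7]]
A[np.ix_([2, 3], [2, 3])] = [[0.5, 0.2],
                             [0.5, 0.8]]
# Columns of the receiving agents: they place some weight on sending agents
# (block T_SR) and the rest on each other (block T_RR).  Sending agents assign
# zero weight to receiving agents, which gives the zero lower-left block.
A[:, 4] = [0.2, 0.0, 0.1, 0.0,   0.4, 0.3, 0.0]
A[:, 5] = [0.0, 0.1, 0.0, 0.1,   0.3, 0.3, 0.2]
A[:, 6] = [0.0, 0.0, 0.0, 0.2,   0.0, 0.3, 0.5]

T_SR = A[:n_send, n_send:]      # weights from sending agents to receiving agents
T_RR = A[n_send:, n_send:]      # internal weights among receiving agents

assert np.allclose(A.sum(axis=0), 1.0)        # left-stochastic: columns sum to one
assert np.allclose(A[n_send:, :n_send], 0.0)  # no links from receiving to sending agents
print(T_SR, T_RR, sep="\n")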

Notation:

We use lowercase letters to denote vectors, uppercase letters for matrices, plain letters for deterministic variables, and boldface letters for random variables. We also use (·)^T for transposition, (·)^{−1} for matrix inversion, and ⪰ and ⪯ for vector element-wise comparisons.

II Diffusion Social Learning

In order to characterize the set of attainable states, and to design mechanisms for belief control over weak graphs, we first need to summarize the main finding from [29]. The work in that reference revealed the limiting states that are reached by receiving agents over weak graphs and derived an expression for these states. Once we review that expression, we will examine its implications closely. In particular, we will conclude from it that not all states are attainable and that receiving sub-networks have an inherent resistance mechanism. We characterize this mechanism analytically. We then show how sending sub-networks can exploit this information to control the beliefs of receiving agents and to sow discord among them.

Thus, following [29], we assume that each sub-network is observing data that arise from a true state value, denoted generically by θ°, which may differ from one sub-network to another. We denote by Θ the finite set of all possible states, by θ°_s the true state of sending sub-network s, and by θ°_r the true state of receiving sub-network r, where both θ°_s and θ°_r belong to Θ. At each time i, each agent k will possess a belief μ_{k,i}(θ), which represents a probability distribution over Θ. Agent k continuously updates its belief according to two information sources:

  1. The first source consists of observational signals ξ_{k,i} streaming in locally at agent k. These signals are generated according to some known likelihood function parametrized by the true state of agent k. We denote the likelihood function by L_k(·|θ°_r) if agent k belongs to receiving sub-network r, or by L_k(·|θ°_s) if agent k belongs to sending sub-network s.

  2. The second source consists of information received from the neighbors of agent k, denoted by 𝒩_k. Agent k and its neighbors are connected by edges, and they continuously communicate and share their opinions.

Using these two pieces of information, each agent then updates its belief according to the following diffusion social learning rule [2]:

ψ_{k,i}(θ) = ( μ_{k,i−1}(θ) L_k(ξ_{k,i} | θ) ) / ( Σ_{θ′∈Θ} μ_{k,i−1}(θ′) L_k(ξ_{k,i} | θ′) )
μ_{k,i}(θ) = Σ_{ℓ ∈ 𝒩_k} a_{ℓk} ψ_{ℓ,i}(θ)        (4)

In the first step of (4), agent k updates its belief, μ_{k,i−1}(θ), based on its observed private signal ξ_{k,i} by means of the Bayes rule, and obtains an intermediate belief ψ_{k,i}(θ). In the second step, agent k learns from its social neighbors through cooperation, by combining their intermediate beliefs with the weights {a_{ℓk}}.
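
The two steps in (4) translate directly into code. The sketch below (a hypothetical example, not taken from the paper) performs one iteration of the diffusion social learning rule for all agents: a local Bayesian update using each agent's likelihood values, followed by a convex combination of the neighbors' intermediate beliefs.

import numpy as np

def diffusion_social_learning_step(mu, L, A):
    """One iteration of rule (4).

    mu : N x M array; mu[k, :] is agent k's current belief over the M states.
    L  : N x M array; L[k, m] is the likelihood of agent k's newly received
         signal under state m (already evaluated at that signal).
    A  : N x N left-stochastic combination matrix with A[l, k] = a_{lk}.
    """
    # Step 1 (Bayesian update): psi_{k,i}(theta) is proportional to
    # mu_{k,i-1}(theta) * L_k(xi_{k,i} | theta), normalized over theta.
    psi = mu * L
    psi /= psi.sum(axis=1, keepdims=True)
    # Step 2 (combination): mu_{k,i}(theta) = sum_l a_{lk} psi_{l,i}(theta)
    return A.T @ psi

# Small example with 3 agents and 2 states.
rng = np.random.default_rng(0)
mu0 = np.full((3, 2), 0.5)                 # uninformative initial beliefs
A = np.array([[0.6, 0.2, 0.0],
              [0.4, 0.5, 0.3],
              [0.0, 0.3, 0.7]])            # columns sum to one
L = rng.uniform(0.1, 1.0, size=(3, 2))     # stand-in likelihood values
print(diffusion_social_learning_step(mu0, L, A))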

A consensus-based strategy can also be employed in lieu of (4), as was done in the insightful works [3, 30], although the latter reference focuses mainly on pure averaging rather than social learning and requires the existence of certain anchor nodes. In this work, we assume all agents are homogeneous and focus on the diffusion strategy (4) due to its enhanced performance and wider stability range, as already proved in [2] and further explained in the treatments [31, 32]. Other models for social learning can be found in [4, 5, 12, 7, 18, 33, 34].

When agents of sending sub-networks follow this model, they can learn their own true states. Specifically, it was shown in [2, 29] that

lim_{i→∞} μ_{k,i}(θ°_s) = 1        (5)

for any agent k that belongs to sending sub-network s. Result (5) means that the probability measure μ_{k,i}(θ) concentrates at the location θ°_s, while all other possibilities in Θ receive zero probability. On the other hand, agents of receiving sub-networks will not be able to find their true states. Instead, their beliefs will converge to a fixed distribution defined over the true states of the sending sub-networks, as follows [29]. First, let

μ_{s,i}(θ) ≜ col{ μ_{k_s(1),i}(θ), μ_{k_s(2),i}(θ), …, μ_{k_s(N_s),i}(θ) }        (6)

collect all beliefs from agents that belong to sending sub-network s, where the notation k_s(n) denotes the index of the n-th agent within sub-network s, i.e.,

k_s(n) ≜ N_1 + N_2 + ⋯ + N_{s−1} + n        (7)

and n = 1, 2, …, N_s. Likewise, let

μ_{r,i}(θ) ≜ col{ μ_{k_r(1),i}(θ), μ_{k_r(2),i}(θ), …, μ_{k_r(N_r),i}(θ) }        (8)

collect all beliefs from agents that belong to receiving sub-network r, where the notation k_r(n) denotes the index of the n-th agent within sub-network r, i.e.,

k_r(n) ≜ N_{gS} + N_{S+1} + ⋯ + N_{r−1} + n        (9)

and n = 1, 2, …, N_r. Furthermore, let

μ_{gS,i}(θ) ≜ col{ μ_{1,i}(θ), μ_{2,i}(θ), …, μ_{S,i}(θ) }        (10)

collect all beliefs from all sending sub-networks into a vector of length N_{gS}. Likewise, let

μ_{gR,i}(θ) ≜ col{ μ_{S+1,i}(θ), μ_{S+2,i}(θ), …, μ_{S+R,i}(θ) }        (11)

collect the beliefs from all receiving sub-networks into a vector of length N_{gR}. Note that these belief vectors are evaluated at a specific θ ∈ Θ. Then, the main result in [28, 29] shows that, under some reasonable technical assumptions, it holds that

lim_{i→∞} μ_{gR,i}(θ) = W^T ( lim_{i→∞} μ_{gS,i}(θ) )        (12)

where W is the N_{gS} × N_{gR} matrix given by:

W ≜ T_SR ( I_{N_{gR}} − T_RR )^{−1}        (13)

and I_{N_{gR}} is the identity matrix of size N_{gR}. The matrix W has non-negative entries and the sum of the entries in each of its columns is equal to one [27]. Expression (12) shows how the beliefs of the sending sub-networks determine the limiting beliefs of the receiving sub-networks through the matrix W. We can expand (12) to reveal the influence of the sending networks more explicitly as follows.
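
As a quick numerical illustration of (13), the matrix W can be computed directly from the two blocks T_SR and T_RR; the sketch below (arbitrary toy blocks, not from the paper) also verifies that W is non-negative with unit column sums:

import numpy as np

# Toy blocks for 2 sending agents and 3 receiving agents.  The column sums of
# T_SR and T_RR add up to one, as required by the left-stochasticity of A.
T_SR = np.array([[0.3, 0.1, 0.0],
                 [0.1, 0.0, 0.2]])
T_RR = np.array([[0.3, 0.4, 0.1],
                 [0.2, 0.3, 0.3],
                 [0.1, 0.2, 0.4]])

W = T_SR @ np.linalg.inv(np.eye(3) - T_RR)    # equation (13)

assert np.all(W >= 0)                         # non-negative entries
assert np.allclose(W.sum(axis=0), 1.0)        # each column of W sums to one
print(W)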

Let w_k^T denote the row in W^T that corresponds to receiving agent k, and partition it into sub-vectors as follows (the index of the row in W^T that corresponds to agent k is k − N_{gS}):

w_k^T ≜ [ w_{k,1}^T   w_{k,2}^T   ⋯   w_{k,S}^T ]        (14)

where the sub-vectors w_{k,s} have lengths {N_s}, the numbers of agents in the respective sending sub-networks. Then, according to (12), we have

lim_{i→∞} μ_{k,i}(θ) = Σ_{s=1}^{S} w_{k,s}^T ( lim_{i→∞} μ_{s,i}(θ) )        (15)

Note that this relation is for a specific θ ∈ Θ. Let us focus on the case θ = θ°_s, assuming it is the true state parameter of the s-th sending sub-network only. We know from [2] and (5) that each agent in the sending sub-network s will learn its true state θ°_s. Therefore, from (10),

lim_{i→∞} μ_{gS,i}(θ°_s) = col{ 0_{N_1}, …, 0_{N_{s−1}}, 1_{N_s}, 0_{N_{s+1}}, …, 0_{N_S} }        (16)

where 1_{N_s} denotes a column vector of length N_s whose elements are all one. Similarly, 0_{N_{s′}} denotes a column vector of length N_{s′} whose elements are all zero. Combining (15) and (16) we get

lim_{i→∞} μ_{k,i}(θ°_s) = 1_{N_s}^T w_{k,s}        (17)

This means that the limiting likelihood of state θ°_s at the receiving agent k is equal to the sum of the entries of the weight sub-vector, w_{k,s}, corresponding to sub-network s. More generally, for any other state parameter θ ∈ Θ, its likelihood is given from (12) by

lim_{i→∞} μ_{k,i}(θ) = q_k(s) if θ = θ°_s for some s, and 0 otherwise        (18)

where

q_k(s) ≜ 1_{N_s}^T w_{k,s}        (19)

Result (18) means that the belief of receiving agent k will converge to a distribution defined over the true states of the sending sub-networks, which we collect into the set:

Θ° ≜ { θ°_1, θ°_2, …, θ°_S }        (20)
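
In code, the limiting distribution in (18)-(19) of each receiving agent is obtained by summing, within each column of W, the entries that correspond to one sending sub-network. The sketch below uses toy sizes and a hypothetical W that is assumed to have been computed from (13):

import numpy as np

# Toy setting: S = 2 sending sub-networks with N_1 = 2 and N_2 = 1 agents
# (3 sending agents in total) and 3 receiving agents.
sizes = [2, 1]                                  # {N_s}
W = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.1, 0.1],
              [0.2, 0.7, 0.8]])                 # hypothetical, columns sum to one

# q[k, s] = 1^T w_{k,s}: limiting belief of receiving agent k in state theta_s
edges = np.cumsum([0] + sizes)                  # [0, 2, 3]
q = np.stack([W[a:b, :].sum(axis=0) for a, b in zip(edges[:-1], edges[1:])],
             axis=1)

assert np.allclose(q.sum(axis=1), 1.0)          # each row is a probability distribution
print(q)    # row k is agent k's limiting distribution over the sending true states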

Expression (12) shows how the limiting distributions of the sending sub-networks determine the limiting distributions of the receiving sub-networks through the matrix W. In other words, it indicates how influential agents (from within the sending sub-networks) can control the steady-state beliefs of receiving agents. Two critical questions arise at this stage: (a) first, how much freedom do influential agents have in controlling the beliefs of the receiving agents? That is, can receiving agents be driven to arbitrary beliefs, or does the network structure limit the scope of control by the influential agents? And (b) second, even if there is a limit to what influential agents can accomplish, how can they ensure that receiving agents will end up with particular beliefs?

Questions (a) and (b) raise interesting possibilities about belief (or what we will sometimes refer to as “mind”) control. In the next sections, we will address these questions and we will end up with the conditions that allow influential agents to drive other agents to endorse particular beliefs regardless of their local observations (or “convictions”).

III Belief Control Mechanism

Observe from expression (18) that the limiting beliefs of receiving agents depend on the columns of W. Note also that the entries of W are determined by the internal combination weights within the receiving networks (i.e., T_RR), and by the combination weights from the sending to the receiving sub-networks (i.e., T_SR). The question we would like to examine now is the following: given a set of desired beliefs for the receiving agents, is this set always attainable? Or does the internal structure of the receiving sub-networks impose limitations on where their beliefs can be driven? To answer this question, we consider the following problem setting. Let d_k denote some desired limiting distribution over Θ° for receiving agent k (i.e., d_k denotes what we desire the limiting distribution in (18) to become as i → ∞). We would like to examine whether it is possible to force agent k to converge to any d_k, i.e., whether it is possible to find a matrix T_SR so that the belief of receiving agent k converges to this specific d_k.

III-A Motivation

In this first approach, we are interested in designing T_SR while T_RR is assumed fixed and known. This scenario allows us to understand in what ways the internal structure of the receiving networks limits the effect of external influence by the sending sub-networks. It also allows us to examine the range of belief control over the receiving sub-networks (i.e., how much freedom the sending sub-networks have in selecting these beliefs). Note that the entries of T_SR correspond to weights by which the receiving agents scale information arriving from the sending sub-networks. These weights are set by the receiving agents and, therefore, are not under the direct control of the sending sub-networks. As such, it is fair to question whether it is useful to pursue a design procedure for selecting T_SR, since its entries are not under the direct control of the designer or the sending sub-networks. The useful point to note here, however, is that the entries of T_SR, although set by the receiving agents, can still be interpreted as a measure of the level of trust that receiving agents place in the sending agents they are connected to. The higher this level of confidence is between two agents, the larger the value of the scaling weight on the link connecting them. In many applications, these levels of confidence (and, therefore, the resulting scaling weights) can be influenced by external campaigns (e.g., through advertisement or by way of reputation). In this way, we can interpret the problem of designing T_SR as a way to guide the campaign that influences receiving agents to set their scaling weights to desirable values. The argument will show that by influencing T_SR and knowing T_RR, sending agents end up controlling the beliefs of receiving agents in desirable ways. For the analysis in the sequel, note that by fixing T_RR and designing T_SR, we are in effect fixing the sum of each column of T_SR and, accordingly, fixing the overall external influence on each receiving agent. In this way, the problem of designing T_SR amounts to deciding on how much influence each individual sending sub-network should have in driving the beliefs of the receiving sub-networks.

III-B Conditions for Attainable Beliefs

Given these considerations, let us now show how to design T_SR to attain certain beliefs. As is already evident from (18), the desired belief at any receiving agent needs to be a probability distribution defined over the true states of all sending sub-networks, Θ° = {θ°_1, …, θ°_S}. We assume, without loss of generality, that the true states of the sending sub-networks are distinct, so that |Θ°| = S. If two or more sending sub-networks have the same true state, we can merge them together and treat them as one sending sub-network; although this enlarged component is not necessarily connected, it nevertheless consists of strongly-connected pieces, and the same arguments and conclusions apply.

We collect the desired limiting beliefs for all receiving agents into the vector:

d(θ) ≜ col{ d_1(θ), d_2(θ), …, d_{N_{gR}}(θ) }        (21)

which has length N_{gR}, with one entry per receiving agent. Then, from (12), we must have:

d(θ) = W^T ( lim_{i→∞} μ_{gS,i}(θ) )        (22)

Evaluating this expression at the successive states θ°_1, θ°_2, …, θ°_S, and using (16), we get

D ≜ [ d(θ°_1)   d(θ°_2)   ⋯   d(θ°_S) ] = W^T B        (23)

where D is the N_{gR} × S matrix that collects the desired beliefs for all receiving agents, and B ≜ blockdiag{ 1_{N_1}, 1_{N_2}, …, 1_{N_S} } is the N_{gS} × S matrix whose s-th column indicates which sending agents belong to sub-network s. Using (13), we rewrite (23) more compactly in matrix form as:

B^T T_SR = D^T ( I_{N_{gR}} − T_RR )        (24)

Therefore, given D and T_RR, the design problem becomes one of finding a matrix T_SR that satisfies (24) subject to the following constraints:

1_{N_{gS}}^T t_k = 1 − 1_{N_{gR}}^T [T_RR]_{:,k},   for every receiving agent k        (25)
t_{ℓk} ≥ 0,   for all ℓ and k        (26)
t_{ℓk} = 0,   if ℓ ∉ 𝒩_k        (27)

The first condition (25) is because the entries on each column of the combination matrix A defined in (3) add up to one. The second condition (26) ensures that each element of T_SR is a non-negative combination weight. The third condition (27) takes into account the network structure, where t_k represents the column of T_SR that corresponds to receiving agent k, and t_{ℓk} represents the ℓ-th entry of this column (which corresponds to sending agent ℓ; see Fig. 2). In other words, if receiving agent k is not connected to sending agent ℓ, the corresponding entry in t_k should be zero.

Fig. 2: An illustration of the k-th column t_k of T_SR and the ℓ-th entry t_{ℓk} on that column.
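
Before developing the closed-form characterization below, note that the feasibility of (24) under (26)-(27) can also be checked numerically, one receiving agent at a time, by solving a small linear feasibility program. The sketch below assumes SciPy is available and uses hypothetical toy data (neither the matrix B^T nor the right-hand-side column comes from an example in this paper):

import numpy as np
from scipy.optimize import linprog

def feasible_column(Bt, rhs_col, mask):
    """Look for a column t of T_SR with Bt @ t = rhs_col, t >= 0, and
    t[j] = 0 wherever mask[j] is False (sending agent j is not a neighbor).

    Bt      : S x N_gS matrix B^T (rows indexed by sending sub-networks).
    rhs_col : length-S column of the right-hand side of (24) for this agent.
    mask    : length-N_gS boolean array of allowed (neighbor) entries.
    """
    bounds = [(0, None) if m else (0, 0) for m in mask]
    res = linprog(c=np.zeros(Bt.shape[1]), A_eq=Bt, b_eq=rhs_col,
                  bounds=bounds, method="highs")
    return res.success, (res.x if res.success else None)

# Toy data: two sending sub-networks with agents {0, 1} and {2}; one receiving
# agent connected to sending agents 0 and 2 only.
Bt = np.array([[1.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
rhs_col = np.array([0.25, 0.15])          # hypothetical values
mask = np.array([True, False, True])
print(feasible_column(Bt, rhs_col, mask))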

It is useful to note that condition (25) is actually unnecessary and can be removed. This is because if we can find T_SR that satisfies (24), then condition (25) will be automatically satisfied. To see this, we first sum the elements of the columns on the left-hand side of (24) and observe that

1_S^T ( B^T T_SR ) = ( B 1_S )^T T_SR = 1_{N_{gS}}^T T_SR        (28)

We then sum the elements of the columns on the right-hand side of (24) to get

1_S^T D^T ( I_{N_{gR}} − T_RR ) = 1_{N_{gR}}^T ( I_{N_{gR}} − T_RR ) = 1_{N_{gR}}^T − 1_{N_{gR}}^T T_RR        (29)

This is because the entries on each column of D^T (i.e., each desired belief distribution) add up to one. Thus, equating (28) and (29), we find that (25) must hold. The problem we are attempting to solve is then equivalent to finding T_SR that satisfies (24) subject to

t_{ℓk} ≥ 0        (30)
t_{ℓk} = 0,   if ℓ ∉ 𝒩_k        (31)

To find T_SR that satisfies (24) under the constraints (30)-(31), we can solve separately for each column of T_SR. Let t_k and e_k, respectively, denote the columns of T_SR and of the known right-hand side of (24) that correspond to receiving agent k. Then, relations (24) and (30)-(31) imply that column t_k must satisfy:

B^T t_k = e_k        (32)

subject to

t_k ⪰ 0        (33)
t_{ℓk} = 0,   if ℓ ∉ 𝒩_k        (34)

The problem is then equivalent to finding, for each receiving agent k, a column t_k that satisfies (32)-(34). For the desired beliefs to be attainable (i.e., for the beliefs of all receiving agents to converge to the desired beliefs), finding such a t_k should be possible for each receiving agent k. However, finding t_k that satisfies (32) under the constraints (33)-(34) may not always be possible. The desired belief matrix D will need to satisfy certain conditions; in other words, it is not possible to drive the receiving agents to an arbitrary belief matrix D. Before stating these conditions, we introduce two auxiliary matrices. We define first the following difference matrix, which appears on the right-hand side of (24); this matrix is known:

E ≜ D^T ( I_{N_{gR}} − T_RR )        (35)

Note that E has dimensions S × N_{gR}. The k-th column of E, which we denote by e_k, appears on the right-hand side of (32), i.e.,

e_k ≜ [ E ]_{:,k}        (36)

The s-th entry of e_k is then:

e_k(s) = d_k(θ°_s) − Σ_{ℓ} a_{ℓk} d_ℓ(θ°_s)        (37)

Each s-th entry of e_k represents the difference between the desired limiting belief at θ°_s of receiving agent k and a weighted combination of the desired limiting beliefs of its neighboring receiving agents. We remark that this combination includes agent k itself if a_{kk} is not zero. Similarly, it includes any receiving agent ℓ if a_{ℓk} is not zero. In this way, the sum in (37) runs only over the receiving neighbors of agent k, because any agent ℓ that is not a neighbor of agent k has its corresponding weight a_{ℓk} equal to zero.

Let C denote an S × N_{gR} binary matrix, with as many rows as the number of sending sub-networks and as many columns as the number of receiving agents. The matrix C is an indicator matrix that specifies whether a receiving agent is connected or not to a sending sub-network. The (s, k)-th entry of C is one if receiving agent k is connected to sending sub-network s; otherwise, it is zero. We are now ready to state when a given set of desired beliefs is attainable.

Theorem 1.

(Attainable Beliefs) A given belief matrix D is attainable if, and only if, the entries of E are zero wherever the entries of C are zero, and the entries of E are positive wherever the entries of C are one.

Before proving Theorem 1, we first clarify its statement. For D to be achievable, the matrices E and C must have the same structure, with the unit entries of C translated into positive entries in E. The theorem reveals two possible cases for each receiving agent and gives, for each case, the condition required for the desired beliefs to be attainable.

In the first case, receiving agent k is not connected to any agent of sending sub-network s (the (s, k)-th entry of C is zero). Then, according to Theorem 1, receiving agent k achieves its desired limiting belief if, and only if,

d_k(θ°_s) = Σ_{ℓ} a_{ℓk} d_ℓ(θ°_s),   i.e.,   e_k(s) = 0        (38)

That is, the cumulative influence from the agent’s neighbors must match the desired limiting belief.

In the second case, receiving agent k is connected to at least one agent of sending sub-network s (the (s, k)-th entry of C is one). Now, according to Theorem 1 again, receiving agent k achieves its desired limiting belief if, and only if,

d_k(θ°_s) > Σ_{ℓ} a_{ℓk} d_ℓ(θ°_s),   i.e.,   e_k(s) > 0        (39)
Proof of Theorem 1.

We start by first proving that if D is attainable, then E and C have the same structure. If D is attainable, then there exists, for each receiving agent k, a column t_k that satisfies (32)-(34). Using the definition of B in (23), the s-th row on the left-hand side of (32) is:

[ B^T t_k ]_s = Σ_{ℓ ∈ 𝒩_s} a_{ℓk}        (40)

where 𝒩_s represents the set of indexes of the sending agents that belong to sending sub-network s. Expression (40) represents the sum of the elements of the block of t_k that corresponds to sending sub-network s. Therefore, if D is attainable, then the s-th row of (32) satisfies the following relation:

Σ_{ℓ ∈ 𝒩_s} a_{ℓk} = e_k(s)        (41)

From this relation, we see that if agent k is not connected to any agent in sub-network s, then Σ_{ℓ ∈ 𝒩_s} a_{ℓk} = 0, which implies that e_k(s) is zero. On the other hand, if agent k is connected to sub-network s, then Σ_{ℓ ∈ 𝒩_s} a_{ℓk} > 0, which implies that e_k(s) > 0. In other words, E and C have the same structure.

Conversely, if E and C have the same structure, then it is possible to find, for each receiving agent k, a column t_k that satisfies (32)-(34). In particular, if agent k is not connected to sub-network s, then the (s, k)-th entry of C is zero. Since E and C have the same structure, then e_k(s) = 0. By setting to zero the entries of t_k that correspond to sending sub-network s, relation (41) is satisfied. On the other hand, if agent k is connected to sub-network s (connected to at least one agent in sub-network s), then the (s, k)-th entry of C is one. Since E and C have the same structure, we get e_k(s) > 0. Therefore, since the entries of t_k must be non-negative, we first set to zero the entries of t_k that correspond to agents of sub-network s that are not connected to agent k, and the remaining entries can be set to non-negative values such that relation (41) is satisfied. That is, if E and C have the same structure, then D is attainable. ∎
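
The condition in Theorem 1 is easy to test numerically. The sketch below (toy values, not taken from the paper) forms E from (35) and compares its support with the indicator matrix C:

import numpy as np

def attainable(D, T_RR, C, tol=1e-12):
    """Theorem 1: the desired belief matrix D (rows are the desired
    distributions of the receiving agents) is attainable iff E = D^T (I - T_RR)
    is zero where C is zero and positive where C is one."""
    E = D.T @ (np.eye(T_RR.shape[0]) - T_RR)
    zero_ok = np.all(np.abs(E[C == 0]) <= tol)
    pos_ok = np.all(E[C == 1] > tol)
    return bool(zero_ok and pos_ok)

# Toy example: S = 2 sending sub-networks and 2 receiving agents.
T_RR = np.array([[0.5, 0.3],
                 [0.2, 0.4]])
C = np.array([[1, 1],        # sub-network 1 reaches both receiving agents
              [1, 0]])       # sub-network 2 reaches receiving agent 1 only
D = np.array([[0.70, 0.30],  # desired distribution of receiving agent 1
              [0.85, 0.15]]) # desired distribution of receiving agent 2
print(attainable(D, T_RR, C))   # True: the supports of E and C match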

We next move to characterize the set of solutions, i.e., to show how we can design T_SR assuming the conditions on D are met.

III-C Characterizing the Set of Possible Solutions

In the sequel, we assume that the conditions on D from Theorem 1 are satisfied. That is, if receiving agent k is not connected to sub-network s, then e_k(s) = 0; otherwise, e_k(s) > 0. The desired beliefs are then attainable. This means that for each receiving agent k, we can find a column t_k that satisfies (32)-(34). Many solutions may exist. In this section, we characterize the set of possible solutions.

First of all, to meet (31), we set the required entries of t_k to zero. We then remove the corresponding columns of B^T and label the reduced matrix by B̄_k^T. Similarly, we remove the zero elements of t_k and label the reduced vector by t̄_k. On the other hand, if agent k is not connected to some sub-network s, then the corresponding row in B̄_k^T will be removed, so that B̄_k^T has a smaller number of rows, denoted by S_k. Without loss of generality, we assume agent k is connected to the first S_k sending sub-networks. We denote by n_{ks} the number of agents of sending sub-network s that are connected to receiving agent k, and by n_k the total number of sending agents connected to agent k. The matrix B̄_k^T will then have the following form (this matrix is obtained from B^T by removing rows and columns with zero entries; the resulting dimensions are S_k × n_k):

B̄_k^T = blockdiag{ 1_{n_{k1}}^T, 1_{n_{k2}}^T, …, 1_{n_{kS_k}}^T }        (42)

Note that if receiving agent k is connected to all sending sub-networks, then S_k = S and B̄_k^T will have the same number of rows as B^T. In the case where agent k is not connected to some sub-network s, condition (38) should be satisfied, and the corresponding row in e_k should be removed to obtain the reduced vector ē_k. We are therefore reduced to determining t̄_k by solving a system of equations of the form:

B̄_k^T t̄_k = ē_k        (43)

subject to

t̄_k ⪰ 0        (44)

We can still have some of the entries of the solution t̄_k turn out to be zero. Now note that the number of rows of B̄_k^T is S_k (the number of sending sub-networks connected to agent k), which is always smaller than or equal to n_k. Moreover, the rows of B̄_k^T are linearly independent and thus B̄_k^T is a right-invertible matrix. Its right-inverse is given by [35]:

( B̄_k^T )^† = B̄_k ( B̄_k^T B̄_k )^{−1}        (45)

Therefore, if we ignore condition (44) for now, then equation (43) has an infinite number of solutions parametrized by the expression [35]:

t̄_k = ( B̄_k^T )^† ē_k + ( I_{n_k} − ( B̄_k^T )^† B̄_k^T ) z        (46)

where z is an arbitrary vector of length n_k. We still need to satisfy condition (44). Let

c_k ≜ ( B̄_k^T )^† ē_k        (47)

and note that

c_k = col{ ( ē_k(1)/n_{k1} ) 1_{n_{k1}}, …, ( ē_k(S_k)/n_{kS_k} ) 1_{n_{kS_k}} }        (48)

where ē_k(s) represents the s-th entry of the vector ē_k. Likewise,

( B̄_k^T )^† B̄_k^T = blockdiag{ (1/n_{k1}) 1_{n_{k1}} 1_{n_{k1}}^T, …, (1/n_{kS_k}) 1_{n_{kS_k}} 1_{n_{kS_k}}^T }        (49)

and if we partition z into sub-vectors as

z = col{ z_1, z_2, …, z_{S_k} }        (50)

then expression (46) becomes:

t̄_k = col{ ( ē_k(s)/n_{ks} ) 1_{n_{ks}} + ( I_{n_{ks}} − (1/n_{ks}) 1_{n_{ks}} 1_{n_{ks}}^T ) z_s },   s = 1, 2, …, S_k        (51)

This represents the general form of all possible solutions, but from these solutions we want only those that are non-negative, in order to satisfy condition (44). From (51), the vector t̄_k is partitioned into multiple blocks, where each block has the form:

t̄_{k,s} = ( ē_k(s)/n_{ks} ) 1_{n_{ks}} + ( I_{n_{ks}} − (1/n_{ks}) 1_{n_{ks}} 1_{n_{ks}}^T ) z_s        (52)

We already know from the conditions for attainable beliefs (39) that ē_k(s) > 0. Therefore, we can choose z_s as zero or set it to arbitrary values as long as (52) stays non-negative. We also know that for the beliefs to be attainable, we cannot have ē_k(s) < 0; otherwise, no solution can be found. Indeed, if ē_k(s) < 0, then to make (52) non-negative, we would need to select z_s such that:

( I_{n_{ks}} − (1/n_{ks}) 1_{n_{ks}} 1_{n_{ks}}^T ) z_s ⪰ − ( ē_k(s)/n_{ks} ) 1_{n_{ks}}        (53)

However, there is no z_s that satisfies this relation, because if we sum the elements of the vector on the left-hand side of (53), we obtain:

1_{n_{ks}}^T ( I_{n_{ks}} − (1/n_{ks}) 1_{n_{ks}} 1_{n_{ks}}^T ) z_s = 0        (54)

while, if we sum the elements of the vector on the right-hand side of (53), we obtain:

− ( ē_k(s)/n_{ks} ) 1_{n_{ks}}^T 1_{n_{ks}} = − ē_k(s) > 0        (55)

This means that we cannot find a non-negative t̄_k when any of the entries of ē_k (or, equivalently, of e_k) is negative.

In summary, we have established the validity of the following statement.

Theorem 2.

Assume receiving agent k is connected to n_{ks} agents in sending sub-network s. If e_k(s) > 0, then all possible choices for the weights from the sending agents in sub-network s to receiving agent k are parameterized as:

t̄_{k,s} = ( e_k(s)/n_{ks} ) 1_{n_{ks}} + ( I_{n_{ks}} − (1/n_{ks}) 1_{n_{ks}} 1_{n_{ks}}^T ) z_s        (56)

where z_s is an arbitrary vector of length n_{ks} chosen so that (56) stays non-negative.
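
A minimal sketch of the parametrization in (56), with hypothetical numbers: given the total weight e that a receiving agent must place on one sending sub-network and the number n of agents of that sub-network connected to it, every admissible weight sub-vector is the uniform split plus a zero-mean perturbation that keeps all entries non-negative.

import numpy as np

def weight_subvector(e, n, z=None):
    """Non-negative length-n vectors whose entries sum to e, written as the
    uniform solution plus a centered perturbation of a free vector z, as in
    (56).  Raises an error if the chosen z makes some entry negative."""
    z = np.zeros(n) if z is None else np.asarray(z, dtype=float)
    t = (e / n) * np.ones(n) + (z - z.mean())   # (I - (1/n) 1 1^T) z = z - mean(z)
    if np.any(t < 0):
        raise ValueError("this choice of z violates non-negativity")
    return t

# A total weight of 0.3 must be split among n = 3 connected sending agents.
print(weight_subvector(0.3, 3))                       # uniform split [0.1, 0.1, 0.1]
print(weight_subvector(0.3, 3, z=[0.05, 0.0, -0.05])) # another valid split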

III-D Enforcing Uniform Beliefs

In this section, we explore one special case of attainable beliefs, namely driving all receiving agents towards the same belief. In this case, D is of the following form:

D = 1_{N_{gR}} d^T        (57)

for some column vector d that represents the common desired limiting belief (the entries of d are non-negative and add up to one). We now verify the conditions that ensure that uniform beliefs are attainable by all receiving agents. In this case, E is of the following form:

E = D^T ( I_{N_{gR}} − T_RR ) = d 1_{N_{gR}}^T ( I_{N_{gR}} − T_RR )        (58)

and the (s, k)-th entry of E is:

e_k(s) = d(s) ( 1 − 1_{N_{gR}}^T [T_RR]_{:,k} )        (59)

Now we know that 1 − 1_{N_{gR}}^T [T_RR]_{:,k} > 0 when agent k is connected to at least one agent from any sending sub-network, and that 1 − 1_{N_{gR}}^T [T_RR]_{:,k} = 0 when it is not connected to any sending sub-network. In the second case, expression (59) implies that e_k(s) = 0 for any s. Therefore, in this case, agent k is not connected to any sending sub-network and e_k(s) = 0 for any s, so condition (38) is satisfied. In the first case (i.e., agent k is connected to some sending sub-networks but not necessarily to all of them), expression (59) implies that e_k(s) > 0 whenever d(s) > 0, no matter whether agent k is connected to sending sub-network s or not. However, when agent k is not connected to sending sub-network s, condition (38) requires e_k(s) = 0 for agent k to achieve its desired belief at θ°_s. In summary, we arrive at the following conclusion.

Lemma 1.

For the scenario of uniform beliefs to be attainable, every receiving agent should be connected either to all sending sub-networks or to none of them.
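
The connectivity condition in Lemma 1 can be checked directly from the indicator matrix C introduced before Theorem 1; a short sketch with toy data:

import numpy as np

def uniform_beliefs_connectivity_ok(C):
    """Lemma 1: uniform beliefs are attainable only if every receiving agent
    (column of the S x N_gR indicator matrix C) is connected either to all
    sending sub-networks or to none of them."""
    col_sums = C.sum(axis=0)
    S = C.shape[0]
    return bool(np.all((col_sums == 0) | (col_sums == S)))

C_good = np.array([[1, 0, 1],
                   [1, 0, 1]])   # agents 1 and 3 hear from both sub-networks, agent 2 from none
C_bad = np.array([[1, 1],
                  [0, 1]])       # agent 1 hears from only one of the two sub-networks
print(uniform_beliefs_connectivity_ok(C_good))   # True
print(uniform_beliefs_connectivity_ok(C_bad))    # False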

We provide in reference [arXiv] two numerical examples that illustrate this construction.

III-E Example 1

Consider the network shown in Fig. 3, which consists of two sending sub-networks and one receiving sub-network, with the following combination matrix: