I Introduction and Motivation
Several studies have examined the propagation of information over social networks and the influence of the graph topology on these dynamics [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. In recent works [27, 28, 29], an intriguing phenomenon was revealed whereby weakly-connected graphs enable certain agents to control the opinion of other agents to a great degree, irrespective of the observations sensed by these latter agents. For example, agents can be made to believe that it is “raining” while they happen to be observing “sunny conditions”. Weak graphs arise in many contexts, including in popular social platforms like Twitter and similar online tools. In these graphs, the topology consists of multiple subnetworks where at least one subnetwork (called a sending subnetwork) feeds information in one direction to other network components without receiving back (or being interested in) any information from them. For example, a celebrity user on Twitter may have a large number of followers (running into the millions), while following none (or only a small fraction) of these users in return. For such weak graphs, it was shown in [28, 29] that, irrespective of the local observations sensed by the receiving agents, a sending subnetwork plays a domineering role and influences the beliefs of the other groups in a significant manner. In particular, receiving agents can be made to arrive at incorrect inference decisions; they can also be made to disagree on their inferences among themselves.
The purpose of this article is to examine these dynamics more closely and to reveal new critical properties, including the development of control mechanisms. We have three main contributions. First, we show that the internal graph structure connecting the receiving agents imposes a form of resistance to manipulation, but only to a certain degree. Second, we characterize the set of states that can be imposed on receiving networks; while this set is large, it turns out that it is not unlimited. Third, for any attainable state, we develop a control mechanism that allows sending agents to force the receiving agents to reach that state and behave in that manner.
I-A Weakly-Connected Graphs
We start the exposition by reviewing the structure of weak graphs from [27, 28, 29] and by introducing the relevant notation. As explained in [27], a weakly-connected network consists of two types of subnetworks: sending subnetworks and receiving subnetworks. Each individual subnetwork is a connected graph where any two agents are connected by a path. In addition, every sending subnetwork is strongly-connected, meaning that at least one of its agents has a self-loop. The flow of information between sending and receiving subnetworks is asymmetric, as it only happens in one direction, from the sending subnetworks to the receiving subnetworks. Figure 1 shows one example of a weakly-connected network. The two top subnetworks are sending subnetworks and the two bottom subnetworks are receiving subnetworks. The weights on the connections from sending to receiving networks are positive but can be arbitrarily small. Observe how links from sending subnetworks to receiving subnetworks flow in one direction only, while all other links can be bidirectional.
We index the strongly-connected sending subnetworks by , and the receiving subnetworks by . Each sending subnetwork has agents, and the total number of agents in the sending subnetworks is denoted by . Similarly, each receiving subnetwork has agents, and the total number of agents in the receiving subnetworks is denoted by . We let denote the total number of agents across all subnetworks, i.e., , and use to refer to the indexes of all agents. We assign a pair of nonnegative weights, , to the edge connecting any two agents and . The scalar represents the weight with which agent scales data arriving from agent and, similarly, for the reverse direction. We let denote the neighborhood of agent , which consists of all agents connected to it. Each agent scales data arriving from its neighbors in a convex manner, i.e., the weights satisfy:
(1) 
Following [27, 29], and without loss of generality, we assume that the agents are numbered such that the indexes of represent first the agents from the sending subnetworks, followed by those from the receiving subnetworks. In this way, if we collect the weights into a large combination matrix , then this matrix will have an upper block-triangular structure of the following form:
(2) 
The matrices in the upper-left corner are left-stochastic primitive matrices corresponding to the strongly-connected sending subnetworks. Likewise, the matrices in the lower-right block correspond to the internal weights of the receiving subnetworks. We denote the block structure of in (2) by:
(3) 
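To make the block structure in (2)-(3) concrete, the following sketch assembles a small weak-graph combination matrix and checks that it is left-stochastic. All sizes, weights, and the block names T_SS, T_SR, T_RR are illustrative assumptions, not values taken from the text:

```python
import numpy as np

# Hypothetical weak graph: one sending subnetwork with 2 agents and one
# receiving subnetwork with 2 agents (all sizes and weights are assumed).
T_SS = np.array([[0.7, 0.4],
                 [0.3, 0.6]])          # internal sending weights
T_SR = np.array([[0.1, 0.0],
                 [0.0, 0.2]])          # one-directional sending -> receiving
T_RR = np.array([[0.5, 0.3],
                 [0.4, 0.5]])          # internal receiving weights

# Upper block-triangular combination matrix as in (2): the zero block
# reflects that no information flows from receiving to sending agents.
A = np.block([[T_SS, T_SR],
              [np.zeros((2, 2)), T_RR]])

# Left-stochastic: each agent combines its neighbors' data convexly,
# so every column of A sums to one.
print(A.sum(axis=0))                   # -> [1. 1. 1. 1.]
```

Note how the columns of the receiving block T_RR alone sum to less than one; the deficit is exactly the weight mass placed on links arriving from the sending agents.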
Notation:
We use lowercase letters to denote vectors, uppercase letters for matrices, plain letters for deterministic variables, and boldface letters for random variables. We also use for transposition, for matrix inversion, and and for vector element-wise comparisons.

II Diffusion Social Learning
In order to characterize the set of attainable states, and to design mechanisms for belief control over weak graphs, we first need to summarize the main finding from [29]. The work in that reference revealed the limiting states that are reached by receiving agents over weak graphs, and an expression was derived for these states. Once we review that expression, we will examine its implications closely. In particular, we will conclude from it that not all states are attainable and that receiving subnetworks have an inherent resistance mechanism. We characterize this mechanism analytically. We then show how sending subnetworks can exploit this information to control the beliefs of receiving agents and to sow discord among them.
Thus, following [29], we assume that each subnetwork is observing data that arise from a true state value, denoted generically by , which may differ from one subnetwork to another. We denote by the set of all possible states, by the true state of sending subnetwork , and by the true state of receiving subnetwork , where both are elements of . At each time , each agent will possess a belief , which represents a probability distribution over . Agent continuously updates its belief according to two information sources:
The first source consists of observational signals streaming in locally at agent . These signals are generated according to some known likelihood function parametrized by the true state of agent . We denote the likelihood function by if agent belongs to receiving subnetwork or if agent belongs to sending subnetwork .

The second source consists of information received from the neighbors of agent , denoted by . Agent and its neighbors are connected by edges and they continuously communicate and share their opinions.
Using these two pieces of information, each agent then updates its belief according to the following diffusion social learning rule [2]:
(4) 
In the first step of (4), agent updates its belief, , based on its observed private signal by means of the Bayesian rule and obtains an intermediate belief . In the second step, agent learns from its social neighbors through cooperation.
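As a rough illustration of the two-step rule in (4), the sketch below simulates three fully-interacting agents: each agent first performs the local Bayesian update on its streaming signal (step one) and then the agents combine the resulting intermediate beliefs with convex weights (step two). The Gaussian likelihoods, the two-state set, and the weight matrix are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
states = np.array([0.0, 1.0])          # candidate states (illustrative)
true_state = 1.0                       # common true state of this subnetwork
# Left-stochastic combination matrix: entry (l, k) is the weight a_{lk}
# with which agent k scales the intermediate belief of agent l.
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.4],
              [0.2, 0.2, 0.5]])
n = A.shape[0]
beliefs = np.full((n, len(states)), 1 / len(states))   # uniform priors

for _ in range(300):
    # Step 1 (Bayesian update): each agent forms an intermediate belief
    # psi_k from its own noisy observation of the true state.
    xi = true_state + rng.standard_normal(n)
    lik = np.exp(-0.5 * (xi[:, None] - states[None, :]) ** 2)
    psi = beliefs * lik
    psi /= psi.sum(axis=1, keepdims=True)
    # Step 2 (combination): mu_k = sum over l of a_{lk} * psi_l.
    beliefs = A.T @ psi

# Consistent with (5): every agent's belief concentrates on the true state.
print(beliefs[:, 1])                   # all entries close to 1
```

This is a sketch of the learning dynamics inside a single strongly-connected subnetwork; the weak-graph effects discussed next arise when such subnetworks are coupled one-directionally.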
A consensus-based strategy can also be employed in lieu of (4), as was done in the insightful works [3, 30], although the latter reference focuses mainly on the problem of pure averaging rather than social learning, and requires the existence of certain anchor nodes. In this work, we assume all agents are homogeneous and focus on the diffusion strategy (4) due to its enhanced performance and wider stability range, as already proved in [2] and further explained in the treatments [31, 32]. Other models for social learning can be found in [4, 5, 12, 7, 18, 33, 34].
When agents of sending subnetworks follow this model, they can learn their own true states. Specifically, it was shown in [2, 29] that
(5) 
for any agent that belongs to sending subnetwork . Result (5) means that the probability measure concentrates at location , while all other possibilities in have zero probability. On the other hand, agents of receiving subnetworks will not be able to find their true states. Instead, their beliefs will converge to a fixed distribution defined over the true states of the sending subnetworks as follows [29]. First, let
(6) 
collect all beliefs from agents that belong to subnetwork , where the notation denotes the index of the th agent within subnetwork , i.e.,
(7) 
and . Likewise, let
(8) 
collect all beliefs from agents that belong to subnetwork , where the notation denotes the index of the th agent within subnetwork , i.e.,
(9) 
and . Furthermore, let
(10) 
collect all beliefs from all type subnetworks. Likewise, let
(11) 
collect the beliefs from all type subnetworks. Note that these belief vectors are evaluated at a specific . Then, the main result in [28, 29] shows that, under some reasonable technical assumptions, it holds that
(12) 
where is the matrix given by:
(13) 
and is the identity matrix of size . The matrix has nonnegative entries and the sum of the entries in each of its columns is equal to one [27]. Expression (12) shows how the beliefs of the sending subnetworks determine the limiting beliefs of the receiving subnetworks through the matrix . We can expand (12) to reveal the influence of the sending networks more explicitly as follows. Let denote the row in that corresponds to receiving agent (the index of this row is ) and partition it into subvectors as follows:
(14) 
where the are the number of agents in each subnetwork . Then, according to (12), we have
(15) 
Note that this relation is for a specific . Let us focus on the case when , assuming it is the true state parameter of the th sending network only. We know from [2] and (5) that each agent in the sending subnetwork will learn its true state . Therefore, from (10),
(16) 
where denotes a column vector of length whose elements are all one. Similarly, denotes a column vector of length whose elements are all zero. Combining (15) and (16) we get
(17) 
This means that the likelihood of state at the receiving agent is equal to the sum of the entries of the weight vector, , corresponding to subnetwork . More generally, for any other state parameter , its likelihood is given from (12) by
(18) 
where
(19) 
Result (18) means that the belief of receiving agent will converge to a distribution defined over the true states of the sending subnetworks, which we collect into the set:
(20) 
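The limiting map in (12) can be illustrated numerically. The sketch below assumes, following the notation of the cited works [27, 29], that the matrix in (13) takes the form W = T_SR (I - T_RR)^{-1}, built here from small illustrative blocks; the sizes, weights, and symbol names are assumptions for the example:

```python
import numpy as np

# Two sending subnetworks with one agent each, feeding two receiving
# agents (illustrative weights; full columns of A would sum to one).
T_SR = np.array([[0.2, 0.0],
                 [0.0, 0.3]])          # sending -> receiving weights
T_RR = np.array([[0.5, 0.3],
                 [0.3, 0.4]])          # internal receiving weights

W = T_SR @ np.linalg.inv(np.eye(2) - T_RR)

# As noted after (13): W is nonnegative and its columns each sum to one.
print(W.sum(axis=0))                   # -> [1. 1.]

# Each sending agent's belief concentrates on its own true state, so the
# sending beliefs evaluated at (theta_1, theta_2) form an identity matrix.
mu_S = np.eye(2)
q_R = W.T @ mu_S                       # row k = limiting belief of agent k
print(q_R)   # each row is a distribution over the sending true states
```

Because the columns of W sum to one, each receiving agent ends up with a proper probability distribution supported on the true states of the sending subnetworks, as in (20).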
Expression (12) shows how the limiting distributions of the sending subnetworks determine the limiting distributions of the receiving subnetworks through the matrix . In other words, it indicates how influential agents (from within the sending subnetworks) can control the steady-state beliefs of receiving agents. Two critical questions arise at this stage: (a) first, how much freedom do influential agents have in controlling the beliefs of the receiving agents? That is, can receiving agents be driven to arbitrary beliefs, or does the network structure limit the scope of control by the influential agents? And (b) second, even if there is a limit to what influential agents can accomplish, how can they ensure that receiving agents will end up with particular beliefs?
Questions (a) and (b) raise interesting possibilities about belief (or what we will sometimes refer to as “mind”) control. In the next sections, we address these questions and arrive at the conditions that allow influential agents to drive other agents to endorse particular beliefs regardless of their local observations (or “convictions”).
III Belief Control Mechanism
Observe from expression (18) that the limiting beliefs of receiving agents depend on the columns of . Note also that the entries of are determined by the internal combination weights within the receiving networks (i.e., ), and the combination weights from the sending to the receiving subnetworks (i.e., ). The question we would like to examine now is the following: given a set of desired beliefs for the receiving agents, is this set always attainable? Or does the internal structure of the receiving subnetworks impose limitations on where their beliefs can be driven? To answer this question, we consider the following problem setting. Let denote some desired limiting distribution for receiving agent (i.e., what we desire the limiting distribution in (18) to become as ). We would like to examine whether it is possible to force agent to converge to any such distribution, i.e., whether it is possible to find a matrix so that the belief of receiving agent converges to this specific .
III-A Motivation
In this first approach, we are interested in designing while is assumed fixed and known. This scenario allows us to understand in what ways the internal structure of the receiving networks limits the effect of external influence by the sending subnetworks. This approach also allows us to examine the range of belief control over the receiving subnetworks (i.e., how much freedom the sending subnetworks have in selecting these beliefs). Note that the entries of correspond to weights by which the receiving agents scale information from the sending subnetworks. These weights are set by the receiving agents and, therefore, are not under the direct control of the sending subnetworks. As such, it is fair to question whether it is useful to pursue a design procedure for selecting since its entries are not under the direct control of the designer or the sending subnetworks. The useful point to note here, however, is that the entries of , although set by the receiving agents, can still be interpreted as a measure of the level of trust that receiving agents have in the sending agents they are connected to. The higher this level of confidence is between two agents, the larger the value of the scaling weight on the link connecting them. In many applications, these levels of confidence (and, therefore, the resulting scaling weights) can be influenced by external campaigns (e.g., through advertisement or by way of reputation). In this way, we can interpret the problem of designing as a way to guide the campaign that influences receiving agents to set their scaling weights to desirable values. The argument will show that by influencing and knowing , sending agents end up controlling the beliefs of receiving agents in desirable ways. For the analysis in the sequel, note that by fixing and designing , we are in effect fixing the sum of each column of and, accordingly, fixing the overall external influence on each receiving agent. 
In this way, the problem of designing amounts to deciding on how much influence each individual subnetwork should have in driving the beliefs of the receiving subnetworks.
III-B Conditions for Attainable Beliefs
Given these considerations, let us now show how to design to attain certain beliefs. As is already evident from (18), the desired belief at any agent needs to be a probability distribution defined over the true states of all sending subnetworks, . We assume, without loss of generality, that the true states of the sending subnetworks are distinct, so that . If two or more sending subnetworks have the same true state, we can merge them together and treat them as a single sending subnetwork; although this enlarged component is not necessarily connected, it nevertheless consists of strongly-connected elements, and the same arguments and conclusions apply.
We collect the desired limiting beliefs for all receiving agents into the vector:
(21) 
which has length . Then, from (12), we must have:
(22) 
Evaluating this expression at the successive states , we get
(23) 
where is the matrix that collects the desired beliefs for all receiving agents. Using (13), we rewrite (23) more compactly in matrix form as:
(24) 
Therefore, given and , the design problem becomes one of finding a matrix that satisfies (24) subject to the following constraints:
(25)  
(26)  
(27) 
The first condition (25) is because the entries on each column of defined in (3) add up to one. The second condition (26) ensures that each element of is a nonnegative combination weight. The third condition (27) takes into account the network structure, where represents the column of that corresponds to receiving agent , and represents the entry of this column (which corresponds to sending agent –see Fig. 2). In other words, if receiving agent is not connected to sending agent , the corresponding entry in should be zero.
It is useful to note that condition (25) is actually unnecessary and can be removed. This is because if we can find that satisfies (24), then condition (25) will be automatically satisfied. To see this, we first sum the elements of the columns on the lefthand side of (24) and observe that
(28) 
We then sum the elements of the columns on the righthand side of (24) to get
(29) 
This is because the entries on each column of add up to one. Thus, equating (28) and (29), we find that (25) must hold. The problem we are attempting to solve is then equivalent to finding that satisfies (24) subject to
(30)  
(31) 
To find that satisfies (24) under the constraints (30)–(31), we can solve separately for each column of . Let and , respectively, denote the columns of and that correspond to receiving agent . Then, relations (24) and (30)–(31) imply that column must satisfy:
(32) 
subject to
(33)  
(34) 
The problem is then equivalent to finding, for each receiving agent , a column that satisfies (32)–(34). For the desired beliefs to be attainable (i.e., for the beliefs of all receiving agents to converge to them), finding such a column must be possible for each receiving agent . However, this may not always be possible: the desired belief matrix needs to satisfy certain conditions, so that it is not possible to drive the receiving agents to an arbitrary belief matrix. Before stating these conditions, we introduce two auxiliary matrices. We define first the following difference matrix, which appears on the right-hand side of (24) (this matrix is known):
(35) 
Note that has dimensions . The th column of , which we denote by appears on the righthand side of (32), i.e.,
(36) 
The th entry of is then:
(37) 
Each th entry of represents the difference between the desired limiting belief at of receiving agent and a weighted combination of the desired limiting beliefs of its neighboring receiving agents. We remark that this sum includes agent if is not zero. Similarly, it includes any receiving agent if is not zero. In this way, the sum runs only over the neighbors of agent , because any agent that is not a neighbor of agent has its corresponding entry in as zero.
Let denote an binary matrix, with as many rows as the number of sending subnetworks and as many columns as the number of receiving agents. The matrix is an indicator matrix that specifies whether a receiving agent is connected or not to a sending subnetwork. The th entry of is one if receiving agent is connected to sending subnetwork ; otherwise, it is zero. We are now ready to state when a given set of desired beliefs is attainable.
Theorem 1.
(Attainable Beliefs) A given belief matrix is attainable if, and only if, the entries of are zero wherever the entries of are zero, and the entries of are positive wherever the entries of are one.
Before proving Theorem 1, we first clarify its statement. For to be attainable, the matrices and must have the same structure, with the unit entries of translated into positive entries in . The theorem reveals two possible cases for each receiving agent and gives, for each case, the condition required for the desired beliefs to be attainable.
In the first case, receiving agent is not connected to any agent of sending subnetwork (the th entry of is zero). Then, according to Theorem 1, receiving agent achieves its desired limiting belief if, and only if,
(38) 
That is, the cumulative influence from the agent’s neighbors must match the desired limiting belief.
In the second case, receiving agent is connected to at least one agent of sending subnetwork (the th entry of is one). Now, according to Theorem 1 again, receiving agent achieves its desired limiting belief if, and only if,
(39) 
Proof of Theorem 1.
We first prove that if is attainable, then and have the same structure. If is attainable, then there exists, for each receiving agent , a column that satisfies (32)–(34). Using the definition of in (23), the th row on the left-hand side of (32) is:
(40) 
where represents the set of indexes of sending agents that belong to sending subnetwork . Expression (40) represents the sum of the elements of the block of that correspond to sending subnetwork . Therefore, if is attainable, then the th row of (32) satisfies the following relation:
(41) 
From this relation, we see that if agent is not connected to any agent in subnetwork , then which implies that is zero. On the other hand, if agent is connected to subnetwork , then which implies that . In other words, and have the same structure.
Conversely, if and have the same structure, then it is possible to find, for each receiving agent , a column that satisfies (32)–(34). In particular, if agent is not connected to subnetwork , then the ()th entry of is zero. Since and have the same structure, then . By setting to zero the entries of that correspond to sending subnetwork , relation (41) is satisfied. On the other hand, if agent is connected to subnetwork (i.e., to at least one agent in subnetwork ), then the ()th entry of is one. Since and have the same structure, we get . Therefore, since the entries of must be nonnegative, we first set to zero the entries of that correspond to agents of subnetwork that are not connected to agent ; the remaining entries can then be set to nonnegative values such that relation (41) is satisfied. That is, if and have the same structure, then is attainable. ∎
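The pattern condition of Theorem 1 can be checked mechanically. The sketch below assumes the difference matrix of (35) can be computed as B = Q^T (I - T_RR), where the rows of Q hold the desired limiting beliefs of the receiving agents and each column of B corresponds to one receiving agent; these symbol names and the toy weights are assumptions for the example:

```python
import numpy as np

def attainable(Q, T_RR, E, tol=1e-12):
    """Theorem-1 check: B must be zero where E is zero (as in (38)) and
    strictly positive where E is one (as in (39))."""
    B = Q.T @ (np.eye(T_RR.shape[0]) - T_RR)    # difference matrix (35)
    zero_ok = np.all(np.abs(B[E == 0]) <= tol)
    pos_ok = np.all(B[E == 1] > tol)
    return bool(zero_ok and pos_ok)

# Two single-agent sending subnetworks; receiving agent k is linked only
# to sending subnetwork k, so the indicator matrix E is the identity.
T_RR = np.array([[0.5, 0.3],
                 [0.3, 0.4]])
T_SR = np.array([[0.2, 0.0],
                 [0.0, 0.3]])
E = np.eye(2, dtype=int)

# Beliefs generated by the network itself are attainable by construction:
W = T_SR @ np.linalg.inv(np.eye(2) - T_RR)
print(attainable(W.T, T_RR, E))        # -> True

# An arbitrary target (full disagreement on opposite states) violates
# the pattern condition for this topology:
Q_bad = np.eye(2)
print(attainable(Q_bad, T_RR, E))      # -> False
```

The second call fails because the target would require a negative entry in B, which no nonnegative choice of sending-to-receiving weights can produce.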
We next move to characterize the set of solutions, i.e., how we can design assuming the conditions on are met.
III-C Characterizing the Set of Possible Solutions
In the sequel, we assume that the conditions on from Theorem 1 are satisfied. That is, if receiving agent is not connected to subnetwork , then ; otherwise, . The desired beliefs are then attainable, meaning that for each receiving agent we can find a column that satisfies (32)–(34). Many solutions may exist; in this section, we characterize the set of possible solutions.
First of all, to meet (31), we set the required entries of to zero. We then remove the corresponding columns of and label the reduced matrix by . Similarly, we remove the zero elements of and label the reduced vector by . On the other hand, if agent is not connected to some subnetwork , then the corresponding row in is removed, leaving a smaller number of rows, denoted by . Without loss of generality, we assume agent is connected to the first sending subnetworks. We denote by the number of agents of sending subnetwork that are connected to receiving agent and by the total number of all sending agents connected to agent . The matrix then has the following form (it is obtained from by removing rows and columns with zero entries; the resulting dimensions are denoted by and ):
(42) 
Note that if receiving agent is connected to all sending subnetworks, then and will have the same number of rows, . In the case where agent is not connected to some subnetwork , condition (38) should be satisfied, and the corresponding row in should be removed to obtain the reduced vector . We are therefore reduced to determining by solving a system of equations of the form:
(43) 
subject to
(44) 
We can still have some of the entries of the solution turn out to be zero. Now note that the number of rows of is (the number of sending subnetworks connected to ), which is always smaller than or equal to . Moreover, the rows of are linearly independent, and thus is a right-invertible matrix. Its right-inverse is given by [35]:
(45)
Therefore, if we ignore condition (44) for now, then equation (43) has an infinite number of solutions parametrized by the expression [35]:
(46) 
where is an arbitrary vector of length . We still need to satisfy condition (44). Let
(47) 
and note that
(48) 
where represents the entry of vector . Likewise,
(49) 
and if we partition into subvectors as
(50) 
then expression (46) becomes:
(51) 
This represents the general form of all possible solutions; from these, we want only the solutions that are nonnegative, in order to satisfy condition (44). From (51), the vector is partitioned into multiple blocks, where each block has the form:
(52) 
We already have from the conditions of attainable beliefs (39) that . Therefore, we can choose as zero or set it to arbitrary values as long as (52) stays nonnegative. We also know that for the beliefs to be attainable, we cannot have . Otherwise, no solution can be found. Indeed, if , then to make (52) nonnegative, we would need to select such that:
(53) 
However, there is no that satisfies this relation because if we sum the elements of the vector on the lefthand side of (53), we obtain:
(54) 
While if we sum the elements of the vector on the righthand side of (53), we obtain:
(55) 
This means that we cannot find such that when any of the entries of or is negative.
In summary, we have established the validity of the following statement.
Theorem 2.
Assume receiving agent is connected to agents in sending subnetwork . If , then all possible choices for the weights from sending agents in subnetwork to receiving agent are parametrized as:
(56) 
where is an arbitrary vector of length chosen so that (56) stays nonnegative.
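A minimal sketch of the parametrization in Theorem 2, under the assumption that within each sending subnetwork the admissible weight block has the form of (56): a uniform component that spreads the required mass b_s over the n_s connected agents, plus the projection of a free vector z onto zero-sum perturbations (the names b_s, n_s, z are assumed for the example):

```python
import numpy as np

def block_weights(b_s, n_s, z=None):
    """Weights from the n_s connected agents of sending subnetwork s to a
    given receiving agent: every solution sums to b_s, and the free vector
    z moves the solution within the zero-sum subspace (form of (56))."""
    ones = np.ones(n_s)
    t = (b_s / n_s) * ones
    if z is not None:
        # Project z onto the subspace of zero-sum vectors so that the
        # block sum stays equal to b_s for any choice of z.
        t = t + (np.eye(n_s) - np.outer(ones, ones) / n_s) @ z
    return t

b_s = 0.3                                  # required total influence (> 0)
t0 = block_weights(b_s, 3)                 # uniform solution
t1 = block_weights(b_s, 3, z=np.array([0.05, -0.05, 0.0]))
print(t0)            # -> [0.1 0.1 0.1]
print(t1, t1.sum())  # a different nonnegative solution, still summing to 0.3
```

As the theorem notes, z must be chosen so that the resulting entries stay nonnegative; the uniform choice z = 0 always works whenever b_s > 0.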
III-D Enforcing Uniform Beliefs
In this section, we explore one special case of attainable beliefs, namely, driving all receiving agents towards the same belief. In this case, has the following form:
(57) 
for some column that represents the desired limiting belief (the entries of are nonnegative and add up to one). We now verify the conditions that ensure that uniform beliefs are attainable by all receiving agents. In this case, has the following form:
(58) 
and the ()th entry of is:
(59) 
Now we know that when agent is connected to at least one agent from any sending subnetwork, and that when it is not connected to any sending subnetwork. In the second case where , expression (59) implies that for any . Therefore, in this case, we have agent not connected to any sending subnetwork and for any , and condition (38) is satisfied. In the first case where (i.e., agent is connected to some sending subnetworks but not necessarily to all of them), expression (59) implies that no matter whether agent is connected or not to sending subnetwork . However, when agent is not connected to sending subnetwork , condition (38) requires that for agent to achieve its desired belief at . In summary, we arrive at the following conclusion.
Lemma 1.
For the scenario of uniform beliefs to be attainable, agent should be connected either to all sending subnetworks or to none of them.
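The all-or-none condition in Lemma 1 can be seen numerically. Assuming the difference matrix of (35) can be computed as B = Q^T (I - T_RR) (symbol names and weights are assumptions for the example), a uniform target Q with identical rows q makes each column of B a scaled copy of q, with the scale equal to the external-weight deficit of that agent:

```python
import numpy as np

q = np.array([0.6, 0.4])                 # common desired belief (assumed)
T_RR = np.array([[0.5, 0.3, 0.0],
                 [0.3, 0.4, 0.2],
                 [0.1, 0.2, 0.8]])       # three receiving agents (assumed)
Q = np.tile(q, (3, 1))                   # identical rows, as in (57)

B = Q.T @ (np.eye(3) - T_RR)             # difference matrix (35)
deficits = 1 - T_RR.sum(axis=0)          # external weight mass per agent
print(np.allclose(B, np.outer(q, deficits)))   # -> True

# Agent 3 receives no external weight (its internal column sums to one),
# so its column of B is zero and condition (38) holds with no sending
# links. Agents 1 and 2 have positive entries at *every* state, so by
# Theorem 1 they must be linked to *all* sending subnetworks.
print(B)
```

This is the content of the lemma: with a uniform target, the sign pattern of each column of B is either all-zero or all-positive, so each receiving agent must be connected to all sending subnetworks or to none.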
We provide in reference [arXiv] two numerical examples that illustrate this construction.
III-E Example 1
Consider the network shown in Fig. 3. It consists of agents, two sending subnetworks and one receiving subnetwork, with the following combination matrix: