Opinion Dynamics with Stubborn Agents

06/29/2018 ∙ by D. Scott Hunter, et al. ∙ MIT

We consider the problem of optimizing the placement of stubborn agents in a social network in order to maximally impact population opinions. We assume individuals in a directed social network each have a latent opinion that evolves over time in response to social media posts by their neighbors. The individuals randomly communicate noisy versions of their latent opinion to their neighbors. Each individual updates his opinion using a time-varying update rule that has him become more stubborn with time and be less affected by new posts. The dynamic update rule is a novel component of our model and reflects realistic behaviors observed in many psychological studies. We show that in the presence of stubborn agents with immutable opinions and under fairly general conditions on the stubbornness rate of the individuals, the opinions converge to an equilibrium determined by a linear system. We give an interesting electrical network interpretation of the equilibrium. We also use this equilibrium to present a simple closed form expression for harmonic influence centrality, which is a function that quantifies how much a node can affect the mean opinion in a network. We develop a discrete optimization formulation for the problem of maximally shifting opinions in a network by targeting nodes with stubborn agents. We show that this is an optimization problem with a monotone and submodular objective, allowing us to utilize a greedy algorithm. Finally, we show that a small number of stubborn agents can non-trivially influence a large population using simulated networks.







1 Introduction

Online social networks can be hijacked by malicious actors who run massive influence campaigns on a population and potentially disrupt societies. There have been multiple reports alleging that foreign actors attempted to penetrate U.S. social networks in order to manipulate elections (Parlapiano and Lee 2018, Shane 2017, Guilbeault and Woolley 2016, Byrnes 2016). There have been studies showing similar operations in European elections (Ferrara 2017). The perpetrators created fake or “bot” accounts which shared politically polarizing content, much of it fake news, in order to amplify it and extend its reach, or directly interacted with humans to promote their agenda (Shane 2018). While no one knows exactly how many people were impacted by these influence campaigns, it has still become a concern for the U.S. government. Members of Congress have not been satisfied with the response of major social networks (Fandos and Shane 2017) and have asked them to take actions to prevent future interference in the U.S. democratic process by foreign actors (Price 2018).

Social network counter-measures are needed to combat these influence campaigns. This could consist of one using social network agents to influence a network in a way that negates the effect of the malicious influence campaign. There are multiple components to such counter-measures, but a key one is identifying targets in the network for influence by these agents. If one has a limited supply of agents, then one needs a method to optimally identify high value targets.

Our Contributions. In this work we present a method to identify such targets and quantify the impact of so called stubborn agents on the opinions of others in the network. We begin by proposing a model for opinion dynamics in a social network. A novel aspect of our model is that the individuals are allowed to grow stubborn with time and be less affected by new posts. This reflects real behaviors in social networks and is motivated by research in both social psychology and political science. We prove that under fairly general conditions on the stubbornness rate, when there are stubborn agents in the network, the opinions converge to an equilibrium given by a linear system. Using our equilibrium, we derive a simple closed form expression for the function known as harmonic influence centrality, which quantifies how much a single node can shift the average opinion in a network.

Using our equilibrium solution, we then present a discrete optimization formulation for the problem of optimally placing stubborn agents in a network to maximally shift opinions. We consider a slight variant of the traditional influence maximization approaches where instead of converting a non-stubborn agent into a stubborn agent, we simply introduce a stubborn agent into the network and constrain the number of individuals that this agent can communicate with. We consider two objective functions: the average opinion in the network and the number of individuals in the network whose opinion is above a given threshold. We show that the average opinion is a monotone and submodular function, allowing us to utilize a greedy approach where we place stubborn agents one at a time in the network. Finally, we show in simulated networks that having stubborn agents target a small number of nodes can non-trivially influence a large population. The effect is especially pronounced when one wants to maximize the threshold objective function.

This paper is outlined as follows. We begin with a literature review in Section 2. We then present our opinion dynamics model and convergence results in Section 3. Our optimization formulation for agent placement is presented in Section 4. We present our expression for harmonic influence centrality in Section 5. Simulation results for the impact of stubborn agents are presented in Section 6. We conclude in Section 7.

2 Literature Review

There has been a rich literature studying opinion dynamics in social networks. One of the most popular models here is the voter model (Clifford and Sudbury 1973, Holley and Liggett 1975), where each node updates its opinion to match that of a randomly selected neighbor. There is a large body of literature studying limiting behavior in this model (Cox and Griffeath 1986, Gray 1986, Krapivsky 1992, Liggett 2012, Sood and Redner 2005). The model of DeGroot (1974) is another popular way to describe opinion dynamics. In this model, a node's opinion is updated to a convex combination of the opinions of itself and its neighbors. This model has connections with distributed consensus algorithms (Tsitsiklis 1984, Tsitsiklis et al. 1986, Olshevsky and Tsitsiklis 2009, Jadbabaie et al. 2003). In contrast to these approaches, there are also Bayesian models of opinion dynamics in social networks (Bikhchandani et al. 1992, Banerjee and Fudenberg 2004, Acemoglu et al. 2011, Banerjee 1992, Jackson 2010). In these models, a node's opinion is updated using Bayes' Theorem applied to the opinions of its neighbors.

The notion of stubborn agents with immutable opinions was introduced by Mobilia (2003). Analysis has been done of the impact of stubborn agents in various opinion models (Galam and Jacobs 2007, Wu and Huberman 2004, Chinellato et al. 2015, Mobilia et al. 2007, Yildiz et al. 2013, Acemoğlu et al. 2013, Ghaderi and Srikant 2013). In Ghaderi and Srikant (2013) the authors studied stubborn agents in the model of DeGroot (1974) and observed that an analogy can be made between the equilibrium opinions and voltages in an electrical circuit. We find a similar connection in our results. This electric circuit connection led Vassio et al. (2014) to propose a function known as harmonic influence centrality, which measured how much a single node could shift the average opinion in the network by switching its own opinion.

The question of optimizing the placement of agents in a social network to maximize some type of influence was first proposed by Kempe et al. (2003) for a diffusion model. Subsequent results have presented a variety of algorithms for this problem (Kempe et al. 2005, Leskovec et al. 2007, Chen et al. 2009, 2010). Yildiz et al. (2013) studied optimal stubborn agent placement in the voter model. Generally speaking, these algorithms make use of the fact that the objective function is submodular, so a good solution can be found using a greedy approach, as shown by Nemhauser et al. (1978). Our optimization formulation for placing agents in a network also makes use of this property.

While much analysis has been done on the effect of stubborn agents, the models used assume that the other individuals in the network have stationary behavior. However, numerous psychological studies have found that people grow stubborn over time (see the review in Roberts et al. (2006) and the references therein). In politics especially, the bulk of empirical evidence supports the hypothesis that susceptibility to changes in ideology and partisanship is very high during early adulthood and significantly lower later in life (Alwin et al. 1991, Alwin and Krosnick 1991, Sears 1975, 1981, 1983, Glenn 1980, Jennings and Markus 1984, Jennings and Niemi 2014, Markus 1979, Converse and Markus 1979, Sears and Funk 1999). Therefore, we believe that opinion dynamics models should include time-varying opinion update processes, where agents become stubborn with time. Convergence conditions under time-varying dynamics have been studied in Chatterjee and Seneta (1977) and later in Hatano and Mesbahi (2005), Wu (2006), Tahbaz-Salehi and Jadbabaie (2008). These models do not explicitly consider increasing stubbornness nor the presence of stubborn agents. To the best of our knowledge, our work is the first to rigorously analyze convergence in an opinion dynamics model with stubborn agents and increasing stubbornness.

3 Model

We consider a finite set of agents situated in a social network represented by a directed graph $G = (V, E)$, where $V$ is the set of agents and $E$ is the set of edges representing the connectivity among these individuals. An edge $(j, i) \in E$ is considered to be directed from $j$ to $i$, and this means that content posted by $j$ can be seen by $i$. One can view the direction of the edges as indicating the flow of information. In social network parlance, we say $i$ follows $j$. We define the neighbor set of an agent $i$ as $N_i = \{j \in V : (j, i) \in E\}$. This is the set of individuals whom $i$ follows, i.e. whose posts can be seen by $i$.

At each time $t \in \{0, 1, 2, \ldots\}$, each agent $i \in V$ holds an opinion $x_i(t) \in [0, 1]$. We define the full vector of beliefs at time $t$ by $x(t)$ for simplicity. We also allow there to be two types of agents: non-stubborn and stubborn. Non-stubborn agents have an opinion update rule based on communication with their neighbors that we will specify later, while stubborn agents never change their opinions. We will denote the set of stubborn agents by $S$ and the set of non-stubborn agents by $V \setminus S$. For clarity of exposition, we assume that the stubborn agents are indexed before the non-stubborn agents.

At time $t = 0$, each agent $i$ starts with an initial belief $x_i(0)$. The beliefs of the stubborn agents stay constant in time, meaning $x_i(t) = x_i(0)$ for all $i \in S$ and all $t \geq 0$.

We will now introduce the belief update rule for the non-stubborn agents. In our analysis, we will focus on a scenario where at each time $t$ a random agent communicates by posting content which is broadcast to all of its neighbors in $G$.¹ If agent $j$ communicates at time $t$, we assume that its post has a random opinion $w_j(t)$, where $w_j(t) \in [0, 1]$. If agent $j$ communicates at time $t$, all agents $i$ such that $j \in N_i$ update their belief to a convex combination of their own current belief and agent $j$'s communicated opinion:

$$x_i(t+1) = (1 - \epsilon_t)\, x_i(t) + \epsilon_t\, w_j(t), \qquad (1)$$

where $\epsilon_t \in (0, 1)$ is some stubbornness factor that changes in time. On the other hand, if none of the agents that $i$ follows communicate at time $t$, then $x_i(t+1) = x_i(t)$.

¹ Our results naturally extend to other settings of communication; we simply choose to focus on this broadcast model because it captures how many popular social networks, such as Twitter and Instagram, operate.

We now note that in many previous studies, the communicated opinion is assumed to be agent $j$'s exact opinion at time $t$, given by $w_j(t) = x_j(t)$. In our analysis, we relax this and only assume that agent $j$ communicates an opinion that is unbiased, meaning $E[w_j(t) \mid x_j(t)] = x_j(t)$.

One should note that as $\epsilon_t$ shrinks to zero, agents become more and more stubborn. In previous studies, $\epsilon_t$ is assumed to be constant in time, which is not necessarily an accurate model of human behavior. As suggested in Mason et al. (2007) and Roberts et al. (2006), a model with time-evolving update rules is a more realistic model of opinion dynamics.

One interesting case is where $\epsilon_t = 1/(t+1)$ and an agent observes exactly one opinion at every unit of time. This corresponds to an update rule where an individual's opinion is simply the average of all previous posts he has seen. Assume the individual has an opinion at time $t$ given by the average of all previous posts:

$$x(t) = \frac{1}{t} \sum_{s=0}^{t-1} w(s).$$

Above we dropped the subscript for the origin of the posts for simplicity. Using the update rule from equation (1), the opinion at time $t+1$ is then

$$x(t+1) = \left(1 - \frac{1}{t+1}\right) x(t) + \frac{1}{t+1}\, w(t) = \frac{1}{t+1} \sum_{s=0}^{t} w(s).$$

Thus, agents becoming stubborn at a rate $\epsilon_t = 1/(t+1)$ is equivalent to them forming an opinion that is the average of all posts they have seen.
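This equivalence is easy to check numerically. The sketch below runs the recursive update with rate $1/(t+1)$ on an illustrative random post stream (the stream itself is an assumption, not data from the paper) and compares the result with the plain average of all posts:

```python
import random

def update(x, post, t):
    # Opinion update of equation (1) with stubbornness factor eps_t = 1/(t+1)
    eps = 1.0 / (t + 1)
    return (1 - eps) * x + eps * post

random.seed(0)
posts = [random.random() for _ in range(1000)]

x = posts[0]                      # opinion after seeing the first post
for t in range(1, len(posts)):
    x = update(x, posts[t], t)    # recursive update for each later post

# The recursion reproduces the running average of all posts seen so far.
assert abs(x - sum(posts) / len(posts)) < 1e-9
```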

Next we describe the communication pattern of the agents. We depart from the model in Ghaderi and Srikant (2013), where at each discrete time-step all agents in the network communicate. Rather, we allow the agents to communicate randomly, which is a more accurate model of how individuals in real social networks behave. For notation, let $p_i$ denote the probability that agent $i$ communicates at time $t$. We assume the communication probabilities are constant in time. This reflects a situation where people's rate of activity on social networks does not change, which is a reasonable assumption for many real social networks. For our model, we have the following stochastic update rule for non-stubborn agents $i \in V \setminus S$:

Taking expectations, we have for non-stubborn agents $i \in V \setminus S$

$$E[x_i(t+1)] = \Big(1 - \epsilon_t \sum_{j \in N_i} p_j\Big) E[x_i(t)] + \epsilon_t \sum_{j \in N_i} p_j\, E[x_j(t)].$$

Then, we can write that

$$E[x(t+1)] = \big((1 - \epsilon_t) I + \epsilon_t W\big)\, E[x(t)], \qquad (2)$$

where $I$ is the identity matrix, and $W$ is a $|V| \times |V|$ matrix given by

$$W_{ij} = p_j \text{ for } j \in N_i, \qquad W_{ii} = 1 - \sum_{j \in N_i} p_j, \qquad W_{ij} = 0 \text{ otherwise},$$

with the convention that the rows of $W$ corresponding to stubborn agents equal the corresponding rows of the identity matrix. Due to the structure of $W$ we can write it in the block-matrix form

$$W = \begin{bmatrix} I & 0 \\ B & A \end{bmatrix},$$

where $B$ is a $|V \setminus S| \times |S|$ matrix and $A$ is a $|V \setminus S| \times |V \setminus S|$ matrix. The matrix $B$ captures communications from the stubborn agents to the non-stubborn agents, while $A$ captures the communication network among the non-stubborn agents. Finally, to simplify our notation, let $s$ denote the vector of the initial opinions of the stubborn agents and $x^{ns}(t)$ denote the vector of the opinions of the non-stubborn agents at time $t$.

One consequence of our model is that the equilibrium opinions of the non-stubborn agents are governed only by the stubborn agents. For any non-stubborn agent to have a unique equilibrium opinion, it must be reachable from at least one stubborn agent. Therefore, we make the following assumption throughout this work: the underlying graph $G$ is connected and there exists a directed path from every non-stubborn agent to some stubborn agent (Assumption 1). Note that Assumption 1 is not especially stringent. First off, if the graph has multiple connected components, then the results can be applied to each connected component separately. Furthermore, if there are some non-stubborn agents which are not reachable by any stubborn agent, then there is no link in $G$ connecting the set of such non-stubborn agents to the stubborn agents. Then, once again, one can decompose the graph, apply our results to the part containing stubborn agents, and apply the standard results that do not consider stubborn agents to the remainder.
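As a concrete sketch, one can assemble the update matrix for a small hypothetical network and solve the resulting linear system for the equilibrium non-stubborn opinions. The five-node graph, posting probabilities, and stubborn opinions below are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Hypothetical 5-node network: agents 0 and 1 are stubborn, 2-4 are not.
# follows[i] lists the agents that i follows (the neighbor set N_i).
follows = {2: [0, 3], 3: [1, 2, 4], 4: [3]}
p = np.array([0.2, 0.2, 0.1, 0.1, 0.1])      # posting probabilities p_j
stubborn, nonstub = [0, 1], [2, 3, 4]
s = np.array([1.0, 0.0])                     # fixed stubborn opinions

# Build the stochastic matrix W: identity rows for stubborn agents;
# W[i, j] = p_j for j in N_i, and W[i, i] = 1 - sum of those p_j.
W = np.eye(5)
for i, nbrs in follows.items():
    for j in nbrs:
        W[i, j] = p[j]
    W[i, i] = 1.0 - sum(p[j] for j in nbrs)

# Block form W = [[I, 0], [B, A]]; equilibrium is (I - A)^{-1} B s.
A = W[np.ix_(nonstub, nonstub)]
B = W[np.ix_(nonstub, stubborn)]
x_eq = np.linalg.solve(np.eye(3) - A, B @ s)
print(x_eq)    # each entry lies between the stubborn opinions 0 and 1
```

Every equilibrium opinion is a weighted average of the stubborn opinions, so the solution always lies in their convex hull.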

3.1 Convergence Results

We now present our convergence results for the opinions in our model. Since who communicates and what is communicated are random, the opinions of non-stubborn agents are also random. Therefore, there are some key questions we aim to answer. The overarching question is under what conditions on the stubbornness factor $\epsilon_t$ the opinions converge. In more detail, we would like to know if the opinions converge in expectation to an equilibrium, and if so, what the equilibrium values are. Also, are the equilibrium opinions themselves random, or do they converge to deterministic values? To answer these questions, we have obtained results for the limiting values of the expectation and variance of the opinions.

We begin with the expectation result.

Theorem 3.1. Let Assumption 1 hold and suppose that $\sum_{t=0}^{\infty} \epsilon_t$ diverges. Then, for any initial condition $x(0)$,

$$\lim_{t \to \infty} E\left[x^{ns}(t)\right] = (I - A)^{-1} B\, s. \qquad (3)$$
The result states that for convergence in expectation to occur, $\epsilon_t$ cannot decay too fast. If it decays too quickly, new opinions are effectively ignored and the updates become too small, which can leave the final opinion dependent upon the initial condition. However, if $\epsilon_t$ decays slowly enough, where slow enough means $\sum_t \epsilon_t$ diverges, then the agents keep listening to new posts and updating their opinions. In this case, the expectations of their final opinions are independent of their initial values. Theorem 3.1 is a generalization of the result in Ghaderi and Srikant (2013), which applies to a constant update rule. Interestingly, we obtain the same equilibrium condition as them, despite the time-varying stochastic update rule.

The equilibrium solution from Theorem 3.1 has a structure that resembles Ohm's Law from circuit theory. Such a connection was also made in Ghaderi and Srikant (2013). An electric circuit can be viewed as a graph. Each node $i$ has a voltage $v_i$ and each edge $(i, j)$ has a conductance $g_{ij}$. A current $I_{ij}$ flows from node $i$ to node $j$. From Ohm's Law we have that $I_{ij} = g_{ij}(v_i - v_j)$ (Agarwal and Lang 2005). Conservation of current requires that all current flowing to a node must sum to zero (current flowing away from a node has a negative value). Mathematically, this is given by $\sum_j I_{ji} = 0$. To connect the circuit model with our opinion equilibrium, we write down equation (3) for a single non-stubborn node $i$. It can be shown using equation (2) that this gives

$$\sum_{j \in N_i} p_j \left( E[x_j] - E[x_i] \right) = 0. \qquad (4)$$

To see the circuit analogy, we define the "voltage" of a node $i$ as its expected opinion ($v_i = E[x_i]$) and the conductance on an edge pointing from $j$ to $i$ as the posting rate of $j$ ($g_{ji} = p_j$). Using Ohm's Law, we obtain a "current" from $j$ to $i$ given by $I_{ji} = p_j(E[x_j] - E[x_i])$. If we enforce conservation of current at each node, we obtain our equilibrium solution given by equation (4) (or in matrix form by equation (3)). The analogy is very natural for our model. The voltage is the polarity of an individual's opinion, and stubborn agents are fixed voltage sources. The conductance measures how easily information flows along an edge, analogous to an electrical circuit where conductance measures how easily current flows along an edge. The equilibrium condition simply gives how opinions/voltages are distributed in a social network/circuit. It also suggests that one can place stubborn agents in the network to manipulate the non-stubborn opinions, just as one can place voltage sources in a circuit to manipulate the node voltages. We discuss this more in Section 4.
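The circuit view also suggests computing the equilibrium by relaxation: repeatedly set each non-stubborn node's "voltage" to the conductance-weighted average of the agents it follows until the currents balance. A minimal sketch on a hypothetical toy network (all rates and opinions illustrative):

```python
# Gauss-Seidel-style relaxation of the current-conservation condition:
# each non-stubborn node repeatedly takes the posting-rate-weighted average
# of the agents it follows, like voltages relaxing in a resistor network.
follows = {2: [0, 3], 3: [1, 2, 4], 4: [3]}   # hypothetical toy network
p = {0: 0.2, 1: 0.2, 2: 0.1, 3: 0.1, 4: 0.1}  # conductances = posting rates
x = {0: 1.0, 1: 0.0, 2: 0.5, 3: 0.5, 4: 0.5}  # nodes 0 and 1 are voltage sources

for _ in range(500):
    for i, nbrs in follows.items():
        x[i] = sum(p[j] * x[j] for j in nbrs) / sum(p[j] for j in nbrs)

print(x[2], x[3], x[4])
```

The fixed point of this relaxation satisfies exactly the weighted-average condition of equation (4), so it agrees with the direct linear solve.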

We next consider the variance of the opinions. Let $\Sigma(t)$ denote the covariance matrix of $x^{ns}(t)$. We have the following result.

Theorem 3.2. Let Assumption 1 hold and suppose that $\sum_{t=0}^{\infty} \epsilon_t$ diverges and $\sum_{t=0}^{\infty} \epsilon_t^2$ converges. Then,

$$\lim_{t \to \infty} \Sigma(t) = 0.$$
This theorem characterizes what types of stubbornness factors result in the covariance going to zero. Effectively, one requires that $\epsilon_t$ decays sufficiently fast. This can be interpreted qualitatively as the individuals in the network eventually stop listening to new posts, so their opinions stop changing.

Taken together, Theorems 3.1 and 3.2 characterize the class of stubbornness factors for which convergence occurs. If $\epsilon_t$ decreases sufficiently rapidly ($\sum_t \epsilon_t^2$ converges), then the opinions' covariance will go to zero. This sets an upper bound on how fast $\epsilon_t$ can decrease and still have final opinions which are not random. However, if $\epsilon_t$ does not decrease too rapidly ($\sum_t \epsilon_t$ diverges), then the final opinions do not depend upon the initial condition. This is the lower bound on how fast $\epsilon_t$ can decrease. Within this region, the opinions will converge to constants which are independent of the initial condition. To parameterize this region, assume $\epsilon_t$ has the form $\epsilon_t = c\, t^{-\gamma}$. Then the conditions of Theorems 3.1 and 3.2 are satisfied for $\gamma \in (1/2, 1]$. We illustrate these convergence regions in Figure 1.

Figure 1: Plot of the different convergence regions for our opinion dynamics model as a function of the stubbornness factor $\epsilon_t$.
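A toy simulation can probe these regions. Below, a single non-stubborn agent follows one stubborn agent with opinion 1, using $\epsilon_t = (1+t)^{-\gamma}$; the posting rate, horizon, and tolerances are illustrative assumptions:

```python
import random

def final_opinion(gamma, x0, steps=20000, seed=0):
    # One non-stubborn agent follows one stubborn agent with opinion 1, which
    # posts with probability 0.5; stubbornness factor eps_t = (1+t)^(-gamma).
    rng = random.Random(seed)
    x = x0
    for t in range(1, steps + 1):
        if rng.random() < 0.5:              # the stubborn neighbor posts
            eps = (1.0 + t) ** (-gamma)
            x = (1 - eps) * x + eps * 1.0
    return x

# gamma in (1/2, 1]: the sum of eps_t diverges, so the initial condition
# is forgotten and the opinion converges to the stubborn agent's opinion.
assert abs(final_opinion(0.75, 0.0) - 1.0) < 0.05
assert abs(final_opinion(0.75, 0.9) - 1.0) < 0.05

# gamma = 2: eps_t decays too fast, and memory of the initial opinion persists.
assert final_opinion(2.0, 0.0) < 0.5
```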

3.2 Stochastic Approximation and Opinion Dynamics

There is an interesting connection between our convergence results for opinion dynamics and well-known convergence results from stochastic approximation. To make the connection, we assume that each non-stubborn agent $i$ in the network has a loss function $f_i(x)$. This function can depend upon all opinions in the network, but typically it measures the disagreement between the opinion of $i$ and those of its neighbors. The goal of each non-stubborn agent in the network is to achieve some form of consensus by minimizing its loss function through adjustments of its own opinion.

One way to minimize the loss function is by gradient descent. However, in many situations, one does not observe the gradient, but a noisy version of it. Optimizing a function with a noisy gradient is known as stochastic approximation. We assume the noise term at step $t$ is $\eta_t$ and has zero mean. If the step size of the update at step $t$ is $\epsilon_t$, then the noisy gradient step is given by

$$x(t+1) = x(t) - \epsilon_t \left( \nabla f(x(t)) + \eta_t \right).$$

In a famous result of Robbins and Monro, it was shown that convergence occurs if $\sum_t \epsilon_t$ diverges and $\sum_t \epsilon_t^2$ converges (Robbins and Monro 1951). These conditions are the same as our convergence conditions in Theorems 3.1 and 3.2. This is interesting because our model is very different from the setting considered by Robbins and Monro. In our model, we are not trying to minimize a single function. Rather, each non-stubborn agent in the network is trying to minimize a different loss function which depends on the opinions of itself and its neighbors. Also, the agents can only modify their own opinion. This means that the loss functions are constantly changing as the neighbors update their opinions.

To make a connection with stochastic approximation, we need to update opinions using a noisy gradient of a loss function. We choose a quadratic loss that gives more weight to neighbors that have a higher posting probability. For non-stubborn agent $i$, the loss function is

$$f_i(x) = \frac{1}{2} \sum_{j \in N_i} p_j (x_i - x_j)^2,$$

and the gradient with respect to $x_i$ is

$$\nabla_i f_i(x) = \sum_{j \in N_i} p_j (x_i - x_j).$$

We will now show how our opinion update rule is equivalent to this gradient plus zero-mean noise. To do so, we define three types of random variables. We let $Z_{ji}(t)$ be a random variable which is one if $j$ posts to $i$ at time $t$ and zero otherwise. We let $Z_{0i}(t)$ be one if no neighbor of $i$ posts at time $t$ and zero otherwise. The expectations of these random variables are $E[Z_{ji}(t)] = p_j$ and $E[Z_{0i}(t)] = 1 - \sum_{j \in N_i} p_j$. We let $w_j(t)$ be the opinion of a post of $j$ at time $t$, with conditional expectation $E[w_j(t) \mid x_j(t)] = x_j(t)$. We can write the opinion update rule from equation (1) for agent $i$ using these random variables as

$$x_i(t+1) = Z_{0i}(t)\, x_i(t) + \sum_{j \in N_i} Z_{ji}(t) \left( (1 - \epsilon_t)\, x_i(t) + \epsilon_t\, w_j(t) \right).$$
By adding and subtracting the means of these random variables conditioned on $x(t)$, we obtain

$$x_i(t+1) = x_i(t) - \epsilon_t \left( \sum_{j \in N_i} p_j \left( x_i(t) - x_j(t) \right) + \eta_{it} \right),$$

where we have defined the zero-mean noise term $\eta_{it}$. We see that in our model, whenever an agent posts, all of its neighbors update their opinions using a noisy gradient, where the noise comes from uncertainty in who posts and what they post. Viewed from the stochastic approximation perspective, every social media post is a gradient update for opinions in the network. The stubbornness factor $\epsilon_t$ is just the step size used for the update, and this step size decreases to zero.
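A simulation sketch of this view: run the random posting process on a small hypothetical network with Robbins-Monro step sizes and check that the opinions settle near the fixed point of the weighted-average equations, which for this toy instance works out to $(0.75, 0.25, 0.25)$ by hand. The network, rates, and step-size schedule are all illustrative assumptions:

```python
import random

# Hypothetical toy network: nodes 0 and 1 are stubborn (opinions 1 and 0),
# nodes 2-4 are non-stubborn. Step sizes eps_t = min(1, 5/t) satisfy the
# Robbins-Monro conditions (divergent sum, convergent sum of squares).
follows = {2: [0, 3], 3: [1, 2, 4], 4: [3]}          # N_i for non-stubborn i
p = {0: 0.2, 1: 0.2, 2: 0.1, 3: 0.1, 4: 0.1}         # posting probabilities
x = {0: 1.0, 1: 0.0, 2: 0.5, 3: 0.5, 4: 0.5}
followers = {j: [i for i, nbrs in follows.items() if j in nbrs] for j in p}

rng = random.Random(1)
for t in range(1, 200001):
    eps = min(1.0, 5.0 / t)
    # at most one agent posts per step, chosen with probabilities p_j
    r, acc, poster = rng.random(), 0.0, None
    for j in p:
        acc += p[j]
        if r < acc:
            poster = j
            break
    if poster is None:
        continue                      # nobody posted this round
    for i in followers[poster]:       # each follower takes a noisy gradient step
        x[i] = (1 - eps) * x[i] + eps * x[poster]
```

Here each post communicates the poster's exact latent opinion, so the only noise comes from the randomness in who posts.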

3.3 Proof of Theorem 3.1


Let $\lambda$ be an arbitrary eigenvalue of $W$ with corresponding eigenvector $v$. Then, due to the fact that $W$ is a stochastic matrix, by the Perron-Frobenius Theorem we know that $|\lambda| \leq 1$ for all eigenvalues of $W$. Now, consider the Jordan canonical form of the matrix $W$. First off, because $W$ is stochastic and reducible with aperiodic recurrent states, the algebraic and geometric multiplicities of the eigenvalue equal to one (of $W$) are equal to the number of communication classes (Brémaud 2013, p. 198). Because of this, we know that the Jordan canonical form of $W$ can be written as

where are Jordan block matrices of the form

and for all . Furthermore, the matrix has as columns the respective generalized eigenvectors of . By expressing the Jordan matrix as , we have the standard Jordan canonical form . Now, consider the function given by

Using this functional form, we have that

Due to the block structure of the Jordan matrix , we have that

Now, from the form of it is clear that . For the Jordan block matrices, we will utilize the following lemma.

Lemma 3.3. For all Jordan blocks , we have the following

Using Lemma 3.3, we now have the following

To obtain our final result, we must express the above equation in terms of and . To do this, we define the matrices

Using this notation, the limiting matrix becomes

where we have assumed the identity matrix

has the proper size determined by the context to simplify the notation. Now we use the condition to obtain

This gives us the following two useful relationships:

Next, we use the condition to obtain

This gives us the following useful relationships:

Using all of these relationships we obtain

Putting all of these results together we get

which provides the desired result

3.4 Proof of Lemma 3.3

For notation let , and let be the size of the Jordan block . Because the function is a product of analytic functions on , it is analytic on the set . Thus we have that

where is the -th derivative of the function (see (Higham 2008, p. 2-4) for more information on analytic functions applied to the Jordan canonical form of a matrix).

Now, because diverges, this implies that the sequence of functions converges uniformly to 0 on any such that is a closed subset of . For take such that . Then, from Cauchy’s Differentiation formula we have that

Now, we have that uniformly on . Thus, from the above . By applying the above to every eigenvalue , we obtain the desired result.

3.5 Proof of Theorem 3.2

We begin by writing in the following form:

where is a bounded random vector of dimension such that and . Because of the above relation, we have that

Due to the fact that and are uncorrelated, we have the following:

Using the fact that and that the stubborn agents' opinion vector is constant in time, we have that

where , the matrix is the submatrix of corresponding to the non-stubborn agents, and is the subvector of corresponding to the non-stubborn agents. Now, for convention let for . For notation let if is contained in the positive semi-definite cone. Then, we have that

Now, because $x_i(t) \in [0, 1]$, by Popoviciu's inequality we know that $\mathrm{Var}(x_i(t)) \leq 1/4$ for all $i$ and $t$. Furthermore, by the Cauchy-Schwarz inequality we know that $|\mathrm{Cov}(x_i(t), x_j(t))| \leq 1/4$ for all $i$, $j$, and $t$. Thus, we know that $\Sigma(t) \leq \frac{1}{4} J$ for all $t$, where $J$ is the matrix of all ones and the inequality holds element-wise.

Now, we have that

where the second inequality holds from the fact that the matrices are substochastic for all . Then by the dominated convergence theorem, because converges absolutely, we have that

Now, we must prove that for all . However, this follows immediately from the fact that is a substochastic irreducible matrix and diverges. Thus, using the fact that is a positive semi-definite matrix for all , we get the desired result:

4 Optimization of Stubborn Agent Placement

The equilibrium condition in equation (3) can be rewritten as $(I - A)\, x^{ns} = B s$, where $x^{ns}$ now denotes the vector of equilibrium non-stubborn opinions. This linear system of equations constrains the opinions of the non-stubborn individuals. However, the terms in the matrices $A$ and $B$ can be adjusted by the placement of stubborn agents in the network. We now consider the problem of how one can optimize a function of the non-stubborn opinions via stubborn agent placement. We consider two different objective functions. First, there is the sum of the non-stubborn opinions. Second, there is the number of non-stubborn individuals whose opinion exceeds a given threshold. This is a non-linear objective which may be relevant if the individuals take some sort of action when their opinion exceeds the threshold (buy a product, watch a movie, etc.).

Consider the scenario where we add one stubborn agent to the network with communication rate $p_0$. Without loss of generality, we assume that this agent's opinion equals one. This simplifies our formulation and is also a reasonable assumption if the goal is to maximize the shift of opinion in the network. Suppose that we begin with some equilibrium solution $x^*$ that satisfies $(I - A)\, x^* = B s$. Here we are assuming the opinions are deterministic, which is a valid assumption under the conditions of Theorem 3.2. Consider adding this new stubborn agent to the network and having it connect to non-stubborn agent $i$. Let $e_i$ be a vector that has component $i$ equal to one, and all other components equal to zero. Then, by adding this new stubborn agent and having it connect to agent $i$, we achieve the new equilibrium solution given by

$$x^{new} = \left( (I - A) + p_0\, e_i e_i^T \right)^{-1} \left( B s + p_0\, e_i \right).$$

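Because connecting the new agent to node $i$ is a rank-one change to the system matrix, the resulting equilibrium shift can be computed cheaply with the Sherman-Morrison formula rather than re-solving from scratch. The sketch below uses a hypothetical 3x3 instance; the matrix, source term, and rate $p_0$ are illustrative assumptions:

```python
import numpy as np

# Marginal effect of connecting a new stubborn agent (opinion 1, posting
# rate p0) to each candidate node, via the Sherman-Morrison formula.
# The matrix below is a hypothetical (I - A) for three non-stubborn nodes.
I_minus_A = np.array([[0.3, -0.1, 0.0],
                      [-0.1, 0.4, -0.1],
                      [0.0, -0.1, 0.1]])
Bs = np.array([0.2, 0.0, 0.0])       # B s: influence of existing stubborn agents
p0 = 0.1

M_inv = np.linalg.inv(I_minus_A)
x_old = M_inv @ Bs
gains = np.empty(3)
for i in range(3):
    # (M + p0 e_i e_i^T)^{-1}
    #   = M^{-1} - p0 (M^{-1} e_i)(e_i^T M^{-1}) / (1 + p0 [M^{-1}]_ii)
    correction = p0 * np.outer(M_inv[:, i], M_inv[i, :]) / (1 + p0 * M_inv[i, i])
    b_new = Bs.copy()
    b_new[i] += p0                   # the new agent's opinion-1 source term
    x_new = (M_inv - correction) @ b_new
    gains[i] = x_new.sum() - x_old.sum()

print(gains)                         # larger gain = better single target
```

After one $O(n^3)$ inversion, each candidate target can be evaluated in $O(n)$ to $O(n^2)$ time instead of a fresh solve.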
Formally, consider the set function $f$ defined to be

$$f(\Omega) = \mathbf{1}^T x(\Omega),$$

where $x(\Omega)$ denotes the equilibrium opinion vector of the non-stubborn agents when the new stubborn agent connects to the agents in the set $\Omega$, and $\mathbf{1}$ is the vector of all ones. Then the problem of determining which non-stubborn agents to connect with in order to maximize the sum of the opinions of the non-stubborn agents can be written as:

$$\max_{\Omega \subseteq V \setminus S,\; |\Omega| \leq k} f(\Omega). \qquad (7)$$
Additionally, one can also consider the set function $g$ defined to be

$$g(\Omega) = \sum_{i} \mathbf{1}\left\{ x_i(\Omega) > \tau \right\},$$

where the indicator is equal to one if the $i$-th component of $x(\Omega)$ is greater than some predetermined threshold $\tau$, and zero otherwise. Maximizing this set function is equivalent to maximizing the number of non-stubborn agents with final opinion greater than $\tau$, which could correspond, for instance, to buying a product or voting for a particular candidate.

In practice, these discrete optimization formulations may become too large to solve for all stubborn agents simultaneously. One solution is to solve the formulations for one stubborn agent at a time (more precisely, one stubborn-agent edge to a non-stubborn agent at a time). This greedy approach greatly reduces the complexity of the problems and allows them to be solved for large networks. While we cannot provide any performance guarantees for the threshold objective using this greedy approach, we do have a guarantee for the sum of opinions: for an arbitrary instance of the problem, the set function $f$ is monotone and submodular. Because the objective is monotone and submodular, a greedy approach to problem (7) will produce a solution within a factor of $1 - 1/e$ of the optimum (Nemhauser et al. 1978). In Section 6 we present performance results of greedy solutions for these two objective functions.
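A minimal sketch of the greedy placement for the sum objective, assuming deterministic equilibrium opinions; the three-node instance and rates are illustrative assumptions:

```python
import numpy as np

def opinion_sum(I_minus_A, Bs, targets, p0):
    # Equilibrium opinion sum when the stubborn agent (opinion 1, rate p0)
    # connects to every node in `targets`.
    M, b = I_minus_A.copy(), Bs.copy()
    for i in targets:
        M[i, i] += p0
        b[i] += p0
    return np.linalg.solve(M, b).sum()

def greedy_targets(I_minus_A, Bs, p0, k):
    # Greedy selection: repeatedly add the edge with the largest marginal gain.
    # Monotonicity and submodularity give the (1 - 1/e) guarantee of
    # Nemhauser et al. (1978).
    chosen = []
    for _ in range(k):
        best = max((i for i in range(len(Bs)) if i not in chosen),
                   key=lambda i: opinion_sum(I_minus_A, Bs, chosen + [i], p0))
        chosen.append(best)
    return chosen

# Illustrative three-node instance.
M = np.array([[0.3, -0.1, 0.0], [-0.1, 0.4, -0.1], [0.0, -0.1, 0.1]])
Bs = np.array([0.2, 0.0, 0.0])
print(greedy_targets(M, Bs, p0=0.1, k=2))
```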

4.1 Integer Programming Formulation

We now present a linear integer programming formulation for the optimal stubborn agent placement problems with both the sum and threshold objective functions. We assume that the stubborn agents are advocating a specific point of view as part of an influence campaign, so their opinions are equal to one. We define binary decision variables $y_{ai}$ which equal one if stubborn agent $a$ connects to non-stubborn individual $i$. We also define the opinion $x_i$ for non-stubborn individual $i$. Here we are assuming the opinions are deterministic, which is a valid assumption under the conditions of Theorem 3.2. We let the posting probability of stubborn agent $a$ to individual $i$ be $p_{ai}$. To simplify our formulation we assume the posting probability of the stubborn agents is fixed. Then the equilibrium conditions can be written as

Above, we have introduced the auxiliary variable in order to make the constraints linear. We also want to limit the out-degree of each stubborn agent and the in-degree of each non-stubborn individual. This is a practical consideration, as if these constraints are violated, the agents may have less ability to influence as they will appear to be artificial. These constraints are simply

For the opinion sum, the objective is simply $\sum_i x_i$. The threshold objective can be written in linear form as follows. Let the opinion threshold be $\tau$. Then the number of individuals whose opinion is above $\tau$ is given by $\sum_i z_i$, where the binary variables $z_i$ equal one if $x_i > \tau$ and zero otherwise, a condition which can be enforced with linear constraints.
To summarize, we have two integer programming formulations for optimizing the opinions of non-stubborn agents in a network using stubborn agents. To maximize the sum of opinions, or equivalently the average opinion in the network, one solves

To maximize the number of non-stubborn agents with opinion exceeding a threshold, one solves
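For tiny instances, both formulations can be checked by brute force instead of an integer programming solver. The sketch below enumerates all size-$k$ target sets of a hypothetical three-node system and reports the optimal targets under each objective; all matrices and rates are illustrative assumptions:

```python
import itertools
import numpy as np

# Brute-force sketch on a tiny hypothetical instance: enumerate all size-k
# target sets and compare the optimum under the sum objective with the
# optimum under the threshold objective.
I_minus_A = np.array([[0.3, -0.1, 0.0], [-0.1, 0.4, -0.1], [0.0, -0.1, 0.1]])
Bs = np.array([0.2, 0.0, 0.0])
p0, k, tau = 0.1, 1, 0.5

def equilibrium(targets):
    M, b = I_minus_A.copy(), Bs.copy()
    for i in targets:
        M[i, i] += p0
        b[i] += p0
    return np.linalg.solve(M, b)

subsets = list(itertools.combinations(range(3), k))
best_sum = max(subsets, key=lambda S: equilibrium(S).sum())
best_thr = max(subsets, key=lambda S: int((equilibrium(S) > tau).sum()))
print("sum-optimal targets:", best_sum, " threshold-optimal targets:", best_thr)
```

The two objectives need not pick the same targets: the threshold objective rewards pushing borderline individuals just past $\tau$, while the sum objective rewards total shift.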