Defending Elections Against Malicious Spread of Misinformation

09/14/2018 ∙ by Bryan Wilder, et al. ∙ University of Southern California ∙ Washington University in St Louis

The integrity of democratic elections depends on voters' access to accurate information. However, modern media environments, which are dominated by social media, provide malicious actors with unprecedented ability to manipulate elections via misinformation, such as fake news. We study a zero-sum game between an attacker, who attempts to subvert an election by propagating a fake news story or other misinformation over a set of advertising channels, and a defender who attempts to limit the attacker's impact. Computing an equilibrium in this game is challenging, as even the pure strategy sets of the players are exponential. Nevertheless, we give provable polynomial-time approximation algorithms for computing the defender's minimax optimal strategy across a range of settings, encompassing different population structures as well as models of the information available to each player. Experimental results confirm that our algorithms provide near-optimal defender strategies and showcase variations in the difficulty of defending elections depending on the resources and knowledge available to the defender.


Introduction

Free and fair elections are essential to democracy. However, the integrity of elections depends on voters’ access to accurate information about candidates and issues. Oftentimes, such information comes via news media or political advertising. When these information sources are accurate and transparent, they serve an important role in producing well-functioning elections. However, because of the great impact that messaging can have on voter behavior [Gerber, Karlan, and Bergan2009, DellaVigna and Kaplan2007, Brader2005], such information can also subvert legitimate elections when deliberately falsified by malicious actors.

In traditional media environments, such subversion is relatively difficult because professional news organizations serve as gatekeepers to information spread. However, modern media environments are increasingly decentralized due to the importance of social networks such as Facebook or Twitter, which allow outside actors to spread political information directly amongst voters [Chi and Yang2011, Wattal et al.2010, Holcomb, Gottfried, and Mitchell2013]. This presents an unprecedented opportunity for malicious actors to spread deliberately falsified information – “fake news” – and in doing so, influence the results of democratic elections. Such concerns are particularly salient in light of the 2016 U.S. presidential election. Recent research shows that, on average, an American adult was exposed to at least one fake news story during the campaign [Allcott and Gentzkow2017] and that these stories influenced voter attitudes [Pennycook, Cannon, and Rand2017].

Prior work on election control has considered a number of mechanisms for election interference, including bribery [Faliszewski et al.2009, Baumeister et al.2015, Erdélyi, Reger, and Yang2017, Yang, Shrestha, and Guo2016], adding or deleting voters [Erdélyi, Hemaspaandra, and Hemaspaandra2015, Loreggia et al.2015, Faliszewski, Hemaspaandra, and Hemaspaandra2011, Liu et al.2009], and adding or deleting candidates [Chen et al.2015, Liu et al.2009]. Only recently has social influence been explicitly studied as a means of election control [Wilder and Vorobeychik2018, Faliszewski et al.2018]. Further, with only a few exceptions which do not consider social influence [Li, Jiang, and Wu2017, Yin et al.2018], election control has so far primarily been studied from the attacker’s perspective (to establish the computational complexity of controlling an election when the attacker is the only actor).

We therefore ask the following natural question: how can a defender mitigate the impact of fake news on an election? For instance, a social media platform or a news organization may have the ability to detect and label fake news stories on a given advertising channel, or propagate a counter-message with more accurate information. We model this interaction as a zero-sum game between an attacker, attempting to influence voters by advertising on a subset of possible channels, and a defender who enacts counter-measures on a subset of channels. The goal for the attacker is to maximize the expected number of voters who switch to the attacker's preferred candidate, whereas the defender's goal is to minimize this quantity. Note that in this model the defender is neutral with respect to which candidate actually wins; they focus solely on minimizing the attacker's malicious influence.

Computing equilibria is computationally challenging due to the exponential number of possible actions for each player. Complicating the problem, in practice the defender may have considerable uncertainty about which candidate each voter prefers at the start of the game (information which is needed to effectively target limited resources). We provide efficient algorithms, backed by theoretical guarantees and empirical analysis, across a range of settings:

  1. In the disjoint case, each voter can be reached by only one advertising channel, modeling a case where each channel corresponds to a different demographic group. We give an FPTAS for the minimax equilibrium strategies.

  2. In the nondisjoint case, each voter can be reached by an arbitrary set of channels. We first prove that the associated computational problem is APX-hard. We then provide an algorithm with a bicriteria guarantee: it guarantees the defender a constant-factor approximation to the optimal payoff but relaxes the budget constraint.

  3. We consider three models of uncertainty about voter preferences. The first is stochastic uncertainty where the preference profile is drawn from a distribution. The second is asymmetric uncertainty where the preference profile is drawn from a distribution and the attacker observes the realized draw. The third is adversarial uncertainty where the preference profile is chosen to be the worst possible for the defender within an uncertainty set. Collectively, these models allow us to capture a range of assumptions about the information available to each player. Surprisingly, we show that across all three models, and in both the disjoint and nondisjoint cases, the defender can obtain exactly the same approximation ratios as when preferences are known exactly.

Problem Formulation

We consider a set of voters $V$ (with $|V| = n$) and a set of advertising channels $C$ (with $|C| = m$). $V$ and $C$ form a bipartite graph with edge set $E$; that is, each voter is reachable by one or more advertising channels. The voters participate in an election between two candidates, $a$ and $b$. An attacker aims to ensure that one of these candidates, $a$, wins the election. A defender aims to protect the election against this manipulation. Each voter $v$ has a preferred candidate who they vote for. Let $r_v = 1$ if $v$ initially prefers $b$ and $r_v = 0$ otherwise.

The attacker attempts to alter election results by spreading a message (a fake news story) amongst the voters. More precisely, the attacker has a limited advertising budget and can send the message through at most $k_a$ channels. If channel $c$ is chosen by the attacker, then any voter $v$ with an edge to $c$ switches their vote to $a$ with probability $p_{cv}$, where all such events are independent. The defender can protect voters from the attacker's misinformation, for example by detecting and labeling falsified stories on a given advertising channel, or by attempting to propagate a counter-message of their own. If the defender protects channel $c$, each voter $v$ connected to $c$ is "immunized" against the attacker's message independently with probability $q_{cv}$. The defender may select up to $k_d$ channels.

We model this interaction as a zero-sum game between the attacker and defender. In this setting, equilibrium strategies are unaffected by whether one party must first commit to a strategy (formally, the Nash and Stackelberg equilibria are equivalent). Hence without loss of generality, we consider a simultaneous-move game and seek to compute a Nash equilibrium. The defender’s strategy space is all subsets of channels to protect, while the attacker’s strategy space consists of all subsets of channels to attack. Hence, each player has an exponentially large number of pure strategies, substantially complicating equilibrium computation.

We now introduce the attacker's objective, which determines the payoffs for the game. When the defender chooses a set of channels $D$ and the attacker chooses $A$, let $f(D, A)$ be the expected number of voters who previously preferred $b$ but switch their vote to $a$. The randomness is over which voters are reached by the attacker's message (determined by the probabilities $p_{cv}$ and $q_{cv}$). Formally, we can express $f$ as

$$f(D, A) = \sum_{v \in V} r_v \left(\prod_{c \in D : (c,v) \in E} (1 - q_{cv})\right)\left(1 - \prod_{c \in A : (c,v) \in E} (1 - p_{cv})\right)$$

where the first product is the probability that the defender fails to reach voter $v$ and the second factor is the probability that the attacker succeeds. The term $r_v$ means that only voters who initially prefer $b$ count (since they are the only ones who can switch). The attacker's payoff is simply $f(D, A)$, while the payoff for the defender is $-f(D, A)$; in words, the defender aims to minimize the spread of misinformation.
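As a concrete illustration, the expected number of switched voters under a defender set and an attacker set can be computed directly from this product form. The sketch below uses our own data-structure and function names, not the paper's notation:

```python
def expected_switchers(D, A, edges, p, q, r):
    """Expected number of voters who switch to the attacker's candidate.

    D, A      -- sets of channels chosen by defender / attacker
    edges[v]  -- set of channels with an edge to voter v
    p[(c, v)] -- probability the attacker's message on channel c sways v
    q[(c, v)] -- probability the defender's action on channel c immunizes v
    r[v]      -- 1 if v initially prefers the non-attacker candidate
    """
    total = 0.0
    for v, channels in edges.items():
        if not r[v]:
            continue  # only voters who initially prefer b can switch
        # probability that every defender channel fails to immunize v
        defend_fail = 1.0
        for c in D & channels:
            defend_fail *= (1.0 - q[(c, v)])
        # probability that at least one attacker channel reaches v
        attack_miss = 1.0
        for c in A & channels:
            attack_miss *= (1.0 - p[(c, v)])
        total += defend_fail * (1.0 - attack_miss)
    return total
```

For a single voter reachable through one channel with $p = q = 0.5$, defending that channel halves the attacker's success probability from 0.5 to 0.25, matching the product form above.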

We consider two models for how the population may be structured. In the disjoint model, the advertising channels partition the population so that each voter has an edge to exactly one channel. This models a case where the channels represent demographic groups and the attacker is deciding which demographics to target. In the more general nondisjoint model, voters may be reached through multiple channels; thus, the edges can form an arbitrary bipartite graph.

We begin by considering the case where $r$ (the voters' initial preferences) is common knowledge. Subsequently, we consider the setting in which voter preferences are uncertain.

Related Work

We survey related work in two areas. First, recent work in social choice studies the interaction between social influence and elections. However, all such work examines the attacker's problem of manipulating the election, leaving open the question of how elections can be defended against misinformation. Most closely related is the work of Wilder and Vorobeychik [2018], who study the attacker's problem of manipulating an election in a model where social influence spreads amongst voters from an attacker's chosen "seed nodes". However, they do not study the corresponding defender problem. Our model is also somewhat different in that we consider advertising to voters across a set of channels, rather than influence among the voters themselves. The work of Bredereck et al. [2016] is also closely related. They study the attacker's problem in a bribery setting where a single action (e.g., placing an ad) can sway multiple voters. Faliszewski et al. [2018] extend this to a domain where the initially bribed agents can influence others. Bredereck and Elkind [2017] also study a problem of manipulating diffusions on social networks, though not specifically in the context of elections.

1:Arbitrarily initialize and
2:for  do
3:     Draw uniformly at random from
4:     //TopK returns the set consisting of the indices of the k smallest entries of the given vector

5:     
6:     
7:return and
Algorithm 1 FTPL()

This body of work demonstrates substantial interest in the election control literature in emerging threats such as fake news. Our contribution is the first study of these problems from the perspective of a defender.

Second, our work is related to a complementary literature on budget allocation problems. Budget allocation is the attacker's problem in our model with no defender intervention: allocating an advertising budget to maximize the number of people reached. Efficient algorithms are available for a number of variants on this model [Alon, Gamzu, and Tennenholtz 2012, Soma et al. 2014, Miyauchi et al. 2015, Staib and Jegelka 2017]. None of this work studies the game-theoretic problem of a defender trying to prevent an attacker from reaching voters. Soma et al. [2014] study a game where multiple advertisers compete for consumers, but not one where an advertiser solely attempts to block another. Their game is a potential game with pure strategy equilibria; however, it is easy to give examples in our model where the zero-sum nature of the attacker-defender interaction requires randomization. This makes equilibrium computation harder because we cannot simply use best response dynamics. Our work is also related to the influence blocking maximization (IBM) problem [He et al. 2012], where one player attempts to limit the spread of a cascade in a social network. However, in IBM the starting points of the cascade are fixed in advance; in our problem the adversary chooses a randomized strategy to evade the defender.

Disjoint populations

In this setting, the population of voters is partitioned by the channels. Let $V_c$ denote the set of voters affiliated with channel $c$. Exploiting the disjoint structure of the population, we can use linearity of expectation to rewrite the utility function as

$$f(D, A) = \sum_{c \in A} \sum_{v \in V_c} r_v\, p_{cv} \left(1 - q_{cv}\, \mathbb{1}[c \in D]\right).$$

Importantly, this expression is linear in each player's decisions. More formally, let $\mathbf{1}_S$ denote the indicator vector of a set $S$. Define the loss vector $\ell^A$ to have value $\mathbb{1}[c \in A] \sum_{v \in V_c} r_v\, p_{cv}\, q_{cv}$ in coordinate $c$. Then, we have that $f(D, A) = \sum_{c \in A} \sum_{v \in V_c} r_v\, p_{cv} - \langle \ell^A, \mathbf{1}_D \rangle$, where the first term is constant with respect to $D$.

1: for
2:for  do
3:     //Greedily maximizes a function subject to budget
4:      = Greedy(, )
5:     
6:      = Update(, )
7:return
8:function ExponentiatedGradientUpdate
9:     
10:     
11:function EuclideanUpdate
12:     
Algorithm 2 OnlineGradient

Similarly, we can define a loss vector which encapsulates the attacker's payoff for any defender action $D$.

To exploit this structure, we employ an algorithmic strategy based on online linear optimization. In such problems, a player seeks to optimize a (possibly adversarially chosen) sequence of linear functions over a feasible set. The aim is to achieve low regret, which measures the gap in hindsight to the best fixed decision over all rounds. We map online linear optimization onto our problem as follows. The feasible set for each player consists of $m$-dimensional binary vectors, where a 1 indicates that the player has chosen the corresponding channel and a 0 indicates that they have not. A vector is feasible if it sums to at most $k_d$ (for the defender) or $k_a$ (for the attacker). Both the attacker and defender choose a series of actions from the corresponding feasible sets. In iteration $t$, if the attacker chooses a set $A_t$, then the defender receives a loss vector $\ell_t$ and suffers loss $\langle \ell_t, \mathbf{1}_{D_t} \rangle$. The attacker's loss functions are defined similarly.

Each player generates their actions using the classical Follow The Perturbed Leader (FTPL) algorithm of Kalai and Vempala [2005] (Algorithm 1). At each iteration, each player best responds to the uniform distribution over all strategies played so far by their opponent, plus a small random perturbation. Note that best response here corresponds to linear optimization over the player's feasible set. Since any budget-satisfying vector is feasible, we simply select the $k_d$ highest-weighted elements ($k_a$ for the attacker). Since FTPL has a no-regret guarantee for online linear optimization, neither player can gain significantly by deviating from their history of play once the number of iterations is sufficiently high. More precisely, we have the following:

Theorem 1.

With sufficiently many iterations of FTPL, the uniform distributions on the defender's and attacker's histories of play form an $\epsilon$-equilibrium.
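To make the best-response step concrete: because payoffs are linear in the disjoint case, an FTPL step reduces to top-$k$ selection on a perturbed cumulative payoff vector. This is a rough sketch with our own names, not the paper's exact pseudocode:

```python
import random

def ftpl_best_response(cumulative_payoffs, k, noise_scale):
    """One FTPL step: best respond to the opponent's history of play,
    summarized by a cumulative per-channel payoff vector, after adding
    an independent uniform perturbation to each coordinate. Since any
    budget-feasible indicator vector is allowed, the best response is
    simply the k channels with the largest perturbed payoff."""
    perturbed = [w + random.uniform(0.0, noise_scale)
                 for w in cumulative_payoffs]
    order = sorted(range(len(perturbed)),
                   key=lambda i: perturbed[i], reverse=True)
    return set(order[:k])
```

With a vanishing noise scale this degenerates to plain follow-the-leader; the perturbation is what yields the no-regret guarantee against adaptive opponents.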

Nondisjoint populations

When voters may be reachable from multiple advertising channels, the approach from the previous section breaks down because utility is no longer linear for either player: selecting one channel reaches a subset of voters and hence reduces the gain from selecting additional channels. Indeed, we can obtain the following hardness result:

Theorem 2.

In the nondisjoint setting, computing an optimal defender mixed strategy is APX-hard.

The intuition is that the maximum coverage problem is essentially a special case of ours. However, diminishing returns provides useful algorithmic structure. Formally, both players' best response functions are closely related to submodular optimization problems. A set function $g : 2^C \to \mathbb{R}$ is submodular if for all $S \subseteq T$ and $c \notin T$, $g(S \cup \{c\}) - g(S) \ge g(T \cup \{c\}) - g(T)$. We will only deal with monotone functions, where $g(S) \le g(T)$ holds for all $S \subseteq T$.

Our overall approach is to work in the marginal space of the attacker, by keeping track of only the marginal probability that they select each channel. That is, the attacker's current mixed strategy is concisely represented by a fractional vector $x \in [0,1]^m$, where $x_c$ gives the probability of selecting channel $c$. We run an approximate no-regret learning algorithm to update $x$ over a series of iterations. At each iteration $t$, $x_t$ is updated via a gradient step on a reward function induced by a set $D_t$ played by the defender. Specifically, we will choose $D_t$ to be an approximate best response to the current attacker mixed strategy.

There are two principal challenges that must be solved to enable this approach. First, we need to design an appropriate no-regret algorithm for the attacker. This is a challenging task as the attacker's utility is no longer linear (or even concave) in the marginal vector $x$. Second, we need to compute approximate best responses for the defender, which is itself an NP-hard problem.

We resolve the first challenge by running an online gradient algorithm for the attacker, where the continuous objective at each iteration is the multilinear extension of an objective induced by the defender's strategy $D_t$. The multilinear extension is a fractional relaxation of a submodular set function. We define the multilinear extension induced by a defender strategy $D$ as

$$F_D(x) = \sum_{A \subseteq C} f(D, A) \prod_{c \in A} x_c \prod_{c \notin A} (1 - x_c).$$

That is, $F_D(x)$ is the expected value of $f(D, A)$ when each channel $c$ is independently included in $A$ with probability $x_c$. This is a special case of the multilinear extension more generally defined for arbitrary submodular set functions [Calinescu et al. 2011].
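Since the multilinear extension sums over exponentially many sets, in practice it is typically estimated by Monte Carlo sampling: include each channel independently with its marginal probability and average the set function over the draws. A minimal sketch (function and parameter names are ours):

```python
import random

def multilinear_estimate(f, x, num_samples=2000, seed=0):
    """Monte Carlo estimate of the multilinear extension F(x):
    sample random sets A by including each channel c independently
    with probability x[c], then average f over the samples."""
    rng = random.Random(seed)
    m = len(x)
    total = 0.0
    for _ in range(num_samples):
        A = {c for c in range(m) if rng.random() < x[c]}
        total += f(A)
    return total / num_samples
```

Gradient coordinates can be estimated the same way, since $\partial F_D / \partial x_c$ is the expected marginal gain of channel $c$ under the same sampling distribution.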

While $F_D$ is in general not concave, we show that gradient-ascent style algorithms enjoy a no-regret guarantee against a $\frac{1}{2}$-approximation of the optimal strategy in hindsight. Our general strategy is to analyze online mirror ascent for continuous submodular functions. By making specific choices for the mirror map, we obtain two concrete algorithms (the update rules in Algorithm 2). The first is standard online gradient ascent, which takes a gradient step followed by Euclidean projection onto the feasible set. The second is an exponentiated gradient algorithm, which scales each entry of $x$ according to the gradient and then normalizes to enforce the budget constraint. We have the following convergence guarantees:

Theorem 3.

Suppose that we apply Algorithm 2 to a sequence of multilinear extensions $F_{D_1}, \dots, F_{D_T}$. Then, after $T$ iterations, the attacker's average reward is at least $\frac{1}{2}$ of that of the best fixed strategy in hindsight, up to an additive regret term vanishing in $T$, where the constants in the regret term depend on whether the exponentiated gradient or Euclidean update is used.

Our proof builds on the fact that for any single continuous submodular function, any local optimum is a $\frac{1}{2}$-approximation to the global optimum, and translates this into the online setting. We remark that a no-regret guarantee for online gradient ascent for submodular functions was recently shown in [Chen, Hassani, and Karbasi 2018]. Our more general analysis based on mirror ascent gives their result as a special case, and also allows us to analyze the exponentiated gradient update. The advantage is that the theoretical convergence rate is substantially better for exponentiated gradient, reducing the dimension dependence from $O(\sqrt{m})$ to $O(\sqrt{\log m})$. However, we also include the result for online gradient ascent since it tends to perform better empirically.
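The exponentiated gradient update can be sketched as follows. This is a simplified illustration under our own naming, not the paper's exact pseudocode; in particular, it omits the extra redistribution needed when a coordinate is capped at 1:

```python
import math

def exponentiated_gradient_step(x, grad, eta, budget):
    """Exponentiated gradient update for the attacker's marginal
    vector: scale each coordinate multiplicatively by exp(eta * grad),
    then rescale so the coordinates sum to the attacker's budget.
    Coordinates are clipped to [0, 1] at the end (simplified: any mass
    lost to clipping is not redistributed)."""
    y = [xi * math.exp(eta * g) for xi, g in zip(x, grad)]
    scale = budget / sum(y)
    return [min(1.0, yi * scale) for yi in y]
```

The multiplicative form is what yields the logarithmic dimension dependence: coordinates with persistently larger gradients accumulate probability mass exponentially faster.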

The second challenge is computing defender best responses. We show that the defender's best response problem is also closely related to a submodular maximization problem. Accordingly, we can compute approximate best responses via a greedy algorithm. Specifically, we show that the defender can obtain a $(1 - \epsilon)$-approximation to the optimal best response when the greedy algorithm is given an expanded budget of $k_d \ln(1/\epsilon)$ channels.

In more detail: fix an attacker mixed strategy, represented by its marginal vector $x$. The defender best response problem is

$$\min_{D : |D| \le k_d} \; \mathbb{E}_{A \sim x}\left[f(D, A)\right].$$

That is, we wish to minimize the number of voters who switch their vote, in expectation over the attacker's randomized strategy. We consider the following equivalent problem

$$\max_{D : |D| \le k_d} \; g(D), \qquad g(D) := \mathbb{E}_{A \sim x}\left[f(\emptyset, A) - f(D, A)\right],$$

i.e., maximizing the number of voters who do not switch as a result of the defender's action. The key observation enabling efficient best response computations is the following:

Lemma 1.

For any attacker mixed strategy $x$, $g$ is a monotone submodular function.

Accordingly, we can compute $(1 - \epsilon)$-optimal best responses by running the greedy algorithm with an expanded budget:

Theorem 4.

Running the greedy algorithm on the function $g$ with a budget of $k_d \ln(1/\epsilon)$ outputs a set $D$ satisfying $g(D) \ge (1 - \epsilon)\, g(D^*)$, where $D^*$ is an optimal solution with budget $k_d$.

Note that running greedy with the original budget would give a $(1 - 1/e)$-approximation for the function $g$. However, a constant factor approximation for maximizing $g$ may not translate into any approximation for minimizing $\mathbb{E}_{A \sim x}[f(D, A)]$ because of the constant term in the definition of $g$. Expanding the budget by a logarithmic factor gives a $(1 - \epsilon)$-approximation with respect to $g$, and when $\epsilon$ is small enough the guarantee can be translated back in terms of $f$.
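The greedy routine invoked here is the standard rule for monotone submodular maximization: repeatedly add the channel with the largest marginal gain until the (possibly expanded) budget is exhausted. A self-contained sketch with our own names:

```python
def greedy_maximize(g, channels, budget):
    """Greedy maximization of a monotone submodular set function g
    over the given channels, subject to a cardinality budget."""
    chosen = set()
    for _ in range(budget):
        best_c, best_gain = None, 0.0
        for c in channels - chosen:
            gain = g(chosen | {c}) - g(chosen)
            if gain > best_gain:
                best_c, best_gain = c, gain
        if best_c is None:  # no remaining channel has positive gain
            break
        chosen.add(best_c)
    return chosen
```

On coverage-style objectives (which is essentially what $g$ is) this achieves the classical $(1 - 1/e)$ guarantee at the original budget, improving toward optimal as the budget is expanded.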

Combining the no-regret guarantee for the attacker and the best response approximation guarantee for the defender yields the following guarantee for the sequence of sets $D_1, \dots, D_T$:

Theorem 5.

After $T$ iterations, let $\bar{D}$ be the uniform distribution on $D_1, \dots, D_T$ output by Algorithm 2. The defender's payoff using $\bar{D}$ is bounded as

Now, if we take $\epsilon$ small enough and run greedy with the correspondingly expanded budget, we obtain that $\bar{D}$ is a 2-approximation Nash equilibrium strategy for the defender up to a vanishing additive loss, using a budget of $k_d \ln(1/\epsilon)$. Each iteration runs in polynomial time; the dominant costs are computing the attacker's gradient, projecting onto the attacker's feasible strategy set, and running greedy for the defender (see the supplement for details).

Preference uncertainty

The previous two sections showed how to compute approximately optimal equilibrium strategies for the defender when both players know the starting preferences of the voters exactly. However, in practice the preferences will be subject to uncertainty, complicating the problem of optimally targeting resources. We now explore three models of preference uncertainty, each of which makes an increasingly conservative assumption about the information available to the defender. In each case, we show how to extend our algorithmic techniques to obtain approximately optimal defender strategies.

Stochastic uncertainty

We start with the least conservative assumption that the joint preference profile of the voters is drawn from a distribution which is known to both players. Each aims to maximize their payoff in expectation over the unknown draw from this distribution. We show that in both the disjoint and nondisjoint settings, the same algorithmic techniques go through with a natural modification to account for uncertainty.

Recall that $r$ denotes the vector of voter preferences. $r$ is now drawn from a known joint distribution $\mathcal{D}$. Let $f(D, A; r)$ denote the expected number of voters who switch to $a$ under preferences $r$. The payoffs are given by $\mathbb{E}_{r \sim \mathcal{D}}[f(D, A; r)]$. Via linearity of expectation, we can write this as

$$\mathbb{E}_{r \sim \mathcal{D}}\left[f(D, A; r)\right] = \sum_{v \in V} \Pr[r_v = 1] \left(\prod_{c \in D : (c,v) \in E} (1 - q_{cv})\right)\left(1 - \prod_{c \in A : (c,v) \in E} (1 - p_{cv})\right).$$

1:Arbitrarily initialize and
2:for  do
3:     Draw uniformly at random from
4:     //TopK returns the set consisting of the indices of the smallest entries of the given vector
5:     
6:     
7:return and
Algorithm 3 FTPL-Asymmetric()
1:Draw samples $r^{(1)}, \dots, r^{(N)}$ i.i.d. from $\mathcal{D}$
2: for ,
3:for  do
4:      = Greedy
5:     for  do
6:         
7:          Update(, )      
8:return
Algorithm 4 OG-Asymmetric

Dependence on the random preferences appears only through the term $\Pr[r_v = 1]$. This has two important consequences. First, we can evaluate the objective and implement the corresponding algorithms using access only to the marginals of the distribution. For many distributions of interest (e.g., product distributions where each voter adopts a preference independently), these will be known explicitly, and they can in general be evaluated to arbitrary precision via random sampling. Second, since the probability term is a nonnegative constant with respect to the strategies $D$ and $A$, the payoffs retain properties such as linearity (in the disjoint case) or submodularity (in the nondisjoint case). Accordingly, we can obtain exactly the same computational guarantees as in the deterministic case, merely substituting the above expression for the payoffs:

Theorem 6.

By substituting $\Pr[r_v = 1]$ for $r_v$ in the definition of the loss vectors, FTPL achieves the same guarantee for the stochastic objective as in Theorem 1. Further, making this substitution in the definition of $f$ and running Algorithm 2 yields the same guarantee as in Theorem 5.

Asymmetric uncertainty

We now consider a case where the true voter preferences are still drawn from a distribution, but the players have access to asymmetric information about the draw. Specifically, the defender knows only the prior distribution, while the attacker has access to the true realized draw. We aim to solve the defender problem:

$$\min_{\pi} \; \mathbb{E}_{r \sim \mathcal{D}}\left[\max_{A} \; \mathbb{E}_{D \sim \pi}\left[f(D, A; r)\right]\right] \qquad (1)$$

Here, the defender minimizes in expectation over the distribution of voter preferences, but the attacker maximizes knowing the actual draw $r$. We show how to compute approximately optimal defender strategies for an arbitrary distribution $\mathcal{D}$, assuming only the ability to draw i.i.d. samples. We first prove a concentration bound for the number of samples required to approximate the true problem over defender mixed strategies with bounded support:

Lemma 2.

Draw sufficiently many samples $r^{(1)}, \dots, r^{(N)}$ i.i.d. from $\mathcal{D}$. With probability at least $1 - \delta$, for every defender mixed strategy with support size at most $T$, the sample-average objective of Problem 1 approximates the true objective to within a small additive error.

We now give generalizations of our earlier algorithms for the disjoint and nondisjoint settings. Each algorithm first draws sufficiently many samples for Lemma 2 to hold. Then, it simulates a separate adversary for each of the samples, mimicking the ability of the adversary to respond to the true draw of $r$. Each adversary runs a separate instance of a no-regret learning algorithm (FTPL for the disjoint case and online gradient for the nondisjoint case). In each iteration, the defender updates according to the expectation over all of the adversaries (since the defender does not know the true $r$). More precisely, in the disjoint case, the defender's loss function in iteration $t$ is given by the average of the loss functions generated by each of the individual adversaries. The defender takes an FTPL step according to this average loss. In the nondisjoint case, the defender computes a greedy best response where the objective is the average influence averted over all of the current adversary strategies. We show the following approximation guarantee for each setting:

Theorem 7.

With appropriate inputs (number of iterations, perturbation scale, and number of samples) for Algorithm 3, the uniform distribution over the defender's history of play $D_1, \dots, D_T$ is an $\epsilon$-equilibrium defender strategy.

Theorem 8.

Run Algorithm 4 with sufficiently many iterations $T$, an appropriate step size, and sufficiently many samples $N$. Let $\bar{D}$ be the uniform distribution on $D_1, \dots, D_T$. With probability at least $1 - \delta$, the defender's payoff using $\bar{D}$ is bounded as

where $\mathrm{OPT}$ is the optimal value for Problem 1.

That is, the defender can obtain the same approximation guarantee in the same number of iterations. Each iteration now takes additional time proportional to the number of sampled adversaries to update them all, while the defender best response problem still requires only one call to greedy as before.

Adversarial uncertainty

1:Arbitrarily initialize and
2:for  do
3:     Draw uniformly at random from
4:     //TopK returns the set consisting of the indices of the smallest entries of the given vector
5:     
6:     
7:     
8:return and
Algorithm 5 FTPL-Adversarial()
1: for
2:for  do
3:      = Greedy(, )
4:     
5:      = Update()
6:      = Update
7:return
Algorithm 6 OG-Adversarial

We now consider the most conservative uncertainty model, in which the voters' preferences are chosen adversarially within some uncertainty set. Specifically, there is a nominal preference profile $\hat{r}$ (e.g., $\hat{r}$ may be an estimate from historical data). We are guaranteed that the true $r$ lies within the uncertainty set $\{r : \|r - \hat{r}\|_1 \le B\}$. That is, the true $r$ may differ in up to $B$ places from our estimate. The defender solves the robust optimization problem

$$\min_{D} \; \max_{r : \|r - \hat{r}\|_1 \le B} \; \max_{A} \; f(D, A; r) \qquad (2)$$

which optimizes against the worst case $r$. Note that Problem 2 essentially places the choice of $r$ under the control of the attacker (formally, we can combine the two max operations). We show that the attacker component of the algorithms when payoffs are common knowledge can be generalized to handle this expanded strategy set. Essentially, the attacker will now have two kinds of actions. First, selecting a channel for a fake news message (as before). Second, directly reaching a given voter by changing their initial preference. We equivalently simulate the second class of actions by adding a new channel $c_v$ for each voter $v$. The new channel has $p_{c_v v} = 1$ and $q_{c_v v} = 0$. That is, the attacker always succeeds in influencing $v$ and can never be stopped by the defender. The attacker's pure strategy set now consists of all choices of at most $k_a$ normal channels and at most $B$ of the new channels.
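The graph augmentation just described is mechanical; here is a minimal sketch (all names are illustrative, not from the paper):

```python
def add_flip_channels(edges, p, q, voters):
    """Simulate adversarial preference perturbation by adding one
    virtual channel per voter. Selecting the virtual channel flips
    that voter's initial preference: its message always succeeds
    (p = 1) and the defender can never block it (q = 0)."""
    for v in voters:
        c = ("flip", v)  # hypothetical id for the virtual channel
        edges[v] = edges.get(v, set()) | {c}
        p[(c, v)] = 1.0
        q[(c, v)] = 0.0
    return edges, p, q
```

After augmentation, the attacker's budget constraint becomes a partition matroid: at most $k_a$ regular channels plus at most $B$ virtual ones.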

Our result from the disjoint case goes through essentially unchanged. Algorithm 5 runs FTPL for both players, as before. The only change is in the linear optimization step for the attacker, which now separately selects the top $k_a$ regular channels and the top $B$ new channels (lines 5 and 6). We have the following guarantee:

Theorem 9.

With appropriate inputs for Algorithm 5, the uniform distribution over the defender's history of play is an $\epsilon$-equilibrium defender strategy.

The main technical difference is in the nondisjoint case, where the attacker’s problem now corresponds to submodular maximization over a partition matroid (since the budget constraint is now split into two categories instead of a single category as before). More general matroid constraints can complicate submodular maximization, e.g., the greedy algorithm no longer obtains the optimal approximation ratio. Fortunately, our use of a continuous relaxation and online gradient ascent for the attacker can be shown to generalize without loss to arbitrary matroid constraints:

Theorem 10.

After $T$ iterations, let $\bar{D}$ be the uniform distribution on $D_1, \dots, D_T$ output by Algorithm 6. The defender's payoff using $\bar{D}$ (with respect to Problem 2) is bounded analogously to Theorem 5.

Experiments

We now examine our algorithms' empirical performance, and what the resulting values reveal about the difficulty of defending elections across different settings. We focus on the nondisjoint setting for two reasons. First, it is the more general case. Second, FTPL is guaranteed to converge to an $\epsilon$-optimal strategy in the disjoint setting, while in the nondisjoint setting it is important to empirically assess our algorithm's approximation quality. Our experiments use the Yahoo webscope dataset [Yahoo 2007]. The dataset logs bids placed by advertisers on a set of phrases. We create instances where the phrases are advertising channels and the accounts are voters. To generate each instance, we sample a random subset of 100 channels and 500 voters. Each propagation probability ($p_{cv}$ and $q_{cv}$) is drawn uniformly at random for each player. Each voter's preference is also drawn uniformly at random. All results are averaged over 30 iterations.

We start with fully known preferences and examine the approximation quality of Algorithm 2. Importantly, we do not increase the defender's budget (i.e., greedy is run with the original budget $k_d$, not the expanded budget of Theorem 4). Empirically, Algorithm 2 performs substantially better than its theoretical guarantee, rendering the bicriteria approximation unnecessary.

We use the mixed strategies that Algorithm 2 outputs to compute upper and lower bounds on the value of the game. The upper bound is the attacker's payoff when best responding to the defender's mixed strategy, while the lower bound is the attacker's payoff when the defender best responds to the attacker's mixed strategy. The value of the game lies between these two bounds, so we use the gap between them as an upper bound on the optimality gap of the defender's strategy. Since finding exact best responses is NP-hard, we compute them with mixed integer programs (see the supplement).

Table 1 shows that Algorithm 2 computes highly accurate defender equilibrium strategies across a range of budget values. We use iterations with . The average optimality gap is always (provably) under 6%; moreover, this value is itself only an upper bound, and the real gap may be smaller. We conclude that Algorithm 2 is highly effective at computing near-optimal defender strategies. Next, Figure 1 examines how the attacker’s payoff varies as a function of the two players’ budgets. Even for a large defender budget, the defender cannot completely erase the attacker’s impact, which is to be expected since the defender’s message is not perfectly effective. However, the defender obtains a large reduction in the attacker’s influence when its budget is high. The empirical payoffs are convex in the defender’s budget, meaning that the defender achieves this reduction with a moderate budget and sees little improvement afterwards. When the attacker’s budget is low, even large defender expenditures have relatively little impact. Intuitively, it is harder for the defender to ensure an intersection between their own strategy and the attacker’s when the attacker only picks a small number of channels to begin with.

Next, we examine the impact of uncertainty. Figure 1 shows the attacker’s payoff under stochastic, asymmetric, and adversarial uncertainty, compared to fully known preferences. Stochastic uncertainty leaves the attacker’s payoff virtually identical; surprisingly, the same holds in the asymmetric case. In the adversarial setting, however, the attacker’s payoff scales linearly with the size of the uncertainty set, indicating that the defender cannot mitigate the impact of such uncertainty. Hence, the defender can benefit substantially from gathering enough information to at least estimate the distribution of voter preferences, even if the attacker still has privileged information.

Conclusion: We introduce and study the problem of a defender mitigating the impact of adversarial misinformation on an election. Across a range of population structures and uncertainty models, we provide polynomial-time approximation algorithms to compute equilibrium defender strategies, which empirically provide near-optimal payoffs. Our results show that the defender can benefit substantially from modest resource investments and from gathering enough information to estimate voter preferences.

Table 1: Upper bound on the optimality gap for Algorithm 2, for budgets of 5, 10, and 20. Average over 30 instances; ± denotes standard deviation.

Figure 1: Top left: attacker’s payoff as the budget constraint for each player varies. Top right: attacker’s payoff with stochastic uncertainty. Bottom left: asymmetric uncertainty. Bottom right: adversarial uncertainty, varying the uncertainty set size.

References

  • [Allcott and Gentzkow2017] Allcott, H., and Gentzkow, M. 2017. Social media and fake news in the 2016 election. Journal of Economic Perspectives 31(2):211–236.
  • [Alon, Gamzu, and Tennenholtz2012] Alon, N.; Gamzu, I.; and Tennenholtz, M. 2012. Optimizing budget allocation among channels and influencers. In WWW, 381–388. ACM.
  • [Baumeister et al.2015] Baumeister, D.; Erdélyi, G.; Erdélyi, O. J.; and Rothe, J. 2015. Complexity of manipulation and bribery in judgment aggregation for uniform premise-based quota rules. Mathematical Social Sciences 76:19–30.
  • [Brader2005] Brader, T. 2005. Striking a responsive chord: How political ads motivate and persuade voters by appealing to emotions. American Journal of Political Science 49(2):388–405.
  • [Bredereck and Elkind2017] Bredereck, R., and Elkind, E. 2017. Manipulating opinion diffusion in social networks. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI).
  • [Bredereck et al.2016] Bredereck, R.; Faliszewski, P.; Niedermeier, R.; and Talmon, N. 2016. Large-scale election campaigns: Combinatorial shift bribery. Journal of Artificial Intelligence Research 55:603–652.
  • [Calinescu et al.2011] Calinescu, G.; Chekuri, C.; Pál, M.; and Vondrák, J. 2011. Maximizing a monotone submodular function subject to a matroid constraint. SIAM Journal on Computing 40(6):1740–1766.
  • [Chekuri, Vondrak, and Zenklusen2010] Chekuri, C.; Vondrak, J.; and Zenklusen, R. 2010. Dependent randomized rounding via exchange properties of combinatorial structures. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, 575–584.
  • [Chen et al.2015] Chen, J.; Faliszewski, P.; Niedermeier, R.; and Talmon, N. 2015. Elections with few voters: Candidate control can be easy. In AAAI, volume 15, 2045–2051.
  • [Chen, Hassani, and Karbasi2018] Chen, L.; Hassani, H.; and Karbasi, A. 2018. Online continuous submodular maximization. In International Conference on Artificial Intelligence and Statistics, 1896–1905.
  • [Chi and Yang2011] Chi, F., and Yang, N. 2011. Twitter adoption in congress. Review of Network Economics 10(1).
  • [DellaVigna and Kaplan2007] DellaVigna, S., and Kaplan, E. 2007. The fox news effect: Media bias and voting. The Quarterly Journal of Economics 122(3):1187–1234.
  • [Erdélyi, Hemaspaandra, and Hemaspaandra2015] Erdélyi, G.; Hemaspaandra, E.; and Hemaspaandra, L. A. 2015. More natural models of electoral control by partition. In International Conference on Algorithmic Decision Theory, 396–413. Springer.
  • [Erdélyi, Reger, and Yang2017] Erdélyi, G.; Reger, C.; and Yang, Y. 2017. The complexity of bribery and control in group identification. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, 1142–1150. International Foundation for Autonomous Agents and Multiagent Systems.
  • [Faliszewski et al.2009] Faliszewski, P.; Hemaspaandra, E.; Hemaspaandra, L. A.; and Rothe, J. 2009. Llull and Copeland voting computationally resist bribery and constructive control. Journal of Artificial Intelligence Research 35:275–341.
  • [Faliszewski et al.2018] Faliszewski, P.; Gonen, R.; Koutecký, M.; and Talmon, N. 2018. Opinion diffusion and campaigning on society graphs. In IJCAI, 219–225.
  • [Faliszewski, Hemaspaandra, and Hemaspaandra2011] Faliszewski, P.; Hemaspaandra, E.; and Hemaspaandra, L. A. 2011. Multimode control attacks on elections. Journal of Artificial Intelligence Research 40(1):305–351.
  • [Gerber, Karlan, and Bergan2009] Gerber, A. S.; Karlan, D.; and Bergan, D. 2009. Does the media matter? a field experiment measuring the effect of newspapers on voting behavior and political opinions. American Economic Journal: Applied Economics 1(2):35–52.
  • [Hassani, Soltanolkotabi, and Karbasi2017] Hassani, H.; Soltanolkotabi, M.; and Karbasi, A. 2017. Gradient Methods for Submodular Maximization. In Advances in Neural Information Processing Systems 30, 5843–5853.
  • [Hazan and others2016] Hazan, E., et al. 2016. Introduction to online convex optimization. Foundations and Trends in Optimization 2(3-4):157–325.
  • [He et al.2012] He, X.; Song, G.; Chen, W.; and Jiang, Q. 2012. Influence blocking maximization in social networks under the competitive linear threshold model. In Proceedings of the 2012 SIAM International Conference on Data Mining, 463–474. SIAM.
  • [Holcomb, Gottfried, and Mitchell2013] Holcomb, J.; Gottfried, J.; and Mitchell, A. 2013. News use across social media platforms. Pew Research Journalism Project.
  • [Kalai and Vempala2005] Kalai, A., and Vempala, S. 2005. Efficient algorithms for online decision problems. Journal of Computer and System Sciences 71(3):291–307.
  • [Karimi et al.2017] Karimi, M.; Lucic, M.; Hassani, H.; and Krause, A. 2017. Stochastic Submodular Maximization: The Case of Coverage Functions. In Advances in Neural Information Processing Systems 30, 6856–6866.
  • [Li, Jiang, and Wu2017] Li, Y.; Jiang, Y.; and Wu, W. 2017. Protecting elections with minimal resource consumption. In AAMAS.
  • [Liu et al.2009] Liu, H.; Feng, H.; Zhu, D.; and Luan, J. 2009. Parameterized computational complexity of control problems in voting systems. Theoretical Computer Science 410(27-29):2746–2753.
  • [Loreggia et al.2015] Loreggia, A.; Narodytska, N.; Rossi, F.; Venable, K. B.; and Walsh, T. 2015. Controlling elections by replacing candidates or votes. In AAMAS.
  • [Miyauchi et al.2015] Miyauchi, A.; Iwamasa, Y.; Fukunaga, T.; and Kakimura, N. 2015. Threshold influence model for allocating advertising budgets. In ICML, 1395–1404.
  • [Pennycook, Cannon, and Rand2017] Pennycook, G.; Cannon, T. D.; and Rand, D. G. 2017. Prior exposure increases perceived accuracy of fake news.
  • [Soma et al.2014] Soma, T.; Kakimura, N.; Inaba, K.; and Kawarabayashi, K.-i. 2014. Optimal budget allocation: Theoretical guarantee and efficient algorithm. In ICML, 351–359.
  • [Staib and Jegelka2017] Staib, M., and Jegelka, S. 2017. Robust budget allocation via continuous submodular functions. In ICML.
  • [Wattal et al.2010] Wattal, S.; Schuff, D.; Mandviwalla, M.; and Williams, C. B. 2010. Web 2.0 and politics: the 2008 US presidential election and an e-politics research agenda. MIS Quarterly 669–688.
  • [Wilder and Vorobeychik2018] Wilder, B., and Vorobeychik, Y. 2018. Controlling elections through social influence. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 265–273. International Foundation for Autonomous Agents and Multiagent Systems.
  • [Wilder2018] Wilder, B. 2018. Equilibrium computation and robust optimization in zero sum games with submodular structure. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
  • [Yahoo2007] Yahoo. 2007. Yahoo! webscope dataset ydata-ysm-advertiser-bids-v1 0. http://research.yahoo.com/Academic_Relations.
  • [Yang, Shrestha, and Guo2016] Yang, Y.; Shrestha, Y. R.; and Guo, J. 2016. How hard is bribery with distance restrictions? In ECAI, 363–371.
  • [Yin et al.2018] Yin, Y.; An, B.; Hazon, N.; and Vorobeychik, Y. 2018. Optimal defense against election control by deleting voter groups. Artificial Intelligence 259:32–51.

Appendix A Hardness result

We reduce from maximum coverage to the defender equilibrium computation problem. Suppose that we are given a family of sets over a universe of elements; the objective of maximum coverage is to select a fixed number of sets maximizing the size of their union. We create an instance of our game as follows. Each set corresponds to a channel and each element to a voter, and each channel has an edge to every voter contained in the corresponding set. The attacker’s budget allows it to select every channel, while the defender’s budget matches the number of sets to be chosen in the coverage instance. Regardless of what the defender plays, an equilibrium strategy for the attacker is the pure strategy which selects all of the channels. Hence, the defender’s equilibrium computation problem is identical to finding the pure strategy which maximizes the number of voters reached, since the attacker always reaches every voter and every voter counts towards the objective. This is precisely the maximum coverage problem. Since it is well known that maximum coverage is NP-hard to approximate to within a factor better than $1 - 1/e$, the theorem follows.
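For reference, the source problem of the reduction, maximum coverage, admits the classic greedy $(1 - 1/e)$-approximation. A minimal sketch on illustrative data (this is the standard textbook algorithm, not the paper’s defender algorithm):

```python
def greedy_max_coverage(sets, k):
    """Pick up to k sets greedily by marginal coverage.
    Achieves a (1 - 1/e) approximation to maximum coverage."""
    covered, chosen = set(), []
    for _ in range(k):
        # Choose the set covering the most not-yet-covered elements.
        best = max(sets, key=lambda s: len(set(s) - covered), default=None)
        if best is None or not set(best) - covered:
            break  # no remaining set adds new elements
        chosen.append(best)
        covered |= set(best)
    return chosen, covered
```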

Appendix B Analysis of FTPL

Theorem 11.

After T iterations of FTPL, the uniform distribution over each player’s history forms an ε-equilibrium, where ε shrinks as T grows.

Proof.

FTPL guarantees that after iterations, the defender’s reward is bounded compared to the optimum as

By adding and subtracting the constant term in the utility function and dividing by , we get

Applying the same reasoning from the perspective of the attacker yields

Let denote the value of the game and . We have

This implies that

and so

In other words, the attacker’s empirical strategy guarantees the attacker at least the value of the game, up to the error term above, against any pure strategy for the defender. This implies that it is an approximate equilibrium strategy for the attacker. The same line of reasoning applied to the defender completes the argument.
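The FTPL dynamics analyzed above can be made concrete for a small zero-sum matrix game. The sketch below is illustrative: the uniform perturbation distribution, its scale, and the example game are assumptions, not the exact choices in the analysis; the key point is that each player best-responds to the opponent’s history plus noise, and the uniform distributions over the two histories approximate an equilibrium.

```python
import random

def ftpl_zero_sum(payoff, T, eta=0.1, seed=0):
    """Run Follow-the-Perturbed-Leader for both players of a zero-sum game.

    payoff[d][a] is the defender's utility; the attacker receives its
    negation. Each round, each player plays the pure strategy that is best
    against the opponent's cumulative history plus a fresh random
    perturbation. Returns each player's empirical (uniform-over-history)
    mixed strategy.
    """
    n_d, n_a = len(payoff), len(payoff[0])
    cum_d = [0.0] * n_d  # cumulative utility of each defender pure strategy
    cum_a = [0.0] * n_a  # cumulative utility of each attacker pure strategy
    hist_d, hist_a = [0] * n_d, [0] * n_a
    rng = random.Random(seed)
    for _ in range(T):
        d = max(range(n_d), key=lambda i: cum_d[i] + rng.uniform(0, 1 / eta))
        a = max(range(n_a), key=lambda j: cum_a[j] + rng.uniform(0, 1 / eta))
        hist_d[d] += 1
        hist_a[a] += 1
        for i in range(n_d):
            cum_d[i] += payoff[i][a]
        for j in range(n_a):
            cum_a[j] -= payoff[d][j]
    return [c / T for c in hist_d], [c / T for c in hist_a]
```

On matching pennies, both empirical strategies drift toward the uniform equilibrium as T grows.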

Appendix C Regret guarantee for online mirror ascent

We analyze the general online mirror ascent algorithm. Our analysis draws heavily on the analysis of online mirror descent for convex functions in [Hazan and others2016], to which we refer the reader for additional background. The Bregman divergence with respect to a differentiable, strictly convex function $\phi$ is defined as $D_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y), x - y \rangle$.

Let $\|\cdot\|$ be the norm induced by the Bregman divergence and $\|\cdot\|_*$ be the corresponding dual norm. Let $G$ be an upper bound on the dual norm of the gradients and $D$ an upper bound on the Bregman divergence between feasible points. We have the following general guarantee:

Theorem 12.

Let be a sequence of DR-submodular functions and . If we set then

where .

Proof.

We start out by relating regret to an intermediate quantity at each step:

Lemma 3.

Proof.

Define , . Via Equation 7.2 of [Hassani, Soltanolkotabi, and Karbasi2017], we have that

and so it suffices to bound . As a first step, we show

Lemma 4.

For any ,

Proof.

By induction on . For the base case, we have that and so . Now assume for some that

Now we will prove that the statement holds for . Since we have

where the third line uses the induction hypothesis for .

Accordingly we have

which concludes the proof of Lemma 3. ∎

We now proceed to prove the main theorem. Define . Using the definition of the Bregman divergence, we have that

The inequality uses the fact that minimizes over