# Reasoning in Bayesian Opinion Exchange Networks Is PSPACE-Hard

We study the Bayesian model of opinion exchange of fully rational agents arranged on a network. In this model, the agents receive private signals that are indicative of an unknown state of the world. Then, they repeatedly announce the state of the world they consider most likely to their neighbors, at the same time updating their beliefs based on their neighbors' announcements. This model has been extensively studied in economics since the work of Aumann (1976) and Geanakoplos and Polemarchakis (1982). It is known that the agents eventually agree with high probability on any network. It is often argued that the computations needed by agents in this model are difficult, but prior to our results there was no rigorous work showing this hardness. We show that it is PSPACE-hard for the agents to compute their actions in this model. Furthermore, we show that it is equally difficult even to approximate an agent's posterior: It is PSPACE-hard to distinguish between the posterior being almost entirely concentrated on one state of the world or the other.


## 1 Introduction

#### Background

The problem of dynamic opinion exchange is an important field of study in economics, with its roots reaching as far back as Condorcet's jury theorem and, in the Bayesian context, Aumann's agreement theorem. Economists use different opinion exchange models as inspiration for explaining interactions and decisions of market participants. More generally, there is extensive interest in studying how social agents exchange information, form opinions and use them as a basis for making decisions. For a more comprehensive introduction to the subject we refer to surveys addressed to economists [AO11] and mathematicians [MT17].

Many models have been proposed and researched, with the properties studied including, among others, whether the agents converge to the same opinion, the rate of such convergence, and whether the consensus decision is optimal with high probability (the last property is called *learning*). Two interesting axes along which the models differ are the rules for updating agents' opinions (e.g., fully rational or heuristic) and the presence of network structure.

For example, in settings where the updates are assumed to be rational (Bayesian), there is extensive study of models where the agents act in sequence (see, e.g., [Ban92, BHW92, SS00, ADLO11] for a non-exhaustive selection of works that consider phenomena of herding and information cascades), as well as models with agents arranged in a network and repeatedly exchanging opinions as time progresses (see some references below). In this work we are interested in the network models, arguably becoming more and more relevant given the ubiquity of networks in modern society.

On the other hand, similar questions are studied for models with so-called bounded rationality, where the Bayesian updates are replaced with simpler, heuristic rules. Some well-known examples include the DeGroot model [DeG74, GJ10], the voter model [CS73, HL75] and other related variants [BG98, AOP10].

One commonly accepted reason for studying bounded rationality is that, especially in the network case, Bayesian updates become so complicated as to make fully rational behavior intractable, and therefore unrealistic. However, we are not aware of previous theoretical evidence or formalization of that assertion. Together with another paper of the same authors addressed to economists [HJMR18], we consider this work as a development in that direction.

More precisely, we show that computing an agent's opinion in one of the most important and studied Bayesian network models is PSPACE-hard. Furthermore, it is PSPACE-hard even to approximate the rational opinion in any meaningful way. This improves on our NP-hardness result for the same problem shown in [HJMR18].

#### Our model and results

We are concerned with a certain Bayesian model of opinion exchange and reaching agreement on a network. We are going to call it the (Bayesian) binary action model. The idea is that there is a network of honest, fully rational agents trying to learn a binary piece of information, e.g., will the price of an asset go up or down, or which political party’s policies are more beneficial to the society. We call this information the state of the world. Initially, each agent receives an independent piece of information (a private signal) that is correlated with the state of the world. According to the principle that “actions speak louder than words”, at every time step the agents reveal to their neighbors which of the two possible states they consider more likely. On the other hand, we assume that the agents are honest truth-seekers and always truthfully reveal their preferred state: According to economic terminology they act myopically rather than strategically.

More specifically, we assume that the state of the world is encoded in a random variable

(standing for True and False), distributed according to a uniform prior, shared by all agents. A set of Bayesian agents arranged on a directed graph performs a sequence of actions at discrete times Before the process starts, each agent receives a random private signal . The collection of random variables is independent conditioned on . The idea is that indicates a piece of evidence for and is evidence favoring .

At each time t, the agents simultaneously broadcast actions to their neighbors in G. The action A(u, t) is the best guess for the state of the world by agent u at time t: Letting μ(u, t) be the respective Bayesian posterior probability that θ = T, the action is A(u, t) = T if and only if μ(u, t) ≥ 1/2. In subsequent steps, agents update their posteriors based on their neighbors' actions (we assume that everyone is rational, and that this fact and the description of the model are common knowledge) and broadcast updated actions. The process continues indefinitely.

We are interested in the computational resources required for the agents to participate in the process described above. That is, we consider the complexity of computing the action A(u, t) given the private signal S(u) and the history of the actions observed by u, where the actions observed are those of the agents in N(u), the set of neighbors of u in G. Our main result is that it is worst-case PSPACE-hard for an agent to distinguish between the cases μ(u, t) = 1 − exp(−Θ(N)) and μ(u, t) = exp(−Θ(N)), where N is a naturally defined size of the problem. As a consequence, it is PSPACE-hard to compute the action A(u, t).

Note the hardness of approximation aspect of our result: A priori, one could imagine a reduction where it is difficult to compute the action only because the Bayesian posterior is close to the threshold 1/2. However, we demonstrate that it is already hard to distinguish between situations where the posterior is concentrated on one of the extreme values: 1 − exp(−Θ(N)) (and therefore almost certainly A(u, t) = T) or exp(−Θ(N)) (and therefore A(u, t) = F).

Our hardness results carry over to other models. In particular, they extend to the case where the signals are continuous, where the prior on the state of the world is not uniform, etc. We also note that we may assume that the agents are never tied or close to tied in their posteriors; see Remark 13 for more details.

A good deal is known about the model we are considering. From a paper by Gale and Kariv [GK03] (with an error corrected by Rosenberg, Solan and Vieille [RSV09]; see also similar analyses of earlier, related models in [BV82, TA84]) it follows that if the network is strongly connected, then the agents eventually converge to the same action (or they become indifferent). The work of Geanakoplos [Gea94] implies that this agreement is reached in a number of time steps bounded in terms of the network size. Furthermore, Mossel, Sly and Tamuz [MST14] showed that in large undirected networks with non-atomic signals, learning occurs: The common agreed action is equal to the state of the world θ, except with probability that goes to zero with the number of agents. A good deal remains open, too. For example, it is not known if the bound on the agreement speed can be improved. In this context it is also interesting to note the results of [MOT16], who consider a special model with Gaussian structure and revealed beliefs. In contrast to the results presented here, it is shown that in this case the agents' computations are efficient (polynomial time) and convergence is fast.

#### Proof idea

Our proof is by direct reduction from the canonical PSPACE-complete language of true quantified Boolean formulas (TQBF). It maps true formulas onto networks where one of the agents' posteriors is almost entirely concentrated on θ = T, and false formulas onto networks where the posterior is concentrated on θ = F. The reduction and the proof are by induction on the number of quantifier alternations in the Boolean formula. The base case of the induction provides such a mapping for satisfiable and unsatisfiable CNF instances.

The basic idea of the reduction is to map variables and clauses of the Boolean formula onto agents or small sub-networks of agents (gadgets) in the Bayesian network. We use other gadgets to implement some useful procedures, like counting or logical operations. One challenging aspect of the reduction is that, since each such operation is implemented by Bayesian agents by broadcasting their opinions, these “measurements” themselves might shift the posterior belief of the “observer” agent. Therefore, we need to carefully compensate those unintended effects at every step.

Another interesting technical aspect of the proof is related to its recursive nature. When we establish hardness of approximation for formulas with a given number of quantifier alternations, it means that we can place an agent in our network such that the agent will be solving a computational problem that is hard for the corresponding number of alternations. We then use this agent, together with another gadget that modifies relative likelihoods of different private signal configurations, to amplify the hardness to one more alternation.

#### Related literature

One intriguing aspect of our result is a connection to Aumann's agreement theorem. There is a well-known discrepancy (see [CH02] for a distinctive take) between reality, where we commonly observe (presumably) honest, well-meaning people "agreeing to disagree", and Aumann's theorem, stating that this cannot happen for Bayesian agents with common priors and knowledge, i.e., the agents will always end up with the same estimate of the state of the world after exchanging all relevant information. Our result hints at a computational explanation, suggesting that reasonable agreement protocols might be intractable in the presence of network structure. This is notwithstanding some positive computational results by Hanson [Han03] and Aaronson [Aar05], which focus on two agents and come with their own (perhaps unavoidable) caveats.

We find it interesting that the agents' computations in the binary action model turn out to be not just hard, but PSPACE-hard. The PSPACE-hardness of partially observed Markov decision processes (POMDPs) established by Papadimitriou and Tsitsiklis [PT87] seems to be a result of a similar kind. On the other hand, there are clear differences: We do not see how to implement our model as a POMDP, and embedding a TQBF instance in a POMDP looks more straightforward than what happens in our reduction. Furthermore, and contrary to [PT87], we establish hardness of approximation. We are not aware of many other PSPACE-hardness of approximation proofs, especially in recent years. Exceptions are results obtained via versions of the PCP theorem [CFLS95, CFLS97] and a few other reductions [MHR94, HMS94, Jon97, Jon99] that concern, among others, some problems on hierarchically generated graphs and an AI-motivated problem of planning in propositional logic.

We note that there are some results on hardness of Bayesian reasoning in static networks in the AI and cognitive science context (see [Kwi18] and its references), but this setting seems quite different from dynamic opinion exchange models.

Finally, we observe that a natural exhaustive search algorithm for computing the action A(u, t) in the binary action model requires exponential space (see [HJMR18] for a description) and that we are not aware of a faster, general method (but again, see [MOT16] for a polynomial time algorithm in a variant with Gaussian signals).
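To make the exponential blow-up concrete, here is a minimal sketch (in Python, with all names my own, not from the paper) of the natural exhaustive-search approach: enumerate every private-signal configuration, simulate all agents' actions jointly, and compute each posterior as a conditional probability over the configurations producing the same view. Both time and space grow exponentially with the number of signals.

```python
import itertools

def simulate(graph, p_T, p_F, T_max):
    """Exhaustive-search computation of the binary action model.
    graph: dict u -> list of agents that u observes (out-neighbors).
    p_T / p_F: dict u -> Pr[S(u)=1 | theta=T] / Pr[S(u)=1 | theta=F].
    Returns (actions, beliefs): actions[config][u] is u's action
    sequence, beliefs[(u, config, t)] = mu(u, t)."""
    agents = sorted(graph)
    sig = sorted(p_T)  # agents that receive informative signals
    configs = list(itertools.product([0, 1], repeat=len(sig)))

    def weight(c, theta):
        # Pr[theta] * Pr[signals = c | theta]; signals cond. independent
        w = 0.5
        for a, s in zip(sig, c):
            p = p_T[a] if theta == "T" else p_F[a]
            w *= p if s == 1 else 1.0 - p
        return w

    actions = {c: {u: [] for u in agents} for c in configs}
    beliefs = {}
    for t in range(T_max):
        step = {}
        for c in configs:
            for u in agents:
                # u's view: own signal plus neighbors' past actions
                def view(c2):
                    own = c2[sig.index(u)] if u in p_T else None
                    return own, tuple(tuple(actions[c2][v]) for v in graph[u])
                me = view(c)
                num = sum(weight(c2, "T") for c2 in configs if view(c2) == me)
                den = num + sum(weight(c2, "F") for c2 in configs
                                if view(c2) == me)
                mu = num / den
                beliefs[(u, c, t)] = mu
                step[(c, u)] = "T" if mu >= 0.5 else "F"  # ties -> T
        for (c, u), a in step.items():
            actions[c][u].append(a)
    return actions, beliefs
```

For instance, for two agents observing each other with pT = 0.8 and pF = 0.2, the configuration (1, 1) makes both broadcast T at times 0 and 1, with the time-1 belief equal to 0.64/0.68.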

#### Organization of the paper

In Section 2 we give a full description of our model, state the results precisely and give some remarks about the proofs. Section 3 contains the proof of NP-hardness, which is then used in Section 4 in the proof of PSPACE-hardness. Section 5 modifies the proof to use only a fixed number of private signal distributions. Section 6 provides a proof of #P-hardness in a related revealed belief model. Finally, Section 7 contains some suggestions for future work.

## 2 The model and our results

In Section 2.1 we restate the binary action model in more precise terms and introduce some notation. In Section 2.2 we discuss our results in this model. In Section 2.3 we define the revealed belief model and state a #P-hardness result for it. Finally, in Section 2.4 we explain our main proof ideas.

### 2.1 Binary action model

We consider the binary action model of Bayesian opinion exchange on a network. There is a directed graph G = (V, E), the vertices of which we call agents. The world has a hidden binary state θ ∈ {T, F} with uniform prior distribution. We will analyze a process with discrete time steps t = 0, 1, 2, … At time t = 0 each agent u receives a private signal S(u) ∈ {0, 1}. The signals are random variables with distributions that are independent across agents after conditioning on θ. Accordingly, the distribution of S(u) is determined by its signal probabilities

 pθ0(u) := Pr[S(u) = 1 ∣ θ = θ0],   θ0 ∈ {T, F}.

Equivalently, it is determined by its (log)-likelihoods

 ℓb(u) = ln( Pr[S(u) = b ∣ θ = T] / Pr[S(u) = b ∣ θ = F] ) = ln( Pr[θ = T ∣ S(u) = b] / Pr[θ = F ∣ S(u) = b] ),   b ∈ {0, 1}.

Note that there is a one-to-one correspondence between the probabilities pT(u), pF(u) (taken in (0, 1)) and the likelihoods ℓ0(u), ℓ1(u). We will always assume that a signal S(u) = 1 is evidence towards θ = T and vice versa; this is equivalent to saying that pT(u) ≥ pF(u) or, equivalently, that ℓ1(u) ≥ 0 ≥ ℓ0(u). We allow some agents to not receive private signals: This can be "simulated" by giving them non-informative signals with pT(u) = pF(u). We will refer to all signal probabilities taken together as the signal structure of the Bayesian network. A specific pattern of signals of the agents that receive informative signals will be called a signal configuration.
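As a quick illustration of this correspondence (the function name is mine), converting signal probabilities into log-likelihoods is a two-line computation: an informative signal structure with pT(u) > pF(u) yields ℓ1(u) > 0 > ℓ0(u), and a non-informative one yields ℓ0(u) = ℓ1(u) = 0.

```python
import math

def likelihoods(p_T: float, p_F: float) -> tuple[float, float]:
    """Given p_T = Pr[S(u)=1 | theta=T] and p_F = Pr[S(u)=1 | theta=F],
    return the log-likelihoods (l0, l1) of the two signal values."""
    l1 = math.log(p_T / p_F)                   # evidence carried by S(u) = 1
    l0 = math.log((1.0 - p_T) / (1.0 - p_F))   # evidence carried by S(u) = 0
    return l0, l1
```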

We assume that all this structural information is publicly known, but the agents do not have direct access to θ or to others' private signals. Agents are presumed to be rational, to know that everyone else is rational, to know that everyone knows, etc. (common knowledge of rationality). At each time t, we define μ(u, t) to be the belief of agent u: The conditional probability that θ = T given everything that u observed at times 0, …, t. More precisely, letting N(u) be the (out)neighbors of u in G and defining

 H(u, t) := { A(v, t′) : v ∈ N(u), t′ < t }

as the observation history of agent u, we let

 μ(u, t) := Pr[θ = T ∣ S(u), H(u, t)].

Accordingly, if (u, v) ∈ E, we will say that agent u observes agent v.

Agent u broadcasts to its in-neighbors the action A(u, t), which is the state of the world that u considers more likely according to μ(u, t) (assume that ties are broken in an arbitrary deterministic manner, say, in favor of T). Then, the protocol proceeds to time step t + 1 and the agents update their beliefs and broadcast updated actions. The process continues indefinitely. Note that the beliefs and actions become deterministic once the private signals are fixed.

The first two time steps of the process are relatively easy to understand: At time t = 0 an agent broadcasts T if and only if ℓS(u)(u) ≥ 0, and the belief μ(u, 0) can be easily computed from the likelihood ℓS(u)(u). At time t = 1, an agent broadcasts T if and only if

 ℓS(u)(u) + ∑_{v ∈ N(u)} ℓS(v)(v) > 0, (1)

where the neighbors' private signals S(v) can be inferred from their observed actions at time 0. The sum in (1) determines the likelihood associated with the belief μ(u, 1). However, at later times the actions of different neighbors are no longer independent, and accounting for those dependencies seems difficult.
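A hedged sketch of these first two easy steps (all names and the data layout are mine): each agent is represented by its likelihood pair and signal, and the time-1 action is the sign of the sum in (1), with ties broken in favor of T.

```python
def action_time0(l0: float, l1: float, s: int) -> str:
    """Time-0 broadcast: T iff the log-likelihood of the agent's own
    signal is non-negative (ties broken in favor of T)."""
    return "T" if (l1 if s == 1 else l0) >= 0 else "F"

def action_time1(own, neighbors) -> str:
    """Time-1 broadcast via the sum (1).  Each agent is a triple
    (l0, l1, s); the neighbor signals s are inferable from their
    time-0 actions because all likelihoods are public."""
    agents = [own] + list(neighbors)
    total = sum(l1 if s == 1 else l0 for (l0, l1, s) in agents)
    return "T" if total >= 0 else "F"
```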

Let Π be a Bayesian network, i.e., a directed graph together with the signal structure. We do not commit to any particular representation of the probabilities of private signals: Our reduction remains valid for any reasonable choice. We are interested in the hardness of computing the actions that the agents need to broadcast. More precisely, we consider the complexity of computing the function

 BINARY-ACTION(Π, t, u, S(u), H(u, t)) := A(u, t)

that computes the action given the Bayesian network Π, time t, agent u, its private signal S(u) and observation history H(u, t). Relatedly, we will consider computing the belief

 BINARY-BELIEF(Π, t, u, S(u), H(u, t)) := μ(u, t).

Note that computing A(u, t) is equivalent to distinguishing between μ(u, t) ≥ 1/2 and μ(u, t) < 1/2.

### 2.2 Our results

Our first result implies that computing A(u, 2), the action at time t = 2, is NP-hard. We present it as a standalone theorem, since the reduction and its analysis are used as a building block in the more complicated PSPACE reduction.

###### Theorem 1.

There exists an efficient reduction from a 3-CNF formula φ with n variables and m clauses to an input of BINARY-BELIEF with designated agent u such that:

• The size (number of agents and edges) of the Bayesian graph is polynomial in n + m; we denote it by N. Time is set to t = 2 and agent u does not receive a private signal.

• All probabilities of private signals are efficiently computable real numbers satisfying

 exp(−O(N)) ≤ pθ0(v) ≤ 1 − exp(−O(N)),   v ∈ V, θ0 ∈ {T, F}.
• If φ is satisfiable, then the posterior satisfies

 μ(u, 2) = 1 − exp(−Θ(N)).
• If φ is not satisfiable, then we have

 μ(u, 2) = exp(−Θ(N)).
###### Corollary 2.

Both distinguishing between μ(u, 2) = 1 − exp(−Θ(N)) and μ(u, 2) = exp(−Θ(N)), and computing A(u, 2), are NP-hard.

Our main result improves Theorem 1 to PSPACE-hardness. It is a direct reduction from the canonical PSPACE-complete language of true quantified Boolean formulas (TQBF).

###### Theorem 3.

There exists an efficient reduction from a formula

 Φ = QK xK ⋯ ∃ x1 φ(xK, …, x1),

where φ is a 3-CNF formula with n variables and m clauses, there are K variable blocks xK, …, x1 with alternating quantifiers and the last quantifier is existential, to an input of BINARY-BELIEF with designated agent u such that:

• The number of agents in the Bayesian graph and the time t are polynomial in the size N of Φ, and agent u does not receive a private signal.

• All probabilities of private signals are efficiently computable real numbers satisfying

 exp(−O(N)) ≤ pθ0(v) ≤ 1 − exp(−O(N)),   v ∈ V, θ0 ∈ {T, F}. (2)
• If Φ is true, then μ(u, t) = 1 − exp(−Θ(N)). If Φ is false, then μ(u, t) = exp(−Θ(N)).

###### Corollary 4.

Both distinguishing between μ(u, t) = 1 − exp(−Θ(N)) and μ(u, t) = exp(−Θ(N)), and computing A(u, t), are PSPACE-hard.

###### Remark 5.

Note that the statement of Theorem 3 immediately gives PSPACE-hardness of approximating the belief μ(u, t), both additively and multiplicatively.

###### Remark 6.

For ease of exposition we define networks in the reductions to be directed, but due to additional structure that we impose (see paragraph “Network structure and significant times” in Section 3) it is easy to see that they can be assumed to be undirected. This is relevant insofar as a strong form of learning occurs only on undirected graphs (see [MST14] for details).

One possible objection to Theorem 3 is that it uses signal distributions with probabilities exponentially close to zero and one. We do not think this is a significant issue, and allowing such probabilities helps avoid some technicalities. Nevertheless, in Section 5 we prove a version of Theorem 3 where all private signals come from a fixed family of, say, at most fifty distributions. This comes at the cost of a (non-asymptotic) increase in the size of the graph.

###### Theorem 7.

The reduction from Theorem 3 can be modified such that all private signals come from a fixed family of at most fifty distributions.

###### Remark 8.

It is possible to modify our proofs to give hardness of distinguishing between μ(u, t) = 1 − exp(−Θ(n^K)) and μ(u, t) = exp(−Θ(n^K)) for any constant K (recall that n is the number of variables in the formula φ). This comes at the cost of allowing signal probabilities in the range

 exp(−O(n^K)) ≤ pθ0(v) ≤ 1 − exp(−O(n^K))

or, in the bounded signal case, of increasing the network size accordingly. Consequently, in the former case we get hardness of approximation up to a factor exp(Θ(M^c)) for any constant c, where M is the number of agents.

### 2.3 Revealed beliefs

In a natural variant of our model the agents act in exactly the same manner, except that they reveal their full beliefs rather than just estimates of the state θ. Accordingly, we call it the revealed belief model. We suspect that the binary action and revealed belief models have similar computational powers. Furthermore, we conjecture that if the agents broadcast their beliefs rounded to a (fixed in advance) polynomial number of significant digits, then our techniques can be extended to establish a similar PSPACE-hardness result.

However, if one instead assumes that the beliefs are broadcast up to arbitrary precision, our proof fails for a rather annoying reason: When implementing the step from K to K + 1 quantifier alternations in the binary action model, if a formula has no satisfying assignments, we can exactly compute the belief of the observer agent. However, in case the formula has a satisfying assignment, we can compute the belief only with high, but imperfect, precision. The reason is that the exact value of the belief depends on the number of satisfying assignments. This imperfection can be "rounded away" if the agents output a discrete guess for θ, but we do not know how to handle it if the beliefs are broadcast exactly.

Nevertheless, in Section 6 we present a #P-hardness proof in the revealed belief model. The proof is by reduction from counting satisfying assignments of a CNF formula. However, since the differences in the posterior corresponding to different numbers of satisfying assignments are small, it is not clear if they can be amplified, and consequently we do not demonstrate hardness of approximation (similarly as in [PT87]). For ease of exposition we introduce an additional relaxation to the model by allowing some agents to receive ternary private signals.

###### Theorem 9.

Assume the revealed belief model with beliefs transmitted up to arbitrary precision and call the respective computational problem REVEALED-BELIEF. Additionally, assume that some agents receive ternary signals S(u) ∈ {0, 1, 2}.

There exists an efficient reduction that maps a CNF formula φ with N variables, m clauses and A satisfying assignments to an instance of REVEALED-BELIEF with designated agent u such that:

• The Bayesian network has size polynomial in N + m, time is set to t = 2 and agent u does not receive a private signal.

• All private signal probabilities come from a fixed family of at most ten distributions.

• The likelihood ratio of u at time 2 satisfies

 (A/2^N)(1 − 1/4^N) ≤ μ(u, 2)/(1 − μ(u, 2)) ≤ (A/2^N)(1 + 1/4^N).

In particular, rounding this likelihood ratio to the nearest multiple of 1/2^N yields A/2^N and allows to recover A.
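A small exact-arithmetic sketch of this recovery step (names mine; it assumes, as in the bound above, a multiplicative error of at most 1/4^N, which since A ≤ 2^N keeps the absolute error below half a grid step of 1/2^N once N ≥ 2):

```python
from fractions import Fraction

def recover_count(ratio: Fraction, N: int) -> int:
    """Round a likelihood ratio of the form (A / 2^N) * (1 + err),
    |err| <= 4^(-N), to the nearest multiple of 2^(-N) and return A.
    Since A <= 2^N, the absolute error is at most 4^(-N) < 2^(-N-1)
    for N >= 2, so the rounding is always correct."""
    return int(ratio * 2 ** N + Fraction(1, 2))  # nearest-integer rounding
```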

### 2.4 Main proof ideas

The NP-hardness proof (in Section 3) uses an analysis of a composition of several gadgets. We will think of the designated agent u as the "observer". The Bayesian network features gadgets that represent variables and clauses. The private signals in variable gadgets correspond to assignments of the formula φ. Furthermore, there is an "evaluation agent" that interacts with all clause gadgets. We use more gadgets that "implement" counting to ensure that what the observer sees is consistent with one of two possible kinds of signal configurations:

• The evaluation agent received signal 1 and the signals of the variable agents correspond to an arbitrary assignment.

• The evaluation agent received signal 0 and the signals of the variable agents correspond to a satisfying assignment.

Then, we use another gadget to "amplify" the information that is conveyed about the state of the world by the evaluation agent's signal. If φ has no satisfying assignment, then only the first kind of configuration is possible, and this becomes amplified to a near-certainty that θ = F (for technical reasons this is the opposite conclusion than the one suggested by that signal alone). On the other hand, we design the signal structure such that even a single satisfying assignment tips the scales and amplifies the posterior towards θ = T with high probability (whp).

We note that one technical challenge in executing this plan is that some of our gadgets are designed to “measure” (e.g., count) certain properties of the network, but these measurements use auxiliary agents with their own private signals, affecting Bayesian posteriors. We need to be careful to cancel out these unintended effects at every step.

The high-level idea to improve on the NP-hardness proof is that once we know that agents can solve hard problems, we can use them to help the observer agent solve an even harder problem. Of course, this has to be done in a careful way, since the answer to a partial problem cannot be directly revealed to the observer (the whole point is that we do not know a priori what this answer is).

The reduction is defined and Theorem 3 proved by induction. The base case is the Bayesian network from Theorem 1, but with the observer agent directly observing the private signals in the first K − 1 variable blocks. Then, we proceed to add "intermediate observers", each of them observing one variable block less, and interacting via a gadget with the previous observer to implement the quantifier alternation by adjusting the likelihoods of different assignments to the variables in the newly handled block.

It is useful to view Φ as a game where two players set the quantified variables (proceeding left-to-right). One player sets the variables under existential quantifiers with the objective of making the 3-CNF formula φ evaluate to true. The other player sets the variables under universal quantifiers with the objective of making it evaluate to false. Under that interpretation, our reduction has the following property: The final observer agent concludes that the assignment formed by the private signals with high probability corresponds to a "game transcript" of the game played according to a winning strategy. Depending on which player has the winning strategy, the state of the world is either T or F whp.
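This game semantics is exactly how TQBF is evaluated by the textbook polynomial-space recursion. The sketch below (all names mine, and with short clauses rather than exactly-three-literal ones, for brevity) makes the left-to-right alternation explicit:

```python
def tqbf(quantifiers, clauses, assignment=()):
    """Evaluate Q_K x_K ... E x_1 phi(x_K, ..., x_1) as a two-player
    game: the E-player picks values to make the CNF true, the A-player
    to make it false.  quantifiers: string like "AE" (outermost first);
    clauses: list of tuples of literals, where literal i > 0 means x_i
    and i < 0 means NOT x_i.  Variables are set outermost first, so
    assignment[0] is the outermost variable.  Exponential time but
    polynomial space -- the canonical PSPACE behaviour."""
    if not quantifiers:
        n = len(assignment)
        val = lambda lit: assignment[n - abs(lit)] == (lit > 0)
        return all(any(val(l) for l in cl) for cl in clauses)
    q, rest = quantifiers[0], quantifiers[1:]
    branches = (tqbf(rest, clauses, assignment + (b,)) for b in (False, True))
    return any(branches) if q == "E" else all(branches)
```

For instance, the clauses (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2) encode x1 ↔ x2; the formula is true under ∀x2 ∃x1 but false under ∃x2 ∀x1.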

## 3 NP-hardness: Proof of Theorem 1

We start with the NP-hardness result, by reduction from 3-SAT. The reduction is used as a building block in the PSPACE-hardness proof, but it is also useful in terms of developing intuition for the more technical proof of Theorem 3. We proceed by explaining the gadgets that we use, describing how to put them together in the full reduction, and proving correctness.

Say there are K agents v1, …, vK that do not observe anyone and receive private signals with respective likelihoods ℓ0(vi) and ℓ1(vi). Additionally, there is an observer agent and we would like to reveal to it, at its significant time, that the sum of the likelihoods of the agents' signals exceeds some threshold δ:

 L := ∑_{i=1}^{K} ℓS(vi)(vi) > δ,

without disclosing anything else about the private signals (we assume that δ is chosen such that L = δ never happens). This is achieved by the gadget in Figure 1.

We describe the gadget for revealing that L > δ. Two auxiliary agents, B and C, receive private signals whose likelihoods are chosen as a function of δ, while the remaining agents of the gadget, including two "dummy" agents, do not receive private signals. Our overall reduction will demonstrate the hardness of computation for the observer agent. Therefore, we need to specify the observation history of the observer: By our tie-breaking convention, it is determined by the actions broadcast inside the gadget.

Based on that information, the observer can infer the signals and actions inside the gadget and, since the relevant action is determined by the sign of L − δ, conclude that L > δ. The purpose of agent C is to counteract the effect of this "measurement" on the observer's estimate of the state of the world. More precisely, let

 P(s1, …, sK, θ0) := Pr[ ⋀_{i=1}^{K} S(vi) = si ∧ θ = θ0 ], (3)
 P(s1, …, sK, sB, sC, θ0) := Pr[ ⋀_{i=1}^{K} S(vi) = si ∧ S(B) = sB ∧ S(C) = sC ∧ θ = θ0 ]. (4)

Based on the discussion above, we have the following:

###### Claim 10.

Let s1, …, sK be the private signals of v1, …, vK. Similarly, let sB and sC be the private signals of B and C. Then:

• If L < δ, then there are no values sB, sC that make s1, …, sK consistent with the observations of the observer.

• If L > δ, then there exists a unique configuration (sB, sC) consistent with the observations of the observer, and the (prior) probability of this configuration when the state is θ0 is

 P(s1, …, sK, sB, sC, θ0) = P(s1, …, sK, θ0) ⋅ α, (5)

where α does not depend on s1, …, sK.

Similar reasoning can be applied to the case L < δ and/or to checking the opposite inequality. We will say that an agent observes a threshold gadget if it observes the agents B, C and the dummy agents, and we denote this as shown in Figure 2. Note that in our diagrams we use circles to denote agents and boxes to denote gadgets. The latter typically contain several auxiliary agents.

#### Network structure and significant times

It might appear that the threshold gadget is more complicated than necessary. The reason is that we impose certain additional structure on the graph to facilitate its analysis, which is later used in the proof of Theorem 3. Specifically, we will always make sure that the graph is a DAG, with only the observer agent having in-degree zero. All agents that receive private signals will have out-degree zero, and all agents with non-zero out-degree will not receive private signals (recall that a directed edge (u, v) indicates that u observes v).

Furthermore, we will arrange the graph such that each agent learns new information at a single, fixed time step. That is, for every agent u there will exist a significant time t(u) such that the belief μ(u, t) stays constant for t < t(u) and for t ≥ t(u), changing only at t(u). If u receives a private signal, then t(u) = 0. Otherwise, t(u) is determined by the (unique) path length from u to an agent of out-degree zero. For example, in Figure 1 the significant times can be read off from the path lengths to the signal-receiving agents.

Accordingly, we will use the notation μ(u) and A(u) to denote an agent's belief and action at its significant time. Let u and v be agents with t(u) > t(v) + 1. In the following, we will sometimes say that u observes v, even though a direct edge would contradict the significant time requirement (a direct edge implies that t(u) = t(v) + 1). Whenever we do so, it should be understood that there is a path of "dummy" nodes of appropriate length between u and v (cf. the dummy agents in Figure 1). For clarity, we will omit dummy nodes from the figures.

Assume now that the agents v1, …, vK receive private signals with identical likelihoods ℓ0 and ℓ1, and that a number k, 0 ≤ k ≤ K, is given. Then, building on the threshold gadget, it is easy to convey the information that exactly k out of the K agents received private signal 1. Choosing thresholds δ0 just below and δ1 just above the likelihood sum corresponding to exactly k ones, we compose two threshold gadgets as shown in Figure 3.

Agent A is optional: Depending on our needs we will use the counting gadget with or without it. It is used to preserve the original belief of the observer after it learns the count of private signals of agents v1, …, vK. It receives a private signal with appropriately chosen probabilities (depending on the sign of the likelihood shift induced by the count) and broadcasts the corresponding state. By a similar analysis as for the threshold gadget, and using the notation from (3)–(4), we have:

###### Claim 11.

Let s1, …, sK be the private signals of agents v1, …, vK. Let s represent the private signals of all auxiliary agents in the threshold gadgets, and sA the private signal of agent A.

Then, the only configurations s1, …, sK consistent with the observations of the observer are those for which exactly k of the signals are equal to 1. Furthermore, for any such configuration there exists a unique configuration s (and sA, if agent A is present) such that (depending on the presence of A):

 P(s1, …, sK, s, θ0) = P(s1, …, sK, θ0) ⋅ α = P(θ0) ⋅ α,
 P(s1, …, sK, s, sA, θ0) = β,

where α is easily computable and does not depend on s1, …, sK or θ0, but the value of the other term is in general dependent on θ0. On the other hand, if A is present, then β does not depend at all on the private signals or the state of the world.

If agent A is omitted, the same technique can be used to obtain inequalities (e.g., checking that at least k out of K private signals are ones). We will say that an agent observes the counting gadget if it observes both respective threshold gadgets (and A, if present). We will denote counting gadgets as in Figure 4.
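As a sanity check on the counting idea (the threshold placement and names below are my own illustration, not the paper's exact choices): with identical likelihoods the sum determines the count, so two thresholds bracketing a single value of the sum pin it down exactly.

```python
def count_thresholds(l0: float, l1: float, K: int, k: int):
    """With identical likelihoods l0 < l1, the likelihood sum of K
    agents with exactly j ones is L(j) = j*l1 + (K - j)*l0, strictly
    increasing in j.  Placing one threshold half a step below L(k)
    and one half a step above pins the count to exactly k."""
    L = lambda j: j * l1 + (K - j) * l0
    step = l1 - l0
    return L(k) - step / 2, L(k) + step / 2

def count_is_k(l0: float, l1: float, signals, lo: float, hi: float) -> bool:
    """True iff the likelihood sum of the signals lies strictly
    between the two thresholds, i.e., exactly k signals are ones."""
    s = sum(l1 if b else l0 for b in signals)
    return lo < s < hi
```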

Another related gadget that we will use reveals to the observer that two agents with likelihoods and , respectively, receive opposite signals . Since and , this is achieved by using two threshold gadgets to check that

\[
\ell_0 + m_0 < \ell_{S(u)} + m_{S(v)} < \ell_1 + m_1,
\]

where we set the thresholds in the threshold gadgets as and for an appropriately small . We will denote the not-equal gadget as in Figure 5.
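As a sanity check of the not-equal gadget's logic, the following sketch (with illustrative likelihood values of our own choosing, not taken from the construction) verifies that the double inequality above holds exactly when the two signals differ:

```python
def strictly_between(l0, l1, m0, m1, su, sv):
    """Not-equal gadget condition: the realized likelihood sum must fall
    strictly between the all-zeros and all-ones sums (assumes l0 < l1, m0 < m1)."""
    realized = [l0, l1][su] + [m0, m1][sv]
    return l0 + m0 < realized < l1 + m1

# The condition holds iff the two agents received opposite signals.
for su in (0, 1):
    for sv in (0, 1):
        assert strictly_between(0.2, 0.7, 0.3, 0.8, su, sv) == (su != sv)
```

Equal signals land exactly on one of the two endpoints, which is why the inequalities must be strict and the thresholds shifted by a small constant.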

Our reduction is from the standard form of , where we are given a CNF formula on variables . The formula is a conjunction of clauses , where each clause is a disjunction of exactly three literals on distinct variables.
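For concreteness, such an instance can be represented as follows (a minimal sketch; the encoding of literals as signed integers is our own convention, not the paper's):

```python
# A 3-CNF formula: each clause is a triple of signed ints over distinct
# variables; +i denotes the literal x_i and -i denotes its negation.
def satisfies(clauses, x):
    """Check whether 0/1 assignment x (indexed from variable 1) satisfies
    every clause, i.e., makes at least one literal per clause true."""
    return all(
        any((lit > 0) == bool(x[abs(lit) - 1]) for lit in clause)
        for clause in clauses
    )

# (x1 or x2 or not x3) and (not x1 or x2 or x3)
clauses = [(1, 2, -3), (-1, 2, 3)]
assert satisfies(clauses, [1, 1, 0])
assert not satisfies(clauses, [0, 0, 1])
```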

We introduce two global agents. One of them is called and serves as an “observer agent”: this is the agent for which we establish hardness of computation. We will follow the rule that observes all gadgets that are present in the network. Second, we place an “evaluation agent” with private signals and .

Furthermore, for each variable in the CNF formula, we introduce two agents and that receive private signals given by and . Then, we enclose those two agents in a counting gadget as shown in Figure 6.

Then, for each clause , we introduce a counting gadget on four agents: three agents corresponding to the literals in the clause (note that they are observed directly and not through the variable gadgets), and the agent. The gadget ensures that at least one of those agents received signal . An illustration is provided in Figure 7.

#### The reduction

We put the agents and and the variable and clause gadgets together, as explained in the previous paragraphs. Finally, we add two more agents and . We will choose a natural number for a big enough absolute constant . Agent receives private signals with and and agent with and for some that we will choose shortly. Let the corresponding likelihoods be (note that and ). We also insert two not-equal gadgets observed by : One of them is put between and and the other one between and . The overall construction is illustrated in Figure 8.

We are reducing to the problem of computing the action of agent at its significant time . Note that observes all gadgets in the graph, and only the gadgets. In particular, directly infers the signals of all auxiliary agents in the gadgets, but the same cannot be said about the private signals of the variable agents. The observation history is naturally determined by the specifications of the gadgets.

#### Analysis

As a preliminary matter, the reduction indeed produces an instance of polynomial size: The size of the graph is and the probabilities of private signals satisfy

\[
\exp(-O(N)) \le p_{\theta_0}(u) \le 1 - \exp(-O(N)).
\]

We inspect the construction to understand which private signal configurations are consistent with the observation history of agent . First, the signals of all auxiliary agents in the gadgets can be directly inferred by . With that in mind, fix a sequence of private signals to the variable agents . Abusing notation, we identify such a sequence with an assignment in a natural way. The variable gadgets ensure that each “negation agent” received the opposite signal . Moreover, due to the clause and not-equal gadgets we have the following:

###### Claim 12.

• For every assignment , there exists exactly one consistent configuration of private signals with .

• For every satisfying assignment , there exists exactly one consistent configuration of private signals with .

• There are no other consistent configurations.

As a next step, we compare the likelihoods of configurations corresponding to different assignments. To this end, we let the quantity be the a priori probability that private signals are in the consistent configuration corresponding to assignment , and (note that this is a different definition than given in (3)). Furthermore, we set .

By inspecting the construction similarly to Claims 10 and 11, we observe that, for any assignment :

\[
P(x,1,T) = q\cdot 0.9\cdot \alpha^{b_1}\cdot\left(1-\alpha^{b_3}\right), \qquad P(x,1,F) = q\cdot 0.4\cdot\left(1-\alpha^{b_2}\right)\cdot \alpha^{b_4}
\]

for some that does not depend on a specific assignment . On the other hand, for any satisfying assignment we additionally have

\[
P(x,0,T) = q\cdot 0.1\cdot\left(1-\alpha^{b_1}\right)\cdot \alpha^{b_3}, \qquad P(x,0,F) = q\cdot 0.6\cdot \alpha^{b_2}\cdot\left(1-\alpha^{b_4}\right).
\]

Each of those expressions is a product of four terms. The value corresponds to the probabilities of signals in variable agents and auxiliary agents in the gadgets. The other terms arise from private signals of, respectively, , and .

We choose , , and note that our choice of for large enough ensures that we can estimate (the bounds below are slightly better than needed, in order to facilitate the proof of Theorem 3):

\[
P(x,1,T) \in q\cdot 0.4^b\cdot\left(1 \pm \tfrac{1}{200N}\right)^b, \tag{6}
\]
\[
P(x,1,F) \in q\cdot 0.6^b\cdot\left(1 \pm \tfrac{1}{200N}\right)^b, \tag{7}
\]

and, for satisfying assignments,

\[
P(x,0,T) \in q\cdot 0.9^b\cdot\left(1 \pm \tfrac{1}{200N}\right)^b, \tag{8}
\]
\[
P(x,0,F) \in q\cdot 0.6^b\cdot\left(1 \pm \tfrac{1}{200N}\right)^b. \tag{9}
\]

This in turn implies that for a satisfying assignment we have

\[
P(x,T) \in q\cdot 0.9^b\cdot\left(1 \pm \tfrac{1}{100N}\right)^b, \qquad P(x,F) \in q\cdot 0.6^b\cdot\left(1 \pm \tfrac{1}{100N}\right)^b, \tag{10}
\]

and for an unsatisfying one

\[
P(x,T) \in q\cdot 0.4^b\cdot\left(1 \pm \tfrac{1}{100N}\right)^b, \qquad P(x,F) \in q\cdot 0.6^b\cdot\left(1 \pm \tfrac{1}{100N}\right)^b. \tag{11}
\]
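To spell out one such step (a sketch of the routine estimate, valid for all large enough $N$): the first inclusion in (10) follows from (6) and (8) by summing over the two possible signals of the evaluation agent,
\[
P(x,T) = P(x,0,T) + P(x,1,T) \in q\cdot\left(0.9^b + 0.4^b\right)\cdot\left(1 \pm \tfrac{1}{200N}\right)^b \subseteq q\cdot 0.9^b\cdot\left(1 \pm \tfrac{1}{100N}\right)^b,
\]
since the $0.4^b$ term is exponentially smaller than the slack gained by widening $\tfrac{1}{200N}$ to $\tfrac{1}{100N}$ across $b$ factors.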

Accordingly, if the formula has a satisfying assignment , the belief of agent at its significant time can be bounded as

\[
1-\mu(\mathrm{OBS}) = \frac{\sum_{x\in\{0,1\}^N} P(x,F)}{\sum_{x\in\{0,1\}^N} \bigl(P(x,F)+P(x,T)\bigr)} \le \frac{\sum_{x\in\{0,1\}^N} P(x,F)}{P(x^*,T)} \le \frac{2^N\cdot 0.61^b}{0.89^b} \le 0.69^b. \tag{12}
\]

At the same time, this probability can be lower bounded as

\[
1-\mu(\mathrm{OBS}) \ge \frac{P(x^*,F)}{\sum_{x\in\{0,1\}^N}\bigl(P(x,T)+P(x,F)\bigr)} \ge \frac{0.59^b}{2^{N+1}\cdot 0.91^b} \ge 0.64^b. \tag{13}
\]

If the formula is not satisfiable, a simpler computation taking into account only equation (11) gives

\[
\mu(\mathrm{OBS}) \in \left[0.64^b, 0.69^b\right]. \tag{14}
\]

Hence, $1-\mu(\mathrm{OBS}) \le 0.69^b$ if the formula is satisfiable and $\mu(\mathrm{OBS}) \le 0.69^b$ otherwise. ∎
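The final inequalities in the chains (12) and (13) can be checked numerically. The sketch below is our own illustration: the constant `C = 110` is one concrete value of the "big enough absolute constant" from the construction, not a choice made in the paper, and the comparison is done in log-space to avoid underflow:

```python
import math

def bounds_hold(N, C=110):
    """Verify, in log-space, the final inequalities of (12) and (13) for b = C*N:
    2^N * 0.61^b / 0.89^b <= 0.69^b  and  0.59^b / (2^(N+1) * 0.91^b) >= 0.64^b."""
    b = C * N
    upper = N * math.log(2) + b * (math.log(0.61) - math.log(0.89)) <= b * math.log(0.69)
    lower = b * (math.log(0.59) - math.log(0.91)) - (N + 1) * math.log(2) >= b * math.log(0.64)
    return upper and lower

assert all(bounds_hold(N) for N in (1, 10, 100, 1000))
```

The check confirms why the number of agents must grow linearly with the number of variables: the exponential factor $2^N$ is absorbed only because the per-factor gaps (e.g. $0.69 \cdot 0.89 / 0.61 > 1$) are raised to the power $b = \Theta(N)$.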

###### Remark 13.

There are some results and proofs about opinion exchange models that are sensitive to the tie-breaking rule chosen (see, e.g., Example 3.46 in [MT17]). We claim that the reduction described above (as well as other reductions in this paper) does not suffer from this problem.

Ideally, we would like to say that ties never arise in signal configurations that are consistent with inputs to the reduction. This is seen to be true by inspection, with the following exception: Agents that do not receive private signals are indifferent about the state of the world until their significant time. We made this choice to simplify the exposition. Since significant times are common knowledge, no agent places any weight on others’ actions before their significant time (regardless of the tie-breaking rule used), and the analysis of the reduction is not affected in any way by this fact.

That being said, the ties could be avoided altogether. For example, we could introduce an agent that is observed by everyone else at time , indicating the action and private signal corresponding to the likelihood for a small constant . Since likelihoods arising in the analysis of our reduction are always bounded away from zero, can be made small enough so that the agent does not affect other agents’ actions at their significant times. This almost takes care of the problem, except for the agents without private signals at time (since they will acquire information from only at time ). This can be solved by giving each such agent an informative private signal with likelihoods, say

\[
\ell_1(u) = -\ell_0(u) = \frac{\varepsilon}{100|V|}.
\]

In that case will output an action corresponding to its private signal at time , but its belief due to the private signal (and the signals of all other non-informative agents that observes) will become dominated by the belief of at time .

## 4 PSPACE-hardness: Proof of Theorem 3

#### TQBF and the high-level idea

Recall that we will show -hardness by reduction from the canonical -complete language . More precisely, we use a representation of quantified Boolean formulas

\[
\Phi = Q_K x_K \cdots Q_1 x_1 : \phi(x_K,\dots,x_1),
\]

where:

• is a quantifier such that , and .

• are blocks of variables such that their total count is .

• is a propositional logical formula given in the 3-CNF form with clauses.

The language consists of all formulas that are true. It is common and useful to think of as defining a “position” in a game, where “Player 1” chooses values of variables under existential quantifiers, “Player 0” chooses values of variables under universal quantifiers, and the objective of Player is to evaluate to the value . Under that interpretation, if and only if Player 1 has a winning strategy in the given position.
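The game interpretation can be made concrete with a toy evaluator (a brute-force, exponential-time sketch with our own encoding: quantifiers listed outermost-first as 'E'/'A', and the matrix given as a predicate on a tuple of Booleans):

```python
def eval_qbf(quants, phi, chosen=()):
    """Evaluate a quantified Boolean formula by exhaustive game search.
    'E' quantifiers are moves of Player 1, 'A' quantifiers moves of Player 0;
    the formula is true iff Player 1 can force phi to evaluate to True."""
    if len(chosen) == len(quants):
        return phi(chosen)
    branches = (eval_qbf(quants, phi, chosen + (v,)) for v in (False, True))
    return any(branches) if quants[len(chosen)] == 'E' else all(branches)

# "For every x there exists y with x != y" is true;
# swapping the quantifiers makes it false.
assert eval_qbf(['A', 'E'], lambda a: a[0] != a[1])
assert not eval_qbf(['E', 'A'], lambda a: a[0] != a[1])
```

The recursion mirrors the game tree: an existential position is winning for Player 1 if some child is, a universal one only if all children are, which is exactly the alternation the sequence of observer agents will have to simulate.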

Keeping that in mind, we can give an intuition for the proof: In the reduction, if the formula had a satisfying assignment, then agent could conclude whp. that the “hidden” assignment is satisfying, and . Otherwise, the hidden assignment is not satisfying and whp. In the reduction, the hidden assignment will correspond (whp.) to a “transcript” of the game played according to a winning strategy for one of the players, and will be determined by the winning player. This will be achieved by implementing a sequence of observer agents , where:

• Ultimately, the hardness will be shown for the computation of agent .

• Agent