Social Learning with Questions

11/01/2018
by   Grant Schoenebeck, et al.
University of Michigan

This work studies sequential social learning (also known as Bayesian observational learning) and how private communication can enable agents to avoid herding to the wrong action/state. Starting from the seminal BHW (Bikhchandani, Hirshleifer, and Welch, 1992) model, where asymptotic learning does not occur, we allow agents to ask private, finite-capacity questions of a bounded subset of their predecessors. While retaining the publicly observed history of the agents and their Bayes rationality from the BHW model, we further assume that both the ability to ask questions and the questions themselves are common knowledge. Interpreting the asking of questions as the partitioning of information sets, we study whether asymptotic learning can be achieved with finite-capacity questions. Restricting attention to the network where every agent may query only her immediate predecessor, an explicit construction shows that a 1-bit question from each agent is enough to enable asymptotic learning.


1 Introduction

In social networks, agents use information from (a) private signals (knowledge) they have, (b) learning from past agents' actions (observations), and (c) comments from contactable experienced agents (experts) to guide their own decisions. The study of learning from private signals and past agents' actions, i.e., private or public history, with homogeneous, Bayes-rational agents was initiated by the seminal work in [1, 2, 3]. A key result of [1, 2, 3] is that in the model where countably many Bayes-rational agents make decisions sequentially to match a binary unknown state of the world, termed social learning or Bayesian observational learning in the literature [4, 5, 6, 7, 8, 9, 10, 11, 12], an outcome called an information cascade occurs almost surely when the history is fully observable and the likelihood ratio of the signals is bounded. An information cascade occurs when, after observing the history, it is optimal for agents to ignore their own (private) signals when making decisions. Though individually optimal, this may lead to a socially sub-optimal outcome in which, after an information cascade, all agents ignore their private signals and choose the wrong action, referred to as a "wrong cascade." In models of social learning, there is another possible outcome known as (asymptotic) learning (see Definition 1 in Section 2). This occurs when the information in the private signals is aggregated efficiently, so that agents eventually learn the underlying true state of the world and make socially optimal decisions.

The literature studying social learning, i.e., making socially optimal decisions, is mainly focused on using channels (a) and (b) above for Bayes-rational agents; works using channel (c) mostly study learning by modeling it as a communication channel in distributed (sequential) hypothesis testing problems, but with non-Bayes-rational agents. Inspired by behavior in social networks, where people usually query friends who have prior experience before making their own decisions, using information via (c), by communicating with contactable past agents, may reveal another channel through which Bayes-rational agents can achieve asymptotic learning. Another reason to explore this channel is the following statement by Gul and Lundholm in [10]: "Informational cascades can be eliminated by enriching the setting in a way that allows prior agents' information to be transmitted." This general principle, however, does not reveal whether learning can be achieved with finite-bit questions (Appendix B.3.1 shows a learning example where the total number of bits an agent spends on questions is not bounded).

Problem Statement: Motivated by behavior in real social networks and the quoted statement in [10], we want to study whether querying past agents sequentially with a bounded number of finite-capacity questions, without relaxing assumptions (a) and (b) of the BHW model [1], can achieve asymptotic learning. More precisely, we assume the essential features of the problem of sequential social learning with a common database recording the actions of agents [1, 2, 3], but allow each Bayes-rational agent to ask a single, private, finite-capacity (for the response) question of each agent in a subset of past agents (friends or assumed experts). Note that the maximum a posteriori probability (MAP) rule is still individually optimal and will be used by each agent for her decision. We emphasize here that the BHW model [1] has private signals with a bounded likelihood ratio, and with Bayes-rational agents only the ideas in Theme 2 (described below) are known to yield the learning outcome. In this paper, allowing for private questions, we want to answer the following three questions: 1) What is the minimum set of agents that need to be queried, as a function of the agent's index and information set? 2) What is the minimum size of questions needed to achieve learning? 3) Can we construct a set of questions that achieves either minimum?

Before stating our main contributions, we highlight the main differences between this work and the existing literature. In the literature, there are three well-developed themes for achieving social learning: one, the presence of many agents with rich private signals (with unbounded likelihood ratios); two, the obfuscation of the observed history of agents' actions, so that at least a minimal number of agents have their private signals revealed by their actions; and three, the presence of many (or all) non-Bayesian algorithmic agents. We briefly describe these three themes in the following (see Related Work in Appendix C for a detailed discussion):

Theme 1: Compared to the seminal BHW model [1], models in this theme use information from (b) and generalize the information content in (a). Unlike BHW and our model, which consider private signals with bounded likelihood ratios, the seminal work by Smith and Sørensen [4] allows richer signals, characterized by the (unbounded) likelihood ratio of the two states deduced from the private signals; learning is achievable under such richer signals.

Theme 2: Unlike BHW and our model, which disclose the full history to every agent, works in this theme use partial history in (b). By revealing a random or a judiciously designed deterministic subset of past agents' actions, [8] and [13] respectively show that asymptotic learning can be achieved by revealing either a random subset of the history or the actions of a bounded number of past agents in a special class of networks. However, presenting reduced or distorted views of the history of agents' actions is philosophically troubling, as it implicitly assumes that the distortions are always benign or made for efficiency, and it posits an implicit trust in the system designer on the part of the agents.

Theme 3: Here, by allowing non-Bayesian agents or changing payoff structures, a class of literature on distributed binary hypothesis testing problems follows Cover [14] and Hellman and Cover [15] and falls under (c). In this model, agents can only observe the actions of their immediate predecessor, and their actions serve both to transmit information and to learn the true state of the world collaboratively. It is shown in [15] that learning, often called optimal decision rules in this literature, cannot be achieved under a bounded likelihood ratio of signals. However, if observing the immediate predecessor is allowed, the authors of [16] show that asymptotic learning can be achieved using a specific set of four-state Markov chains. From the perspective of information design, this approach of designing Markov chains for learning is similar in spirit to the partitioning of information sets, but for non-Bayesian agents.

Main Contributions: Our analysis of the modified BHW model [1] described above yields two main contributions:
1) To the best of our knowledge, we are the first to highlight the ability to change information structures among agents to achieve learning in social learning problems. The approach used in this work, namely partitioning information sets, is closely related to Bayesian persuasion [17, 18]. Designs achieving asymptotic learning in our work can be viewed as "relayed Bayesian persuasion": persuading agents in possibly wrong cascades so that information cascades are eventually avoided.
2) With an explicit construction of a 1-bit question for each agent, corresponding to the agent's possible information set and index in the network, where agents may query only their immediate predecessors, we show that learning is achievable and answer the three questions posed above with a single construction.

Note that in our approach, the system designer commits to a specific information structure (the full public history of previous agents' actions plus private communications) without any reductions or distortions, and also provides the agents with a question guidebook whose performance each agent can verify independently. The minimal nature of our learning-achieving question guidebook also reveals the fragility of information cascades (Sec. 16 in [19]), as a small amount of strategically delivered information leads to learning. The information revelation in our question guidebook is strategic, in contrast to reviews [20] that are generated via an exogenous process (and revealed only for some specific actions) and that lead to learning only if the signals are rich [7]. A subtle but rather interesting point of our approach is the "relayed persuasion" aspect, wherein we aim to persuade only particular agents chosen by our design, instead of agents chosen by nature, as is common in Bayesian persuasion [21, 22].

2 Problem Formulation

BHW Model and Information Cascade Starting from the seminal BHW model [1], we consider a model with two states of the world, $\theta \in \{A, B\}$, equally likely a priori, and a countable number of Bayes-rational agents, each taking a single action sequentially, indexed by $n \in \{1, 2, \dots\}$. At time slot $n$, agent $n$ shows up and chooses an action $a_n \in \{A, B\}$ with the goal of matching the true state of the world. Formally, for every agent $n$, the utility function is $u_n(a_n, \theta) = 1$ if $a_n = \theta$, and $u_n(a_n, \theta) = 0$ otherwise.

Before agent $n$ takes her action, she receives an informative but not fully revealing private signal $s_n \in \{A, B\}$. Her private signal is received through a binary symmetric channel (BSC) with a time-invariant crossover probability $1-p$, where $p \in (1/2, 1)$, for all $n$ (the time-invariance assumption is mainly for ease of algebra; what the model needs is that the agents' crossover probabilities are common knowledge). An agent can also observe the full history of actions taken by her predecessors (agents with lower index, if any), denoted $h_n = (a_1, \dots, a_{n-1})$. The agent then computes the posterior belief of the true state of the world (alternatively, the likelihood ratio of one state versus the other) and takes the action corresponding to the most likely state. As in [23, 24], if indifferent between the two actions, we assume that the agent follows her private signal, instead of randomizing, following the majority, etc.
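To make the update concrete, the posterior odds admit a closed form in the notation above (a sketch in our notation, using the standard observation that pre-cascade actions reveal the underlying signals; $n_A$ and $n_B$ denote the numbers of A and B signals revealed by the history $h_n$):

\[
\frac{\Pr(\theta = A \mid h_n, s_n)}{\Pr(\theta = B \mid h_n, s_n)}
  = \left(\frac{p}{1-p}\right)^{n_A - n_B + \sigma_n},
\qquad
\sigma_n =
\begin{cases}
  +1, & s_n = A,\\
  -1, & s_n = B.
\end{cases}
\]

The MAP action therefore ignores $s_n$ exactly when $|n_A - n_B| \ge 2$, which is the cascade condition recalled next.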

Definition 1.

In a model of Bayesian observational learning, asymptotic learning (in probability) is achieved if $\lim_{n\to\infty} \Pr(a_n = \theta) = 1$. If asymptotic learning is not achievable, we say that an information cascade has occurred.

Under the above setting, we know that the BHW model [1] exhibits an information cascade: all agents ignore their private signals and imitate their immediate predecessor's action once the difference between the numbers of the two observed actions is greater than or equal to two (see Section 16.2 in [19]).
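As a sanity check, the cascade dynamics are easy to simulate. Below is a minimal sketch (our own illustration, not code from the paper); the accuracy p = 0.7 and the trial count are arbitrary choices, and ties are broken toward the agent's own signal as assumed above.

```python
import random

def simulate_bhw(p=0.7, n_agents=200, true_state="A", seed=None):
    """Sequential BHW agents: each observes the full action history and one
    private signal of accuracy p; ties are broken toward the own signal."""
    rng = random.Random(seed)
    actions, diff = [], 0  # diff = #A actions - #B actions so far
    for _ in range(n_agents):
        signal = true_state if rng.random() < p else ("B" if true_state == "A" else "A")
        if diff >= 2:                # public belief outweighs any single signal:
            actions.append("A")      # an A cascade; the agent ignores her signal
        elif diff <= -2:
            actions.append("B")      # a B cascade
        else:
            actions.append(signal)   # pre-cascade: the action reveals the signal
        diff += 1 if actions[-1] == "A" else -1
    return actions

# Monte Carlo estimate of the wrong-cascade probability for p = 0.7.
trials = 1000
wrong = sum(simulate_bhw(seed=s)[-1] != "A" for s in range(trials)) / trials
print(f"P(wrong cascade) ~ {wrong:.3f}")  # strictly positive: no asymptotic learning
```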

Deterministic Network Topology with Finite Channel Capacity Agents' communication is modeled by a deterministic network with directed edges. On each such edge, agents are allowed to transmit up to a pre-determined number of bits through a perfectly reliable communication channel. Since agent $m$ takes her action before agent $n$ for all $m < n$, only directed edges pointing from an agent to her predecessors are allowed in the network. Using these additional directed edges, agent $n$ can ask questions, individually and privately, of her predecessors in line with the network topology before making her decision. An agent asks her set of questions after receiving her private signal. Critically, the deterministic network topology is common knowledge (discussions of the difference between deterministic and randomized networks for this problem are in Appendix B.2).

Questions and Information Sets Since the network topology is pre-determined, the set of agents that a particular agent can query for information is exogenous. Given the topology, we allow the information designer to supply the agents with questions that they can ask the set of contactable predecessors (in order to distinguish which information set they are in). In general, the questions asked by one agent of another may depend not only on the private signal and history observed by the asking agent, but also on the answers to the questions she asked of other contactable predecessors beforehand. In short, the order in which the contactable predecessors are queried matters in the general framework. However, in this paper we restrict our attention to agents asking their questions simultaneously (the general framework is discussed in Appendix B.3), to avoid the complex, recursive analysis required to understand the engendered higher-order beliefs. With this restriction, such a collection of sets of questions is called a question guidebook (QGB). With this background in mind, we formally define a QGB.

Definition 2.

A question guidebook is a function that gives each agent a set of predetermined questions conditioned on the agent's private signal, the observed history, and the predecessor whom the agent queries.

We assume that agents being queried are truthful in their responses, which is natural as they cannot gain from lying since their (payoff relevant) action has already been taken.

A QGB should have two important properties: feasibility and incentive compatibility. Intuitively, a QGB is feasible if an agent only asks questions that she knows the queried agent can answer. This avoids the ambiguity of what would happen if the queried agent did not have the information needed to answer the question asked of her. Formally, we call an observed history feasible for a guidebook if there is a nonzero probability of obtaining this history given that all agents follow the guidebook; to avoid cumbersome notation, we suppress the dependence on the guidebook when it is clear from context. Similarly, we call a question asked of one agent by another feasible if, for every feasible history, the question can be answered by the queried agent using only the information that she possesses.

Definition 3.

A question guidebook is feasible if for every agent, under every feasible observed history and private signal, all questions provided by the guidebook are feasible. (Since the history is fully observable in our model, it is sufficient to check the feasibility of the full history; the definition still works when the history is only partially observable.)

A QGB is incentive compatible if each agent always asks questions from the set of questions that maximizes her expected payoff.

Definition 4.

A question guidebook is incentive compatible if for every agent, under every feasible observed history and private signal, the set of questions provided to her maximizes her expected utility among all feasible questions she can ask of her contactable predecessors.

Given a feasible and incentive compatible QGB, each question serves to partition sequences of private signals into information sets: it splits the possible states of randomness (the underlying $\sigma$-algebra) into subsets (sub-$\sigma$-algebras, to be more precise). Note that the answer to such a question specifies the set to which the current realization belongs. Viewing questions from the perspective of partitions helps in justifying the following assumption.

Assumption 1.

We assume that there is no cost to asking questions. Thus, if no feasible question would change the expected payoff of an agent, there is no restriction on the information designer: the question guidebook may demand that this agent ask any provided question within the prescribed capacity limit.

Assumption 1 stands when a limited number of questions are asked at no cost, and every agent is willing to help her successors, in essence bringing in some level of cooperation. The reason for this assumption is that even though no question can benefit the current agent, her information could be beneficial to future agents. This is inspired by behavior in social networks, where it is common to see people willing to help their friend/neighbor nodes. Henceforth, we will assume such behavior. Additionally, we will assume that the QGB is common knowledge (a subtly weaker assumption is discussed in Appendix B.4).

3 Telephone-game Network, Strategy, and Question Guidebooks

To exhibit how designing QGBs can achieve learning, in the following sections we consider a special network, called the telephone-game network. In this network, we endow only each pair of consecutive agents with a communication channel of finite capacity. In other words, the topology of this communication network is a directed line graph. Therefore, besides observing the actions in the history, the only way for an agent to get additional information is to ask finite-bit questions of her immediate predecessor (if any). Since the capacity of the channel between each agent and her immediate predecessor is finite, agents may not be able to recover all the private signals observed by their predecessors. This is a very basic model with which to start studying whether asymptotic learning is achievable with finite-bit questions. (The telephone-game network has the same topology as a tandem network, but the literature on learning in tandem networks uses one-way communication sent before the next agent sees her private signal. In contrast, our questions are conditioned on the received private signals. To avoid any confusion, we avoid the tandem-network terminology.) In this section, we first propose a high-level strategy that may achieve asymptotic learning, and then study two important questions: first, how to design QGBs to obtain and accumulate the information we want (we use the term information set partition in the following paragraphs); second, what are the necessary and sufficient conditions under which the proposed strategy achieves learning?

3.1 Threshold-based Strategy

Capacity constraints may make agents incapable of recovering all of their predecessors' private signals. However, there are some "clues" that agents can learn from a fixed-length sequence of private signals, owing to the different signal distributions under the two states of the world; e.g., when the true state is A, private-signal sequences containing three consecutive As occur more frequently than when the true state is B. With this intuition in mind, we propose the following strategy.

Suppose we are in an A cascade, i.e., every subsequent agent ignores her private signal and takes action A if no information from questions is provided. If the true state is B, agents are less likely to receive multiple private signals A in a row. Hence, we propose the following strategy:
Step 1: Given a predetermined threshold $\ell$, agents in an A cascade ask questions to learn whether the event "$\ell$ consecutive private signals A were received by the preceding agents" has occurred or not.
Step 2: Using the information of whether the event occurred, an agent updates her likelihood ratio of state B versus A. Starting from an agent with a fixed likelihood ratio of B versus A, there is a first agent with a positive probability that her updated likelihood ratio of B versus A crosses 1 (a numerical sketch follows this list).
Step 3: If an agent's likelihood ratio of B versus A crosses 1, her best response is to stop the cascade. If she does not stop the cascade, we use her as a new starting agent (the agent in the previous step) and start a new round of checking whether the event occurs among the following agents.
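To illustrate Step 2 numerically, the probability that no run of $\ell$ consecutive A signals occurs can be computed with a standard dynamic program over the trailing run length, and the resulting likelihood-ratio update can be iterated until it crosses 1. The sketch below is our own illustration; the threshold $\ell = 2$, the accuracy $p = 0.7$, and the starting odds are assumed values, not quantities from the paper.

```python
def p_no_run(n, ell, pa):
    """P(no run of ell consecutive A signals among n i.i.d. signals),
    where each signal is A with probability pa. DP over the trailing run."""
    probs = [1.0] + [0.0] * (ell - 1)      # probs[r] = P(no run yet, trailing run = r)
    for _ in range(n):
        nxt = [0.0] * ell
        for run, pr in enumerate(probs):
            nxt[0] += pr * (1 - pa)        # a B signal resets the run
            if run + 1 < ell:
                nxt[run + 1] += pr * pa    # an A signal extends the run
        probs = nxt                        # mass reaching run = ell is dropped (run occurred)
    return sum(probs)

p, ell = 0.7, 2                  # assumed signal accuracy and threshold
odds_b_over_a = (1 - p) / p      # assumed starting odds of B vs A in an A cascade
for n in range(1, 40):
    # Learning that no A-run occurred among n agents is evidence for state B.
    lr = p_no_run(n, ell, 1 - p) / p_no_run(n, ell, p)
    if odds_b_over_a * lr > 1:
        print(f"after {n} agents, 'no run occurred' tips the odds of B vs A past 1")
        break
```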

We defer the details of the implementation of this strategy to the end of this section and to Section 4. Supposing there is a way to implement the threshold-based strategy presented above without any incentive compatibility issues, an important feature to observe is that not every agent in a cascade gets the chance to stop the cascade; i.e., some of them just ask questions and forward the information of whether the event occurred to their successors. Note that Assumption 1 grants the flexibility of designing a QGB with these questions.

Prior to designing QGBs that implement the threshold-based strategy, note that for agents who have a chance to stop the cascade conditional on the observed history, the questions designed for them are restricted to the set of questions that maximize their expected utility. However, for agents indifferent among all questions, we can design any question. Thus, in order to design questions systematically, we categorize agents into two types, conditional on the QGB in use and the observed history.

Definition 5.

Given an observed history in a feasible and incentive compatible QGB, an agent is active if she has a positive probability of stopping the cascade; otherwise she is called silent.

To further clarify the above definition: conditional on the history, agents who may benefit from the answers to questions are active agents; otherwise they are silent.

3.2 Questions and Information Set Partition

Here, we detail how questions help gather information, and how the capacity constraints limit (lossless) information aggregation. To begin, consider the sequences of private signals received by the agents between one agent and a later one. The information space of an agent, corresponding to an observed history and the question guidebook, is the set of all such sequences that are feasible. The questions assigned to an agent help her update her posterior belief of the true state by partitioning her information space into information sets and telling her in which set the current realization lies. For simplicity, we refer to the resulting collection of information sets as the agent's partition.
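As a toy illustration of this partition view (our own example, not the paper's construction), suppose the queried agent's information is the pair of the last two private signals; a single yes/no question then splits her information space into two cells, and the answer tells the asker which cell contains the realization:

```python
from itertools import product

# All a priori feasible signal pairs for the queried agent.
feasible = list(product("AB", repeat=2))

# The 1-bit question "did you see two consecutive A signals?" induces a
# two-cell partition of the information space.
yes_cell = [seq for seq in feasible if seq == ("A", "A")]
no_cell = [seq for seq in feasible if seq != ("A", "A")]
print(yes_cell)  # [('A', 'A')]
print(no_cell)   # the remaining three sequences
```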

By viewing questions as information-set partitioning tools, studying the transition from one agent's partition to the next helps us understand how information aggregates. A well-known approach for analyzing this class of problems systematically is to map these information-set transitions to Markov chains with transition matrices (to allay concerns about what typical questions look like and why we use Markov chains to model QGBs, an example demonstrating the delicateness of the design problem is in Appendix B.1). To formulate the mapping, we associate a QGB with a sequence of sets of Markov chains sharing the same state space, one set per time step; the state space must be rich enough to represent all distinguishable information sets. Since the state space is shared by all the chains, hereafter we shorten "the agent knows her information set corresponds to state $s$" to "the agent goes to $s$."
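For concreteness, here is a minimal sketch of the mapping (our own, with an invented three-state chain rather than Figure 1's five-state one): a silent agent's deterministic, signal-dependent moves, averaged over the signal distribution, define a Markov transition matrix over the information-set states.

```python
import numpy as np

def silent_matrix(pa, move_on_a, move_on_b):
    """Markov transition matrix of a silent agent: deterministic moves,
    chosen by the received signal, mixed with the signal distribution.
    pa = P(signal A | state of the world); moves map state index -> state index."""
    n = len(move_on_a)
    T = np.zeros((n, n))
    for s in range(n):
        T[s, move_on_a[s]] += pa
        T[s, move_on_b[s]] += 1 - pa
    return T

# Hypothetical 3-state chain: A signals push toward an absorbing "run
# occurred" state 2, B signals reset to state 0 (all invented for illustration).
T = silent_matrix(pa=0.7, move_on_a=[1, 2, 2], move_on_b=[0, 0, 2])
print(np.linalg.matrix_power(T, 10)[0])  # 10-step occupation probabilities from state 0
```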

3.3 Conditions for Asymptotic Learning

In the threshold-based strategy proposed above, with the threshold $\ell$ fixed and supposing that we are in a cascade, the number of silent agents between two active agents (if any; we show that active agents cannot arrive consecutively in the proof of Claim 1 in Appendix A.1) is determined solely by the (prior) likelihood ratio of the first silent agent who arrives right after an active agent. Denote by $N_k$ the number of silent agents between the $k$-th and $(k+1)$-th active agents. One necessary condition for asymptotic learning is that $N_k$ goes to infinity with $k$. If this does not happen, then we always have a positive probability (lower-bounded by a constant) of stopping a cascade, which would stop every correct cascade too. However, if $N_k$ goes to infinity too fast, then we cannot guarantee that a wrong cascade will eventually be stopped. Ergo, to achieve asymptotic learning, the key is to choose questions that carefully control the growth rate of $N_k$. In the next section, we detail an implementation using only a 1-bit question for each agent. The precise necessary and sufficient conditions for threshold-based QGBs to achieve learning are presented in Lemma 2.

4 Implementation of the Threshold-based Strategy: Asymptotic Learning Achieved with One-bit Questions

In this section, we present our main result: asymptotic learning is achievable via an explicit construction of a question guidebook that uses a threshold-based strategy in the telephone-game network where agents ask exactly a single one-bit question.

Theorem 1.

In the telephone-game network, there exists a question guidebook using 1-bit capacity questions that achieves asymptotic learning.

Before presenting the proof, we first construct the QGB and argue its feasibility and incentive compatibility. Then, we provide the necessary and sufficient conditions under which this class of QGBs achieves asymptotic learning. Finally, we prove the theorem by showing that the conditions are satisfied by the constructed QGB.

Construction of the Question Guidebook Here we implement a QGB using the threshold-based strategy introduced in Section 3.1. Under the constraint of 1-bit capacity on questions, we choose $\ell = 2$. In other words, in the QGB we are going to construct, once two consecutive silent agents receive a private signal of the same type as the current cascade, there is no chance of stopping the cascade at the immediately following active agent.

Without loss of generality, we assume that the true state of the world is B, so that an A cascade is a wrong cascade and a B cascade is a correct one. Moreover, to avoid trivial questions in the QGB, the question guidebook becomes operational only once a cascade is initiated. Thus, an agent not in a cascade uses her private signal and does not need the guidebook.

As described in Section 3.1, we design different questions for active and silent agents in a cascade. Since only 1-bit questions are allowed, Section 3.2 and the accompanying details in Appendix B.5 suggest that the QGB will consist of a partition of the (evolving) history into five sets. Next, we illustrate the QGB by first providing the corresponding Markov chains and then detailing the questions actually asked in each possible state. The Markov chains are depicted in Figure 1, assuming that an A cascade is underway (in a B cascade, the same guidebook applies but with the As and Bs swapped); they prescribe how the information space gets partitioned based on the type of the agent (active or silent), the private signal of the agent, and the response of the immediate predecessor to the question (if one is asked).

Figure 1: Markov chains of proposed threshold-based question guidebook

The corresponding Markov chains of the designed QGB, as depicted on the left of Figure 1, endow every silent agent with the same transition matrix. This transition matrix is induced by two predetermined questions, chosen according to the silent agent's private signal (as described earlier, a silent agent can never stop the cascade); the questions and corresponding information sets are as follows.

Receives private signal B Receives private signal A
Question asked Are you in ? Are you at ?
Action under positive answer Go to Go to
Action under negative answer Go to Go to

Since an active agent cares about whether her immediate predecessor is in the threshold state only when she receives a private signal B, in which case she can stop the cascade (play B), a question is needed only in that case.

Receives private signal B Receives private signal A
Question asked Are you at ? No questions asked
Action under positive answer Go to and stop cascade Go to
Action under negative answer Go to

Before showing that the QGB is feasible and incentive compatible, we point out that while there exists a large set of QGBs that can achieve asymptotic learning even under the 1-bit constraint, the proposed design simplifies the analysis and proofs, and avoids solving a complex recursive system of equations in four variables.
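To see the mechanism end to end, the following schematic simulation (our own sketch, not the exact five-state guidebook of Figure 1) tracks only the semantic content that the 1-bit questions relay: whether a run of two consecutive majority signals occurred between active agents. The spacing schedule for the number of silent agents and all parameter values are our own assumptions, tuned to these toy parameters.

```python
import math
import random

def run_occurred(signals, ell=2):
    """True if signals contain a run of ell consecutive majority ('A') signals."""
    run = 0
    for s in signals:
        run = run + 1 if s == "A" else 0
        if run >= ell:
            return True
    return False

def cascade_stopped(true_state, p=0.7, active_rounds=150, seed=None):
    """Schematic threshold mechanism in an A cascade: the k-th active agent
    stops iff she holds a B signal and no run of two consecutive A signals
    occurred among the preceding silent agents. The logarithmic spacing
    schedule below is an assumption chosen so the wrong-cascade stopping
    probabilities decay roughly like 1/k for these parameters."""
    rng = random.Random(seed)
    pa = p if true_state == "A" else 1 - p         # P(signal A)
    for k in range(1, active_rounds + 1):
        n_k = max(1, round(13 * math.log(k + 1)))  # assumed N_k schedule
        silent = ["A" if rng.random() < pa else "B" for _ in range(n_k)]
        active_signal = "A" if rng.random() < pa else "B"
        if active_signal == "B" and not run_occurred(silent):
            return True                            # the cascade is stopped
    return False

trials = 300
wrong = sum(cascade_stopped("B", seed=s) for s in range(trials)) / trials
right = sum(cascade_stopped("A", seed=s) for s in range(trials)) / trials
print(f"wrong A cascade stopped: {wrong:.2f}; right A cascade stopped: {right:.2f}")
```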

Feasibility and incentive compatibility of the question guidebook With the proposed QGB in hand, the first step is to verify feasibility and incentive compatibility. Feasibility of this QGB is straightforward: every agent, whether active or silent, knows her current state. Therefore, she can always answer the yes-no question about her current state, passing the feasibility check.

Showing incentive compatibility is equivalent to showing that, for any active agent, only one state can have a likelihood ratio of B over A that crosses 1. Here, we establish a result that applies more generally than to the designed QGB: we prove that in every threshold-based QGB, there is a positive probability of stopping the cascade only when the immediate predecessor of an active agent is at the state where the threshold event does not hold for the current majority.

Definition 6.

Given a question guidebook in which every silent agent uses the same transition matrix, the question guidebook is threshold-based if every silent agent whose neighbor is in a transient state of the Markov chain (as in this question guidebook) advances toward the threshold state upon receiving a private signal in the observed majority, and moves to the reset state upon receiving a private signal in the observed minority. Furthermore, active agents who continue the cascade bring all feasible sequences of private signals to a single state.

Lemma 1.

In threshold-based question guidebooks, active agents can only stop the cascade at the state where the threshold event has not occurred.

To prove Lemma 1, we first need to guarantee that there exists at least one silent agent between any pair of active agents, i.e., that active agents cannot arrive consecutively. The idea of the proof is that once an active agent fails to stop a cascade, either she received an observed-majority private signal or the cascade would have continued whatever private signal she received. Simple algebra then rules out the possibility of consecutive active agents; see Claim 1 in Appendix A.1. Now, given the current active agent, if the next active agent could stop the cascade at some state other than the threshold-absent state, then an earlier agent would also have had the ability to stop the cascade there, contradicting the choice of the next active agent. The detailed proof is in Appendix A.1.

Since the QGB we constructed is threshold-based and its active agents stop a cascade only at that state, Lemma 1 guarantees incentive compatibility.

Necessary and sufficient conditions for asymptotic learning in threshold-based question guidebooks In order to show that the proposed QGB achieves asymptotic learning, we provide necessary and sufficient conditions for any threshold-based QGB to achieve asymptotic learning.

Definition 7.

Let $q_w(N)$ be the probability that the wrong cascade is stopped when $N_k = N$, where $N_k$ is the number of silent agents between the $k$-th and $(k+1)$-th active agents. Similarly, $q_r(N)$ represents the probability that the right cascade is stopped when $N_k = N$.

Lemma 2.

Given a threshold-based question guidebook that is operational in a cascade, the following three conditions are necessary and sufficient for the question guidebook to achieve asymptotic learning:

  • $N_k \to \infty$ as $k \to \infty$;

  • The growth rate of $N_k$ satisfies $\sum_k q_w(N_k) = \infty$ and $\sum_k q_r(N_k) < \infty$;

  • The transition matrix $T_s T_a$ is irreducible, where $T_s$ is the transition matrix for silent agents and $T_a$ is the transition matrix for active agents who receive observed-majority signals.

The first condition makes the frequency of active agents go to 0 as time goes to infinity; otherwise asymptotic learning cannot be achieved (if a non-vanishing proportion of agents take actions according to their private signals, the probability of the right action is bounded away from 1). Furthermore, we want every wrong cascade to be stopped with probability 1, but we also need the right cascade to have a positive probability of lasting forever. This constrains the maximum and minimum growth rates of $N_k$, as captured by the second condition. The last condition guarantees that, in a cascade with an arbitrary number of silent agents followed by an active agent, all states can be visited. Without it, the QGB cannot correct all kinds of wrong cascades and no learning is achievable.
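The trade-off in the second condition can be seen numerically. Suppose, as a toy assumption, that the stopping probability decays geometrically in the number of silent agents; then a linearly growing $N_k$ makes the stopping probabilities summable, so even a wrong cascade survives with positive probability, while a logarithmically growing $N_k$ makes them sum to infinity, so the cascade is stopped almost surely. A minimal sketch:

```python
import math

def survival(stop_probs):
    """P(the cascade survives every active agent) = prod_k (1 - q(N_k))."""
    s = 1.0
    for q in stop_probs:
        s *= 1 - q
    return s

q = lambda n: 0.5 ** n  # toy stopping probability, geometric in N_k (assumption)

linear = [q(k) for k in range(1, 400)]                       # N_k = k
logarithmic = [q(math.log2(k + 1)) for k in range(1, 400)]   # N_k ~ log k, so q ~ 1/k
print(f"N_k = k:      survival ~ {survival(linear):.3f}")      # > 0: may never be stopped
print(f"N_k ~ log k:  survival ~ {survival(logarithmic):.3f}") # -> 0: stopped a.s.
```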

4.1 Asymptotic Learning is Achieved (Sketch of the Proof of Theorem 1)

Since the last condition in Lemma 2 is trivially satisfied by the designed QGB, most of the proof consists of verifying the first two conditions of Lemma 2. For this, we have to analyze the growth rate of $N_k$ thoroughly. The following paragraphs first characterize the form of $N_k$ and then derive upper and lower bounds on its growth rate. With those bounds in hand, calculations show that a wrong cascade is stopped almost surely and that a right cascade lasts forever with positive probability (lower-bounded by a constant).

Form of the number of silent agents To study the growth rate of $N_k$, we need to know exactly how many silent agents lie between each pair of consecutive active agents.

Prior to this, we need to specify the functions $q_r$ and $q_w$. With threshold $\ell = 2$, two consecutive majority signals in a cascade will continue the cascade at the next active agent. Simple combinatorics yields $q_r$ and $q_w$ as follows:

(1)

Furthermore, by Lemma 1, the likelihood ratio of B versus A at the threshold-absent state is the only parameter that the next active agent needs to compute. Given the likelihood ratio of the first silent agent right after the $k$-th active agent, the $(k+1)$-th active agent is the first agent whose likelihood ratio at that state could cross 1. Conditional on the right (wrong) cascade not yet being stopped, there is a probability that an agent and her successor are both at the threshold-absent state, and the ratio of these probabilities is the likelihood ratio of B versus A conditioned on the event that the cascade continues past those silent agents. Hence, by the definition of a silent agent, every silent agent must have a likelihood ratio at that state below 1. Given that this likelihood ratio is strictly increasing in the number of elapsed silent agents, the number of silent agents between consecutive active agents can be defined mathematically as:

(2)

Then, since every active agent who fails to stop the cascade has only one information set, there is a simple recursive form for the likelihood ratio at the threshold-absent state of the first silent agents following consecutive active agents, obtained from the ratio of the probabilities that the cascade continues. The recursion, a strictly decreasing function (see Appendix A.7.1 for the proof), is given by:

(3)

With the functions $q_r$ and $q_w$ and the recursion (3), $N_k$ is non-decreasing and can be computed iteratively.

Upper bound on the growth rate of $N_k$ If asymptotic learning is achieved under this QGB, every wrong cascade must be stopped, i.e., $\prod_k (1 - q_w(N_k)) = 0$. Thus, if we can find a sequence that upper-bounds $N_k$ for all $k$ while the corresponding stopping probabilities still sum to infinity, then every wrong cascade is stopped almost surely. Finding an upper bound on the growth rate of $N_k$ is therefore equivalent to finding a suitable lower bound on such a dominating sequence. The detailed calculations in Appendix A.3 give

(4)

where the leading constant is a function of $p$ alone.

Wrong cascade will be stopped almost surely Substituting inequality (4) into the second condition of Lemma 2, the probability that a wrong cascade is stopped satisfies (see Appendix A.4 for the detailed calculation)

(5)

Lower bound on the growth rate of $N_k$ and the probability of stopping a right cascade Similarly, to show that a right cascade survives with positive probability, it suffices to show that a suitable lower bound on its survival probability is positive. Thus, we now need a lower bound on the growth rate of $N_k$. Using a technique similar to that used for (4), we derive the opposite inequality in Appendix A.5; the probability that a right cascade is stopped then satisfies

(6)

Since the series converges by the ratio test, the RHS of (6) is less than 1, so a right cascade can last forever with positive probability.

Now, every wrong cascade will be stopped, but there is a positive probability that a right cascade continues forever, so the second condition is satisfied. Furthermore, since the lower and upper bounds on the growth rate confine $N_k$ to a finite interval whose endpoints grow without bound, $N_k$ goes to infinity as $k$ goes to infinity, so the first condition also holds. Thus, asymptotic learning is achieved under this QGB. Once we are out of a cascade, agents use their private signals, and that initiates another cascade, biased towards a right cascade. Nevertheless, every wrong cascade is stopped in finite time, and an unstoppable right cascade occurs after finitely many stopped right cascades. Thus, learning happens almost surely, instead of just in probability as in Definition 1, and learning occurs in finite time.

5 Conclusion

We have shown that in the sequential social learning model of BHW [1], agents can avoid wrong cascades and achieve asymptotic learning by asking one well-designed binary question of the preceding agent. To do this, we develop a question guidebook that the agents can use to ask questions such that the queried agent can always answer the question (feasibility) and each agent always asks a question that best serves her self-interest (incentive compatibility). Determining the distribution of the time to learning, or finding the best question guidebook to minimize some statistic of the time to learning, is left for future work. Generalizing from binary private signals to other discrete signals and then to the framework of [4], or considering multiple states of nature [5], is also left for future work.

References

  • [1] S. Bikhchandani, D. Hirshleifer, and I. Welch, “A theory of fads, fashion, custom, and cultural change as informational cascades,” Journal of political Economy, vol. 100, no. 5, pp. 992–1026, 1992.
  • [2] I. Welch, “Sequential sales, learning, and cascades,” The Journal of finance, vol. 47, no. 2, pp. 695–732, 1992.
  • [3] A. V. Banerjee, “A simple model of herd behavior,” The quarterly journal of economics, vol. 107, no. 3, pp. 797–817, 1992.
  • [4] L. Smith and P. Sørensen, “Pathological outcomes of observational learning,” Econometrica, vol. 68, no. 2, pp. 371–398, 2000.
  • [5] P. N. Sørensen, “Rational social learning,” Ph.D. dissertation, Massachusetts Institute of Technology, 1996.
  • [6] W. Hann-Caruthers, V. V. Martynov, and O. Tamuz, “The speed of sequential asymptotic learning,” Journal of Economic Theory, vol. 173, pp. 383–409, 2018.
  • [7] D. Acemoglu, A. Makhdoumi, A. Malekian, and A. Ozdaglar, “Fast and slow learning from reviews,” National Bureau of Economic Research, Tech. Rep., 2017.
  • [8] D. Acemoglu, M. A. Dahleh, I. Lobel, and A. Ozdaglar, “Bayesian learning in social networks,” The Review of Economic Studies, vol. 78, no. 4, pp. 1201–1236, 2011.
  • [9] I. H. Lee, “On the convergence of informational cascades,” Journal of Economic theory, vol. 61, no. 2, pp. 395–411, 1993.
  • [10] F. Gul and R. Lundholm, “Endogenous timing and the clustering of agents’ decisions,” Journal of political Economy, vol. 103, no. 5, pp. 1039–1066, 1995.
  • [11] X. Vives, “Learning from others: a welfare analysis,” Games and Economic Behavior, vol. 20, no. 2, pp. 177–200, 1997.
  • [12] S. Huck and J. Oechssler, “Informational cascades with continuous action spaces,” Economics Letters, vol. 60, no. 2, pp. 163–166, 1998.
  • [13] E. Mossel, A. Sly, and O. Tamuz, “Strategic learning and the topology of social networks,” Econometrica, vol. 83, no. 5, pp. 1755–1794, 2015.
  • [14] T. M. Cover, “Hypothesis testing with finite statistics,” The Annals of Mathematical Statistics, vol. 40, no. 3, pp. 828–835, 1969.
  • [15] M. E. Hellman and T. M. Cover, “Learning with finite memory,” The Annals of Mathematical Statistics, pp. 765–782, 1970.
  • [16] K. Drakopoulos, A. Ozdaglar, and J. Tsitsiklis, “On learning with finite memory,” arXiv preprint arXiv:1209.1122, 2012.
  • [17] E. Kamenica and M. Gentzkow, “Bayesian persuasion,” American Economic Review, vol. 101, no. 6, pp. 2590–2615, 2011.
  • [18] L. Rayo and I. Segal, “Optimal information disclosure,” Journal of political Economy, vol. 118, no. 5, pp. 949–987, 2010.
  • [19] D. Easley and J. Kleinberg, “Networks, crowds, and markets,” Cambridge Books, 2012.
  • [20] T. N. Le, V. G. Subramanian, and R. A. Berry, “Quantifying the utility of imperfect reviews in stopping information cascades,” in Decision and Control (CDC), 2016 IEEE 55th Conference on.   IEEE, 2016, pp. 6990–6995.
  • [21] J. Hedlund, “Bayesian persuasion by a privately informed sender,” Journal of Economic Theory, vol. 167, pp. 229–268, 2017.
  • [22] J. C. Ely and M. Szydlowski, “Moving the goalposts,” Working paper, Tech. Rep., 2017.
  • [23] I. Monzón, “Aggregate uncertainty can lead to incorrect herds,” American Economic Journal: Microeconomics, vol. 9, no. 2, pp. 295–314, 2017.
  • [24] T. N. Le, V. G. Subramanian, and R. A. Berry, “Information cascades with noise,” IEEE Transactions on Signal and Information Processing over Networks, vol. 3, no. 2, pp. 239–251, 2017.
  • [25] Z. Zhang, E. K. Chong, A. Pezeshki, and W. Moran, “Hypothesis testing in feedforward networks with broadcast failures,” IEEE Journal of Selected Topics in Signal Processing, vol. 7, no. 5, pp. 797–810, 2013.
  • [26] Y. Wang and P. M. Djurić, “Social learning with bayesian agents and random decision making,” IEEE Transactions on Signal Processing, vol. 63, no. 12, pp. 3241–3250, 2015.
  • [27] T. N. Le, V. G. Subramanian, and R. A. Berry, “Learning from randomly arriving agents,” in Communication, Control, and Computing (Allerton), 2017 55th Annual Allerton Conference on.   IEEE, 2017, pp. 196–197.
  • [28] W. P. Tay, J. N. Tsitsiklis, and M. Z. Win, “Bayesian detection in bounded height tree networks,” IEEE Trans. Signal Process., vol. 57, no. 10, pp. 4042 – 4051, Oct. 2009.
  • [29] K. Drakopoulos, A. Ozdaglar, and J. N. Tsitsiklis, “When is a network epidemic hard to eliminate?” Mathematics of Operations Research, vol. 42, no. 1, pp. 1–14, 2016.
  • [30] W. P. Tay, J. N. Tsitsiklis, and M. Z. Win, “On the subexponential decay of detection error probabilities in long tandems,” IEEE Transactions on Information Theory, vol. 54, no. 10, pp. 4767–4771, 2008.
  • [31] J. Wu, “Helpful laymen in informational cascades,” Journal of Economic Behavior & Organization, vol. 116, pp. 407–415, 2015.
  • [32] S. Vaccari, M. Scarsini, and C. Maglaras, “Social learning in a competitive market with consumer reviews,” 2016.
  • [33] D. Acemoglu, A. Makhdoumi, A. Malekian, and A. Ozdaglar, “Informational braess’ paradox: The effect of information on traffic congestion,” arXiv preprint arXiv:1601.02039, 2016.
  • [34] Y. Peres, M. Z. Racz, A. Sly, and I. Stuhl, “How fragile are information cascades?” arXiv preprint arXiv:1711.04024, 2017.
  • [35] A. Banerjee and D. Fudenberg, “Word-of-mouth learning,” Games and economic behavior, vol. 46, no. 1, pp. 1–22, 2004.
  • [36] L. Smith and P. Sørensen, “Rational social learning with random sampling,” Available at SSRN 1138095, 2008.
  • [37] D. Sgroi, “Optimizing information in the herd: Guinea pigs, profits, and welfare,” Games and Economic Behavior, vol. 39, no. 1, pp. 137–166, 2002.
  • [38] B. Celen and S. Kariv, “Observational learning under imperfect information,” Games and Economic Behavior, vol. 47, no. 1, pp. 72–86, 2004.
  • [39] S. Callander and J. Hörner, “The wisdom of the minority,” Journal of Economic theory, vol. 144, no. 4, pp. 1421–1439, 2009.
  • [40] J. Ho, W. P. Tay, and T. Q. Quek, “Robust detection and social learning in tandem networks,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on.   IEEE, 2014, pp. 5457–5461.
  • [41] Y. Song, “Social learning with endogenous network formation,” arXiv preprint arXiv:1504.05222, 2015.
  • [42] J. A. Bohren and D. Hauser, “Bounded rationality and learning: A framework and a robustness result,” 2017.
  • [43] C. Chamley and D. Gale, “Information revelation and strategic delay in a model of investment,” Econometrica: Journal of the Econometric Society, pp. 1065–1085, 1994.
  • [44] Y. Zhang, “Robust information cascade with endogenous ordering,” working paper, 2009. Available at: http://www.gtcenter.org/Archive/2010/Conf/Zhang1085.pdf
  • [45] D. Sgroi, “Irreversible investment and the value of information gathering,” Economics Bulletin, vol. 21, no. 4, 2003.
  • [46] T. N. Le, V. G. Subramanian, and R. A. Berry, “The value of noise for informational cascades,” in Information Theory (ISIT), 2014 IEEE International Symposium on.   IEEE, 2014, pp. 1101–1105.
  • [47] ——, “The impact of observation and action errors on informational cascades,” in Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on.   IEEE, 2014, pp. 1917–1922.
  • [48] T. Le, V. Subramanian, and R. Berry, “Bayesian learning with random arrivals,” in Information Theory (ISIT), 2018 IEEE International Symposium on.   IEEE, 2018, pp. 6990–6995.
  • [49] J. N. Tsitsiklis, Decentralized detection, ser. Advances in Statistical Signal Processing, Vol. 2: Signal Detection.   JAI Press, 1989.
  • [50] J. Koplowitz, “Necessary and sufficient memory size for m-hypothesis testing,” IEEE Transactions on Information Theory, vol. 21, no. 1, pp. 44–46, 1975.
  • [51] M. Dia, “On decision making in tandem networks,” Master’s thesis, Massachusetts Institute of Technology, 2009.

Appendix A Appendix: Proofs and Calculations

A.1 Proof of Lemma 1

Proof.

Prior to showing Lemma 1, we first establish a useful property of threshold-based question guidebooks.

Claim 1.

Given a threshold-based question guidebook which is feasible and incentive-compatible, active agents cannot arrive consecutively.

Proof.

First, note that the arrival of an active agent means that the cascade is still ongoing and the question guidebook is operational. Then assume that the active agent is in a state belonging to the set of states at which she can stop the cascade.

We know for certain that the likelihood ratio of the following agent at this state is less than 1. The reason is that the corresponding event contains two types of histories. In the first type, the predecessor was at a stopping state but received an observed-majority signal; since the likelihood ratio at every stopping state is below 1, it remains below 1, and note that the agent does not stop the cascade. In the other type, the predecessor was at a non-stopping state; here we also obtain a likelihood ratio below 1, because she could not have stopped the cascade even after receiving an observed-minority signal in any of those states.

Therefore, we can conclude that the likelihood ratio remains below 1, which guarantees that the next agent is a silent agent, not an active agent. ∎

Given that every active agent goes to the same state once she knows she cannot stop the cascade, Claim 1 tells us that the next agent must be a silent agent. With the claim, we can prove the lemma by contradiction. Suppose the next active agent can stop the cascade at some state other than the threshold-absent state. By the allowed transitions in the silent agents' Markov chain, the likelihood ratio at that state must already have crossed 1 at some earlier agent, so that earlier agent could already stop the cascade there and must have been active, which contradicts the choice of the next active agent. ∎

A.2 Proof of Lemma 2

First, we show that if any one of these three conditions fails, then asymptotic learning is not achievable.

The check for the first condition is straightforward: if it fails, then $N_k$ is bounded along a subsequence, so the probability of stopping a cascade at the corresponding active agents is lower-bounded by a positive constant. Consequently, a correct cascade is always stopped in finite time, and asymptotic learning is not possible.

For the second condition, if $\sum_k q_w(N_k) < \infty$, then there is a positive probability that a wrong cascade lasts forever, so asymptotic learning cannot be achieved. If $\sum_k q_r(N_k) = \infty$, then we keep stopping every cascade, whether it is a right cascade or not. Asymptotic learning again fails, because there is always a positive probability of either being in a wrong cascade or not being in a cascade at all.

Finally, if the product of the transition matrices is not irreducible, then there are some states that either cannot be accessed or from which we cannot return to the other states. This implies that cascades cannot be stopped at those states. Hence, irreducibility of the product is necessary for asymptotic learning.

In the other direction, suppose the first and second conditions are satisfied; then we keep stopping wrong cascades but not every correct cascade. With the third condition, if a cascade, whether correct or wrong, is not stopped by the upcoming active agent, it has a positive probability of being stopped by at least one of the subsequent active agents, because the directed graph corresponding to the product of the transition matrices is strongly connected. Now, since all wrong cascades are stopped in finite time but some right cascades last forever, asymptotic learning is achieved.

A.3 Upper bound on the growth rate of $N_k$

Finding the upper bound on $N_k$ is analogous to finding a dominating sequence satisfying the summability requirement of Lemma 2.

The following calculations derive such a sequence.

First, $N_k$ is the difference of the indices between the $k$-th active agent and the next active agent. Hence, we get the following equation:

Now, using the recursive formula of likelihood ratio stated in (3), we get the following equation:

Both factors lie strictly between 0 and 1, so taking logarithms we obtain

Using the recursive form in (3) once again, we get

From this point onwards, we need a lower bound on one term and an upper bound on the other. For ease of reading, we place the derivation of the lower bound in Appendix A.7.4; the upper bound follows from Claim 3 in Appendix A.7.2.

Combining the result in Appendix A.7.4, which gives a lower bound whose constant is a function of $p$ alone, with the result in Appendix A.7.2, we can pick the dominating sequence as

(7)

A.4 Calculation of the probability that a wrong cascade will be stopped

First, we use the lower bound derived in Appendix A.7.3 to get the first inequality

Now, we use the sequence that upper-bounds the growth rate of $N_k$ to replace the product, and we apply the lower-bound result of Appendix A.7.4 for large enough $k$.

where .

A.5 Lower bound on the growth rate of $N_k$

Finding the lower bound on $N_k$ is analogous to finding a sequence that lower-bounds $N_k$ for all $k$. In contrast to the computation of the upper bound, we now want to upper-bound the following expression:

Since the relevant factors lie strictly between 0 and 1, we can lower-bound the expression and then take logarithms on both sides of the inequality to get

Similarly, we can replace the term by the upper bound derived in Appendix A.7.4 for large enough $k$. However, unlike what we did when upper-bounding the growth rate of $N_k$, we now keep this form and carry it into the calculation of the probability that a right cascade is eventually stopped. In other words,

A.6 Calculation of the probability that a right cascade is eventually stopped

Taking the lower bound derived in Appendix A.5, we get the following inequality:

where the constants are the same as in Appendix A.4.

Now, the rest of the calculation uses well-known upper and lower bounds on the logarithm. With the calculations detailed in Appendix A.7.5, we get that

where

Now, using the fact established in Appendix A.7.4 for large enough $k$, the ratio test shows that the series converges, and the desired bound follows.

A.7 Technical Claims and Calculations

A.7.1 Claim 2 and its proof

Claim 2.

The likelihood ratio in (3) is a strictly decreasing function.

Proof.

First, the continuation-probability ratio is a strictly increasing function (see the calculations in Appendix A.7.4). Given the form in (2), the likelihood ratio is then strictly decreasing. ∎

A.7.2 Claim 3 and its proof

Claim 3.

can be upper-bounded by .

Proof.

First, .

Then, using the upper bound in Appendix A.7.3 and Taylor’s expansion, we know that

A.7.3 Calculation of the upper bound

First, in this question guidebook, the relevant probability has a closed form given by

With this closed form, a simple bound holds for all $k$.

For readers curious about the exact difference, its closed form involves a hypergeometric function. We will not need this detail, as the bound above is good enough for our results.

A.7.4 Convergence and bounds