Itai Arieli and Manuel Mueller-Frank
Abstract This article provides a model of social learning where the order in which actions are taken is determined by an |$m$|-dimensional integer lattice rather than along a line as in the herding model. The observation structure is determined by a random network. Every agent links to each of his preceding lattice neighbours independently with probability |$p$|, and observes the actions of all agents that are reachable via a directed path in the realized social network. For |$m\geq 2$|, we show that as |$p<1$| increases to one, (1) so does the asymptotic proportion of agents who take the optimal action, (2) this holds for any informative signal distribution, and (3) bounded signal distributions might achieve higher expected welfare than unbounded signal distributions. In contrast, if signals are bounded and |$p=1$|, all agents select the suboptimal action with positive probability.
1. Introduction Social learning forms an important part of daily life. We observe the choices of others and take them into account when making decisions. Typically, our choices are influenced by our own private information as well as by the information we infer from the actions of those we observe. The standard herding model, introduced by Bikhchandani et al. (1992) and Banerjee (1992), features a countable set of fully rational agents who each make a one-time, binary choice in a predetermined sequence. Each agent receives a conditionally independent and identically distributed private signal about the realized state of the world and observes the choices of all his predecessors. In practice, however, the two main features of the herding model, namely a strict sequential order of choices and common knowledge of the history of actions, fail to adequately represent many real-world environments. Consider, for example, the adoption of a new product. As time goes by, an increasing number of individuals become aware of the existence of the product and then face the decision of whether or not to adopt it. Hence the size of the group that decides in any given period grows over time. The active user base of online networks such as Facebook and Twitter might serve as a real-world example, since these networks experienced spectacular growth in the years after their inception. Such online networks also illustrate the non-linear way social information spreads in real-world social networks, where the information a given individual observes might have no overlap with the information of other users. This article analyses social learning in an environment that features (1) a multidimensional choice order, where at each discrete point in time a group of agents simultaneously take an irreversible action, (2) growth in group size over time, and (3) limited observability of the history for each agent, determined by a random social network. The standard herding model provides deep insights into social learning. Bikhchandani et al. (1992) show that rational individuals whose pooled information is sufficient for choosing the optimal action might nevertheless all select a suboptimal action. More precisely, they show that information cascades, whereby from a finite time period onward all agents ignore their private signal and base their choice only on the history of previous choices, might lead to suboptimal choices. In contrast to information cascades, asymptotic learning occurs if the probability of selecting the optimal action goes to one over time.
Smith and Sorensen (2000) show that a necessary and sufficient condition for asymptotic learning is that signals are unbounded. One important big-picture implication of our analysis is that social learning settings where agents observe a random subset of their predecessors’ actions, including, importantly, the case of an empty observation set, can yield dramatically different predictions than the standard herding model where every agent observes all her predecessors’ actions. In particular, (1) asymptotic learning may occur for an arbitrarily large fraction of agents, (2) this does not depend on signals being unbounded, (3) bounded signal distributions might lead to a larger fraction of asymptotic learners than unbounded signal distributions, and (4) there is no tension between asymptotic learning and information cascades. Our model is based on a lattice structure that determines both the order of choices as well as the observed history of the agents. More precisely, the agents are organized in the non-negative orthant of an |$m$|-dimensional integer lattice, where |$m\geq 2$|⁠. The period in which a given agent selects an action out of a binary set is determined by his lattice distance to the origin. In period |$1$| the agent who is located in the origin acts. All agents whose lattice distance from the origin is exactly |$t-1$| act in period |$t$|⁠.1 The observation structure is determined by a social network that builds on the lattice structure. Every agent in the lattice links with each of his neighbouring predecessors independently with probability |$p$| and observes the actions chosen by all previous agents who form part of his extended social network. That is, an agent observes the actions of those who can be reached by a directed path originating from the agent. The linkage probability |$p$| and the lattice dimension |$m$| impact the connectivity of the observation structure. This model thus captures the more realistic features of social interaction described above and additionally allows us to study random observation structures in a tractable way. Our analysis builds upon percolation theory, which is the study of connected clusters in random graphs.2 The main objective of the article is to establish the relation between the structure of the observation network and equilibrium-learning characteristics. In particular, we are interested in the effect of network connectivity on the occurrence of asymptotic learning, information cascades, asymptotic welfare, and speed of learning. The common concept of asymptotic learning, i.e., that the probability of agents acting optimally converges to one over time, is too strong for our model. In the random network case with |$p<1$| every agent is isolated, i.e., he observes no previous actions, with positive probability and therefore bases his action only on his private signal. To capture the quality of learning, we focus on the proportion of agents who act optimally in a given time period. Our novel notion of learning, called |$\alpha $|-proportional learning, is satisfied if the probability, that in time |$t$| a proportion of at least |$\alpha $| of the agents select the optimal action converges to one as |$t$| goes to infinity.3 Existing models of social learning typically feature two types of equilibrium behaviour: either all agents eventually select the optimal action with probability one, or all agents eventually select the same suboptimal action with positive probability. Our model allows for more nuanced asymptotic equilibrium behaviour. 
This is captured by our proportional learning notion where asymptotically both actions can coexist. We consider two different models that differ in the observability of actions. In the first model, the observation structure of all agents is determined by the same realization of the random lattice and thus the observation structure of all agents is commonly known. This implies that if agent |$\mathbf{x}$| observes the action of another agent |$\mathbf{y}$|⁠, then |$\mathbf{x}$| also observes all the actions that |$\mathbf{y}$| observed, a feature that is shared with the standard herding model. In the second model, the observation set of each agent is his private information and it is determined by an i.i.d. draw from the random lattice, again under the assumption that each agent observes the actions of all agents that he can reach via a directed path in his realized observation network. Thus, if agent |$\mathbf{x}$| observes the action of agent |$\mathbf{y}$|⁠, then |$\mathbf{x}$| does not know the observation set of agent |$\mathbf{y}$|⁠. The main contribution of our article lies in the provision of a natural generalization of the herding model whereby equilibrium learning and welfare properties can be neatly characterized in terms of the connectivity of the observation structure. We first consider the common observations model. Our main result, Theorem 1, establishes that for every |$\alpha \in \left( 0,1\right) $|⁠, there exists a corresponding linkage probability |$p(\alpha )$| such that |$\alpha $|-proportional learning is satisfied for all |$p\in (p(\alpha ),1)$| in any Perfect Bayesian equilibrium of the game, for all signal distributions. Most of the herding literature relates the quality of learning to the properties of the private signals.4 In contrast, Theorem 1 characterizes the equilibrium learning properties in terms of the connectivity parameter |$p$|⁠. The connection between |$\alpha $| and the connectivity level |$p(\alpha )$| that is required for |$\alpha $|-proportional learning is established using percolation theory. More precisely, |$p(\alpha )$| equals the infimal value |$p$| such that, in the two-dimensional lattice, the agent in the origin is observed by infinitely many agents with a probability of at least |$\alpha $|⁠.5 The driving force behind our results is the observational incompleteness in the random network. More precisely, for |$p<1$| each given agent is isolated with positive probability. The actions of such isolated agents reveal more about their private information, and access to a growing set of isolated agents is what enables learning even under bounded signals. The idea of learning based on isolated agents is not new. In fact, Smith (1991) demonstrates that an infinite but proportionally vanishing set of isolated agents (or “sacrificial lambs”) enables learning, and this feature is also employed in Sgroi (2002), Smith and Sorensen (2013), and Acemoglu et al. (2010).6 In contrast to these results, our Theorem 1 identifies the exact relation between proportional learning and the connectivity parameter |$p$|⁠, which endogenously determines the distribution of isolated agents. We then turn to the analysis of the private observations model. One crucial difference between our two models is that in the second model agents can no longer identify who among the observed agents is isolated. Nevertheless, Theorem 2 shows that in the case of private observations the exact same result of Theorem 1 holds. 
To provide some intuition of why the result generalizes, consider a random observation structure where each agent is isolated with a probability bounded away from zero. We show that observing a subsequence of such actions is sufficient for asymptotically learning the true state, even without knowing the realized observation structure of the agents (see Lemma 1). This result forms an important part of the proof of Theorem 2. All our remaining results hold in both observation models. Our next result formally establishes a lower bound for asymptotic expected welfare as a function of the signal distribution and the connectivity parameters. Denote by success probability the probability of the first agent acting optimally. Note that the success probability is determined by the signal distribution and is constant across all equilibria. Theorem 3 provides a lower bound for asymptotic welfare achieved in equilibrium which depends on the success probability and the percolation probability, which equals the probability that the origin is observed by infinitely many nodes. Increasing either the linkage probability or the lattice dimension strictly increases the asymptotic welfare bound (see Corollary 3 and Corollary 4). Theorem 4 establishes that if the success probability of signal distribution |$F$| is larger than that of signal distribution |$F^{\prime }$|⁠, then there exists a linkage probability threshold |$p^{\prime }<1$| such that the expected asymptotic welfare under |$F$| is larger than under |$F^{\prime }$|⁠, for all linkage probabilities |$p\in \left( p^{\prime },1\right) $|⁠. Thus for linkage probabilities close to one, a bounded signal distribution achieves higher asymptotic welfare than an unbounded signal distribution if the former’s success probability is larger. In contrast, in the standard herding model any unbounded signal distribution yields higher asymptotic welfare than every bounded signal distribution. We next compare the random network structure to the deterministic structure when |$p=1$|⁠. The analysis of the deterministic network establishes that Smith and Sorensen (2000) characterization of asymptotic learning effectively carries forward to the multidimensional deterministic observational network. Our Theorem 5 states that |$\alpha $|-proportional learning is satisfied for any |$\alpha <1$| if private signals are unbounded, and fails for every |$\alpha >0$| if private signals are bounded.7 Therefore, if signals are bounded, the equilibrium learning properties feature a sharp discontinuity at |$p=1$|⁠. Finally, we provide the results of a simulation for a specific set of parameters, to illustrate the behaviour along the equilibrium path and our formal results about asymptotic behaviour. A fundamental theoretical and empirical question concerns the relation between network structure and the quality of learning. Our analysis delivers a clear answer for social networks generated by random lattices. In particular, we show that the proportion of agents that learn crucially depends on the existence of a large yet incomplete connected component within the social network. In our lattice model this is determined by the percolation probability. Our results suggest that general random graphs that feature, for each agent, a bounded number of direct connections and a positive probability of being isolated exhibit similar learning properties as the random lattice. 
Thus, the testable relation that emerges from our analysis is that a denser, more connected (yet incomplete) network achieves a higher proportion of agents acting optimally in the long run. The rest of the article is organized as follows. Section 2 introduces the model with commonly known observations and provides our main results on |$\alpha $|-proportional learning. Section 3 considers the model with private observations and establishes that the |$\alpha $|-proportional learning result carries forward. Section 4 formally establishes the connection between connectivity and expected welfare. Section 5 considers the deterministic observation structure and provides a characterization of |$\alpha $|-proportional learning in terms of the signal distribution. Section 6 illustrates our results via simulations. Section 7 concludes. All proofs are relegated to the Appendix. 2. Learning with Commonly Known Observations A countably infinite set of agents |$N$| are organized in the non-negative orthant of an |$m$|-dimensional integer lattice |$Z_{+}^{m}\subset \mathbb{R} ^{m}$| whose vertices are |$m$|-tuples of integers. We identify each agent with a corresponding point |$\mathbf{x}$| in the lattice. Each agent |$\mathbf{x}$| makes a single, irreversible decision, |$a_{\mathbf{x}}\in A=\{0,1\}$|⁠, under uncertainty, which is represented by a binary state space |$\Omega =\{0,1\}$|⁠. The true state of the world is drawn according to a uniform prior in period |$t=0$|⁠. The payoff of each agent is given by \begin{eqnarray*} u(a,\omega )= \begin{cases} & 1\text{ if }a=\omega \\ & 0\text{ otherwise.}% \end{cases}% \end{eqnarray*} Agents do not know the realized state but each agent |$\mathbf{x}$| observes a private signal |$s_{\mathbf{x}}$| belonging to a measurable signal space |$S$|⁠. The distribution according to which the signal |$s_{\mathbf{x}}$| is drawn depends on the state of the world, that is, |$F=(F_0,F_1)$| where |$F_{0},F_{1}\in \Delta \left( S\right) $|⁠. Signals are conditionally independent and identically distributed across agents. We assume that the probability measures |$F_{0}$| and |$F_{1}$| are absolutely continuous with respect to each other but are not identical so that some signals are informative with positive probability, and almost surely not perfectly informative. The lattice determines the order of choices of the agents. Define the lattice distance function |$d_{m}:Z_{+}^{m}\times Z_{+}^{m}\rightarrow \mathbb{R} $| as follows: for two points |$\mathbf{x,y}\in Z_{+}^{m}$| the lattice distance |$d_{m}(\mathbf{x,y})$| is given by |$d_{m}(\mathbf{x,y})=\sum_{i=1}^{m}\left\vert \mathbf{x}_{i}\mathbf{-y}_{i}\right\vert $|⁠. There are countable decision periods. Let |$B_{t}$| denote the set of agents at a lattice distance of |$t$| from the origin. In period |$t$||$=1,2,...$|⁠, all agents in |$B_{t-1}$| simultaneously make their decision. In particular, this means that the number of agents acting in a given period is increasing with the number of periods. In the herding model agents observe the history of choices of all their predecessors. By organizing agents in a lattice, one can distinguish between the order in which actions are taken and the observed actions of a given agent. We represent the observation structure of agents by a network. In general, a network |$G=(V,E)$| is represented by two sets: a set of vertices |$V $|⁠, and a set of edges |$E\subset V^{2}$|⁠, that represents the links between agents. 
In particular, for two vertices |$\mathbf{x,y}$| the directed edge |$\mathbf{xy}$| represents a link from |$\mathbf{x}$| to |$\mathbf{y}$|. Our observation structure is determined by the following (random) directed social network |$G_{p}$| with linkage probability |$p\in \lbrack 0,1]$|. A realization of the network is denoted by |$G=(Z_{+}^{m},E^{m})$|, while |$G_{1}=(Z_{+}^{m},E_{1}^{m})$| denotes the deterministic network, and |$G_{p}=(Z_{+}^{m},E_{p}^{m})$| denotes the random variable generating the network. Let |$\mathbf{e}^{j}$| denote the |$j$|th unit vector in |$\mathbb{R}^{m}$|. The edge set |$E_{1}^{m}$| is given by |$E_{1}^{m}=\left\{ \mathbf{xy}:\mathbf{x}=\mathbf{y}+\mathbf{e}^{j}\text{ for some }j=1,...,m\right\} $|. For |$p<1$| we have that every realization |$E^{m}$| of |$E_{p}^{m}$| satisfies |$E^{m}\subset E_{1}^{m}$|. That is, |$E_{p}^{m}$| is a random collection of edges in |$E_{1}^{m}$|, each realized independently with probability |$p$|. The random network is drawn at time |$t=0$| independently of the state of the world |$\omega $|. The observation structure is linked to the social network as follows. A given agent |$\mathbf{x}$| observes the previous actions of all agents that form part of his extended social network, i.e., of all those agents who can be reached from |$\mathbf{x}$| through a directed path.8 For a graph |$G$|, the distance |$d_{G}(\mathbf{x,y})$| between nodes |$\mathbf{x}$| and |$\mathbf{y}$| is equal to the length of the shortest directed path from |$\mathbf{x}$| to |$\mathbf{y}$|. If there exists no connecting directed path, the distance is set to infinity. Formally, each agent |$\mathbf{x}$| observes the actions of all agents in |$B_{G}(\mathbf{x})$|, where |$B_{G}(\mathbf{x})=\{\mathbf{y}\in Z_{+}^{m}:d_{G}\left( \mathbf{x,y}\right) $| is finite|$\}$|.9 Figure 1 displays the deterministic social network |$G_{1}$| and the observation set of agent |$\mathbf{x}=(2,3)$|, who lies at a lattice distance of |$5$| from the origin and therefore makes his choice in period |$t=6$|. All agents denoted by a circle with an empty interior lie on a directed path from |$\mathbf{x}$| and their actions are observed by agent |$\mathbf{x}$|. Figure 1. The observation set of agent |$(2,3)$| in |$G_{1}$|. The realized observation structure is commonly known among all agents while the realized actions are not. In Section 3, we relax the assumption of common knowledge of the realized observation network |$G$|. Instead, there we assume that the observation set of each agent is his private information. In the one-dimensional case of |$m=1$| and |$p=1$|, the graph |$G_{1}$| is the infinite line network with agent |$\mathbf{0}$| as the origin. Therefore, the one-dimensional case with |$p=1$| corresponds to the standard herding model: agents decide in strict sequential order and observe the actions of all their predecessors. By increasing the dimension |$m$|, we increase the number of agents who act simultaneously in each round. We restrict attention to dimensions larger than one. For a given signal distribution |$F$|, denote by |$\Gamma _{p}^{m}$| the game with the above random observation structure, linkage probability |$p$|, and lattice dimension |$m$|.
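To make the construction concrete, the following Python sketch realizes the random lattice on a finite box for |$m=2$| and computes the observation set |$B_{G}(\mathbf{x})$| of a given agent by following realized edges towards the origin. This is an illustration under our own assumptions (the finite box size, the seed, and the function names sample_edges and observation_set are purely illustrative), not part of the formal model.
\begin{verbatim}
import random

def sample_edges(T, p, rng):
    """Realize the random lattice G_p on the finite box {0,...,T}^2.

    For every agent x and every coordinate j with x_j >= 1, the directed
    edge x -> x - e^j is present independently with probability p."""
    edges = {}
    for x1 in range(T + 1):
        for x2 in range(T + 1):
            edges[(x1, x2)] = [y for y in ((x1 - 1, x2), (x1, x2 - 1))
                               if min(y) >= 0 and rng.random() < p]
    return edges

def observation_set(x, edges):
    """B_G(x): all agents reachable from x via a directed path (x excluded)."""
    seen, stack = set(), [x]
    while stack:
        for y in edges[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

if __name__ == "__main__":
    rng = random.Random(1)
    E = sample_edges(T=6, p=0.8, rng=rng)
    x = (2, 3)                     # lattice distance 5, acts in period 6
    print(sorted(observation_set(x, E)))
\end{verbatim}
With |$p=1$| the sketch reproduces the deterministic observation set of Figure 1; with |$p<1$| the realized set is a random subset of it and may be empty, in which case the agent is isolated.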
The information set |$I_{\mathbf{x}}$| of agent |$\mathbf{x}$| is given by his signal realization |$s_{\mathbf{x}}$| and the actions chosen by the agents he observes, i.e., |$I_{\mathbf{x}}=\left( s_{\mathbf{x}},\left\{ a_{\mathbf{y}}\right\} _{\mathbf{y}\in B_{G}(\mathbf{x})}\right) $|. Denote by |$\mathcal{I}_{\mathbf{x}}$| the set of all possible information sets of agent |$\mathbf{x}$|. A strategy |$\sigma _{\mathbf{x}}$| of agent |$\mathbf{x}$| is a measurable mapping that assigns a (mixed) action to each possible information set, i.e., |$\sigma _{\mathbf{x}}:\mathcal{I}_{\mathbf{x}}\rightarrow \Delta \left( A\right) .$| A strategy profile is given by the set of strategies of all agents, i.e., |$\sigma =\left\langle \sigma _{\mathbf{x}}\right\rangle _{\mathbf{x}\in Z_{+}^{m}}$|. The game |$\Gamma _{p}^{m}$|, together with a strategy profile |$\sigma $|, determines a probability measure |$\mathbf{P}_{\sigma ,p}^{m}$| over the state space, the set of observation networks, the vector of signals, and the process of actions for all agents. As is common in the literature, we solve the game for its Perfect Bayesian equilibria, i.e., the set of strategy profiles |$\sigma $| such that, for each agent |$\mathbf{x}$|, |$\sigma _{\mathbf{x}}$| maximizes the expected utility of |$\mathbf{x}$| given the strategies of all other agents. We now briefly introduce some key concepts from directed percolation that are important for our analysis.10 A directed percolation model differs from our random observation structure in the direction of the edges. More precisely, the set of nodes in directed percolation is |$Z_{+}^{m}$| and thus coincides with ours. However, the set of edges |$\hat{E}_{1}^{m}$| has the opposite orientation, i.e., |$\mathbf{x}\mathbf{y}\in \hat{E}_{1}^{m}$| iff |$\mathbf{y}=\mathbf{x}+\mathbf{e}^{j}$| for some |$j=1,\ldots ,m$|. As in our model, the edges are realized independently with probability |$p\in \lbrack 0,1]$|. For a realized percolation graph, let |$C$| denote the set of nodes that are connected to the origin node by a directed path. Analogously, in our model |$C$| represents the set of agents who can observe the action of the first agent, who is located in the origin of the lattice. Let |$\boldsymbol{\hat{P}}_{p}^{m}$| be the induced probability measure over the standard percolation random lattice. A measure that plays a fundamental role in our analysis is the percolation probability |$\rho _{p}^{m}=\boldsymbol{\hat{P}}_{p}^{m}(|C|=\infty )$|, which equals the probability that the origin forms part of an infinite component, i.e., the probability of the event where the origin can reach infinitely many nodes via a directed path. Our analysis relies on two crucial results from percolation theory. First, for every dimension |$m\geq 2$| there exists a linkage probability threshold |$p_{c}^{m}\in \left( 0,1\right) $| such that |$\rho _{p}^{m}=0$| for every |$p<p_{c}^{m}$| and |$\rho _{p}^{m}>0$| for every |$p>p_{c}^{m}$|. Second, for values of |$p$| above |$p_{c}^{m}$| the percolation probability |$\rho _{p}^{m}$| is strictly increasing and goes to one with |$p$|. Additionally, note that the percolation threshold |$p_{c}^{m}$| is decreasing in the lattice dimension |$m$|.11 Asymptotic learning is a central concept in the social learning literature. Under asymptotic learning, the probability with which agents select the optimal action converges to one along the sequence of actions.
Smith and Sorensen (2000) establish that in the herding model asymptotic learning critically hinges upon properties of the signal space. Their main result shows that asymptotic learning occurs in any equilibrium if private signals are unbounded and fails in any equilibrium if private signals are bounded. Boundedness and unboundedness are properties of the support of the private beliefs induced by the signals. Let |$q_{\mathbf{0}}$| be the private belief of an agent who selects his action based only on his private signal realization, i.e., |$q_{\mathbf{0}}=\Pr \left( \omega =1\mid s\right) $|. Private signals are bounded if the support of |$q_{\mathbf{0}}$| contains neither zero nor one. Likewise, private signals are unbounded if the support of |$q_{\mathbf{0}}$| contains both zero and one. The notion of asymptotic learning is too strong for our setting. To see this, note that for |$p<1$|, with a fixed positive probability, the observation set of every agent in the random lattice is empty, making him isolated. Hence there is always a positive probability, bounded away from zero, that any given agent will choose the suboptimal action. This raises the question of how one should define an alternative notion of asymptotic learning that is conceptually close to the standard definitions. Our notion of asymptotic learning captures the multidimensional order in which actions are taken. Consider the set of agents who act in period |$t$|. From the perspective of learning, the best possible outcome is that a large proportion of the agents who act in period |$t$| select the optimal action as |$t$| grows. We therefore consider a notion of proportional learning. Recall that |$B_{t}$| is the set of agents at a lattice distance of exactly |$t$| from the origin; these agents take their decision in period |$t+1$|. Let |$b_{t}$| denote the size of |$B_{t}$| and let |$r_{t}$| be the random variable that represents the number of agents in |$B_{t}$| whose actions match the realized state.12 Definition 1. Consider the game |$\Gamma _{p}^{m}$|. A strategy profile |$\sigma $| satisfies |$\alpha $|-proportional learning if |$\lim_{t\rightarrow \infty }\mathbf{P}_{\sigma ,p}^{m}\left( \frac{r_{t}}{b_{t}}\geq \alpha \right) =1$|. Note that our proportional learning notion is demanding, as it requires more than merely that the expected proportion of optimal actions exceed |$\alpha$|. In some environments, such as voting, a threshold proportion determines the overall outcome. Our concept of |$\alpha $|-proportional learning ensures that, asymptotically, a proportion of at least |$\alpha $| acts optimally with probability one. Our main goal is to establish a connection between |$\alpha $| and |$p$|. Definition 2. For every |$\alpha \in (0,1)$| and for every dimension |$m\geq 2$|, let |$p^{m}(\alpha )$| be the infimum over all values |$p$| such that |$\rho _{p}^{m}>\alpha $|. We denote |$p^{2}(\alpha )$| by |$p(\alpha )$|. That is, |$p^{m}(\alpha )$| is the infimum over all connectivity parameters |$p$| for which the percolation probability, i.e., the probability of the origin agent being part of an infinite connected component, exceeds |$\alpha $|. Note that |$p^{m}(\alpha )$| is strictly increasing in |$\alpha $|. We can now state our main result. Theorem 1. For every |$\alpha <1$|, any dimension |$m\geq 2$|, and any signal distribution, |$\alpha $|-proportional learning holds in any Perfect Bayesian equilibrium |$\sigma $| of the game |$\Gamma _{p}^{m}$| for every |$p\in \left( p(\alpha ),1\right) $|.
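Definition 2 and the threshold in Theorem 1 are straightforward to approximate numerically. The following Python sketch estimates the percolation probability |$\rho_{p}^{2}$| by Monte Carlo and inverts it over a grid of |$p$| to obtain a rough proxy for |$p(\alpha )$|; the truncation depth, number of trials, and function names are our own illustrative choices, and reaching a large finite distance is used as an upward-biased proxy for the event |$|C|=\infty $|.
\begin{verbatim}
import random

def reaches_depth(L, p, rng):
    """One draw of directed percolation on Z_+^2: starting from the origin,
    follow open edges x -> x + e^j (each open independently with prob. p)
    level by level; return True if some node at lattice distance L is reached.
    Reaching a large distance L is a finite (upward-biased) proxy for |C| = oo."""
    frontier = {(0, 0)}
    for _ in range(L):
        nxt = set()
        for (x1, x2) in frontier:
            if rng.random() < p:
                nxt.add((x1 + 1, x2))
            if rng.random() < p:
                nxt.add((x1, x2 + 1))
        if not nxt:
            return False
        frontier = nxt
    return True

def percolation_prob(p, L=40, trials=300, seed=0):
    """Monte Carlo estimate of the percolation probability rho_p^2."""
    rng = random.Random(seed)
    return sum(reaches_depth(L, p, rng) for _ in range(trials)) / trials

def p_of_alpha(alpha, step=0.01):
    """Crude numerical proxy for p(alpha) = inf{p : rho_p^2 > alpha}."""
    p = 0.60
    while p < 1.0:
        if percolation_prob(p) > alpha:
            return p
        p += step
    return 1.0

if __name__ == "__main__":
    for p in (0.65, 0.75, 0.85, 0.95):
        print(p, round(percolation_prob(p), 3))
    print("p(0.9) is roughly", round(p_of_alpha(0.9), 2))
\end{verbatim}
The estimates only illustrate the qualitative shape of |$\rho_{p}^{2}$| (zero below the threshold, strictly increasing above it); they are not a substitute for the percolation results cited in the text.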
Theorem 1 provides a neat characterization of the relation between the connectivity parameter |$p$| and the asymptotic proportion of agents who choose the optimal action. As noted, an important feature of the characterization is that it is independent of the particular signal distribution and thus differs conceptually from the equilibrium behaviour in the standard herding model.13 That is, if |$p\in \left( p(\alpha ),1\right) $| and |$m\geq 2,$| then |$\alpha $|-proportional learning holds for every signal distribution in any Perfect Bayesian equilibrium. Thus, the linkage probability |$p(\alpha )$| induces a lower bound of |$\alpha $| for the proportion of agents who take the optimal action in the limit, for any signal distribution. This implies that the asymptotic proportion of agents acting optimally goes to one with |$p$|⁠. As we shall show, this result contrasts with the equilibrium behavior in the deterministic social network (⁠|$p=1$|⁠), thus implying a sharp discontinuity in the limit proportion of agents who take the optimal action when signals are bounded. A necessary condition for |$\alpha $|-proportional learning is that an |$\alpha $|-proportion of agents in a given round have access to an ever-growing history. Under unbounded signals this condition is also sufficient. In the case of bounded signals, observing an arbitrarily large history does not necessarily reveal the true state with probability one, as the actions of some observed agents may reveal no information. However, observing an unboundedly growing number of actions taken in isolation is sufficient for taking the optimal action with a probability that approaches one. Recall that an agent is isolated if he observes no predecessor. In this context, changing the linkage probability |$p$| has two opposing effects. First, increasing |$p$| increases the connectivity of the social network and hence the expected size of the observed history of a given agent. Second, increasing |$p$| decreases the probability that a given agent is isolated. As Theorem 1 shows, the first effect outweighs the second. This dual role of |$p$| also highlights why Theorem 1 fails for the one-dimensional case of |$m=1$|⁠. Here, for every |$p\in \left[ 0,1\right] $|⁠, the number of isolated agents a given agent may observe is bounded by one and thus bounded signals fail to achieve learning. We next provide an outline of the proof of Theorem 1. The key aspect is to establish that for any |$p\in (p(\alpha ),1)$|⁠, a proportion of at least |$\alpha $| of the agents who act at time |$t$| observe an unbounded number of isolated agents with a probability that goes to one as |$t$| goes to infinity. Since the probability that an agent takes the optimal action after observing |$k$| isolated agents goes to one as |$k$| goes to infinity, we can deduce that, asymptotically, an |$\alpha $|-proportion of agents take the optimal action with a probability that goes to one as |$t$| grows. Consider the standard percolation model. For any |$l\in \mathbb{N}$|⁠, consider the distribution of the number of nodes at distance |$l$| from the origin that can be reached by a connected path. We denote this random variable by |$|\hat{\xi}_{l}^{0}|$|⁠. Consider the case where |$p\in \left( p_{c}^{m},1\right) $|⁠. Our Lemma 3 establishes that for any |$M\in \mathbb{N}$| and every |$\epsilon >0$| there exists a time period |$l$| such that the probability that |$|\hat{\xi}_{t}^{0}|\geq M$| for any period |$t\geq l$| is at least |$\rho _{p}^{m}-\epsilon $|⁠. 
Thus the origin reaches an arbitrarily large number of agents at a sufficiently large distance with a probability that is arbitrarily close to the percolation probability. We then use the following important identification between the standard percolation model and our observational model. Consider any agent |$\mathbf{x}\in B_{t}$| with |$\mathbf{x}_{-}=\underset{1\leq i\leq m}{\rm min}\mathbf{x}_{i}\geq l$| for some |$l\in \mathbb{N}$|⁠. Let |$\xi _{l}^{\mathbf{x}}=B_{G}(\mathbf{x})\cap B_{t-l}$| represent the set of agents who are observed by |$\mathbf{x}$| and make their decision |$l$| periods prior to |$\mathbf{x},$| at time |$t-l+1$|⁠. A critical point is the fact that |$|\xi _{l}^{\mathbf{x}}|$| and |$|\hat{\xi}_{l}^{0}|$| have an identical distribution. That is, the number of agents in |$B_{l}$| who can be reached from the origin in the percolation model has the same distribution as the number of agents in |$B_{t-l}$| who are observed by |$\mathbf{x}$|⁠. Therefore, Lemma 3 implies that for every |$M\in\mathbb{N}$| and |$\epsilon>0$| there exists |$l$| such that if agent |$\mathbf{x}$| satisfies |$\mathbf{x}_-\geq l$|⁠, then |$|\xi^\mathbf{x}_l|\geq M$| with a probability of at least |$\rho^m_p-\epsilon$|⁠. Note that each agent is isolated with a probability that is bounded below by |$(1-p)^{m}$|⁠. As a corollary of Lemma 3, using the above identification, we have that for |$k\in \mathbb{N}$| and every |$\epsilon >0$| there exists |$t_{k,\epsilon }$| such that if |$\mathbf{x}_{-}\geq t_{k,\epsilon }$|⁠, then the probability that agent |$\mathbf{x}$| observes at least |$k$| isolated agents is at least |$\hat{\rho}_{p}^{m}-\epsilon $| (see Corollary 5 in Section A.1). Let |$p\in (p(\alpha ),1)$|⁠. Since |$p^{m}(\alpha )\leq p(\alpha )$| it follows that |$\rho _{p}^{m}>\alpha $|⁠. Since an isolated agent must be informative we can choose |$k$| sufficiently large such that any agent who observes at least |$k$| isolated agents takes the optimal action |$a=\omega $| with arbitrarily high probability. Together with Corollary 5 in Section A.1 this implies that for all sufficiently small |$\epsilon$| there exists a |$k$| such that any agent |$\mathbf{x}$| with |$\mathbf{x}_{-}\geq t_{k,\epsilon }$| takes the optimal action with probability at least |$\rho _{p}^{m}-\epsilon >\alpha $|⁠. Since the proportion of agents |$\mathbf{x}\in B_{t}$| with |$\mathbf{x}_{-}\geq t_{k,\epsilon }$| approaches one as |$t$| grows, this implies that the average expected payoff of agents in |$B_{t}$| exceeds |$\alpha $| for all sufficiently large |$t$|⁠. Our notion of |$\alpha $|-proportional learning, however, is more demanding as we require that a proportion of at least |$\alpha $| of the agents in |$B_{t}$| take the correct action with probability one, as |$t$| goes to infinity. We could easily infer |$\alpha$|-proportional learning, using the law of large numbers, if agents’ actions were conditionally independent. This is however not the case. The key missing argument is to show that not only do most agents have a probability of |$\rho _{p}^{m}$| of observing |$k$| isolated agents but it simultaneously holds that there is a |$\rho_{p}^{m}$| proportion of agents in |$B_{t}$| who observe |$k$| isolated agents. To bypass this obstacle, we consider hypothetical agents in |$B_{t}$| who observe only the agents who lie at a distance of |$t_{k,\epsilon } $| from them. 
One can see that in this case, the hypothetical agents observations are “almost” independent in the sense that each agent’s observation set overlaps with only a bounded number of other agents’ observation sets (independently of the time |$t$|⁠). We use the law of large numbers for dependent random variables (see Lemma 5) to deduce that for every |$k$| the proportion of hypothetical agents in |$B_{t}$| who observe at least |$k$| isolated agents lies arbitrarily close to |$\rho _{p}^{m}$| as |$t$| grows. This certainly implies that for every |$k$| the proportion of agents in |$B_{t}$| who observe |$k$| isolated agents lies arbitrarily close to |$\rho _{p}^{m}$| as |$t$| grows. It therefore follows that as |$k$| grows, a proportion of |$\rho^m_p$| of the agents take the correct action with probability one. Therefore, if |$\rho _{p}^{m}>\alpha $| (which holds for |$p\in (p(\alpha ),1)$|⁠), then |$\alpha $|-proportional learning must hold. We next discuss the connection between learning and information cascades in our framework. We start by defining the notion of cascading. Definition 3. Consider a strategy profile |$\sigma $|⁠. Agent |$\mathbf{x}$| cascades given information set |$I_{\mathbf{x}}\in\mathcal{I}_{\mathbf{x}}$|⁠, if there exists an action |$a$| such that |$\sigma _{\mathbf{x}}(s_{\mathbf{x}},h_{\mathbf{x}})=a$| for almost every signal realization |$s_{\mathbf{x}}\in S$|⁠. That is, agent |$\mathbf{x}$| cascades if the action he takes is independent of his private information. Let |$c_{t}$| be the random variable that represents the number of agents in |$B_{t}$| who cascade. The proportion of agents who cascade in period |$t$| is represented by the random variable |$\frac{c_{t}}{b_{t}}$|⁠. Proportional information cascades are now defined as follows. Definition 4. Consider the game |$\Gamma _{p}^{m}$|⁠. An |$\alpha $|-proportional cascade occurs under strategy profile |$\mathbf{\sigma }$| if |$\lim_{t\rightarrow \infty }\mathbf{P}_{\sigma ,p}^{m}\left( \frac{c_{t}}{b_{t}}\geq \alpha \right) =1$|⁠. Intuitively, an information cascade seems to prevent learning as agents select actions disregarding their private signals. In the herding model this intuition is in fact correct: the failure of information cascades is a necessary condition for asymptotic learning. As shown by Smith and Sorensen (2000), the condition required for asymptotic learning in the standard herding model (unbounded signals) rules out cascades.14 In our multidimensional model, however, this relation is reversed for bounded signals. Corollary 1. For every |$\alpha <1$|⁠, any dimension |$m\geq 2$|⁠, and any bounded signal distribution, an |$\alpha $|-proportional cascade occurs in every Perfect Bayesian equilibrium |$\sigma $| of |$\Gamma _{p}^{m}$| for every |$p\in \left( p(\alpha ),1\right)$|⁠. It is not a coincidence that the same linkage probability threshold above which |$\alpha $|-proportional learning holds is used to characterize |$\alpha $|-proportional cascades. While in the standard herding framework asymptotic learning might occur only if information cascades fail, the opposite is true for the proportional concepts in the multidimensional lattice. If signals are bounded, then for |$\alpha $|-proportional learning to hold it is necessary that |$\alpha $|-proportional cascades occur. 
To see this, note that under |$\alpha $|-proportional learning, as time goes by, a proportion of at least |$\alpha $| of agents observe an arbitrarily high number of isolated agents, which implies that their prior belief based on the history is arbitrarily close to the truth. We recall that for any bounded signal distribution there are two ranges of prior beliefs close to zero and one, such that agents cascade if facing a prior within one of these ranges. This implies that, as time goes by, a proportion of at least |$\alpha $| of agents cascade. The proof of the corollary follows directly from the proof of Theorem 1. Under unbounded signals for every interior prior belief the agent selects either action with positive probability. Thus, when signals are unbounded, even when the agent observes a large set of isolated agents he may select either action, with positive probability. However, the signal has to be very extreme to overturn a large set of isolated actions. Hence, among those agents that observe a large number of isolated actions the probability of such extreme signals converges to zero as time goes to infinity. Therefore, even under unbounded signals, the asymptotic proportion of agents for which the private signal does not overturn the action that is optimal conditional only on the observed history is at least |$\alpha $| for |$p\in \left( p(\alpha ),1\right) $| regardless of whether signals are bounded. 3. Learning with Private Observations So far we have assumed that an agent knows the entire observation set of each agent he observes. This assumption is satisfied, for example, in an environment where a direct observation edge serves as a communication link and each agent communicates his observed history to his direct successor. Nevertheless, the assumption fails to hold for many applications. Instead, it is more realistic to assume that the set of agents a given agent observes is his private information. We now analyse the case of private observation sets. Essentially, we show that the exact result of Theorem 1 carries forward at the cost of a more complicated proof. Rather than the commonly known observation graph being determined by one draw of the random lattice |$G_{p}$|⁠, in the private observations model the observation set of each agent |$\mathbf{x}$| is determined by a private i.i.d. draw from |$G_{p}$|⁠. To be more precise, each agent |$\mathbf{x}$| faces a different realization |$G_{\mathbf{x}}$| drawn independently from the random lattice |$G_{p}$|⁠. As in the model with commonly known observation sets, agent |$\mathbf{x}$| observes the action of every agent in |$B_{G_{\mathbf{x}}}(\mathbf{x})$|⁠, which consists of all agents that agent |$\mathbf{x}$| can reach in |$G_{\mathbf{x}}$|⁠, via a directed path originating from |$\mathbf{x}$|⁠. In contrast to the first model, agent |$\mathbf{x}$| now does not know the set of agents observed by any agent |$\mathbf{y\in }B_{G_{\mathbf{x}}}(\mathbf{x})$| as |$G_{\mathbf{y}}$| is |$\mathbf{y}$|’s private information. The random graph process |$\left\{ G_{p,\mathbf{x}}\right\} _{\mathbf{x\in }Z_{+}^{m}}$| generating the observation structures |$\left\{ G_{\mathbf{x}}\right\} _{\mathbf{x\in }Z_{+}^{m}}$| is commonly known among the agents and induces a game |$\hat{\Gamma}_{p}^{m}$|⁠. The following theorem shows that the exact same result of Theorem 1 carries forward to the case of independent realizations of observation sets. Theorem 2. 
Let |$m\geq 2$| and let the private observation structure of each agent |$\mathbf{x\in }Z_{+}^{m}$| be determined by independent draws from |$G_{p}$|⁠. For every |$\alpha <1$| and |$p\in (p(\alpha ),1)$|⁠, |$\alpha $|-proportional learning is satisfied in any Perfect Bayesian equilibrium of |$\hat{\Gamma}_{p}^{m}$|⁠. The proof of Theorem 1 was mainly based on an |$\alpha $|-proportion of asymptotic agents who observe unboundedly many isolated agents. The same logic does not directly apply here. Even though an |$\alpha $|-proportion of asymptotic agents still observe unboundedly many isolated agents, now they do not know who is isolated and who is not. The proof of Theorem 2 is based mainly on a general learning result that we establish. Consider the standard herding model with binary states, binary actions, and a signal distribution |$F=(F_{0},F_{1})$|⁠. Let |$X=\left\{ 1,2,...\right\} $| denote the infinite sequence of agents who act in corresponding time periods. Each agent |$i$| observes the actions of a randomly drawn subset |$K_{i}\subseteq \left\{ 1,2,...,i-1\right\} $|⁠, of his predecessors and receives a conditionally independent signal that is generated according to a distribution |$F$|⁠. We assume that the individual neighbourhoods are independently drawn across agents. For agent |$i$|⁠, let |$k_{i}$| be the probability that his observation set is empty, i.e., |$k_{i}=\mathbf{P}(K_{i}=\emptyset ).$| We assume that the random graph process is commonly known, whereas individual observation sets are the private information of agents. Let |$\tau $| be a Perfect Bayesian equilibrium of this game. Consider an outside observer who observes the actions of a countable subset |$N=\{i_{1},i_{2},\ldots \}\subseteq X$| sequentially (for simplicity of notation, let |$a_{n}$| be the action of agent |$i_{n}$|⁠). That is, at each time |$n\in N$| he observes the history of actions |$h_{n}=(a_{1},\ldots ,a_{{n-1}})\in \left\{ 0,1\right\} ^{n-1}$| of all agents in |$N$| who acted prior to time |$n$|⁠, without knowing the realized observation structure of either the agents in |$N$| or the agents in |$X$|⁠. The observer knows |$\tau $| and the process according to which the observation structure is generated. Let |$p_{n} $| be the conditional probability that the observer assigns to state |$\omega =1$| at time |$n$|⁠, conditional on observing |$h_{n}$|⁠. That is, |$p_{n}=\mathbf{P}_{\tau }(\omega =1|h_{n}).$| We establish the following result. Lemma 1. Let |$k_{i}\geq e>0$| for all |$i\in X$|⁠. For every |$\epsilon >0$| there exists |$n_{\epsilon }$| such that for all |$n>n_{\epsilon }$| we have |$\mathbf{P}_{\tau }(|p_{n}-\omega |\leq \epsilon |\omega )\geq 1-\epsilon $| for every equilibrium strategy |$\tau $|⁠, for |$\omega =0,1$|⁠. The lemma states that if each agent is isolated with a probability that is bounded away from zero, then the belief of the observer who does not know the realized observation sets of the agents, converges in probability to the point belief that assigns probability one to the realized state. We shall outline the proof of Theorem 2 based on Lemma 1. Consider |$p\in (p(\alpha ),1)$|⁠, a Perfect Bayesian equilibrium |$\sigma ,$| and an agent |$\mathbf{x}\in Z_{+}^{m}$|⁠. We can enumerate the agents observed by |$\mathbf{x}$| such that |$B_{G_{\mathbf{x}}}( \mathbf{x})=\{i_{1},\ldots ,i_{n}\}$|⁠. We claim that |$\mathbf{x} $| plays the role of the outside observer in Lemma 1. 
To see this, note that the process by which actions are taken by the agents in |$B_{G_{\mathbf{x}}}(\mathbf{x})$| satisfies the assumption of Lemma 1. That is, agent |$\mathbf{x}$| observes the sequence of actions taken by the agents in |$B_{G_{\mathbf{x}}}(\mathbf{x})$|⁠, knows the process by which the observation sets are generated, but does not know the realized observation sets of the agents in |$B_{G_{\mathbf{x}}}(\mathbf{x})$|⁠. Crucially, it also holds that the probability that the observation set of any agent |$\mathbf{y}\in B_{G_{\mathbf{x}}}(\mathbf{x})$| is empty, is at least |$(1-p)^{m}>0$|⁠. Let |$p_{\mathbf{x}}$| be agent |$\mathbf{x}$|’s conditional probability of state |$\omega =1$| based only on observing the actions of the agents in |$B_{G_{\mathbf{x}}}(\mathbf{x})$|⁠. We get the following corollary from Lemma 1. Corollary 2. For every |$p\in (p(\alpha ),1)$|⁠, any Perfect Bayesian equilibrium |$\sigma $| of |$\hat{\Gamma}_{p}^{m}$|⁠, and any agent |$\mathbf{x}\in Z_{+}^{m}$|⁠, it holds that if |$|B_{G_{\mathbf{x}}}(\mathbf{x})|\geq n_{\epsilon }$|⁠, then |$\boldsymbol{\hat{P}}_{\sigma ,p}^{m}(|p_{\mathbf{x}}-\omega |\leq \epsilon |\omega ,B_{G_{\mathbf{x}}}(\mathbf{x}))\geq1-\epsilon $|⁠. That is, in our random observation model it holds that if agent |$\mathbf{x}$| observes at least |$n_{\epsilon }$| agents, then |$p_{\mathbf{x}}$| assigns a probability of at least |$1-\epsilon $| to the realized state with probability at least |$1-\epsilon $|⁠. This implies, in particular, that if the observation set of |$\mathbf{x}$| is sufficiently large, then he takes the optimal action with arbitrarily high probability. Fix |$p\in (p(\alpha ),1)$|⁠. It follows from Corollary 5 in Section A.1 that for every |$n_{\epsilon }$| there exists a |$t_{n,\epsilon }$| such that if |$\mathbf{x}_{-}\geq t_{n,\epsilon }$| then |$|B_{G_{\mathbf{x}}}(\mathbf{x})|\geq n_{\epsilon }$| with probability at least |$\rho _{p}^{m}-\epsilon $|⁠. Thus Corollary 2 implies that if |$\mathbf{x}_{-}$| is sufficiently large, then |$\mathbf{x}$| takes the optimal action with a probability that is arbitrarily close to |$\rho _{p}^{m}>\alpha .$| We can then apply very similar considerations to those of the proof of Theorem 1 to deduce that |$\alpha $|-proportional learning is satisfied. We now outline the proof of Lemma 1. We let |$l_{n}=\log (\frac{p_{n}}{1-p_{n}})$| be the log-likelihood ratio of |$p_{n}$|⁠. The crucial step in establishing Lemma 1 is to show that there exists |$w>0$| such that conditional on state |$\omega =1$| and the history |$h_{n}$| the expectation of |$l_{n+1}-l_{n}$| is greater than |$w$|⁠, for every |$n$|⁠. That is, \begin{equation} E_{\tau }[l_{n+1}-l_{n}|h_{n},\omega =1]\geq w>0. \label{eq:llr1} \end{equation} (1) To establish Equation (1) we show that there exists |$\beta >1$| such that |$\frac{\mathbf{P}_{\tau }(a_{n}=1|h_{n},\omega =1)}{\mathbf{P}_{\tau }(a_{n}=1|h_{n},\omega =0)}\geq \beta $| for every |$n$|⁠. To see the intuition for the existence of |$\beta $|⁠, consider the case where agent |$i_{n}$| takes his action in isolation. In this case, it clearly holds that action |$a_{n}=1$| is more likely under |$\omega =1$| than under |$\omega =0$|⁠. We show that since agent |$i_{n}$| is isolated with a probability of at least |$e>0$|⁠, independently of |$n$|⁠, the ratio is bounded away from |$1$|⁠, even when the realized observation set of agent |$i_{n}$| is not known. 
That is, the fact that |$i_{n}$| is isolated with probability of at least |$e>0$| makes his action informative to the observer, even if the observer cannot determine whether |$i_{n}$| takes his action in isolation. This is shown to imply that the expectation in Equation (1) must be bounded away from zero. We then show that Equation (1) implies that, conditional on state |$\omega =1$|⁠, the log likelihood ratio |$l_{n}$| converges to infinity with probability one which in turn implies that |$p_{n}$| converges to |$1$| with probability one. 4. Asymptotic Welfare and Connectivity In this section, we focus on the relation between asymptotic welfare and the connectivity parameters of the random observation structure. The results we establish here hold exactly as stated for the case of a commonly known observation structure and the case of independent private observations. Thus we do not explicitly distinguish between the models below. As is common in the literature, we define welfare in terms of expected utility. More precisely, we define welfare in period |$t$| as the expected average utility achieved by the agents who act in period |$t$|⁠. That is, asymptotic welfare is defined as the expected asymptotic proportion of agents who act optimally. The main distinction between this notion of asymptotic welfare and our concept of |$\alpha $|-proportional learning is that |$\alpha $| -proportional learning guarantees that a proportion of at least |$\alpha $| of agents take the optimal action in the limit. However, expected welfare may be very high even if |$\alpha $|-proportional learning fails for every |$\alpha \in \left( 0,1\right) $|⁠. To see this, consider the case where |$p=1$| and signals are binary and symmetric with precision |$q>\frac{1}{2}$|⁠.15 In this case, it follows that |$\alpha $|-proportional learning fails for every |$\alpha \in \left( 0,1\right) $| (see Theorem 5 in Section 5), while asymptotic welfare is at least |$q$|⁠, for every |$q\in \left(\frac{1}{2},1\right)$|⁠. Naturally, the expected asymptotic welfare varies with the signal distribution and also across different equilibria for a fixed signal distribution. We shall provide a lower bound for the expected asymptotic welfare as a function of the signal distribution |$F=(F_{0},F_{1})$|⁠, the dimension |$m$|⁠, and the linkage probability |$p$|⁠. We next fix an observational model, i.e., either commonly known or private observation sets. Let |$\Sigma _{F,p}^{m}$| denote the set of all Perfect Bayesian equilibria for a given signal distribution |$F$|⁠, lattice dimension |$m $|⁠, and linkage probability |$p$|⁠. Formally, we define the infimal asymptotic welfare as a function of |$(F,m,p)$|⁠. Given |$(F,m,p)$| and a Perfect Bayesian equilibrium |$\sigma \in \Sigma _{F,p}^{m}$| let \begin{equation} \underline{l}_{\sigma ,p}^{m}(F)=\underset{t\rightarrow \infty }{\lim \inf }% E_{\sigma ,p}^{m}\left[ \sum_{\mathbf{x}\in B_{t}}\frac{u_{\mathbf{x}}\left( \sigma _{\mathbf{x}},\omega \right) }{b_{t}}\right]\!. \label{eq:wel} \end{equation} (2) That is, |$\underline{l}_{\sigma ,p}^{m}(F)$| represents the asymptotic welfare bound that is induced by the strategy |$\sigma $|⁠.16 For a signal distribution |$F$|⁠, consider the ex-ante probability of agent |$\mathbf{0}$| selecting the optimal action after observing his private signal, |$y_{F}=\mathrm{Pr}_{F}(\sigma _{\mathbf{0}}=\omega )$|⁠. We call |$y_{F}$| the success probability. 
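As a small numerical illustration of the success probability, and anticipating the welfare bound of Theorem 3 below, the following Python sketch computes |$y_{F}$| for two example signal distributions of our own choosing (a bounded binary signal with precision |$q$|, and an unbounded Gaussian signal |$s\sim N(\omega ,\sigma ^{2})$|) and evaluates the induced welfare floor |$\rho +(1-\rho )y_{F}$| for several values of the percolation probability |$\rho $|. All names and parameter values are illustrative assumptions, not part of the formal analysis.
\begin{verbatim}
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def success_prob_binary(q):
    """Bounded signals: s = omega with probability q > 1/2, so y_F = q."""
    return q

def success_prob_gaussian(sigma):
    """Unbounded signals: s ~ N(omega, sigma^2); with a uniform prior an
    isolated agent guesses omega = 1 iff s > 1/2, so y_F = Phi(1/(2*sigma))."""
    return normal_cdf(0.5 / sigma)

def welfare_floor(rho, y_F):
    """Lower bound in the spirit of Theorem 3: rho + (1 - rho) * y_F."""
    return rho + (1.0 - rho) * y_F

if __name__ == "__main__":
    y_bounded = success_prob_binary(0.75)      # bounded, y_F = 0.75
    y_unbounded = success_prob_gaussian(1.0)   # unbounded, y_F ~ 0.69
    for rho in (0.8, 0.95, 0.99):              # example percolation probabilities
        print(rho,
              round(welfare_floor(rho, y_bounded), 3),
              round(welfare_floor(rho, y_unbounded), 3))
\end{verbatim}
In this example the bounded distribution has the larger success probability and therefore the larger welfare floor for every |$\rho $|, which is the comparison that Theorem 4 below exploits for |$p$| close to one.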
Since signals are conditionally i.i.d., the ex-ante probability of an isolated agent selecting the optimal action is equal to the success probability. The following result provides a lower bound for asymptotic welfare as a function of the success probability, the linkage probability |$p$|, and the lattice dimension |$m$|. Theorem 3. For any dimension |$m\geq 2$|, signal distribution |$F$|, linkage probability |$p\in \left( p_{c}^{m},1\right)$|, and any Perfect Bayesian equilibrium |$\sigma \in \Sigma _{F,p}^{m},$| the asymptotic welfare |$\underline{l}_{\sigma ,p}^{m}(F)$| is at least |$\underline{w}_{p}^{m}(F)=\rho_{p}^{m}+\left( 1-\rho _{p}^{m}\right) y_{F}$|. For bounded signal distributions, Theorem 3 shows that the discontinuity at |$p=1$| holds also with respect to asymptotic welfare. As |$p$| approaches one the asymptotic welfare bound converges to one. In contrast, at |$p=1$| asymptotic welfare is bounded away from one. We establish two corollaries of Theorem 3 that concern the relation between connectivity and the uniform expected welfare bound. Corollary 3. Fix a lattice dimension |$m\geq 2$| and a signal distribution |$F$|. For any pair of linkage probabilities |$p,p^{\prime }\in \left( p_{c}^{m},1\right) $| such that |$p>p^{\prime }$| we have |$\underline{w}_{p}^{m}(F)>\underline{w}_{p^{\prime }}^{m}(F)$|. Corollary 3 establishes that the lower bound on welfare is strictly increasing in the linkage probability, for the range |$\left( p_{c}^{m},1\right) $|. The relation between the lattice dimension |$m$| and the lower bound on asymptotic welfare is characterized in the following corollary.17 Corollary 4. Fix a linkage probability |$p\in \left( p_{c}^{m},1\right) $| and a signal distribution |$F$|. The asymptotic welfare bound |$\underline{w}_{p}^{m}(F)$| is strictly increasing in the lattice dimension |$m$|. Recall that increasing the lattice dimension and/or the linkage probability increases the expected average degree of each agent, which is a standard measure of network connectivity.18 Hence, increasing the connectivity of the observation network increases the infimal expected asymptotic welfare. We point out that while the bound in Theorem 3 is not tight in general, there are two cases where the bound is “almost tight”. In the first case, the connectivity parameter |$p$| approaches one. This case is further explained in the proof sketch of Theorem 4 below. In the second case the signal distribution |$F$| becomes uninformative, i.e., |$y_F$| approaches |$\frac{1}{2}.$| The logic behind the proof of Theorem 3 is as follows. Fix |$\epsilon >0$| and a value |$k$| such that observing |$k$| isolated agents guarantees an expected payoff of |$1-\epsilon $|. Corollary 5 in Section A.1 implies that for every |$k$| there exists |$t_{k,\epsilon }$| such that if an agent |$\mathbf{x}$| satisfies |$\mathbf{x}_{-}\geq t_{k,\epsilon }$|, then he observes |$k$| isolated agents with a probability of at least |$\rho _{p}^{m}-\epsilon $|. The complementary event, where agent |$\mathbf{x}$| observes fewer than |$k$| isolated agents, occurs with a probability of at most |$1-\rho _{p}^{m}+\epsilon $|. In that event agent |$\mathbf{x}$|’s expected payoff is at least |$y_{F}$|, even if |$\mathbf{x}$| is isolated. Therefore, the expected payoff of |$\mathbf{x}$| is at least |$(\rho _{p}^{m}-\epsilon )(1-\epsilon )+(1-\rho _{p}^{m}+\epsilon )y_{F}$|.
Since the proportion of agents |$\mathbf{x}\in B_{t}$| for which |$\mathbf{x}_{-}\geq t_{k,\epsilon }$| goes to one with |$t$|⁠, we deduce that the average asymptotic welfare is at least |$(\rho _{p}^{m}-\epsilon )(1-\epsilon )+(1-\rho _{p}^{m}+\epsilon )y_{F}$|⁠. Taking |$\epsilon $| to zero yields the lower bound of |$\rho _{p}^{m}+\left( 1-\rho _{p}^{m}\right) y_{F}$|⁠. The existing social learning literature focuses on the distinction between bounded and unbounded signal distributions. Unbounded signals lead to asymptotic learning while bounded ones do not (see e.g., Smith and Sorensen (2000), Acemoglu et al. (2010), and Arieli and Mueller-Frank (2014)) and thus in the standard social learning model, unbounded signals lead to higher expected welfare than bounded signals. We next introduce an expected welfare ranking for pairs of signal distributions for the case of |$p<1$|⁠. Essentially, the following theorem makes use of the fact that as |$p$| grows the welfare lower bound |$\underline{w}_{p}^{m}$| becomes tight for any equilibrium. Theorem 4. Fix a dimension |$m\geq 2$| and let |$F,F^{\prime }$| be two signal distributions such that |$y_{F}>y_{F^{\prime }}$|⁠. There exists a linkage probability |$\hat{p}<1$| such that |$\underline{l}_{\sigma ,p}^{m}(F)> \underline{l}_{\sigma^{\prime },p}^{m}(F^{\prime })$| for all |$p\in \left(\hat{p},1\right)$| and any Perfect Bayesian equilibria |$\sigma \in \Sigma_{F,p}^{m},$||$\sigma^{\prime }\in \Sigma _{F^{\prime },p}^{m}$|⁠. Therefore, for a range of linkage probabilities close to one the determinant of asymptotic welfare is the success probability. This directly implies that if observational noise is small, then for any unbounded signal distributions there is a family of bounded signal distributions with superior welfare properties. The intuition behind the proof of Theorem 4 is as follows. Fix a lattice dimension |$m\geq 2$|⁠. Recall that |$B_{t}$| is the set of agents who act in period |$t$|⁠. As discussed in the outline of the proof of Theorem 1, the proportion of agents in |$B_{t}$| who observe at least |$k$| isolated agents goes to |$\rho _{p}^{m}$| with a probability that approaches one, for every |$k$|⁠. This group of agents who observe a large number of isolated agents takes the optimal action with a probability that approaches one. As illustrated in the proof of Theorem 1, this is true regardless of the actual signal distribution.19 The second group of agents in |$B_{t}$|⁠, those with a bounded observation set, has a proportion of approximately |$1-\rho _{p}^{m}$| in |$B_{t}$| for large |$t$|⁠. As their observation set is bounded, their expected payoff crucially depends on the signal distribution. We use a result from percolation theory to show that the proportion of isolated agents in the second group goes to one with |$p$|⁠. Recall that the expected payoff of an isolated agent is exactly|$y_{F}$|⁠. This implies, intuitively, that the asymptotic welfare approaches |$\rho _{p}^{m}+(1-\rho _{p}^{m})y_{F}$| as |$p$| grows. Therefore, if for two signal distributions |$F,F^{\prime }$| it holds that |$y_{F}>y_{F^{\prime }}$|⁠, then for |$p$| large enough, but unequal to one, the asymptotic welfare under |$F$| is greater than under |$F^{\prime }$|⁠. 5. The Deterministic Observation Structure One implication of our main theorems is that introducing infinitesimal noise in the observation structure induces learning in arbitrarily high proportion, independently of the signal distribution. 
A natural question in this context concerns the learning properties in the absence of observational noise, that is, for a deterministic observation network with linkage probability |$p=1$|⁠. Note that in the deterministic case with |$p=1$| the common and private observation models analysed above coincide. For completeness we present Theorem 5, which establishes a characterization of |$\alpha $|-proportional learning in the deterministic case. Theorem 5. Let |$m\geq 2$| and |$p=1$|⁠. Any Perfect Bayesian equilibrium |$\sigma $| of the game |$\Gamma _{1}^{m}$| satisfies |$\alpha $|-proportional learning for every |$\alpha <1$| if signals are unbounded, and fails to satisfy |$\alpha $|-proportional learning for every |$\alpha >0$| if signals are bounded. In contrast to the random observation network, Theorem 5 establishes that in the deterministic multidimensional social learning framework asymptotic learning still critically depends upon the signal distribution. If signals are unbounded, then an arbitrarily high proportion of agents take the optimal action in the limit with probability one. However, if signals are bounded, then an arbitrarily high proportion of agents select the suboptimal action with positive probability. Thus Smith and Sorensen (2000) characterization generalizes to higher-dimensional observation structures. The positive result for unbounded signals follows from Theorem 2 in Acemoglu et al. (2010). The negative result for bounded signals follows from a stronger result that addresses the possibility of information cascades in our multidimensional model. It shows not only that |$\alpha $|-proportional learning fails for any |$\alpha >0$|⁠, but also that there exists a positive probability that all agents take the suboptimal action, and thus it sharply contrasts the case of |$p<1$|⁠. We call |$\mathbf{x}$| a boundary agent if |$\mathbf{x}_{i}=0$| for all but one coordinate |$i=1,...,m$|⁠. An agent who is not a boundary agent is called an interior agent. Lemma 2. Let private signals be bounded, |$m\geq 2,$| and |$p=1$|⁠. For any Perfect Bayesian equilibrium |$\sigma $| and any realized state of the world |$\omega \in \{0,1\}$|⁠, the following event holds with positive probability: all agents select action |$1-\omega $|⁠, and there exists a time |$t^{\prime }$| such that all interior agents who act after |$t^{\prime }$| cascade. Lemma 2 implies that, with positive probability, the proportion of agents who cascade on the suboptimal action goes to one as |$t$| grows. The fact that an arbitrarily high proportion of agents cascade on the suboptimal action with positive probability is precisely what prevents |$\alpha $|-proportional learning for any |$\alpha >0$|⁠. This sharply contrasts with the random case of |$p<1$| where |$\alpha $|-proportional learning and |$\alpha $|-proportional cascades coincide under bounded signals. Lemma 2 also highlights a stark distinction between our multidimensional model and the standard herding model with respect to the occurrence of information cascades. In the standard herding model, from a finite time and on all agents herd on the same action. This, however, does not imply that information cascades occur. Namely, the fact that all agents choose the same action from some time onward does not imply that their choice is independent of their private signal. Smith et al. (2014) and Herrera and Hoerner (2013) provide sufficient conditions on bounded signal distributions under which informational cascades fail to occur in the standard herding model. 
Instead, Lemma 2 shows that in the deterministic multidimensional setting cascades occur with positive probability for every bounded signal distribution. We next provide some intuition for the arguments underlying the proof of Lemma 2. Consider a bounded signal distribution such that the convex hull of the support of |$q_{\mathbf{0}}$|, the posterior belief of agent |$\mathbf{0},$| is |$[\underline{\beta },\overline{\beta }]$| for |$\underline{\beta }>0$| and |$\overline{\beta }<1$|. Let |$p_{\mathbf{x}}$| be the prior belief of any agent |$\mathbf{x}$|, that is, the belief that agent |$\mathbf{x}$| assigns to |$\omega =1$| based only on his observed history, and not on his private signal. It clearly holds that if |$p_{\mathbf{x}}\not\in \lbrack 1-\overline{\beta },1-\underline{\beta }],$| then agent |$\mathbf{x}$| cascades. Assume for simplicity that |$m=2$|. Fix a Perfect Bayesian equilibrium |$\sigma $|. We show that there exists a finite time period |$t^{\prime }$| such that all interior agents who act after |$t^{\prime }$| cascade on the suboptimal action. The reasoning behind this claim is as follows. Note first that the observation set of the boundary agent |$\mathbf{x}=(t,0)$| comprises |$\{(u,0)\}_{u<t}$|, so the boundary agents on each axis face exactly the observation structure of the standard herding model. Since signals are bounded, Theorem 3 in Smith and Sorensen (2000) implies that with positive probability all boundary agents select action |$1$| in state |$\omega =0$|. Conditional on this event, the public belief along each axis converges to a limit of at least |$1-\underline{\beta }$|, and every interior agent who acts late enough observes a long initial segment of one axis together with the first agent of the other axis, which pushes his prior belief into the cascading region. Hence there exists a finite time |$t^{\prime }$| such that, conditional on this event, for every |$t>t^{\prime }$| and every |$\mathbf{x}\in B_{t}$| with |$\mathbf{x}_{-}>0$|, it holds that |$p_{\mathbf{x}}>1-\underline{\beta }$|. It follows that, with positive probability, in state |$\omega =0$| all agents in the lattice take action |$1$| and all interior agents |$\mathbf{x}\in B_{t}$| cascade, for |$t>t^{\prime }$|. 6. Simulations In this section we use simulations to visualize the properties of equilibrium behaviour. In particular, our objective with the simulations is threefold. First, we illustrate our theoretical results on asymptotic equilibrium properties for specific parameters. Second, we visualize the process of actions away from the limit. Finally, we provide an informal insight into the relation between speed of learning and connectivity that we did not analyse formally.20 For the simulations we consider the model with commonly known observations. Throughout, we consider a uniform prior and binary (and thus bounded) signals with a precision of |$0.55$|, i.e., |$\Pr \left( s=\omega \left\vert \omega \right. \right) =0.55$| for |$\omega =0,1$|. We begin by analysing the two-dimensional lattice. The simulations are derived based on the following equilibrium whose strategies can be defined via a closed-form recursive formula. We say that a given agent |$\mathbf{x}\in Z_{+}^{2}$| is informative if he selects his action according to his signal. The first agent |$\mathbf{x}=\mathbf{0}$| is always informative. For any other |$\mathbf{x}\in Z_{+}^{2}$|, let |$A_{a}(\mathbf{x})$| be the number of informative agents observed by agent |$\mathbf{x}$| who select action |$a$|. Agent |$\mathbf{x}$| cascades, i.e., selects action |$a_{1}$| independently of his signal, if |$A_{a_{1}}(\mathbf{x})>A_{a_{2}}(\mathbf{x})+1$| for |$a_{1}\neq a_{2}\in \{0,1\}$|. If neither |$A_{0}(\mathbf{x})>A_{1}(\mathbf{x})+1$| nor |$A_{1}(\mathbf{x})>A_{0}(\mathbf{x})+1$| holds, then |$\mathbf{x}$| is informative and he sets his action equal to his signal, i.e., |$a_{\mathbf{x}}=s_{\mathbf{x}}$|. We first consider |$m=2$| with three different linkage probabilities, |$p\in \left\{ 0.8,0.95,1\right\} $|, and the choices of agents in the first 300 periods.
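To make the recursive construction concrete, the following is a minimal Python sketch of one simulation run of the equilibrium just described. It is not the authors' supplementary code: the function names (simulate, average_share), the shorter default horizon T, the number of runs, and the incremental bookkeeping of observation sets are our own choices; the linkage rule, the cascade rule, and the signal precision 0.55 follow the text above.

import random

def simulate(p=0.8, q=0.55, T=60, seed=None):
    # One run of the recursive equilibrium on Z_+^2 described above (sketch).
    rng = random.Random(seed)
    omega = rng.randint(0, 1)            # realized state of the world
    obs, informative, action = {}, {}, {}
    share_optimal = []                   # proportion of optimal actions per period
    for t in range(T):                   # period t+1: agents x with x_1 + x_2 = t
        optimal = 0
        for i in range(t + 1):
            x = (i, t - i)
            # link to each preceding lattice neighbour independently with probability p
            parents = [y for y in ((i - 1, t - i), (i, t - i - 1))
                       if min(y) >= 0 and rng.random() < p]
            reach = set(parents)
            for y in parents:            # observe all agents reachable by a directed path
                reach |= obs[y]
            obs[x] = reach
            counts = [0, 0]              # observed informative agents choosing 0 and 1
            for y in reach:
                if informative[y]:
                    counts[action[y]] += 1
            signal = omega if rng.random() < q else 1 - omega
            if counts[0] > counts[1] + 1:
                action[x], informative[x] = 0, False       # cascade on action 0
            elif counts[1] > counts[0] + 1:
                action[x], informative[x] = 1, False       # cascade on action 1
            else:
                action[x], informative[x] = signal, True   # follow the private signal
            optimal += (action[x] == omega)
        share_optimal.append(optimal / (t + 1))
    return share_optimal

def average_share(runs=200, **kwargs):
    # Average the per-period proportion of optimal actions over independent runs.
    paths = [simulate(seed=s, **kwargs) for s in range(runs)]
    return [sum(col) / runs for col in zip(*paths)]

Comparing average_share(p=0.8), average_share(p=0.95), and average_share(p=1.0) reproduces, at a smaller horizon, the qualitative pattern discussed next; the horizon is kept small in this sketch because observation sets are stored explicitly and become large as the period grows.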
Figure 2 reports the proportion of agents who select the optimal action per period and Figure 3 reports the proportion of agents who cascade per period. The proportion is the average computed from 1,000 simulations. Figure 2. Proportion of optimal actions. Figure 3. Proportion of cascading agents. The graphs highlight two of the main results of this article. First, the introduction of observational noise improves long-run learning for the case of bounded signals, as Theorem 1 and Theorem 5 formally established and as can be seen in Figure 2. Second, Corollary 1 implies that increasing the linkage probability leads to an increase in the proportion of agents who cascade in the long run. This feature is illustrated in Figure 3. Additionally, when comparing Figures 2 and 3, one can clearly see the coexistence of proportional cascades and proportional learning, which we discussed in Section 2. We further note that Figure 2 highlights that for |$p<1$|, increasing the linkage probability increases the asymptotic proportion of agents who take the optimal action and thus illustrates Corollary 3. In addition, one can see in Figure 2 that while from period 40 to 220 the linkage probability |$p=0.8$| outperforms |$p=0.95$| in the sense that a higher proportion of agents select the optimal action, in period 220 the graphs intersect and from this period onward the graph that corresponds to |$p=0.95$| lies above the |$p=0.8$| graph. This suggests a trade-off between speed of learning and accuracy of learning. While values of |$p$| close to one achieve a higher proportion of optimal actions in the long run, this may not be the case in early periods, in which there is a higher proportion of cascading agents due to a lower incidence of informative agents. One may ask how the simulation results relate to the asymptotic welfare bounds of Theorem 3. To answer this question for the case of |$p=0.8$| and |$m=2$| recall that the asymptotic welfare bound is |$\underline{w}_{0.8}^{2}=\rho _{0.8}^{2}+\left( 1-\rho _{0.8}^{2}\right) y_{F}$|. Let us further recall that |$\rho _{0.8}^{2}$| is the percolation probability, which is the probability that in the standard percolation model the origin is part of an infinite component. Similarly, |$y_{F}$| is the success probability, which for the signals considered in the simulation equals |$0.55$|. Based on our simulation we estimate that |$\rho _{0.8}^{2}\approx 0.93.$|21 Therefore, in this case Theorem 3 provides a lower bound of |$\underline{w}_{0.8}^{2}(F)\approx 0.93+0.07\cdot 0.55=0.9685$| for the expected asymptotic proportion of optimal decisions. According to our simulation, the welfare after 300 periods is approximately |$0.93$|. This indicates that crossing the lower bound for asymptotic welfare may require a large number of periods. We note that for |$p=0.95$| the percolation probability |$\rho _{0.95}^{2}$| is very close to one.22 This implies that the welfare bound |$\underline{w}_{0.95}^{2}(F)$| is also very close to |$1$|. Despite this, approximately |$t=200$| decision rounds are required for the proportion of optimal actions to cross |$0.9$| (which is almost |$0.1$| away from the asymptotic limit). The number of decision makers by time |$t=200$| is |$\frac{200\times 201}{2}=20{,\!}100$|.
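The percolation probability and the resulting welfare bound can be approximated directly along the lines of footnote 21. The sketch below is again illustrative rather than the authors' code: it grows the set of nodes reachable from the origin in oriented percolation on |$Z_{+}^{2}$| generation by generation and treats survival up to a finite depth as a proxy for an infinite component; the helper names origin_survives and estimate_bound, and the smaller depth (200) and run count (1,000) relative to the 400 and 10,000 used in the article, are our own choices, so the estimate is rougher.

import random

def origin_survives(p, depth, rng):
    # Does the directed cluster of the origin reach lattice distance `depth`?
    frontier = {(0, 0)}                      # reachable nodes at the current distance
    for _ in range(depth):
        nxt = set()
        for (i, j) in frontier:
            # each of the two outgoing edges is open independently with probability p
            if rng.random() < p:
                nxt.add((i + 1, j))
            if rng.random() < p:
                nxt.add((i, j + 1))
        if not nxt:                          # the cluster died out: |C| is finite
            return False
        frontier = nxt
    return True

def estimate_bound(p=0.8, y_F=0.55, depth=200, runs=1000, seed=0):
    rng = random.Random(seed)
    rho = sum(origin_survives(p, depth, rng) for _ in range(runs)) / runs
    return rho, rho + (1 - rho) * y_F        # estimate of rho_p^2 and of the welfare bound

With these coarser settings the estimate of |$\rho _{0.8}^{2}$| comes out close to the value |$0.93$| reported above, and the implied bound is close to |$0.9685$|.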
While such a number of decision makers may be significant in the context of some (but not all) real-world social learning settings, the time at which the proportion of optimal actions approaches the asymptotic limit drops dramatically when signals become more accurate. For example, when the signal precision is increased to |$0.7$| (instead of |$0.55$|), by time |$t=50$| (which means after the decisions of 1,275 agents) the average proportion of correct decisions is |$0.976,$| which is less than |$0.03$| away from the asymptotic limit. Furthermore, by time |$t=100$| (which means after the decisions of 5,050 agents) the average proportion of correct decisions is approximately |$0.99$|. Finally, we consider the effect of the dimensionality on the proportion of agents who act optimally in a given period, for the case of a linkage probability of |$p=0.9$|. Figure 4 reports the proportion of agents selecting the optimal action per period for a lattice dimension |$m=2$| and |$m=3$|, for the first 150 periods.23 One can see the significant effect of increasing the dimension on the proportion of optimal decisions in every given period. One possible explanation for why this effect is so significant is that the size of the set of agents that lie at distance |$t$| from the origin is |$(t+1)$| in the two-dimensional lattice and |$\frac{(t+1)(t+2)}{2}$| in the three-dimensional lattice. Therefore, while the proportion of isolated agents in the three-dimensional lattice is smaller compared with the two-dimensional lattice, the number of isolated agents in the three-dimensional lattice is much higher. Figure 4. Dimensionality and proportion of optimal actions. 7. Conclusion This article introduces an extension of the herding model by representing the observation structure as well as the order of choices via a directed percolation random graph. This approach combines two realistic features of real-world networks, i.e., a small number of direct neighbours with the possibility of a large number of indirect connections, and a growing number of agents who act simultaneously. We formulate a novel concept of proportional learning that requires only a proportion of agents in a given period to select the optimal action. Our model allows for a neat characterization of proportional learning in terms of the connectivity parameter of the random graph. We show that the occurrence of proportional learning depends only on the linkage probability of the random network rather than on properties of the signal distribution. Finally, despite the fact that our lattice model induces a particular geometric structure, the logic underlying our results extends to more general random graph structures. Roughly, an infinite random observation structure that features a growing group size over time, a bounded number of direct connections, and a positive probability for each agent to be isolated will exhibit similar social learning properties. In particular, a testable prediction of our analysis is that a higher connectivity parameter of such a random network leads to a higher proportion of agents acting optimally in the long run. A. Appendix A.1. Proof of Theorem 1 We shall prove the following stronger version of Theorem 1.
Theorem 1. For every |$\alpha <1$|, any dimension |$m\geq 2$|, and any signal distribution |$F$|, |$\alpha$|-proportional learning holds in any Perfect Bayesian equilibrium |$\sigma $| of the game |$\Gamma _{p}^{m}$| for every |$p\in \left( p^m(\alpha ),1\right) $|. This version is stronger than the one stated in Section 2 since for |$m\geq 3$| the threshold |$p^{m}(\alpha )$| lies strictly below the corresponding threshold in the statement of Section 2. The proof relies on the following result for the standard percolation model. Recall that |$C$| denotes the set of nodes that can be reached from the origin by a directed path and that, for every |$t$|, |$\hat{\xi}_{t}^{0}\subset C$| denotes the set of nodes in |$C$| that lie at distance |$t$| from the origin. Lemma 3. Let |$m\geq 2$| and |$p>p^m_c$|. For every |$\epsilon>0$| the following two conditions hold: (1) For every constant |$M$| there exists |$l$| such that \begin{equation} \mathbf{P}_{p}^{m}(|\hat{\xi}_{t}^{0}|\geq M\ \forall t\geq l)\geq \rho _{p}^{m}-\epsilon . \label{eq:com1} \end{equation} (A.1) (2) There exists a constant |$M_\epsilon$| such that \begin{equation} \label{eq:com2} \mathbf{P}^m_{p}( |C|\leq M_\epsilon)>1-\rho^m_p-\epsilon. \end{equation} (A.2) Proof. The existence of |$M_\epsilon$| readily follows from the fact that |$|C|$| is finite with probability |$1-\rho^m_p$|. We turn to the proof of the first statement. We shall show first that for every constant |$M_p$|, \begin{equation}\label{eq:ne1} \mathbf{P}^m_p( 1\leq|\hat{\xi}^0_t|\leq M_p\ \text{infinitely often}\ )=0. \end{equation} (A.3) For each |$k$|, let |$A_{k}$| be the event that the inequality |$1\leq |\hat{\xi}_{t}^{0}|\leq M_{p}$| holds for at least |$k$| distinct times. For |$l\geq k,$| let |$A_{k,l}$| be the event that at time |$l$| the inequality |$1\leq |\hat{\xi}_{t}^{0}|\leq M_{p}$| holds for the |$k$|-th time between times |$0$| and |$l$|. Note that |$A_{k}=\cup _{l\geq k}A_{k,l}$| and that |$A_{k,l}\cap A_{k,l^{\prime }}=\varnothing $| for |$l^{\prime }\neq l$|. Note that if |$|\hat{\xi}_t^0|\leq M_p$|, then the conditional probability that |$\hat{\xi}_{t+1}^0=\varnothing$| is at least |$(1-p)^{mM_{p}}$|, which is the probability that none of the |$mM_{p}$| edges exiting |$M_p$| nodes is formed. Therefore, conditional on |$|\hat{\xi}^0_t|\leq M_p,$| the probability that |$\hat{\xi}^0_{t+1}\neq\varnothing$| is bounded from above by |$\delta =1-(1-p)^{mM_{p}}.$| We shall show by induction that |$\mathbf{P}^m_{p}(A_{k})\leq \delta ^{k-1}$|. For |$ k=1$| we obviously have |$\mathbf{P}^m_{p}(A_{1})\leq 1$|. Assume that |$\mathbf{P}^m_{p}(A_{k})\leq \delta ^{k-1}$|. We shall show that the following holds for |$ \mathbf{P}_{p}^m(A_{k+1})$|: \begin{align*} \mathbf{P}^m_{p}(A_{k+1}) =\sum_{l=k}^{\infty }\mathbf{P}^m_p(A_{k,l})\mathbf{P}^m_{p}(A_{k+1}|A_{k,l}) \leq \sum_{l=k}^{\infty }\mathbf{P}^m_{p}(A_{k,l})\delta =\mathbf{P}^m_{p}(A_{k})\delta \leq \delta ^{k}. \end{align*} The first equality follows from the law of total probability and the fact that |$A_{k+1}\subset A_{k}$|. The fact that |$\mathbf{P}^m_{p}(|\hat{\xi}_{l+1}^{0}|=0|\ |\hat{\xi}_{l}^{0}|\leq M_{p})\geq 1-\delta $| together with |$A_{k+1}\cap A_{k,l}\subset \{|\hat{\xi}_{l+1}^{0}|=0\}^{c}\cap A_{k,l}$| imply that |$\mathbf{P}^m _{p}(A_{k+1}|A_{k,l})\leq \delta.$| Hence, the first inequality follows. The last inequality follows from the induction hypothesis. Hence, since |$ \{A_{k}\}_{k}$| is a decreasing family of events, we have \begin{equation*} \mathbf{P}^m_{p}(1\leq |\hat{\xi}_{t}^{0}|\leq M_{p}\ \text{infinitely often})=\mathbf{P}^m_{p}(\bigcap_{k}A_{k})=\lim_{k}\mathbf{P}^m_{p}(A_{k})=0. \end{equation*} Note that if |$C$| is infinite, then, in particular, |$|\hat{\xi}^0_t|\geq 1$| for every |$t$|. By equation (A.3), for every constant |$M_p$|, |$ \mathbf{P}^m_p(1\leq |\hat{\xi}_{t}^{0}|\leq M_{p}\ \text{infinitely often}\big||C|=\infty)=0.
$| Therefore, we must have that |$\mathbf{P}^m_p(\lim_{t\rightarrow\infty} |\hat{\xi}_{t}^{0}|=\infty\big||C|=\infty)=1.$| That is, conditional on the event that the origin is part of an infinite component, it holds that |$|\hat{\xi}_{t}^{0}|$|⁠, the number of nodes that lie at distance |$t$| from the origin, goes to infinity. Since |$\mathbf{P}^m_p(|C|=\infty)=\rho^m_p$| the first part of the lemma follows. ‖ Essentially, Lemma 3 classifies the two possible events regarding the set of nodes that can be reached from the origin by a directed path. It is either the case that |$C$| is infinite, which happens with probability |$\rho _{p}^{m}$| (by definition) and in which case |$|\hat{\xi}_{t}^{0}| $| grows to infinity with probability one, or that |$C$| is finite and bounded by |$M_{\epsilon }$| with probability |$1-\rho _{p}^{m}-\epsilon $|⁠. Our result in the standard percolation model connects with our observation structure as follows. Recall from Section 3 that |$|\xi_{l}^{\mathbf{x}}|$| and |$|\hat{\xi}_{l}^{0}|$| have an identical distribution. This follows from the fact that there exists a natural isomorphism between the set of nodes that lie at a distance of |$l$| from the origin and those agents who decide |$l$| periods before |$\mathbf{x}$|⁠. Unlike in the standard percolation model, in our model the set of agents whom any agent |$\mathbf{x}$| observes is finite and bounded above by a constant. Nonetheless, if |$\mathbf{x}_{-}$| is large, some conclusion can be drawn using our identification. Corollary 5. Let |$p\in(p^m_c,1)$| and consider the random observation structure induced by |$\mathbf{P}^m_p$| on |$Z^m_+$|⁠. For every |$k>0$| and every |$\epsilon>0$| there exists a constant |$t_{k,\epsilon}$| such that if |$\mathbf{x}\in Z^m_+$| satisfies |$\mathbf{x}_{-}\geq t_{k,\epsilon}$|⁠, then |$\mathbf{P}^m_p$| assigns a probability of at least |$\rho^m_p-\epsilon$| to the following event: |$B_G(\mathbf{x})$| contains at least |$k$|isolated agents who lie at a distance of exactly |$t_{k,\epsilon}$| from |$\mathbf{x}$|⁠. Proof of Corollary 5. It follows from Lemma 3 that for every |$M$| and |$\epsilon>0$| there exists |$l$| such that |$\mathbf{P}^m_{p}( |\xi^0_t|\geq M)>\rho^m_p-\epsilon$| for every |$t\geq l$|⁠. Let |$\mathbf{x}$| be a node such that |$\mathbf{x}_{-}\geq l$|⁠. By the above identification, |$|\xi _{l}^{ \mathbf{x}}|$| and |$|\hat{\xi}_{l}^{0}|$| have an identical distribution. Therefore |$\mathbf{P}^m_{p}( |\xi _{l}^{ \mathbf{x}}|\geq M)>\rho^m_p-\epsilon.$| Consider the event that an agent |$\mathbf{x}\in B_t$| observes at least |$M$| agents from |$B_{t-l}$|⁠. That is, |$|\xi_{l}^{ \mathbf{x}}|\geq M$|⁠. We note that all agents in |$B_{t-l}$| who are observed by |$\mathbf{x}$| are isolated independently with probability |$(1-p)^m$|⁠. Hence, for every |$\epsilon>0$|⁠, if |$|\xi _{l}^{ \mathbf{x}}|\geq M$| then for sufficiently large |$M$| agent |$\mathbf{x}$| observes at least |$\frac{(1-p)^m M}{2}$| isolated agents with probability |$1-\epsilon$|⁠. Therefore, for every |$k$| and |$\epsilon$| there exists a large enough |$t_{k,\epsilon}$| such that if |$\mathbf{x}_{-}\geq t_{k,\epsilon}$|⁠, then agent |$\mathbf{x}$| observes at least |$k$| isolated agents in |$\xi_{t_{k,\epsilon}}^{ \mathbf{x}}$| with a probability of at least |$\rho^m_p-\epsilon$|⁠. ‖ Let |$\epsilon >0$| and |$k\in \mathbb{N}$|⁠. 
By Corollary 5 there exists |$t_{k,\epsilon }$| such that if |$\mathbf{x}\in B_{t}$| and |$\mathbf{x}_{-}\geq t_{k,\epsilon }$|⁠, then |$\mathbf{x}$| observes at least |$k$| isolated agents with a probability of at least |$\rho _{p}^{m}-\epsilon $|⁠. It follows that for sufficiently large |$k$|⁠, an agent |$\mathbf{x}$| who observes at least |$k$| isolated agents takes the optimal action with a probability that is arbitrarily close to one. As |$t$| grows the proportion of agents |$\mathbf{x}\in B_{t}$| for which |$\mathbf{x}_{-}\geq l$| goes to one. We can deduce that the average expected welfare of agents in |$B_{t}$| is arbitrarily close to |$\rho _{p}^{m}$| as |$t$| grows. The rest of the proof is devoted to establishing that if |$p>p^{m}(\alpha )$|⁠, then a proportion of |$\alpha $| of agents is guaranteed to take the optimal action in the long run. The following lemma shows that if |$p\in (p^m_c,1)$|⁠, then for every |$\epsilon>0$| and |$k,$| the proportion of agents in |$B_t$| who observe |$k$| isolated agents lies above |$\rho^m_p-\epsilon$| with a probability that approaches one as |$t$| goes to infinity. Let |$R^{k}_t $| be the set of agents in |$B_t$| who can observe at least |$k$| isolated agents, and let |$r^k_t$| be the size of |$R^{k}_t $|⁠. Lemma 4. For every |$p\in (p^m_c,1)$|⁠, |$k\in\mathbb{N},$| and |$\epsilon>0$|⁠, |$\lim_{t\rightarrow \infty }\mathbf{P}^m_{p}(\frac{r_{t}^{k}}{b_{t}}>\rho^m_p-\epsilon)=1.$| For the proof of Lemma 4 we require the following result. Lemma 5. For every |$t\in\mathbb{N}$|⁠, let |$\{X^t_i\}_{1\leq i\leq m_t} $| be a sequence of Bernoulli random variables for which there exists |$\epsilon>0$|⁠, and |$\beta>0$| such that |$E(X^t_i)\geq\beta+\epsilon$| for every |$i$|⁠. Assume that there exists an integer |$n$| such that for every |$i$| the random variable |$X^t_i$| depends on at most |$n$| other random variables from |$\{X^t_i\}_{1\leq i\leq m_t}$|⁠, and that |$m_t\rightarrow_{t\rightarrow \infty}\infty$|⁠. Then, \begin{equation*} \lim_{t\rightarrow\infty}\mathbf{P}(\frac{1}{m_t}\sum_{i=1}^{m_t} X^t_i>\beta)=1. \end{equation*} Proof. The proof follows from Theorem 2 in Andrews (1988) and also follows directly from Chebyshev’s inequality. ‖ Proof of Lemma 4. For every agent |$\mathbf{x}\in B_t$| define a random variable |$h_{\mathbf{x}}$| to be equal to |$1$| if |$\xi^{\mathbf{x}}_{l}$| contains at least |$k$| isolated agents. It follows from Corollary 5 that for every |$\mathbf{x}$| with |$\mathbf{x}_{-}\geq t_{k,\frac{\epsilon}{2}}$| it holds that |$\mathbf{P}^m_{p} (h_{\mathbf{x}}=1)>\rho^m_p-\frac{\epsilon}{2}.$| In addition, note that if |$\mathbf{x},\mathbf{y}\in B_t$| are such that |$d_{Z^m_{+}}(\mathbf{x},\mathbf{y})\geq 2t_{k,\frac{\epsilon}{2}}+1$|⁠, then |$ \xi^\mathbf{x}_{t-t_{k,\frac{\epsilon}{2}}}\cap\xi^\mathbf{y}_{t-t_{k,\frac{\epsilon}{2}}}=\varnothing$| with probability one. Hence, if |$d_{Z^m_{+}}(\mathbf{x},\mathbf{y})\geq 2t_{k,\frac{\epsilon}{2}}+1$|⁠, then |$h_\mathbf{x}$| and |$h_\mathbf{y} $| are independent random variables. Therefore, for every |$t$| and |$\mathbf{x}\in B_t,$| the random variable |$h_{\mathbf{x}}$| depends on at most |$n$| random variables |$h_{\mathbf{y}}$| in24|$B_t$|⁠. Moreover, the proportion of agents |$\mathbf{x}\in B_t$| for which |$\mathbf{x}_{-}\leq t_{k,\frac{\epsilon}{2}}$| goes to zero as |$t$| grows to infinity. 
Hence, based on Lemma 5, it follows that: \begin{equation*} \label{eq:sum} \lim_{t\rightarrow\infty}\mathbf{P}^m_{p} (\sum_{\mathbf{x}\in B^t}\frac{h_\mathbf{x}}{b_t}>\rho^m_p-\epsilon)=1. \end{equation*} Since, by definition, |$\sum_{\mathbf{x}\in B^t}\frac{h_\mathbf{x}}{b_t}= \frac{r^k_t}{b_t}$| we have that |$\lim_{t\rightarrow\infty}\mathbf{P}^m_{p}(\frac{r^k_t}{b_t}>\rho^m_p-\epsilon)=1.$| This concludes the proof of the lemma. ‖ Lemma 4 shows that for |$p\in (p_{c}^{m},1)$| the proportion of agents |$\mathbf{x}\in B_{t}$| who observe at least |$k$| isolated agents lies above |$\rho _{p}^{m}-\epsilon $|⁠, for any |$\epsilon >0$|⁠, with a probability that is arbitrarily close to one as |$t$| grows. This is true for every natural number |$k$|⁠. Observing the decision of at least |$k$| isolated agents serves as a sufficient statistic for taking the optimal action as |$k$| grows. Hence a proportion of |$\rho _{p}^{m}$| agents in |$B_{t}$| must take the optimal action with a probability that approaches one as |$t$| goes to infinity. Therefore, in particular, |$\alpha $|-proportional learning holds. The rest of the proof is devoted to formally establishing this intuition. Assume that an agent |$\mathbf{x}$| observes at least |$k$| isolated agents. In equilibrium, as |$k$| grows to infinity, agent |$\mathbf{x}$| would learn the true state of the world and therefore choose the optimal action with arbitrarily high probability. Therefore, there exists a sequence |$\{q_{k}\}_{k} $| converging to one such that if a given agent observes at least |$k$| isolated agents, then his expected utility is at least25|$q_{k}$|⁠. We fix a Perfect Bayesian equilibrium |$\sigma$| of |$\Gamma^m_p$|⁠. For every agent |$\mathbf{x}\in B_{t}$| we let |$Y_{\mathbf{x}}$| be the random variable that represents the utility of agent |$\mathbf{x}$|⁠. Corollary 6. For every |$\epsilon>0$|⁠, |$\delta>0$|⁠, and |$p\in(p^m_c,1),$| there exists |$k_0$| such that for every |$k \geq k_0,$| and |$t\geq k$|⁠, \begin{equation*} \mathbf{P}^m_{\sigma,p}(\frac{ \sum_{\mathbf{x}\in R^k_t}Y_{\mathbf{x}}}{% r^k_t}\geq 1-\delta|\frac{r^k_t}{b_t}>\rho^m_p-\epsilon)\geq 1-\delta. \end{equation*} Proof of Corollary 6. Note that |$E^m_{\sigma,p}(Y_{\mathbf{x}}=1|\mathbf{x}\in R^k_t)\geq q_{k}$|⁠. Let |$k$| be such that |$q_k\geq 1-\delta^2$|⁠. It follows that |$E^m_{\sigma,p}[\frac{\sum_{\mathbf{x}\in R^k_t}Y_{\mathbf{x}}}{r^k_t}|\frac{r^k_t}{b_t}>\rho^m_p-\epsilon]\geq 1-\delta^2$|⁠. Since |$\frac{\sum_{\mathbf{x}\in R^k_t}Y_{\mathbf{x}}}{r^k_t}\in[0,1]$| it must hold that |$\mathbf{P}^m_{\sigma,p}(\frac{ \sum_{\mathbf{x}\in R^k_t}Y_{\mathbf{x}}}{r^k_t}\geq 1-\delta|\frac{r^k_t}{b_t}>\rho^m_p-\epsilon)> 1-\delta.$| ‖ Proof of Theorem 1. Let |$p\in(p^m_c,1)$| and |$\sigma$| be an equilibrium strategy of |$\Gamma^m_p$|⁠. For every |$\epsilon>0$|⁠, |$\frac{\epsilon}{2}\geq\delta>0$|⁠, and |$k$| it holds that \begin{equation}\label{eq:nap} \mathbf{P}^m_{\sigma,p}(\frac{\sum_{\mathbf{x}\in B_t}Y_{\mathbf{x}}}{b_t}>\rho^p_m-\epsilon)\geq \mathbf{P}^m_{\sigma,p}(\frac{r^k_t}{b_t}>\rho^p_m-\frac{\epsilon}{2})\mathbf{P}^m_{\sigma,p}(\frac{\sum_{\mathbf{x}\in R^k_t}Y_{\mathbf{x}}}{r^k_t}\geq 1-\delta|\frac{r^k_t}{b_t}>\rho^p_m-\frac{\epsilon}{2}). \end{equation} (A.4) Lemma 4 implies that the first expression on the right-hand side of (A.4) goes to |$1$| as |$t$| goes to infinity. Moreover, it follows from Corollary 6 that the second expression on the right-hand side of (A.4) is at least |$1-\delta$| for sufficiently large |$k$| and all |$t\geq k$|⁠. 
Therefore, for every |$\epsilon>0$|⁠, |$\frac{\epsilon}{2}\geq\delta>0$| it holds that, |$ \liminf_{t\rightarrow\infty}\mathbf{P}^m_{\sigma,p}(\frac{\sum_{\mathbf{x}\in B_t}Y_{\mathbf{x}}}{b_t}>\rho^p_m-\epsilon)\geq 1-\delta. $| Taking |$\delta$| to zero yields that for every |$\epsilon>0$|⁠, \begin{equation}\label{eq:final} \lim_{t\rightarrow\infty}\mathbf{P}^m_{\sigma,p}(\frac{\sum_{\mathbf{x}\in B_t}Y_{\mathbf{x}}}{b_t}>\rho^p_m-\epsilon)=1. \end{equation} (A.5) We now conclude the proof of Theorem 1. Let |$\alpha>0$| and |$p\in (p^m(\alpha),1)$|⁠. By definition |$\alpha<\rho^m_p$|⁠; hence, for |$\epsilon=\rho^m_p-\alpha>0$| equation (A.5) implies that |$\lim_{t\rightarrow\infty}\mathbf{P}^m_{\sigma,p}(\frac{\sum_{\mathbf{x}\in B_t}Y_{\mathbf{x}}}{b_t}>\alpha)=1.$| ‖ A.2. Proof of Theorem 2 Let |$\tau $| be a Perfect Bayesian equilibrium of the private observation model introduced in Section 3. Recall that |$p_{n}=\mathbf{P}_{\sigma }(\omega =1|h_{n})$| is the probability that the state is |$\omega =1$| conditional on |$h_{n}\in \{0,1\}^{n-1},$| the history of decisions of the first |$n-1$| agents |$\{i_{1},\ldots ,i_{n-1}\}$| observed by the observer. Lemma 1 shows that when |$n$| grows large the observer can infer the true state with arbitrarily high probability. Assume, without loss of generality, that |$\omega =1$| and recall that |$a_{n}$| is the action of agent |$i_{n}$|⁠. Note first that \begin{equation*} \mathbf{P}_{\tau }(\omega =1|h_{n},a_{n}=a)=\frac{p_{n}\mathbf{P}_{\tau }(a_{n}=a|h_{n},\omega =1))}{p_{n}\mathbf{P}_{\tau }(a_{n}=a|h_{n},\omega =1)+(1-p_{n})\mathbf{P}_{\tau }(a_{n}=a|h_{n},\omega =0)}, \end{equation*} for |$a\in\{0,1\}$|⁠. Let |$l_{n}=\log (\frac{p_{n}}{1-p_{n}})$| be the log-likelihood ratio at stage |$n$|⁠. Conditional on |$\omega=1$| and |$h_n$| we can write the distribution of |$l_{n+1} $| as a function of |$l_n$| as follows. With probability |$\mathbf{P} _\tau(a_{n}=a|h_n,\omega=1)$| we have \begin{equation} \label{eq:pc} l_{n+1}=l_n+\log(\frac{\mathbf{P}_\tau(a_{n}=a|h_n,\omega=1)}{\mathbf{P}% _\tau(a_{n}=a|h_n,\omega=0)}), \end{equation} (A.6) for |$a=0,1$|⁠. We claim the following: Lemma 6. There exist constants |$\beta>1$| and |$r>0$| such that for every |$n$| and |$h_n$|⁠, \begin{equation*} r\geq \frac{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)}{\mathbf{P}% _\tau(a_{n}=1|h_n,\omega=0)}=\beta_n\geq\beta\text{ and } \frac{\mathbf{P}% _\tau(a_{n}=0|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=0|h_n,\omega=0)}\geq \frac{1}{r}. \end{equation*} Proof. For any |$n$|⁠, recall that |$k_n$| is the probability that |$K_n=\varnothing $|⁠. We can write \begin{eqnarray*} & &\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)\\ &=&\mathbf{P}_\tau(K_n=\varnothing)\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n=\varnothing)\\ &+&\mathbf{P}_\tau(K_n\neq \varnothing)\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n\neq\varnothing)\\ &=&k_n\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n=\varnothing)+(1-k_n)\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n\neq\varnothing). \end{eqnarray*} Similarly, we can write \begin{eqnarray}\label{eq:a-iso} \notag& &\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0)\\ &=&k_n\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0,K_n=\varnothing)+(1-k_n)\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0,K_n\neq\varnothing). 
\end{eqnarray} (A.7) We let |$\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n=\varnothing)=\theta,$| and |$\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0,K_n=\varnothing)=\eta.$| Hence we can write \begin{equation}\label{eq:eq-bound} \frac{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0)}=\frac{k_n\theta+(1-k_n)\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n\neq\varnothing)}{k_n\eta+(1-k_n)\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0,K_n\neq\varnothing)}. \end{equation} (A.8) By (c) of Lemma A1 in Acemoglu et al. (2014) we have that |$1>\theta>\eta>0$|. Since, by definition, |$k_n\geq e$| we can bound Equation (A.8) from above by |$\frac{e\theta+1-e}{e\eta}$|. Similarly, \begin{equation*} \frac{\mathbf{P}_\tau(a_{n}=0|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=0|h_n,\omega=0)}\geq \frac{e(1-\theta)}{e(1-\eta)+1-e}. \end{equation*} By letting |$r=\max\{\frac{e\theta+1-e}{e\eta},\frac{e(1-\eta)+1-e}{e(1-\theta)}\}$| we have that |$ r\geq \frac{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0)}\text{ and } \frac{\mathbf{P}_\tau(a_{n}=0|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=0|h_n,\omega=0)}\geq \frac{1}{r}. $| We next prove the existence of such a |$\beta>1$|. Again (c) of Lemma A1 in Acemoglu et al. (2014) shows that \begin{equation*} \mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n\neq\varnothing)\geq \mathbf{P}_\tau(a_{n}=1|h_n,\omega=0,K_n\neq\varnothing). \end{equation*} Therefore it follows from Equation (A.8) that |$ \frac{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0)}\geq\frac{k_n\theta+(1-k_n)\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n\neq\varnothing)}{k_n\eta+(1-k_n)\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n\neq\varnothing)}. $| Since |$k_n\theta>k_n\eta$|, and since |$(1-k_n)\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1,K_n\neq\varnothing)\leq 1$|, we have |$ \frac{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0)}\geq\frac{k_n\theta+1}{k_n\eta+1}. $| Since |$k_n\geq e$| we have |$ \frac{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0)}\geq\frac{e\theta+1}{e\eta+1}. $| Since |$\theta>\eta$|, letting |$\beta=\frac{e\theta+1}{e\eta+1}>1$| concludes the proof of the lemma. ‖ Lemma 7. Let |$0<a\leq 1$| and |$0<\alpha <1$|, and let |$f(x,\gamma)=-x\log(\gamma)+(1-x)\log(\frac{1-x}{1-\gamma x})$| for |$x\in(0,1]$| and |$\gamma\in(0,1]$|. Then \begin{equation*} \min_{x\in[a,1]}\min_{\gamma\in(0,\alpha]}f(x,\gamma)=w>0. \end{equation*} Proof. We show first that |$g_x(\gamma)=f(x,\gamma)$| is strictly decreasing in |$\gamma$| for every |$x>0$|. It is easy to see that |$ g'_x(\gamma)=-\frac{x(1-\gamma)}{\gamma(1-\gamma x)}. $| Since |$x,\gamma>0$| it holds that |$g'_x(\gamma)<0$|. Hence, |$g_x(\gamma)$| is strictly decreasing in |$\gamma$| and |$\min\limits_{\gamma\in(0,\alpha]}f(x,\gamma)=f(x,\alpha).$| We note that |$f(x,1)=0$|. Therefore, since |$f(x,\gamma)$| is strictly decreasing in |$\gamma$|, and since |$\alpha<1,$| we get that |$f(x,\alpha)>0$| for every |$x\in[a,1]$|. From the continuity of |$f(x,\gamma)$| we get |$ \min_{x\in[a,1]}\min_{\gamma\in(0,\alpha]}f(x,\gamma)=\min_{x\in[a,1]}f(x,\alpha)=w>0. $| This concludes the proof of the lemma. ‖ Proof of Lemma 1. Let |$\alpha=\frac{1}{\beta}$|. We claim that Lemma 7 implies that |$ E_\tau[l_{n+1}-l_n|h_n,l_n,\omega=1]\geq w>0. $| To see this, we note that by Equation (A.6), we can write \begin{eqnarray}\label{eq:ce} & &E_\tau[l_{n+1}-l_n|h_n,\omega=1]\\ &=&\notag\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)\log(\frac{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0)})\\ \notag&+&\mathbf{P}_\tau(a_{n}=0|h_n,\omega=1)\log(\frac{\mathbf{P}_\tau(a_{n}=0|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=0|h_n,\omega=0)}).
\end{eqnarray} (A.9) Following Lemma 6, |$ \frac{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)}{\mathbf{P}_\tau(a_{n}=1|h_n,\omega=0)}=\beta_n\geq\beta. $| Letting |$x_n=\mathbf{P}_\tau(a_{n}=1|h_n,\omega=1)$| we can rewrite Equation (A.9) as follows: \begin{eqnarray}\label{eq:de} & &E_\tau[l_{n+1}-l_n|h_n,\omega=1] =-x_n\log(\frac{1}{\beta_n})+(1-x_n)\log(\frac{1-x_n}{1-\frac{1}{\beta_n}x_n}). \end{eqnarray} (A.10) Letting |$\alpha_n=\frac{1}{\beta_n}$| we can rewrite (A.10) as follows: \begin{equation*} E_\tau[l_{n+1}-l_n|h_n,\omega=1]=-x_n\log(\alpha_n)+(1-x_n)\log(\frac{1-x_n}{1-\alpha_n x_n}). \end{equation*} It follows from Equation (A.7) that |$x_n\geq e\theta>0$|⁠. Hence, |$x_n\in[a,1]$| for |$a=e\theta$|⁠. Moreover, since |$1<\beta\leq\beta_n$| we have |$ \alpha_n=\frac{1}{\beta_n}\leq \frac{1}{\beta}=\alpha<1. $| Therefore, |$\alpha_n\in[0,\alpha]$| with probability one. We can now deduce from Lemma 7 that for every history |$h_n\in \{0,1\}^{n-1},$| |$ E_\tau[l_{n+1}-l_n|h_n,\omega=1]\geq w>0. $| Let |$\mathbf{P}_{\tau,\omega=1}$| be the probability distribution of the learning process conditional on the realized state |$\omega=1$|⁠. We note that |$X_n=l_n-wn$| is a sub-martingale with respect to |$\mathbf{P}_{\tau,\omega=1}$|⁠. Moreover, Lemma 6 implies that |$|X_{n+1}-X_n|\leq c$| for |$c=\ln(r)+w$|⁠. We can therefore apply Azuma’s inequality to the sub-martingale |$X_n$| (see Alon and Spencer (2004), Theorem 7.2.1), which implies that for all |$t\geq 0$|⁠, \begin{equation}\label{eq:azuma} \mathbf{P}_{\tau,\omega=1}(l_n-wn\leq-t)\leq \exp(\frac{-t^2}{2nc^2}). \end{equation} (A.11) For |$t=wn^{\frac{2}{3}}$| it follows that: |$\mathbf{P}_{\tau,\omega=1}(l_n-wn\leq -wn^{\frac{2}{3}})\leq \exp(-\frac{w^2}{2c^2}n^{\frac{1}{3}}).$| That is, |$ \mathbf{P}_{\tau,\omega=1}(l_n>w(n-n^{\frac{2}{3}}))\geq 1-\exp(-\frac{w^2}{2c^2}n^{\frac{1}{3}}). $| Since, by definition, |$p_n=\frac{\exp(l_n)}{\exp(l_n)+1}$|⁠, for every |$\epsilon$| there exists |$n_\epsilon$| such that for every |$n\geq n_\epsilon,$| |$ \mathbf{P}_{\tau}(p_n\geq 1-\epsilon|\omega=1)\geq 1-\epsilon. $| This concludes the proof of Lemma 1. ‖ Proof of Theorem 2. It follows directly from Lemma 3 that for any |$\epsilon$| and |$M$| there exists |$l$| such that if agent |$\mathbf{x}\in B_t$| satisfies |$\mathbf{x}_{-}\geq l$|⁠, then it holds that |$|B_G(\mathbf{x})|\geq M$| with probability |$\rho^m_p-\epsilon$|⁠. It further follows from Corollary 2 that if |$M\geq n_\epsilon$|⁠, then |$\hat{E}^m_{\sigma,p}(Y_{\mathbf{x}}=1||B_G(\mathbf{x})|\geq n_\epsilon)\geq 1-\epsilon.$| We can now continue exactly as in the proof of Theorem 1, from Lemma 4 onward, to deduce that if |$p\in (p^m(\alpha),1)$|⁠, then |$\alpha$|-proportional learning holds. ‖ A.3. Proof of Theorem 3 In the proof of Theorem 3 we use the notation introduced in Section A.1. Proof of Theorem 3. We first provide the proof of the commonly known observation structure and then explain how to adapt the proof to the model with independent private observations. Let |$p>p_c^m$|⁠, let |$F$| be any signal distribution, and let |$\sigma \in \Sigma _{F,p}^{m}$| be any Perfect Bayesian equilibrium of |$\Gamma^m_p$|⁠. 
It follows from the definition of |$R^k_t$| that for every |$\epsilon>0$| there exists large enough |$k$| such that if |$\mathbf{x}\in R^k_t$|⁠, then |$E^m_{\sigma,p}\big[Y_{\mathbf{x}}\big]\geq 1-\epsilon.$| Since |$E^m_{\sigma,p}\big[Y_{\mathbf{x}}\big]\geq y_F$| for every agent |$\mathbf{x}$|⁠, we have that \begin{eqnarray*} &&\underline{l}_{\sigma ,p}^{m}(F)=\liminf_{t\rightarrow\infty}E_{\sigma,p}\big[\frac{% \sum_{\mathbf{x}\in R^k_t}Y_{\mathbf{x}}}{b_t}+\frac{ \sum_{\mathbf{x}\not\in R^k_t}Y_{\mathbf{x}}}{b_t}\big]\geq \liminf_{t\rightarrow\infty}E^m_{\sigma,p}\big[\frac{(1-\epsilon)r^k_t}{b_t}+y_F(\frac{b_t-r^k_t}{b_t})\big]\\ &\geq& (1-\epsilon)\rho^m_p+(1-\rho^m_p)y_F. \end{eqnarray*} The last inequality follows since Lemma 4 implies that |$\liminf\limits_{t\rightarrow\infty}E^m_{\sigma,p}[\frac{r^k_t}{b_t}]\geq\rho^m_p.$| This shows that |$ \underline{l}_{\sigma ,p}^{m}(F)\geq \rho^m_p+(1-\rho^m_p)y_F=\underline{w}_{p}^{m}. $| To adapt the proof to the independent observation structure, note that Lemma 3 implies that for every |$\epsilon>0$| and |$M$| there exists |$l$| such that if |$\mathbf{x}\in B_t$| satisfies |$\mathbf{x}_{-}\geq l$|⁠, then it holds that |$|B_{G_{\mathbf{x}}}(\mathbf{x})|\geq M$| with probability |$\rho^m_p-\epsilon$|⁠. It therefore follows from Corollary 2 that if |$M\geq n_\epsilon$|⁠, then |$\hat{P}^m_{\sigma,p}(Y_{\mathbf{x}}=1||B_{G_{\mathbf{x}}}(\mathbf{x})|\geq n_\epsilon)\geq 1-\epsilon.$| We can then apply identical arguments to those applied of the commonly known observation structure. ‖ A.4. Proof of Theorem 4 In the proof of Theorem 4 we use the notation introduced in Section A.1. Proof of Theorem 4. We prove the theorem for the commonly known observation model. The proof of the independent observation structure follows very similar lines and is therefore omitted. Let |$y_F$| and |$y_{F'}$| be the success probabilities under |$F$| and |$F'$|⁠, respectively. Assume that |$y_F>y_{F'}$|⁠; we need to show that there exists |$\hat p$| such that |$\underline{l}_{\sigma ,p}^{m}(F)>\underline{l}_{\sigma',p}^{m}(F')$| for all |$p\in(\hat p,1)$| and any |$\sigma \in \Sigma _{F,p}^{m},$||$\sigma' \in \Sigma _{F',p}^{m}$|⁠. Theorem 3 implies that |$\underline{l}_{\sigma ,p}^{m}(F)\geq \underline{w}_{p}^{m}=\rho^m_p+(1-\rho^m_p)y_F.$| Therefore it is sufficient to show that there exists |$\hat{p}\in(0,1)$| such that |$\underline{l}_{\sigma',p}^{m}(F')<\underline{w}_{p}^{m}(F)$| for all |$p\in(\hat{p},1)$| and any |$\sigma' \in \Sigma _{F',p}^{m}.$| We consider first the standard percolation model. Consider the probability of the event that the origin can reach no agent conditional on the event that his observation set contains at most |$n$| nodes, i.e., |$\mathbf{P}^m_p(C=\emptyset||C|\leq n)$|⁠. It follows from Durrett (1984) that |$\mathbf{P}^m_p(C=\emptyset||C|\leq n)$| goes to one uniformly in |$n$| as |$p$| goes to one. That is, for every |$\delta$| there exists |$\hat p(\delta)\in(0,1)$| such that for all |$p>\hat p(\delta)$| it holds for all |$n\geq 1$| that |$ \mathbf{P}^m_p(|C|=0||C|\leq n)> 1-\delta. $| Fix |$\delta>0$| and let |$ p\in(p(\delta),1)$|⁠. Let |$\sigma'\in \Sigma^m_{p,F'}$| be any equilibrium with signal distribution |$F$|⁠. Lemma 3 implies that for every |$\epsilon>0$| there exist |$M_\epsilon$| such that |$|B_G(\mathbf{x})|$|⁠, the size of the observation set of any agent |$\mathbf{x},$| is smaller than |$M_\epsilon$| with probability |$1-\rho^m_p-\epsilon$|⁠. 
This again follows from the fact that for every agent |$\mathbf{x}$| the probability that |$\mathbf{x}$| observes at most |$M_\epsilon$| agents is greater than the probability that in the standard percolation model the origin can reach at most |$M_\epsilon$| nodes. Let |$k$| be large enough so that |$q_k\geq 1-\epsilon$|⁠. We can also assume that for every |$t$|⁠, the event that a given agent |$\mathbf{x}\in R^k_t$| is disjoint to the event that his observation set |$B_G(\mathbf{x})$| contains at most |$M_\epsilon$| nodes. That is, |$\mathbf{x}\in R^k_t$| implies that |$|B_G(\mathbf{x})|>M_\epsilon.$| This holds for example if |$k\geq M_\epsilon$|⁠. By Corollary 5 there exists |$t_{k,\epsilon}$| such that for every |$\mathbf{x}\in B_t$| with |$\mathbf{x}_{-}\geq t_{k,\epsilon}$| it holds that |$\mathbf{x}\in R^k_t$| with probability |$\rho^m_p-\epsilon$|⁠. Let |$\mathbf{x}$| be such that |$\mathbf{x}_{-}\geq M_\epsilon$|⁠. It holds that \begin{align} \notag&E^m_{\sigma',p}[Y_{\mathbf{x}}]= \mathbf{P}^m_{\sigma',p}(|B_G(\mathbf{x})|\leq M_\epsilon)E^m_{\sigma',p}\big[Y_\mathbf{x}\big||B_G(\mathbf{x})|\leq M_\epsilon\big]+\mathbf{P}^m_{\sigma',p}(|B_G(\mathbf{x})|> M_\epsilon)E^m_{\sigma',p}\big[Y_\mathbf{x}\big||B_G(\mathbf{x})|> M_\epsilon\big]\\ \end{align} \begin{align} &\leq\mathbf{P}^m_{\sigma',p}(|B_G(\mathbf{x})|\leq M_\epsilon)E^m_{\sigma',p}\big[Y_\mathbf{x}\big||B_G(\mathbf{x})|\leq M_\epsilon\big]+ \rho^m_p+\epsilon\\ \end{align} (A.12) \begin{align} &\leq(1-\rho^m_p)E^m_{\sigma',p}\big[Y_\mathbf{x}\big||B_G(\mathbf{x})|\leq M_\epsilon\big]+ \rho^m_p+\epsilon.\label{eq:he0} \end{align} (A.13) Since |$\mathbf{x}_{-}\geq M_\epsilon$| and since |$p>\hat p(\delta)$| we have by the definition of |$\hat p(\delta)$| that \begin{align} \notag&E^m_{\sigma',p}\big[Y_\mathbf{x}\big||B_G(\mathbf{x})|\leq M_\epsilon\big]\\ \notag&\leq\mathbf{P}^m_{\sigma',p}(|B_G(\mathbf{x})|=1\big||B_G(\mathbf{x})|\leq M_\epsilon)y_{F'}+\mathbf{P}^m_{\sigma',p}(|B_G(\mathbf{x})|>1\big||B_G(\mathbf{x})|\leq M_\epsilon)\\ \end{align} \begin{align} &\leq \delta y'_F+(1-\delta). \label{eq:he} \end{align} (A.14) Since the proportion of agents |$\mathbf{x}\in B_t$| for whom |$\mathbf{x}_{-}\geq M_\epsilon$| goes to one, we get from Equation (A.13) and Equation (A.14) that |$ \underline{l}_{\sigma',p}^{m}(F')\leq (1-\rho^m_p)(\delta y_{F'}+1-\delta)+\rho^m_p+\epsilon. $| Since |$\epsilon$| is arbitrary we get that |$ \underline{l}_{\sigma',p}^{m}(F')\leq (1-\rho^m_p)(\delta y_{F'}+1-\delta)+\rho^m_p. $| If we choose |$\hat p(\delta)$| for |$\delta>\frac{1-y_F}{1-y_{F'}}$| then for every |$p>\hat p(\delta),$| we get that |$\underline{l}_{\sigma',p}^{m}(F')< w^m_p(F).$| ‖ A.5. Proof of Lemma 2 For the sake of clarity we prove Lemma 2 for |$m=2$|⁠. The proof can be easily extended to the general case where |$m> 2$|⁠. Proof of Lemma 2. Assume for simplicity that the two agents |$(0,1)$| and |$(1,0)$| are playing according to the same strategy, and that for every action |$a$| played by agent |$(0,0)$| there exists a positive probability that the action |$1-a$| is being played by either of the two agents. That is, we assume that neither agent |$ (0,1)$| nor agent |$(1,0)$| is cascading.26 This assumption implies that \begin{equation} \mathbf{P}^2_{\sigma,1 }(a_{\mathbf{x}}=1|\omega =1,a_{(0,0)}=1)>\mathbf{P}^2 _{\sigma,1 }(a_{\mathbf{x}}=1|\omega =0,a_{(0,0)}=1), \label{eq:ml} \end{equation} (A.15) for |$\mathbf{x}\in \{(0,1),(1,0)\}$|⁠. 
For every agent |$\mathbf{x}$|, let |$p_\mathbf{x}$| be the public belief of agent |$\mathbf{x}$| conditional on his observed history prior to receiving his signal, given that all the agents in his observation set play action |$1$|. Note that agent |$\mathbf{x}=(t,0)\in Z_{+}^{2}$| observes all agents |$(u,0)$| for |$u<t$| and, symmetrically, agent |$(0,t)$| observes all agents |$(0,u)$| for |$u<t$|; that is, the boundary agents on each axis face the observation structure of the standard herding model. For every agent |$\mathbf{x}$| let |$L_{\mathbf{x}}$| denote the likelihood ratio associated with |$p_{\mathbf{x}}$| and let |$\overline{L}$| denote the threshold above which an agent cascades on action |$1$|: \begin{equation} L_{\mathbf{x}}=\frac{p_{\mathbf{x}}}{1-p_{\mathbf{x}}},\qquad \overline{L}=\frac{1-\underline{\beta }}{\underline{\beta }}. \end{equation} (A.16) Since an agent cascades whenever |$p_{\mathbf{x}}>1-\underline{\beta }$|, if |$L_{\mathbf{x}}>\overline{L}$| then agent |$\mathbf{x}$| chooses action |$1$| with probability one regardless of his signal; i.e., agent |$\mathbf{x}$| cascades. Since |$ \{p_{(0,t)}\}_t$| converges to |$q\geq 1-\underline{\beta},$| we must have |$\lim_t L_{(0,t)}\geq \overline{L}$|. By Equation (A.15), there exists a time |$t_0$| such that \begin{equation}\label{eq:acr} L_*:=L_{(0,t_0+1)}\cdot \frac{\mathbf{P}^2_{\sigma,1}(a_{(1,0) }=1| \omega=1,a_{(0,0)}=1)}{\mathbf{P}^2_{\sigma,1}(a_{(1,0) }=1| \omega=0,a_{(0,0)}=1)}>\overline{L}. \end{equation} (A.17) To understand Equation (A.17) consider the case where agent |$(0,t_0+1)$| observes that all agents |$(0,u)$| for |$u\leq t_0$| chose action |$1$|. In this case his posterior probability is |$p_{(0,t_0+1)}$| and his likelihood ratio is |$L_{(0,t_0+1)}$|. Consider the case where agent |$(0,t_0+1)$| additionally observes that agent |$(1,0)$|, who is not part of his observation set, chose action |$1$|. By Bayes rule his new likelihood ratio is exactly |$L_*$|; that is, if he were to observe in addition that |$(1,0)$| chose action |$1$|, then his likelihood ratio would cross |$\overline L$| and enter the cascading region. Assume for simplicity that the same is true for agent |$(t_0+1,0)$|, namely, \begin{equation}\label{eq:acr1} L_*=L_{(t_0+1,0)}\cdot \frac{\mathbf{P}^2_{\sigma,1}(a_{(0,1) }=1| \omega=1,a_{(0,0)}=1)}{\mathbf{P}^2_{\sigma,1}(a_{(0,1) }=1| \omega=0,a_{(0,0)}=1)}>\overline{L}. \end{equation} (A.18) Let |$R$| be the event that all boundary agents (i.e., all agents |$\mathbf{x}$| such that |$\mathbf{x} _{-}=0$|) chose action |$1$|. Since signals are bounded and since the observation set of every boundary agent is identical to the line, Theorem 3 in Smith and Sorensen (2000) implies that |$R$| holds with positive probability conditional on any state |$\omega=0,1.$| Let |$V$| be the event that all agents prior to and at time |$2t_{0}$| chose action |$1$|. It clearly holds that the event |$M=R\cap V$|, namely that all boundary agents chose action |$1$| and all agents who act prior to and at time |$2t_{0}$| chose action |$1$|, also has positive probability conditional on any state |$\omega=0,1.$| We shall show that conditional on the event |$M,$| all agents choose action |$1$| under |$\sigma .$| Moreover, we shall show that all agents |$\mathbf{x}\in B_{t}$| such that |$ t>2t_{0}$| and |$\mathbf{x}_{-}>0$| cascade. Let |$ \mathbf{x}\in B_{2t_{0}+1}$|. If |$\mathbf{x}_{-}=0$|, then clearly conditional on the event |$M$| we must have that |$a_{\mathbf{x}}=1$|. Assume that |$\mathbf{x}_{-}>0$|. Let |$K_{1}=\{(0,0),(0,1),\ldots ,(0,t_{0})\}\cup \{(1,0)\},$| and |$K_{2}=\{(0,0),(1,0),\ldots ,(t_{0},0)\}\cup \{(0,1)\}.$| The observation set |$B_{G}(\mathbf{x})$| of every interior agent |$ \mathbf{x}\in B_{2t_{0}+1}$| contains at least one of these two sets. Assume for simplicity that it contains the agents in |$K_{1}$|. Let |$L$| be the likelihood ratio obtained from observing that all agents in |$K_{1}$| chose action |$1$|. By definition |$L=L_*$|. Since |$K_{1}\subset B_{G}(\mathbf{x})$| and all agents in |$B_{G}(\mathbf{x})$| take action |$1$| we must have |$L_{\mathbf{x}}\geq L_*.$| Hence agent |$\mathbf{x}$| cascades.
A simple inductive argument shows that conditional on |$M,$| for all agents |$\mathbf{x} \in B_{t}$| such that |$t>2t_{0},$| and |$\mathbf{x}_{-}>0$|⁠, it holds that |$L_{\mathbf{x}}\geq L_{*}>\overline{L}.$| ‖ A.6. Proof of Theorem 5 Proof of Theorem 5. We first show that if private signals are unbounded, then the equilibrium |$ \sigma $| satisfies |$\alpha $|-proportional asymptotic learning for any |$ \alpha <1.$| Let |$f:Z_{+}^{m}\rightarrow \mathbb{N}$| be a bijection that assigns a unique natural number to every agent |$\mathbf{x}$|⁠. We shall identify every agent |$\mathbf{x}$| with |$f(\mathbf{x}).$| For every |$n,$| let |$Y_{n}$| be a Bernoulli random variable that corresponds to the payoff of agent |$n$|⁠. By Theorem 2 in Acemoglu et al. (2010), |$\lim_{n\rightarrow \infty }\mathbf{P}_{\sigma,1}(Y_{n}=1)=1$|⁠. Let |$\epsilon >0$|⁠.27 There exists a large enough |$n_{0}$| such that for every |$n>n_{0},$| \begin{equation} E(Y_{n})>1-\epsilon . \label{eq:ln} \end{equation} (A.19) Let |$t_{0}$| be sufficiently large such that for every |$t>t_{0}$| and |$\mathbf{ x}\in B_{t}$| it holds that |$f(\mathbf{x})>n_{0}$|⁠. Let |$b_{t}=|B_{t}|$| and let |$S_{t}=\frac{\sum_{\mathbf{x}\in B_{t}}Y_{\mathbf{x}}}{b_{t}}$|⁠. For every |$t>t_{0}$| we get from (A.19) and the linearity of the expectation operator that |$E(S_{t})>1-\epsilon.$| Since |$S_{t}$| is bounded above by |$1$| and since |$\epsilon $| is arbitrary we get that, \begin{equation} \lim_{t\rightarrow \infty }E(S_{t})=1. \label{eq:fe} \end{equation} (A.20) Again since |$S_{t}$| is bounded above by |$1,$| equation (A.20) implies that for every |$\alpha <1$|⁠, \begin{equation*} \lim_{t\rightarrow \infty }\mathbf{P}_{\sigma,1}(S_{t}<\alpha )=0. \end{equation*} Hence for every |$\alpha <1,$||$\lim_{t\rightarrow \infty }\mathbf{P}_{\sigma,1}(S_{t}\geq \alpha )=1,$| as claimed. We now turn to the second part of Theorem 5. Fix a Bayesian equilibrium |$\sigma $|⁠. By Lemma 2, if the true state of the world is |$\omega $| then there exists a positive probability |$p_{\omega }>0$| such that all agents take action |$1-\omega $|⁠. It follows that |$\alpha$|-proportional learning fails for every |$\alpha >0$|⁠. ‖ Acknowledgments We would like to thank the editor, three anonymous referees, Tai-Wei Hu, Erez Karpas, Ilan Lobel, Oren Louidor, and Peter N. Sorensen for useful comments and suggestions. We also thank Liora Braunstain for her research assistance. I.A. gratefully acknowledges the support of the German-Israeli Foundation for Scientific Research and Development (grant number 2022309). Supplementary Data Supplementary data are available at Review of Economic Studies online. Footnotes The editor in charge of this paper was Dimitri Vayanos. 1. Recent studies show a close relation between product adoption evolving according to a percolation model and empirical findings (see, e.g., Solomon et al. (2000) and Goldenberg et al. (2000)). 2. See Bollobas and Riordan (2006) and Durrett (1984). 3. Acemoglu et al. (2014) introduce a similar notion of proportional learning in a model of repeated truthful communication in social networks. See also Smith and Sorensen (2013). 4. See Smith and Sorensen (2000), Acemoglu et al. (2010), Lobel and Sadler (2013), and Arieli and Mueller-Frank (2014). 5. See Definition 2 and Theorem 1. 6. In the experimental literature, Anderson and Holt (1997), and Goeree et al. (2007) have shown that individuals overturn cascades; that is, they act in accordance with their signal and hence play the role of isolated agents. 7. 
Private signals are unbounded if the support of the conditional private probability (of either state) contains |$0$| and |$1$|⁠, and bounded if the support includes neither |$0$| nor |$1$|⁠. 8. A directed path from node |$x$| to node |$y$| in a network |$G=(V,E)$| is a sequence of edges in |$E$| such that |$xx_{1}$|⁠, |$x_{1}x_{2}$|⁠,...,|$x_{n}y$|⁠. 9. Notice that |$\mathbf{y\in B}_{G}(\mathbf{x})$| implies that |$\mathbf{y}$| takes his action in an earlier period than |$\mathbf{x}$|⁠. 10. See Durrett (1984). 11. To see this note that the |$m$|-dimensional lattice can be embedded in the |$m+1 $|-dimensional lattice. 12. Both |$b_{t}$| and |$r_{t}$| depend on the dimension of the lattice. To simplify the exposition we omit this dependence. 13. Recall that we restrict attention to informative signal distributions where |$F_{0}\neq F_{1}$|⁠. 14. For the standard herding model, Smith et al. (2014) and Herrera and Hoerner (2013) show that information cascades fail to occur for some bounded signal distributions. See Section 5 for more details. 15. That is, |$\Pr \left[ s=\omega \left\vert \omega \right. \right] =q$|⁠. 16. We take the liminf of the expected welfare as convergence in general may fail. 17. Despite the fact that |$\underline{w} _{p}^{m}(F)$| only provides a lower bound for asymptotic welfare, Section 6 supports the relation between welfare and the connectivity parameters |$p$| and |$m$|⁠, using numerical simulations. 18. See for example Alatas et al. (2016). 19. Recall that we are considering the set of all informative signal distributions where |$F_{0}\neq F_{1}$|⁠. 20. The files based on which the simulations were generated are available as supplementary data at Review of Economic Studies online. 21. To get this value we approximate |$\rho _{0.8}^{2}$| by the probability that the connected component that contains the origin intersects |$B_{400}$|⁠; i.e., it contains nodes within a distance of 400 from the origin. The probability is the average over 10,000 simulations. 22. Numerical simulation shows that |$\rho _{0.95}^{2}\approx 0.997$|⁠. 23. For |$m=3$| the graph represents the average proportion of optimal actions computed from 400 simulations. In addition, the simulations are derived based on an equilibrium as discussed above for |$m=2$|⁠. 24. The value |$n$| depends on the dimension |$m$|⁠. 25. |$q_{k}$| may be taken independently of the precise equilibrium strategy. 26. The assumption makes the analysis more transparent and saves notational complications. 27. For ease of exposition, we omit the dependence of the probability expression of |$m$|⁠. REFERENCES ACEMOGLU, D. , BIMPIKIS, K. and OZDAGLAR, A. ( 2014 ), “ Dynamics of Information Exchange in Endogenous Social Networks ”, Theoretical Economics , 9 , 41 – 97 . Google Scholar Crossref Search ADS ACEMOGLU, D. , DAHLEH, M. A. , LOBEL, I. et al. ( 2010 ), “ Bayesian Learning in Social Networks ”, Review of Economic Studies , 78 , 1 – 34 . ALON, N. and SPENCER, J. H. ( 2004 ), “ The Probabilistic Method ”, ( New York : John Wiley and Sons ). ANDERSON, L. R. and HOLT, C. ( 1997 ), “ Information Cascades in the Laboratory ”, American Economic Review , 87 , 847 – 862 . ALATAS, V. , BANERJEE, A. , CHANDRASEKHAR, A. HANNA, R. et al. ( 2016 ), “ Network Structure and the Aggregation of Information: Theory and Evidence from Indonesia ”, American Economic Review 106 , 1663 – 1704 . Google Scholar Crossref Search ADS ANDREWS, D. W. K. 
(1988), “Law of Large Numbers for Dependent Non-Identically Distributed Random Variables”, Econometric Theory, 4, 458–467. ARIELI, I. and MUELLER-FRANK, M. (2014), “A General Analysis of Sequential Learning”, (Working Paper, IESE Business School). ARIELI, I. and MUELLER-FRANK, M. (2017), “Inferring Beliefs from Actions”, Games and Economic Behavior, 102, 455–461. BANERJEE, A. V. (1992), “A Simple Model of Herd Behavior”, Quarterly Journal of Economics, 107, 797–817. BANERJEE, A. V. and FUDENBERG, D. (2004), “Word-of-Mouth Learning”, Games and Economic Behavior, 46, 1–22. BIKHCHANDANI, S., HIRSHLEIFER, D. and WELCH, I. (1992), “A Theory of Fads, Fashion, Custom, and Cultural Change as Information Cascades”, The Journal of Political Economy, 100, 151–170. BOLLOBAS, B. and RIORDAN, O. (2006), Percolation, (Cambridge, England: Cambridge University Press). CELEN, B. and KARIV, S. (2004), “Observational Learning under Imperfect Information”, Games and Economic Behavior, 47, 72–86. CONLEY, T. G. and UDRY, C. R. (2010), “Learning about a New Technology: Pineapple in Ghana”, American Economic Review, 100, 35–69. DURRETT, R. (1984), “Oriented Percolation in Two Dimensions”, The Annals of Probability, 12, 999–1040. GOEREE, J. K., PALFREY, T. R., ROGERS, B. W. and MCKELVEY, R. D. (2007), “Self-Correcting Information Cascades”, Review of Economic Studies, 74, 733–762. GRILICHES, Z. (1957), “Hybrid Corn: An Exploration in the Economics of Technological Change”, Econometrica, 25, 501–522. GOLDENBERG, J., LIBAI, B., SOLOMON, S., et al. (2000), “Marketing Percolation”, Physica A, 284, 335–347. HERRERA, H. and HOERNER, J. (2013), “A Necessary and Sufficient Condition for Information Cascades”, (Working Paper, Yale University). LEE, I. H. and VALENTINYI, A. (2000), “Noisy Contagion Without Mutation”, Review of Economic Studies, 67, 47–56. LOBEL, I. and SADLER, E. (2013), “Social Learning and Aggregate Network Uncertainty”, (Working Paper, New York University). MANSFIELD, E. (1961), “Technical Change and the Rate of Imitation”, Econometrica, 29, 741–766. MORRIS, S. (2000), “Contagion”, Review of Economic Studies, 67, 57–78. MOSSEL, E., SLY, A. and TAMUZ, O. (2012), “On Agreement and Learning”, (Working Paper, U.C. Berkeley). SGROI, D. (2002), “Optimizing Information in the Herd: Guinea Pigs, Profits, and Welfare”, Games and Economic Behavior, 39, 137–166. SMITH, L. (1991), “Essays on Dynamic Models of Equilibrium and Learning”, (Ph.D. thesis, University of Chicago). SMITH, L. and SORENSEN, P. (2000), “Pathological Outcomes of Observational Learning”, Econometrica, 68, 371–398. SMITH, L. and SORENSEN, P. (2013), “Rational Social Learning by Random Sampling”, (Working Paper, University of Wisconsin at Madison). SMITH, L., SORENSEN, P. and TIAN, J.
(2014), “Informational Herding, Optimal Experimentation, and Contrarianism”, (Working Paper, University of Wisconsin at Madison). SOLOMON, S., WEISBUCH, G., DE ARCANGELIS, et al. (2000), “Social Percolation Models”, Physica A: Statistical Mechanics and its Applications, 277, 239–247. © The Author(s) 2018. Published by Oxford University Press on behalf of The Review of Economic Studies Limited. Advance access publication 11 May 2018 This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/open_access/funder_policies/chorus/standard_publication_model) TI - Multidimensional Social Learning JF - The Review of Economic Studies DO - 10.1093/restud/rdy029 DA - 2019-05-01 UR - https://www.deepdyve.com/lp/oxford-university-press/multidimensional-social-learning-EPHKmsEexZ SP - 913 VL - 86 IS - 3 DP - DeepDyve ER -