Predictability of the imitative learning trajectories

07/12/2018 ∙ by Paulo R. A. Campos, et al.

The fitness landscape metaphor plays a central role in the modeling of optimizing principles in many research fields, ranging from evolutionary biology, where it was first introduced, to management research. Here we consider the ensemble of trajectories of the imitative learning search, in which agents exchange information on their fitness and imitate the fittest agent in the population, aiming at reaching the global maximum of the fitness landscape. We assess the degree to which the starting and ending points determine the learning trajectories using two measures, namely, the predictability, which yields the probability that two randomly chosen trajectories are the same, and the mean path divergence, which gauges the dissimilarity between two learning trajectories. We find that the predictability is greater in rugged landscapes than in smooth ones. The mean path divergence, however, is strongly affected by the search parameters - population size and imitation propensity - which obliterate the influence of the underlying landscape. The imitative search becomes more deterministic, in the sense that there are fewer distinct learning trajectories and those trajectories are more similar to each other, with increasing population size and imitation propensity. In addition, we find that the roughness, which measures the deviation from additivity of the fitness function, of the learning trajectories is always greater than the roughness estimated over the entire fitness landscape.


I Introduction

Search is fundamental for human decision-making because the available choices usually result from discovery through search activities Simon_71 . Hence, given the central role played by imitation in the construction of human culture Blackmore_01 ; Boyd_10 , which was neatly summarized by the phrase "Imitative learning acts like a synapse, allowing information to leap the gap from one creature to another" Bloom_01 , it has been suggested that agent-based models of the imitative learning search could reproduce some features of the problem-solving strategy and performance of groups Kennedy_99 ; Lazer_07 ; Fontanari_14 ; Fontanari_15 . In fact, when the agents are too prone to imitate their more successful peers or the group is too large, the resulting disastrous performance of the imitative learning search, as compared with the baseline situation where the agents work independently of each other, is reminiscent of the classic Groupthink phenomenon of social psychology, which occurs when everyone in a group starts thinking alike Janis_82 . Otherwise, if the imitation propensity and the group size are properly set, the imitative search greatly improves the group performance, as expected Fontanari_15 .

The standard theoretical framework to study search heuristics is the fitness landscape metaphor, first introduced in the context of evolutionary biology to assess the consequences of the genotype-fitness map Wright_32 . The basic idea is that points in a multidimensional space representing all possible gene combinations (genotypes) compose the domain of a real-valued function that represents the fitness of each genotype. Since the expected effect of natural selection is the increase of fitness, evolution can be viewed as a hill-climbing or adaptive walk in the space of genotypes. The effectiveness of natural selection is thus expected to depend strongly on the ruggedness of the fitness landscape Carneiro_10 .

Within the fitness landscape framework we can, in principle, represent the entire evolutionary history of a population by a trajectory on the landscape, and so the analysis of the ensemble of the evolutionary trajectories can offer an approach to the (open) issue of whether evolution is predictable or not. In other words, if “the tape of evolution is replayed” should we expect a completely different outcome? We refer the reader to Gould_97 and Morris_10 for different viewpoints on this matter.

The study of evolutionary predictability through the analysis of the ensemble of evolutionary trajectories resulted in the proposal of (at least) two measures aiming at quantifying the diversity and divergence of trajectories that begin at the same point in genotype space and end at the same, distinct point, usually the wild type of a gene that codes for an enzyme Lobkovsky_11 ; Lobkovsky_12 ; Visser_14 . The first measure is the predictability, which yields the probability that two evolutionary trajectories picked at random from the ensemble of trajectories are the same. The second measure is the mean path divergence, which gauges the dissimilarity between two randomly chosen trajectories in the ensemble. Those measures indicate that adaptive walks in rugged landscapes are more deterministic, and hence more predictable, than walks in smooth landscapes. However, due mainly to the technical difficulty of defining and following evolutionary trajectories using standard evolutionary algorithms, there are no systematic studies of the effects of different degrees of ruggedness on the statistical properties of the ensemble of trajectories.

Here we consider the ensemble of learning trajectories of a well-mixed population of agents that search for the unique global maximum of NK-fitness landscapes Kauffman_87 ; Kauffman_95 using imitative learning. In this context, there is little ambiguity in the definition of a learning trajectory, which is the ordered sequence of states assumed by the model agent, i.e., the fittest agent in the population, who can influence all the other agents through the imitation procedure. We note that the NK model is the canonical modeling approach used in management research to study decision making in complex scenarios where the decision variables interact in determining the value (fitness) of their combinations Levinthal_97 ; Billinger_13 ; Sebastian_17 . In addition, use of the NK model allows the tuning of the ruggedness of the landscape and hence the study of the effects of the topography of the fitness landscape on the repeatability of the imitative learning trajectories in distinct searches.

We find that the properties of the ensemble of learning trajectories are determined by the interaction between the search dynamics and the underlying fitness landscape. In agreement with the results for evolutionary trajectories, we find that the learning trajectories are more deterministic, in the sense that they are more similar to one another, for rugged landscapes than for smooth (additive) ones. However, increasing the ruggedness of the landscape, say by increasing the parameter K of the NK model, does not necessarily make the search more deterministic. The distance and the difference in fitness between the local maxima likely influence their attractiveness, which in turn greatly impacts the properties of the learning trajectories. Increasing either the population size or the imitation propensity makes the search more deterministic, since both parameters enhance the attractiveness of the local maxima, thus forcing the trajectories to pass through them. The existence of few escape routes from the local maxima results in more predictable and less divergent learning trajectories.

Another interesting aspect of the study of trajectories, which is not directly related to the predictability issue, is the possibility of assessing the roughness, defined as the deviation from additivity Aita_01 , experienced by the search on a specific learning trajectory. This analysis shows that changing the parameters of the model, i.e., the population size and the imitation propensity, changes the characteristics of the trajectories of the imitative search. In particular, the roughness of the learning trajectories increases monotonically with both parameters and tends to a well-defined limiting value, which corresponds to the trajectory of maximum roughness in the landscape. In addition, we find that the learning trajectories exhibit a roughness much greater than the average roughness of the landscape.

The rest of this paper is organized as follows. In Section II we offer an outline of the NK model of rugged fitness landscapes Kauffman_87 and present a measure of ruggedness, denoted by roughness Aita_01 , that we use to characterize those landscapes. The rules used by the agents to explore the state space of the NK-landscape are explained in Section III together with the definition of the (purged) learning trajectories. The measures of predictability and similarity of the learning trajectories are then introduced in Section IV. In Section V we present and analyze the results of our simulations, emphasizing the effect of the ruggedness of the landscape on the learning trajectories. Finally, Section VI is reserved for our concluding remarks.

II NK model and roughness

The NK model Kauffman_87 is the computational implementation of choice for fitness landscapes and has been extensively used to study optimization problems in population genetics, developmental biology and protein folding Kauffman_95 . In fact, the repute of the NK model goes well beyond the (theoretical) biology realm, as the model is considered a paradigm for problem representation in management research Levinthal_97 ; Billinger_13 ; Sebastian_17 , since it allows the manipulation of the difficulty of the problems and challenges posed to individuals and companies. The NK model of rugged fitness landscapes is defined in the space of binary strings of length N, so this parameter determines the size of the state space, or the dimensionality of the landscape, namely 2^N. The other parameter, K, determines the range of the epistatic interactions among the bits of the binary string and strongly influences the number of local maxima on the landscape.

More pointedly, the state space of the NK landscape consists of the 2^N distinct binary strings of length N, which we denote by x = (x_1, …, x_N) with x_i = 0, 1. To each string x we associate a fitness value F(x) that is given by an average of the contributions from each component of the string, i.e.,

F(x) = (1/N) ∑_{i=1}^{N} φ_i(x)    (1)

where φ_i(x) is the contribution of component i to the fitness of string x. The local fitness φ_i depends on the state x_i as well as on the states of K distinct, randomly chosen components, i.e., φ_i = φ_i(x_i, x_{i_1}, …, x_{i_K}), so each φ_i has 2^{K+1} distinct arguments. It is clear then that the parameter K determines the degree of interaction (epistasis) among the components of the string. As usual, we assign a uniformly distributed random number in the unit interval to each one of the 2^{K+1} distinct arguments of each φ_i, so the specification of the full NK fitness landscape requires the generation of N 2^{K+1} uniform deviates Kauffman_87 . As a result, we have 0 < F(x) < 1 for all strings x. Moreover, because the local fitnesses φ_i are real-valued random variables with support in the unit interval, the NK landscape exhibits a unique global maximum with probability one, as different strings have different fitness values.

For K = 0, the NK-fitness landscape exhibits a single maximum, which is easily determined by picking for each component i the state x_i that maximizes the local contribution φ_i. Here, a string is a maximum if its fitness is greater than the fitness of all its neighboring strings (i.e., strings that differ from it at a single component). For K = N − 1, the fitness values of neighboring configurations are uncorrelated and so the NK model reduces to the Random Energy model Derrida_81 ; Saakian_09 . This (uncorrelated) landscape has on average 2^N/(N + 1) maxima with respect to single bit flips Kauffman_87 .
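The construction above can be sketched in a few lines of Python. This is an illustrative sketch only - `nk_landscape` and `count_maxima` are our own names, and the tuple-indexed tables of local contributions are one of several possible implementations.

```python
import itertools
import random

def nk_landscape(N, K, rng=random.Random(0)):
    """Build an NK landscape: for each component i, choose K other
    components and assign a uniform deviate to each of the 2^(K+1)
    possible states of the K+1 relevant bits."""
    neighbors = [rng.sample([j for j in range(N) if j != i], K) for i in range(N)]
    tables = [{bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]

    def fitness(x):
        # Average of the N local contributions phi_i, as in eq. (1).
        return sum(tables[i][tuple(x[j] for j in [i] + neighbors[i])]
                   for i in range(N)) / N

    return fitness

def count_maxima(fitness, N):
    """Count strings fitter than all N single-bit-flip neighbors."""
    count = 0
    for x in itertools.product((0, 1), repeat=N):
        f = fitness(x)
        if all(fitness(x[:i] + (1 - x[i],) + x[i + 1:]) < f for i in range(N)):
            count += 1
    return count
```

For K = 0 the count is always 1, while for K = N − 1 it fluctuates around 2^N/(N + 1), in line with the results quoted above.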

A simple and popular measure of the ruggedness of a landscape is the density of maxima, i.e., the number of strings with no fitter neighbors divided by the total number of strings in the landscape Kauffman_87 . Here, however, we consider an alternative measure that has been used to model empirical adaptive landscapes in the protein evolution literature Aita_01 ; Carneiro_10 ; Lobkovsky_11 ; Visser_14 : the roughness, which measures the deviation from additivity of a landscape. Since the roughness is measured over a particular trajectory towards the global maximum, it is a valuable tool to assess the nature of the search on rugged landscapes. In the following we offer a brief sketch of this important measure Aita_01 .

Without loss of generality, we can assume that the fitness F(x) of a string x is given by the sum of an additive term F_a(x) and a non-additive term ε(x), i.e.,

F(x) = F_a(x) + ε(x)    (2)

Ideally, the non-additive term ε(x) should be a small random component because, in practice, protein fitness landscapes are nearly additive close to the maximum (wild type) Carneiro_10 , but here we impose no condition on ε(x) since F(x) is an NK landscape fully specified by eq. (1). The additive term is given by

F_a(x) = F(x*) + ∑_{i=1}^{N} w_i d_i(x)    (3)

where F(x*) is the fitness value of the unique global maximum x* of the landscape and

d_i(x) = |x_i − x_i*|    (4)

Hence, once F(x*) is known, we need only to specify the N unknowns w_1, …, w_N to determine the values of the additive fitness F_a(x) for all 2^N strings. This can be done by considering the N neighbors x^(i) of the global maximum (the strings that differ from x* at the single component i), for which we can rewrite eq. (3) as

F_a(x^(i)) = F(x*) + w_i    (5)

Now, since the NK landscape defined in eq. (1) is clearly additive for K = 0, we must require that F_a(x^(i)) = F(x^(i)) in this case. This condition is fulfilled provided we define

w_i = F(x^(i)) − F(x*)    (6)

for i = 1, …, N. So the fitness of the additive landscape is guaranteed to coincide with the fitness of a general NK landscape at the global maximum and at its N neighbors. In addition, in the case K = 0 we have F_a(x) = F(x) for all 2^N strings, since ε(x) = 0.

Let us consider a trajectory on an NK landscape of length L that is comprised of the sequence of strings x^1, …, x^L. The roughness of the landscape measured along this trajectory is defined as Aita_01

r = [ (1/L) ∑_{l=1}^{L} ( F(x^l) − F_a(x^l) )^2 ]^{1/2}    (7)

which depends on the particular trajectory followed by the imitative search on its way towards the global maximum of the landscape. The global roughness of a landscape is defined by summing over the entire state space in eq. (7) and setting L to the size of that space, i.e., L = 2^N. Notice that we use the term ruggedness in a qualitative sense to mean the jaggedness of the landscape, whereas the term roughness is reserved for the quantity r defined in eq. (7).
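The construction of the additive approximation and the roughness of eq. (7) translate directly into code. The sketch below is ours (the function names `additive_fit` and `roughness` are illustrative); it fixes the coefficients w_i from the global maximum and its N neighbors, as described above.

```python
def additive_fit(fitness, x_star):
    """Build the additive approximation F_a of eq. (3), with the
    coefficients w_i of eq. (6) read off the N neighbors of the
    global maximum x_star (a tuple of 0/1 bits)."""
    N = len(x_star)
    f_star = fitness(x_star)
    # w_i = F(neighbor differing at component i) - F(x*), eq. (6)
    w = [fitness(x_star[:i] + (1 - x_star[i],) + x_star[i + 1:]) - f_star
         for i in range(N)]

    def F_a(x):
        # eq. (3): add w_i whenever x differs from x* at component i
        return f_star + sum(w[i] for i in range(N) if x[i] != x_star[i])

    return F_a

def roughness(fitness, F_a, trajectory):
    """Root-mean-square deviation from additivity along a trajectory, eq. (7)."""
    L = len(trajectory)
    return (sum((fitness(x) - F_a(x)) ** 2 for x in trajectory) / L) ** 0.5
```

On an additive (K = 0) landscape the approximation is exact and the roughness of any trajectory vanishes, as required by the consistency condition above.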

In this paper we focus mainly on single realizations of NK-fitness landscapes with fixed length and varying K. As expected, the number of maxima increases with K. Different realizations yield qualitatively similar results (see, e.g., Fontanari_15 ). The somewhat low dimensionality of the NK landscapes, as well as the use of single landscape realizations, is justified by the high computational demand of storing and analyzing the learning trajectories needed in the calculation of the predictability and the mean path divergence. We recall that the number of possible trajectories grows combinatorially with the number of strings, so for large N no feasible ensemble of trajectories can suitably describe the properties of the full trajectory space. The predictability measure is the more susceptible to the increase of the dimensionality of the landscape, as it entails the generation of identical trajectories, which requires an unrealistically large number of runs. In fact, previous experimental and computational studies have considered landscapes of even lower dimensionality Carneiro_10 ; Lobkovsky_11 ; Lobkovsky_12 ; Visser_14 ; Aita_01 . We address the effects of increasing the dimensionality of the landscape in Section V.

III Imitative Learning

We consider a well-mixed population of M agents, represented by binary strings of length N, that explore the state space of an NK-fitness landscape aiming at reaching the global maximum x*. Initially, all strings are identical (isogenic population) and set to the antipode of x*, i.e., the string that differs from x* at all N components. The initial isogenic population is necessary because the predictability measure (see Section IV) requires counting the number of identical trajectories in the ensemble of learning trajectories, and so all searches must begin and end at the same strings; otherwise they could never be identical. The imitative search strategy is based on the presumption that it may be advantageous for an agent to copy or imitate the agent with the highest fitness in the population, the so-called model agent Rendell_10 ; King_12 . Notice that in this paper we use the terms agent and string interchangeably.

More pointedly, we implement the synchronous or parallel update of the agents as follows. At time t we first determine the model agent and then let each agent choose between two actions. The first action, which happens with probability 1 − p, consists of simply flipping a bit at random. The second action, which happens with probability p, is the imitation of the model agent: the model and the target agents are compared and the bits at which they differ are singled out; the target agent then selects one of these distinct bits at random and flips it, so that this bit is now the same in both agents. As expected, imitation increases the similarity between the target and the model agents, which does not necessarily lead to an increase of the fitness of the target agent if the landscape is not additive. In the case the target agent is identical to the model agent, which happens not only in our initial isogenic-population setup but also during the search, since the imitation procedure reduces the diversity of the strings, the target agent flips a bit at random with probability one. After the agents are updated, we identify the fittest agent in the population, which then becomes the model agent for the next update round at time t + 1.

The parameter p is the imitation propensity, which is the same for all agents (see Fontanari_16 for the relaxation of this assumption). The case p = 0 corresponds to the baseline situation in which the agents explore the state space independently of each other. The case p = 1 corresponds to the situation where only the model string explores the state space through random bit flips; the other strings simply follow the model, thus making the population almost isogenic.

We note that during the increment from t to t + 1 all agents are updated, either by flipping a bit at random or by copying a bit of the model string. The search ends when one of the agents finds the global maximum, and we denote by t* the time when this happens. Clearly, at time t* the model string is the global maximum x*. We stress that the imitative search always reaches the global maximum since there is a nonzero probability that a string flips a bit at random, which guarantees that the agents are always exploring the state space, thus avoiding permanent trapping in the local maxima. This is true even for p = 1, since, according to the rules of the imitative search, the model string (and only that string) flips bits at random. Of course, the time the imitative search takes to escape the local maxima and reach the global maximum can be very large, as we will see in Section V, but it is definitely finite. This contrasts with adaptive walks Kauffman_87 , where only bit flips that increase the fitness of the strings are allowed, and which may therefore get permanently stuck in a local maximum of the landscape. For those walks it is important to study the influence of the topography of the landscape on the accessibility of the global maximum Schmiegelt_14 , whereas for the imitative search the relevant issue is determining the time needed to find the global maximum.
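A minimal sketch of one synchronous update round following the rules above, assuming strings are tuples of 0/1 bits; the function name `update_round` and the representation choices are ours, not the paper's code.

```python
import random

def update_round(pop, fitness, p, rng):
    """One synchronous round of the imitative search: every agent either
    flips a random bit (prob. 1 - p) or copies one differing bit of the
    current model agent (prob. p)."""
    model = max(pop, key=fitness)           # fittest string in the population
    new_pop = []
    for agent in pop:
        diff = [i for i in range(len(agent)) if agent[i] != model[i]]
        if diff and rng.random() < p:
            i = rng.choice(diff)            # imitation: copy one differing bit
            new_pop.append(agent[:i] + (model[i],) + agent[i + 1:])
        else:
            i = rng.randrange(len(agent))   # exploration: flip a random bit
            new_pop.append(agent[:i] + (1 - agent[i],) + agent[i + 1:])
    return new_pop
```

Note that an agent identical to the model (the model itself included) has no differing bits, so it flips a bit at random with probability one, exactly as the rules require.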

In spite of apparent similarities, the imitative learning search is markedly different from evolutionary algorithms (see Back_96 ). In fact, the exploration of the state space through the flipping of randomly chosen bits is similar to the mutation operator of those algorithms, with the caveat that in evolutionary algorithms mutation is an error of the reproduction process, whereas in the imitative search flipping a bit and imitation are mutually exclusive processes. The analogy between imitation and the crossover process is flimsier, since the model agent is a mandatory parent in all matings but contributes a single gene (i.e., a single bit) to the offspring, which then replaces the other parent, namely, the target agent. Since the contributed gene is not random - it must be absent in the target agent - the genetic analogy is clearly inappropriate, and so the imitative learning search stands on its own as a search strategy.

The performance of a search is evaluated by the computational cost C, which measures the total number of agent updates implemented in a search of duration t*. Since in each round of the search exactly M agents are updated, the total number of updates is M t*. In addition, since finding the global maximum of the NK model for K > 0 is an NP-complete problem Solow_00 , which means that the time required to solve the problem scales with the size 2^N of the state space, and in order to guarantee that C is a quantity on the order of 1, we write

C = M t* / 2^N    (8)

This definition guarantees that C is on the order of 1 for the independent search (p = 0; see Fontanari_15 for an analytical derivation of the computational cost in this case) and, as we will show in Section V, for the imitative search as well. We note that since our searches begin with an isogenic population set at the antipode of x*, we expect a higher computational cost than with the usual initial setup where the strings are chosen randomly.
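Putting the pieces together, the following compact driver runs the search until the global maximum is found and reports the cost of eq. (8). It is a sketch under simplifying assumptions: for brevity it hard-codes the additive (K = 0) landscape, whose global maximum is the all-ones string, and the function name is ours.

```python
import random

def imitative_search_cost(N, M, p, rng):
    """Run the imitative search on the additive landscape F(x) = sum(x)/N
    (global maximum: all ones) starting from an isogenic population at
    the antipode, and return the computational cost M * t_star / 2**N."""
    fitness = lambda x: sum(x) / N
    target = (1,) * N
    pop = [(0,) * N] * M                    # isogenic antipodal start
    t = 0
    while target not in pop:
        model = max(pop, key=fitness)
        nxt = []
        for agent in pop:
            diff = [i for i in range(N) if agent[i] != model[i]]
            if diff and rng.random() < p:
                i = rng.choice(diff)        # imitate one bit of the model
                nxt.append(agent[:i] + (model[i],) + agent[i + 1:])
            else:
                i = rng.randrange(N)        # flip a random bit
                nxt.append(agent[:i] + (1 - agent[i],) + agent[i + 1:])
        pop = nxt
        t += 1
    return M * t / 2 ** N
```

Because the additive landscape has no local maxima, imitation is always beneficial here and the cost stays on the order of 1, consistent with the discussion above.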

Whereas previous studies of the imitative search focused on the dependence of the computational cost on the two parameters of the search strategy, namely the population size M and the imitation propensity p Fontanari_14 ; Fontanari_15 , here our focus is on the statistical characterization of the ensemble of learning trajectories. We use two quantitative measures introduced in the study of evolutionary trajectories Lobkovsky_11 - the predictability and the mean path divergence - which we describe in detail in the next section.

IV Characterization of trajectories

The trajectory of a particular imitative search is the ordered sequence of the model strings, which begins at the antipode of x* and ends at x*. Thus an unpurged trajectory has t* + 1, not necessarily distinct, strings. However, the trajectories we consider here are purged of loops and so they typically contain many fewer than t* + 1 strings. Explicitly, if a string appears as a model string more than once in a search, thus forming a loop structure, the trajectory is redefined and the loop is removed. In this way no string can appear more than once in the resulting purged learning trajectory. Therefore, for our purposes, a learning trajectory is described by a directed graph with no loop structures, where each node denotes a model string. We stress that the evaluation of the computational cost (8) does not involve the purging of loops from trajectories, as t* accounts for all (not necessarily distinct) model strings produced in the search. The loops need to be purged from the learning trajectories for the calculation of the predictability measure only. In fact, since in our searches we never found two identical unpurged trajectories, that measure is not suitable to characterize the ensemble of the original trajectories, hence the need to purge the loops from them.
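The loop-purging step can be sketched as follows. Assuming the trajectory is an ordered list of (hashable) model strings, the stack-based function below - an illustrative implementation, with names of our choosing - cuts every cycle back to its first visit, so that no string appears twice in the result.

```python
def purge_loops(trajectory):
    """Remove loops: whenever a string reappears, cut the trajectory
    back to its first occurrence, so each string occurs at most once."""
    path, pos = [], {}
    for s in trajectory:
        if s in pos:
            # loop detected: discard everything after the first visit
            keep = pos[s] + 1
            for dropped in path[keep:]:
                del pos[dropped]
            path = path[:keep]
        else:
            pos[s] = len(path)
            path.append(s)
    return path
```

Note that nested loops are handled naturally: a string revisited late in the search can erase an arbitrarily long stretch of intermediate states, which is why purged trajectories are typically much shorter than unpurged ones.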

For each run of the imitative search, we keep track of the entire (unpurged) learning trajectory, beginning at the antipode of x* and ending at x*. We then purge the loops from the trajectory and store it in the trajectory ensemble. This procedure is repeated a large number of times, so the trajectory ensemble contains many learning trajectories of the imitative search for a fixed realization of the NK-fitness landscape and for fixed parameters M and p. Let us denote by T one such trajectory and by P_T its probability of occurrence in the ensemble, which is given by the ratio between the number of times T appears in the ensemble and the ensemble size. The predictability S of the imitative search is defined as the probability that two independent runs result in the same (purged) learning trajectory Roy_09 (see also Lobkovsky_11 ; Lobkovsky_12 ; Visser_14 ), i.e.,

S = ∑_T P_T^2    (9)

where the sum is over all distinct trajectories T in the trajectory ensemble. This simple measure of the repeatability of the trajectories varies from nearly 0, in the case all trajectories in the ensemble are different, to 1, in the case there is a single trajectory joining the initial and end points of the search Visser_14 . The inverse of the predictability, 1/S, can be viewed as the effective number of trajectories that contribute to the imitative search. We note that in the spin-glass literature S is akin to the spin-glass order parameter and 1/S is like a participation ratio Mezard_86 .

A drawback of the predictability is that very similar, but not identical, trajectories are counted simply as different trajectories, and so this quantity offers no clue about the similarity between the learning trajectories of the imitative search. Since all learning trajectories have the same starting and ending points, we can use the mean path divergence, originally introduced in the study of evolutionary paths Lobkovsky_11 , to assess the similarity of the trajectories. The key element here is the divergence d(α, β) between trajectories α and β, which may have very different lengths, and which is calculated as follows. For each string in α we measure the Hamming distances to all strings in β and keep the shortest distance only. Then d(α, β) is defined as the average of these shortest Hamming distances over all strings in α. Note that d(α, β) ≠ d(β, α) in general, and d(α, α) = 0. Finally, the mean path divergence is defined as

D = ∑_{α,β} P_α P_β [d(α, β) + d(β, α)]/2    (10)

which yields the expected divergence of two trajectories drawn at random from the ensemble of trajectories Lobkovsky_11 . We note that the time (or the position of the strings in the purged trajectory) factors out of the divergence d. This is so because the trajectories typically have very different lengths, which limits the usefulness of comparing strings at the same position in both trajectories. In addition, since we purge the loops from the trajectories, the time information is lost, although the order of the strings is maintained.
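Both measures are straightforward to compute from a stored ensemble. The sketch below assumes each purged trajectory is represented as a tuple of bit-string tuples (so trajectories are hashable); the function names are ours.

```python
from collections import Counter

def predictability(ensemble):
    """S: sum of squared occurrence probabilities, as in eq. (9)."""
    R = len(ensemble)
    return sum((n / R) ** 2 for n in Counter(ensemble).values())

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def divergence(alpha, beta):
    """d(alpha, beta): average over the strings of alpha of the shortest
    Hamming distance to any string of beta (not symmetric in general)."""
    return sum(min(hamming(x, y) for y in beta) for x in alpha) / len(alpha)

def mean_path_divergence(ensemble):
    """D: expected symmetrized divergence of two random trajectories, eq. (10)."""
    R = len(ensemble)
    counts = Counter(ensemble)
    trajs = list(counts)
    D = 0.0
    for a in trajs:
        for b in trajs:
            w = (counts[a] / R) * (counts[b] / R)
            D += w * 0.5 * (divergence(a, b) + divergence(b, a))
    return D
```

Iterating over the distinct trajectories with their multiplicities, rather than over all ensemble pairs, keeps the double sum tractable when the ensemble is dominated by a few trajectories, which is precisely the high-predictability regime.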

V Results

As a measure of the performance of the population in searching for the global maximum of the NK-fitness landscapes, we consider the mean computational cost, obtained by averaging the computational cost (8) over many searches for each landscape realization. The dependence of this mean cost on the model parameters M and p for homogeneous, well-mixed populations has been extensively studied in previous works Fontanari_15 .

Figure 1 shows the mean computational cost against the imitation propensity p for populations of fixed size and for landscapes of distinct ruggedness. For the smooth, additive landscape (K = 0) the cost decreases monotonically with increasing p. This is expected because in this case the fitness of a string is highly correlated with its distance to the global maximum, and so it is always beneficial to imitate the model agent. This scenario changes for rugged landscapes due to the presence of local maxima that may temporarily trap the entire population if the imitation propensity is too high. The catastrophic performance observed in this case is akin to the Groupthink phenomenon Janis_82 , when everyone in a group starts thinking alike, which can occur when people put unlimited faith in a talented leader (the model agent). In this case, there is an optimal value of the imitation propensity that minimizes the cost. For very rugged landscapes (see Fig. 1), the best strategy is never to imitate the model (i.e., p = 0), thus allowing the agents to explore the landscape independently of each other.

Figure 1: Mean computational cost as a function of the imitation propensity for populations of fixed size. The parameters of the fitness landscape are as indicated.

Figure 1 reveals a curious result: the hardest challenge for the imitative search is not the most rugged landscape but a moderately rugged one. In fact, the common notion that the hardest challenges for search heuristics are landscapes with a large number of local maxima stems from the study of adaptive walks, which are prone to get permanently stuck in those maxima Kauffman_87 ; Kauffman_95 . Different search heuristics, however, may be affected by other features of the landscape. For instance, the imitative search is more affected by local maxima that are farther apart, which makes the escape from them much more costly.

Figure 2: Predictability of the learning trajectories as a function of the imitation propensity for populations of fixed size. The parameters of the fitness landscape are as indicated.

We turn now to the study of the learning trajectories using the two measures introduced in Section IV. Figure 2 shows the predictability for the learning trajectories that resulted in the mean computational costs exhibited in Fig. 1. The results of Fig. 2 are striking: they show that even when the computational costs are practically indistinguishable for the different landscapes, as in the case of small p, the ensembles of learning trajectories are completely different. In particular, for the smooth or nearly smooth landscapes, for which the local maxima have little effect on the search (see Fig. 1), the learning trajectories are practically unpredictable: each run follows a distinct learning trajectory. For the more rugged landscapes, however, the local maxima seem to act as mandatory rest areas of the imitative search, resulting in a substantial increase of the predictability. This effect is enhanced by the increase of the imitation propensity, as expected, making the dynamics almost deterministic for the most rugged landscape shown.

As pointed out in Section IV, the mean path divergence offers a view of the similarity of the learning trajectories. This quantity is shown in Fig. 3 for the same trajectories considered in the analysis of the predictability. The monotonic decrease of the mean path divergence with increasing imitation propensity is expected, since the overlap among trajectories must increase due to the boosted influence of the local maxima. We note that there is no direct relation between the predictability and the path divergence. For instance, consider a situation where the search can follow only two, say equally probable, trajectories that have negligible overlap. This hypothetical scenario would produce highly divergent yet highly predictable trajectories. Thus an ensemble with fewer distinct trajectories (see Fig. 2) may nevertheless contain trajectories that are more divergent than those of a larger ensemble. In this vein, the comparison between the results for small p indicates that the few distinct trajectories of one ensemble can be as divergent as the multitude of trajectories of another. However, for high p the ensemble of trajectories for the rugged landscape exhibits the hallmarks of a deterministic dynamics, namely, high predictability and low divergence.

Figure 3: Mean path divergence of the learning trajectories as a function of the imitation propensity for populations of fixed size. The parameters of the fitness landscape are as indicated.

A word is in order about the effect of the size of the ensemble of trajectories (i.e., the number of searches) on our results. The ensemble size used here is more than enough for a high-accuracy estimate of the mean computational cost (Fig. 1) and the mean path divergence (Fig. 3); in fact, a much smaller number of searches yields the same results for those quantities. However, a very large ensemble is required to accurately estimate the predictability. We recall that this quantity can be interpreted as the probability that two identical trajectories are randomly selected with replacement from our ensemble of trajectories (see eq. (9)). Since the minimum value the predictability can assume is the inverse of the ensemble size, our estimate is reliable only when the predictability is well above this minimum, which guarantees a significant number of identical trajectories in the ensemble. Hence, for the data shown in Fig. 2, our results are not quantitatively reliable in the regions where the predictability approaches this minimum. Qualitatively, however, they correctly point out the very low predictability of the search in those regions.

The previous analysis considered a fixed population size and focused on the influence of the imitation propensity on the properties of the ensemble of learning trajectories. We recall that the number of agent updates the imitative search requires to find the global maximum is proportional to the computational cost (see eq. (8)), so the search becomes prohibitively demanding when that cost is large. Since the cost increases steeply with increasing imitation propensity (see Fig. 1), it is computationally unfeasible to carry out searches beyond a certain value of the propensity. For very short strings, however, we can easily run searches with the imitation propensity close to 1, because then even a high computational cost does not correspond to a too large number of updates. Actually, this is how we know that the computational cost does not diverge in that limit. Of course, this is not an issue for the additive landscape (K = 0), since in this case the computational cost decreases monotonically with increasing imitation propensity.
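For readers who wish to experiment with these costs, a minimal sketch of an imitative search in the spirit of Fontanari_14 is given below. The update rule, the names, and the stopping criterion are our own plausible reading of the algorithm, not a verbatim reproduction: with probability p a randomly chosen agent copies one differing bit from the current fittest agent (the model string), otherwise it flips a bit at random, and the search stops when the model string hits the global maximum.

```python
import random
from itertools import product

def imitative_search(fitness, n_bits, n_agents, p, rng, max_updates=1_000_000):
    """Sketch of an imitative search: at each update a random agent either
    imitates the current fittest agent (the model string) with probability p,
    copying one bit at which they differ, or flips a randomly chosen bit."""
    global_max = max(product((0, 1), repeat=n_bits), key=lambda s: fitness(list(s)))
    agents = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(n_agents)]
    for update in range(1, max_updates + 1):
        model = max(agents, key=fitness)
        if model == list(global_max):
            return update  # agent updates needed to reach the global maximum
        agent = rng.choice(agents)
        diff = [i for i in range(n_bits) if agent[i] != model[i]]
        if diff and rng.random() < p:
            i = rng.choice(diff)       # imitation: copy a differing bit
            agent[i] = model[i]
        else:
            i = rng.randrange(n_bits)  # innovation: flip a random bit
            agent[i] = 1 - agent[i]
    return None  # global maximum not found within the update budget

rng = random.Random(42)
# additive "one-max" landscape: fitness is the number of 1-bits (no local maxima)
cost = imitative_search(sum, n_bits=8, n_agents=5, p=0.5, rng=rng)
print(cost)
```

On the additive one-max landscape used here the cost stays modest; replacing the fitness callable with a rugged NK fitness reproduces the trapping regime discussed in the text.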

Regarding the predictability of stochastic dynamic processes, the study of the influence of the population size is important because one usually expects the dynamics to become more deterministic as the population increases. Figure 4 summarizes our findings for a fixed NK-fitness landscape. Both the predictability and the mean path divergence indicate that the learning trajectories become more deterministic, in the sense that there are fewer distinct trajectories and they are more similar to each other, as the population size increases. The reason for that is the strengthening of the attractivity of the local maxima, as evinced by the increase of the computational cost. In fact, if the search is forced to visit the regions around those maxima, the resulting trajectories must exhibit a high overlap. Large populations increase the attractivity of the local maxima simply because they allow the existence of several copies of the model agent, and this makes it very hard for the imitative search to explore other regions of the state space through the flipping of bits at random, since the extra copies attract the updated model agent back to the local maximum. Hence what makes the dynamics more deterministic with increasing population size is not a decrease of the size of fluctuations around a deterministic trajectory, but the mandatory passage through the local maxima of the landscape and the few routes of escape from them.

Figure 4: Influence of the population size on (a) the mean computational cost, (b) the predictability and (c) the mean path divergence, for the imitation propensities as indicated. The parameters N and K of the landscape are fixed.
Figure 5: Mean roughness of the learning trajectories as function of (a) the imitation propensity for a fixed population size and (b) the population size for the imitation propensities as indicated. The dashed lines show the global roughness of the landscape. The parameters N and K of the landscape are fixed.

Finally, Fig. 5 reveals the utility of the trajectory-dependent roughness, defined in eq. (7), in elucidating the nature of the imitative learning search. Since this roughness is defined for a particular trajectory, the figure presents its average over the trajectories of the ensemble of learning trajectories. The results show that the imitative search seems to experience a more rugged landscape as the population size or the imitation propensity increases, although the landscape is fixed for the data shown in the figure. We recall that the roughness of a given trajectory on a rugged landscape measures the deviation from additivity of the fitness values of the strings that compose the trajectory Aita_01 . Since the actual realizations of the learning trajectories are determined by the rules of the imitative search, Fig. 5 shows that the search follows trajectories that depart markedly from additivity when the population size and the imitation propensity are large, i.e., the imitative search seems to bias the agents towards the roughest paths that connect the global maximum to its antipode. For very large populations or propensities close to 1, the roughness seems to saturate to a value that depends only on the landscape and that characterizes the trajectory of maximum roughness on it. Moreover, regardless of the values of the search parameters, the roughness of the learning trajectories is always greater than the global roughness of the landscape, as indicated in Fig. 5. The reason is that the imitative search is attracted to the regions around the local maxima, where the rugged landscape differs the most from its additive counterpart. Since the computational cost is very high in the regime where the roughness is large (see Fig. 4), our results hint that an efficient search strategy should follow the smoothest paths to the global maximum. This leads to the interesting question of determining the minimum possible roughness of a path on a landscape. In fact, in the same sense that the global roughness can be seen as a lower bound for the roughness of the maximum-roughness path, it is an upper bound for the roughness of the minimum-roughness path. The determination and characterization of those two extreme paths may offer a promising alternative approach to the study of rugged fitness landscapes.
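A compact way to see what the trajectory-dependent roughness measures is to compute it directly. The sketch below follows the reading of Aita_01 in which the roughness is the root-mean-square residual of the best additive (linear-in-bits) fit to the fitness values along a trajectory; eq. (7) is not reproduced here, so this is an assumed formula, not a verbatim one.

```python
import numpy as np

def roughness(strings, fitnesses):
    """RMS deviation of fitness values from the best additive (linear-in-bits)
    model fitted by least squares -- one reading of the roughness of Aita_01."""
    X = np.column_stack([np.ones(len(strings)), np.asarray(strings, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(fitnesses, dtype=float), rcond=None)
    residuals = np.asarray(fitnesses, dtype=float) - X @ coef
    return float(np.sqrt(np.mean(residuals ** 2)))

# a trajectory on an additive landscape: the roughness vanishes
strings = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]
weights = [0.2, 0.5, 0.3]
fits = [sum(w * b for w, b in zip(weights, s)) for s in strings]
print(roughness(strings, fits))
```

On an additive landscape the residual vanishes, while any epistatic fitness assignment (e.g., an XOR-like interaction between two bits) leaves a nonzero roughness.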

To conclude, we address now the effect of increasing the dimensionality of the landscape, which is determined by the length N of the strings. This effect can be understood by considering the correlation between the fitness of any two neighboring strings (strings that differ by a single bit), which is given by 1 - (K+1)/N, and the fact that the mean density of local maxima is 1/(N+1) for the uncorrelated NK landscape, i.e., for K = N - 1 Kauffman_87 . Another important piece of information is the so-called complexity catastrophe, which asserts that, regardless of the value of K, the average fitness of the local maxima approaches the average fitness of the landscape for large N Kauffman_87 ; Solow_99 . Hence for fixed K, increasing the dimensionality of the landscape makes it more correlated and the local maxima sparser. In addition, since the fitness of those maxima is only marginally greater than the mean fitness of the landscape for large N, the imitative search can easily escape them. In sum, since high-dimensional NK-fitness landscapes are essentially smooth landscapes for which the trap effect of the local maxima is greatly attenuated, we expect the learning trajectories to be highly unpredictable, similarly to the additive case K = 0.
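The density of local maxima invoked here is easy to check numerically. The sketch below builds NK landscapes with circularly adjacent neighborhoods (one common convention, not necessarily the one used in the paper) and verifies that for the uncorrelated case K = N - 1 the density of local maxima is close to 1/(N+1), while the additive case K = 0 has a single maximum.

```python
import random
from itertools import product

def nk_fitness_table(n, k, rng):
    """Random NK landscape in which bit i interacts with the next k bits
    (circular neighborhoods); returns a dict mapping bit-tuples to fitness."""
    contrib = [{} for _ in range(n)]
    table = {}
    for s in product((0, 1), repeat=n):
        total = 0.0
        for i in range(n):
            key = tuple(s[(i + j) % n] for j in range(k + 1))
            if key not in contrib[i]:
                contrib[i][key] = rng.random()
            total += contrib[i][key]
        table[s] = total / n
    return table

def local_maxima_density(table, n):
    # fraction of strings fitter than all of their n one-bit-flip neighbors
    count = 0
    for s, f in table.items():
        if all(f > table[s[:i] + (1 - s[i],) + s[i + 1:]] for i in range(n)):
            count += 1
    return count / len(table)

rng = random.Random(7)
n = 10
# uncorrelated case k = n - 1: expected density of local maxima is 1/(n + 1)
dens = sum(local_maxima_density(nk_fitness_table(n, n - 1, rng), n)
           for _ in range(20)) / 20
print(dens, 1 / (n + 1))
```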

The smooth-landscape scenario for high-dimensional NK landscapes is corroborated by an analysis of the computational cost on NK landscapes of increasing N and fixed local fitness correlation, i.e., fixed (K+1)/N. Figure 6 shows this study for four families of landscapes that exhibit the same local correlation. As this analysis focuses on the computational cost only, we run a large number of searches for each landscape. In addition, for each set of parameters N and K we generate and store the landscape realizations, so as to guarantee that runs with different search parameters (the population size, in the case of the figure) explore exactly the same landscape realizations. The results of Fig. 6, where each point represents an average over the searches and landscape realizations, show that the computational cost tends to a monotonically decreasing function of the population size when the length of the strings becomes very large, which is what one expects of a search on a landscape unplagued by local maxima.

Figure 6: Mean computational cost as function of the population size for four families of NK landscapes with the same local fitness correlation, with parameters N and K as indicated, together with the corresponding mean numbers of local maxima; the imitation propensity is fixed.

VI Discussion

The study of the statistical properties of ensembles of trajectories offers an alternative perspective for assessing the behavior of agent-based models, whose analysis has mostly focused on the length and on the nature of the ending points of the dynamic trajectories. Although the original motivation to study ensembles of evolutionary trajectories was to offer an answer to the ultimate (and still open) question of whether evolution is deterministic, and hence predictable, or stochastic, and hence unpredictable Lobkovsky_11 ; Visser_14 , the measures for the predictability and the divergence of trajectories introduced in those studies can also be fruitfully applied to dynamical systems outside the biological realm.

Accordingly, in this paper we have analyzed the learning trajectories of the imitative search Fontanari_14 on NK-fitness landscapes using the predictability measure, which yields the probability that two independent runs of the search follow the same trajectory in the state space Roy_09 , and the path divergence, which measures the dissimilarity of the learning trajectories Lobkovsky_11 . Regarding the use of these measures, a great advantage of the imitative search over the traditional algorithms used to model evolutionary dynamics is the ease of defining a learning trajectory, namely, the ordered, loop-free sequence of model strings that begins at the antipode of the (unique) global maximum of the landscape and ends at that maximum. Since in the imitative search the model strings play a central role in guiding the population towards the global maximum, it is natural to use them to compose the learning trajectories. In the evolutionary algorithms, however, there is no such natural choice: the fittest string at a given generation is more likely to contribute offspring to the next generation but does not have a global guiding role like the model string of the imitative learning, and the consensus string is an artificial construct that plays no role in the population dynamics.
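The loop-free sequence of model strings can be obtained by erasing loops on the fly: whenever a model string reappears, the intervening cycle is discarded. The paper does not spell out this bookkeeping, so the sketch below is one plausible implementation.

```python
def loop_erase(states):
    """Collapse a sequence of model strings into a loop-free trajectory:
    whenever a state reappears, the intervening loop is erased."""
    path, index = [], {}
    for s in states:
        if s in index:
            cut = index[s] + 1       # keep the first visit to s
            for t in path[cut:]:
                del index[t]         # forget the states inside the loop
            del path[cut:]
        else:
            index[s] = len(path)
            path.append(s)
    return path

# the revisit of "001" erases the excursion through "011"
print(loop_erase(["000", "001", "011", "001", "101", "111"]))
```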

We find that both the predictability and the path divergence depend not only on the underlying fitness landscape but also on the way the population explores the landscape, which may enhance some features of its topography such as the attractivity of the local maxima. This effect makes it difficult to draw general conclusions on the influence of the topography on the learning trajectories. Nevertheless, our results for the predictability of the trajectories (see Fig. 2) accord with the empirical findings that the landscape ruggedness caused by magnitude and sign epistasis constrains the pathways of protein evolution without shutting off the pathways to the maximum Carneiro_10 . This is exactly what we observe when we compare the predictability for the additive (K = 0) landscape with the predictability for the rugged landscapes. This result is expected, since a rugged landscape exhibits a multitude of fitness minima that are actively avoided by both the imitative and the evolutionary searches. Avoiding these minima, as well as the regions of low fitness around them, restricts the learning trajectories and hence increases the predictability. However, the predictability exhibits a non-monotonic dependence on the ruggedness of the NK-fitness landscapes: although the trajectories on a rugged landscape are more predictable than on a smooth landscape, increasing the ruggedness of the landscape does not necessarily increase the predictability of the trajectories.

The high-predictability, low-divergence trajectories observed when the population size or the imitation propensity is large correlate very well with the regions of high computational cost, which indicates that these trajectories were trapped in local maxima, escape from which was very costly to the imitative search. We note that the reason the local maxima make the learning trajectories more deterministic is not only that they attract the trajectories but also that there are only a few effective escape routes away from them. This perspective allows us to understand why the computational cost and the predictability are higher for a moderately rugged landscape than for the uncorrelated landscape with K = N - 1 (see Figs. 1 and 2): although the latter exhibits more local maxima, it offers many effective and distinct escape paths from them. The mean path divergence (see Fig. 3) accords with and adds to this scenario by showing that the escape paths are more divergent for the uncorrelated landscape. In sum, we conclude that high levels of determinism come at the cost of computational effort. We note, however, that there is no gain in the learning trajectories being more (or less) deterministic, and that the imitative search is inherently a stochastic search heuristic, despite the high predictability of the learning trajectories for some parameters of the search.

In addition to the analysis of the predictability and divergence measures, the knowledge of the learning trajectories allows us to examine the influence of the model parameters on the way the imitative search experiences the landscape. This is possible due to the introduction of the trajectory-dependent roughness, defined in eq. (7), which reveals quite explicitly that the imitative search follows qualitatively distinct trajectories as the control parameters, the population size and the imitation propensity, change. As pointed out, large values of these parameters increase the attractivity of the local maxima, which results in an increase of the roughness of the learning trajectories. In fact, since the roughness essentially measures the deviation from additivity Aita_01 and the local maxima are the hallmarks of a non-additive landscape, a trajectory that visits a large number of them (or their vicinities) is bound to exhibit a large roughness.

Although our study focused solely on time-independent fitness landscapes, we mention that in the contexts of cultural evolution and social learning, time-dependent fitness landscapes are of utmost relevance (see, e.g., Wilke_99 for a study of adaptive walks on time-dependent NK landscapes), since it is precisely the variability of the environment that promotes the evolution of learning or, more generally, of phenotypic plasticity Scheiner_17 . We note, however, that the whole issue of the predictability of evolution presupposes that the environment is either constant, as considered here, or that it changes in the same way each time the evolution tape is played (i.e., the landscape changes deterministically), otherwise the notion of predictability would make little sense Gould_97 ; Morris_10 .

In view of the widespread use of the fitness (or adaptive) landscape metaphor on so many distinct branches of science Kauffman_95 , the analysis of the ensemble of trajectories using the tools discussed in this paper is likely to greatly impact our understanding of a vast class of dynamical systems. Here we have shown how this analysis sheds light on the intricate interaction between the imitative learning search and the underlying fitness landscape, revealing the crucial role the local maxima play as guideposts on the route towards the global maximum.

Acknowledgements.
P.R.A.C. is supported in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) by Grant No. 303497/2014-9 and Fundação de Amparo à Ciência e Tecnologia do Estado de Pernambuco (FACEPE) under Project No. APQ-0464-1.05/15. J.F.F. is supported in part by Grant No. 2017/23288-0, Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and by Grant No. 305058/2017-7, Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).

References

  • (1) Simon H A and Newell A, Human problem solving 1971 Am. Psychol. 26 145
  • (2) Blackmore S, Evolution and memes: The human brain as a selective imitation device 2001 Cybern. Syst. 32 225
  • (3) Richerson P J, Boyd R and Henrich J, Gene-culture coevolution in the age of genomics 2010 Proc. Natl. Acad. Sci. USA 107 8985
  • (4) Bloom H 2001 Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century (New York, Wiley)
  • (5) Kennedy J, Minds and cultures: Particle swarm implications for beings in sociocognitive space 1999 Adapt. Behav. 7 269
  • (6) Lazer D and Friedman A, The Network Structure of Exploration and Exploitation 2007 Admin. Sci. Quart. 52 667
  • (7) Fontanari J F, Imitative Learning as a Connector of Collective Brains 2014 PLoS ONE 9 e110517
  • (8) Fontanari J F, Exploring NK fitness landscapes using imitative learning 2015 Eur. Phys. J. B 88 251
  • (9) Janis I L 1982 Groupthink: psychological studies of policy decisions and fiascoes (Boston, Houghton Mifflin)
  • (10) Wright S, The roles of mutation, inbreeding, cross-breeding and selection in evolution 1932 Proc. Sixth Int. Cong. Genet. 1 356
  • (11) Carneiro M and Hartl D L, Adaptive landscapes and protein evolution 2010 Proc. Natl. Acad. Sci. USA 107 1747
  • (12) Gould S J 1997 Full House: The spread of excellence from Plato to Darwin (New York, Three Rivers Press)
  • (13) Morris S C, Evolution: like any other science it is predictable 2010 Philos Trans R Soc Lond B Biol Sci 365 133
  • (14) Lobkovsky A E, Wolf Y I and Koonin E V, Predictability of evolutionary trajectories in fitness landscapes 2011 PLoS Comput. Biol. 7 e1002302
  • (15) Lobkovsky A E and Koonin E V, Replaying the tape of life: quantification of the predictability of evolution 2012 Frontiers Genet. 3 246
  • (16) de Visser J A G and Krug J, Empirical fitness landscapes and the predictability of evolution 2014 Nat. Rev. Genet. 15 480
  • (17) Kauffman S A and Levin S, Towards a general theory of adaptive walks on rugged landscapes 1987 J. Theor. Biol. 128 11
  • (18) Kauffman S A 1995 At Home in the Universe: The Search for Laws of Self-Organization and Complexity (New York, Oxford University Press)
  • (19) Levinthal D A, Adaptation on Rugged Landscapes 1997 Manag. Sci. 43 934
  • (20) Billinger S, Stieglitz N and Schumacher T R, Search on Rugged Landscapes: An Experimental Study 2013 Organ. Sci. 25 93
  • (21) Reia S M, Herrmann S and Fontanari J F, Impact of centrality on cooperative processes 2017 Phys. Rev. E 95 022305
  • (22) Aita T, Iwakura M and Husimi Y, A cross-section of the fitness landscape of dihydrofolate reductase 2001 Protein Eng. 14 633
  • (23) Derrida B, Random-energy Model: An Exactly Solvable Model of Disordered Systems 1981 Phys. Rev. B 24 2613
  • (24) Saakian D B and Fontanari J F, Evolutionary dynamics on rugged fitness landscapes: exact dynamics and information theoretical aspects 2009 Phys. Rev. E 80 041903
  • (25) Rendell L, Boyd R, Cownden D, Enquist M, Eriksson K, Feldman M W, Fogarty L, Ghirlanda S, Lillicrap T and Laland K N, Why Copy Others? Insights from the Social Learning Strategies Tournament 2010 Science 328 208
  • (26) King A J, Cheng L, Starke S D and Myatt J P, Is the true ‘wisdom of the crowd’ to copy successful individuals? 2012 Biol. Lett. 8 197
  • (27) Fontanari J F, When more of the same is better 2016 EPL 113 28009
  • (28) Schmiegelt B and Krug J, Evolutionary accessibility of modular fitness landscapes 2014 J. Statist. Phys. 154 334
  • (29) Bäck T 1996 Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms (New York, Oxford University Press)
  • (30) Solow D, Burnetas A, Tsai M and Greenspan N S, On the Expected Performance of Systems with Complex Interactions Among Components 2000 Complex Systems 12 423
  • (31) Solow D, Burnetas A, Tsai M and Greenspan N S, Understanding and attenuating the complexity catastrophe in Kauffman’s NK model of genome evolution 1999 Complexity 5 53
  • (32) Roy S W, Probing evolutionary repeatability: neutral and double changes and the predictability of evolutionary adaptation 2009 PLoS ONE 4 e4500
  • (33) Mezard M, Parisi G and Virasoro M A 1986 Spin Glass Theory and Beyond (Singapore, World Scientific)
  • (34) Wilke C O and Martinetz T, Adaptive walks on time-dependent fitness landscapes 1999 Phys. Rev. E 60 2154
  • (35) Scheiner S M, Barfield M and Holt R D, The genetics of phenotypic plasticity. XV. Genetic assimilation, the Baldwin effect, and evolutionary rescue 2017 Ecol. Evol. 7 8788