
Mental Sampling in Multimodal Representations

Both resources in the natural environment and concepts in a semantic space are distributed "patchily", with large gaps in between the patches. To describe people's internal and external foraging behavior, various random walk models have been proposed. In particular, internal foraging has been modeled as sampling: in order to gather relevant information for making a decision, people draw samples from a mental representation using random-walk algorithms such as Markov chain Monte Carlo (MCMC). However, two common empirical observations argue against simple sampling algorithms such as MCMC. First, the spatial structure is often best described by a Lévy flight distribution: the probability of the distance between two successive locations follows a power law in the distance. Second, the temporal structure of the sampling that humans and other animals produce has long-range, slowly decaying serial correlations characterized as 1/f-like fluctuations. We propose that mental sampling is not done by simple MCMC, but is instead adapted to multimodal representations and is implemented by Metropolis-coupled Markov chain Monte Carlo (MC^3), one of the first algorithms developed for sampling from multimodal distributions. MC^3 involves running multiple Markov chains in parallel with target distributions of different temperatures, probabilistically swapping the states of pairs of chains according to a Metropolis rule. Heated chains more readily traverse valleys in the probability landscape to propose moves to far-away peaks, while the colder chains make the local steps that explore the current peak or patch. We show that MC^3 generates distances between successive samples that follow a Lévy flight distribution and 1/f-like serial correlations, providing a single mechanistic account of these two puzzling empirical phenomena.


1 Introduction

In many complex domains, such as vision, motor control, language, categorization, or common-sense reasoning, human behavior is consistent with the predictions of Bayesian models (e.g., battaglia2013 ; sanborn2013 ; chater2006 ; anderson1991 ; griffiths2011 ; kemp2009 ; wolpert2007 ; yuille2006 ). Bayes’ theorem prescribes a simple normative method to combine prior beliefs with new information to make inferences about the world. However, the sheer number of hypotheses that must be considered in complex domains makes exact Bayesian inference intractable. Instead, it must be that individuals are performing some kind of approximate inference vul2014 ; sanborn2016 .

Sampling is a way to perform approximation for Bayesian models in complex problems that makes many difficult computations easy: instead of integrating over vast hypothesis spaces, samples of hypotheses can be drawn from the posterior distribution. The computational cost of sample-based approximations scales only with the number of samples rather than the dimensionality of the hypothesis space, though using small numbers of samples results in particular biases in inference.

Interestingly, the biases in inference that are introduced by using a small number of samples match some of the biases observed in human behavior. For example, probability matching vul2014 , anchoring effects lieder2012 , and many reasoning fallacies dasgupta2016 ; sanborn2016 can all be explained in this way. However, there is as yet no consensus on the exact nature of the algorithm used to sample from human mental representations.

Previous work has posited that people use either direct sampling or Markov chain Monte Carlo (MCMC) to sample from their posterior distribution over hypotheses vul2014 ; lieder2012 ; dasgupta2016 ; sanborn2016 . In this paper, we demonstrate that these algorithms cannot explain two key empirical effects that have been found in a wide variety of tasks. In particular, these algorithms do not produce distances between samples that follow a Lévy flight distribution, and separately they do not produce autocorrelations that follow 1/f scaling. To find a sampling algorithm that does match these empirical effects, we note that mental representations have been shown to be “patchy”, with high-probability regions separated by large regions of low probability. We then consider one of the first algorithms developed for sampling from multimodal distributions, Metropolis-coupled MCMC (MC^3), and demonstrate that it produces both key empirical phenomena. Previously, Lévy flight distributions and 1/f scaling have been explained separately, as the result of efficient search and as the signature of self-organizing behavior respectively viswanathan1999 ; vanorden2003 ; we provide the first account that can explain both phenomena as the result of the same purposeful mental activity.

1.1 Spatial structure of mental samples

In the real world, resources are rarely distributed uniformly in the environment. Food, water, and other critical natural resources often occur in spatially isolated patches with large gaps in between. The foraging behaviors of humans and other animals should therefore adapt to such patchy environments. In fact, foraging behavior has been observed to display Lévy flights, a class of random walk whose step lengths follow a heavy-tailed power-law distribution shlesinger1995 . In a Lévy flight distribution, the probability of executing a jump of length ℓ is given by:

P(ℓ) ∝ ℓ^(−μ) (1)

where 1 < μ ≤ 3. Values μ ≤ 1 do not correspond to normalizable probability distributions. Examples of mobility patterns following a Lévy flight have been recorded in albatrosses viswanathan1996 , marine predators sims2008 , monkeys ramos2004 , and humans gonzalez2008 .

Lévy flights are advantageous in patchy environments where resources are sparsely and randomly distributed because the probability of returning to a previously visited target site is smaller than in a standard random walk. In the same patchy environment, Lévy flights can visit more new target sites than a random walk does berkolaiko1996 . Interestingly, it has been proven that the optimal foraging exponent is μ = 2, regardless of the dimensionality of the space, if (1) the target sites are sparse, (2) they can be visited any number of times, and (3) the forager can only detect and remember a target site in its close vicinity viswanathan1999 .
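As a concrete illustration, step lengths with this power-law tail can be drawn by inverse-transform (Pareto) sampling; the Python sketch below is illustrative only, and assumes a minimum step length l_min, which the distribution needs in order to be normalizable.

```python
import random

def levy_steps(n, mu=2.0, l_min=1.0, seed=0):
    """Draw n step lengths with P(l) proportional to l^(-mu) for l >= l_min.

    Inverse-transform sampling: the survival function is
    P(L > l) = (l / l_min)^(1 - mu), so L = l_min * u^(-1 / (mu - 1))
    for u uniform on (0, 1].
    """
    rng = random.Random(seed)
    return [l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))
            for _ in range(n)]

steps = levy_steps(100_000, mu=2.0)
# With mu = 2, P(L > 10 * l_min) = 0.1: most steps stay within the current
# patch, but the heavy tail still produces occasional very long jumps.
```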

Remarkably, mental representations of concepts are also patchy, and the distance between mental samples also follows a Lévy flight distribution. For example, in semantic fluency tasks (e.g., asking participants to “name as many animals as you can”), the retrieved animals tend to form clusters (e.g., pets, water animals, African animals) troyer1997 ; abbott2012 . This same task has also been found to produce Lévy flight distributions of inter-response intervals (IRI) rhodes2007 , which can be considered a measure of distance between samples by making the reasonable assumption that there is a linear relationship between IRI and mental distance. (There are various ways to make the link between IRI and distance between samples. One is to assume that it takes longer to transition to a sample that is further away in the mental space. A second is to assume that while generating any sample takes the same fixed amount of time, there are unreported samples generated between each reported sample, and that the sampler has travelled further the more unreported samples are generated; unreported samples are plausible in this task because participants are only given credit for each new animal name they report.)

1.2 Temporal structure of mental samples

Besides the spatial structure, in which the distance between two successive locations follows a power-law distribution, a number of studies have reported that the temporal structure of many cognitive activities contains long-range, slowly decaying serial correlations. These correlations tend to follow a scaling law kello2010 :

C(k) ∝ k^(−α) (2)

where C(k) is the autocorrelation function at temporal lag k. The same phenomenon is often expressed in the frequency domain:

S(f) ∝ 1/f^α (3)

where f is frequency, S(f) is spectral power, and α ≈ 1 is considered 1/f scaling. The power spectrum can be derived by submitting the time series to Fourier analysis.

1/f noise is also known as pink or flicker noise; its predictability is intermediate between white noise (no serial correlation, α = 0) and brown noise (no correlation between increments, α = 2). Note that Lévy flights are random walks, so they do not produce 1/f noise but 1/f² (brown) noise instead.
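The exponent α can be estimated by regressing log power against log frequency. The Python sketch below is a simplified periodogram fit (not the block-averaged procedure cited later); it recovers α near 0 for white noise and near 2 for a random walk.

```python
import numpy as np

def spectral_slope(x):
    """Estimate alpha in S(f) ~ 1/f^alpha by log-log regression on the periodogram."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size)
    f, s = freqs[1:], power[1:]          # drop the zero-frequency (DC) bin
    slope, _ = np.polyfit(np.log(f), np.log(s + 1e-12), 1)
    return -slope                        # alpha

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)        # no serial correlation: alpha near 0
brown = np.cumsum(white)                 # random walk: alpha near 2
```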

1/f-like temporal fluctuations in human cognition were first reported in time estimation and spatial interval estimation tasks in which participants were asked to repeatedly estimate a pre-determined time interval of 1 second or spatial interval of 1 inch gilden1995 . Subsequent studies have shown 1/f scaling laws in the response times of mental rotation, lexical decision, serial visual search, and parallel visual search gilden1997 , as well as in the time to switch between different percepts when looking at a bistable stimulus (i.e., a Necker cube gao2006 ).

Given that mental sampling can be described spatially as a Lévy flight and temporally has 1/f autocorrelations (see Table 1 for a summary), we now investigate which sampling algorithms can capture both the spatial and temporal structure of human cognition.

Papers       Experiments                   Main findings
gilden1995   Time interval estimation      1/f-like power spectrum slope
             Spatial interval estimation   1/f-like power spectrum slope
gilden1997   Mental rotation               1/f-like RT power spectrum slope
             Lexical decision              1/f-like RT power spectrum slope
             Serial search                 1/f-like RT power spectrum slope
             Parallel search               1/f-like RT power spectrum slope
rhodes2007   Memory retrieval task         IRI power-law exponents in the Lévy range (1 < μ ≤ 3)
rhodes2011   Natural scene perception      Eye movement trajectories follow both 1/f noise and a Lévy flight
Table 1: Key findings for 1/f noise and Lévy flights in human mental sampling

2 Sampling algorithms

We consider three possible sampling algorithms that might be employed in human cognition: Direct Sampling (DS), Random walk Metropolis (RwM), and Metropolis-coupled MCMC (MC^3). We define DS as independently drawing samples in accord with the posterior probability distribution. DS is the most efficient of the three algorithms, but it may not be possible to implement in human cognition, as it often requires calculating intractable normalizing constants that scale exponentially with the dimensionality of the hypothesis space mackay2003 ; chater2006a . DS has been used to explain biases in human cognition such as probability matching vul2014 .

MCMC algorithms can bypass the problem of the normalizing constant by simulating a Markov chain that transitions between states according only to the ratio of the probabilities of hypotheses mackay2003 . We define RwM as a classical Metropolis-Hastings MCMC algorithm, which can be thought of as a random walker exploring the probability landscape of hypotheses, preferentially climbing the peaks of the posterior probability distribution metropolis1953 ; hastings1970 . However, with a limited number of samples, RwM is very unlikely to reach modes in the probability distribution that are separated by large regions of low probability. This leads to biased approximations of the posterior distribution swendsen1986 ; sanborn2016 . Random walks have been used to model clustered responses in memory retrieval abbott2012 , and RwM in particular has been used to model multistable perception gershman2012 , the anchoring effect lieder2012 , and various reasoning biases dasgupta2016 ; sanborn2016 .

Our third algorithm, MC^3, also known as parallel tempering or replica-exchange MCMC, was one of the first algorithms to successfully tackle the problem of multimodality geyer1991 . MC^3 involves running m Markov chains in parallel, each at a different temperature T_1 < T_2 < … < T_m. In general T_1 = 1 is the temperature of interest, at which the target distribution π(x) is unchanged. The purpose of the heated chains is to traverse valleys in the probability landscape and propose moves to far-away peaks (by sampling from heated target distributions π(x)^(1/T_i)), while the colder chains make the local steps that explore the current probability peak or patch. In every iteration, MC^3 decides whether to swap the states between two randomly chosen chains geyer1991 . In particular, a swap of the states x_i and x_j of chains i and j is accepted or rejected according to a Metropolis rule (hence the name Metropolis-coupled MCMC), with acceptance probability

r = min{ 1, [π(x_j)^(1/T_i) π(x_i)^(1/T_j)] / [π(x_i)^(1/T_i) π(x_j)^(1/T_j)] } (4)

Coupling induces dependence among the chains, so each individual chain is no longer Markovian. The stationary distribution of the entire set of chains is the product ∏_{i=1}^{m} π(x)^(1/T_i), but we only use samples from the cold chain (T_1 = 1) to approximate the posterior distribution geyer1991 . Pseudocode for MC^3 is presented below. Note that MC^3 reduces to RwM when the number of parallel chains m = 1.

Algorithm: Metropolis-coupled Markov chain Monte Carlo (MC^3)
1: Choose a starting point x_0^(i) for each chain i = 1, …, m.
2: for t = 1 to N do
3:      for i = 1 to m do                                            ▷ update all chains
4:          Draw a candidate sample x* ~ N(x_{t−1}^(i), Σ)           ▷ Gaussian proposal distribution
5:          Sample u ~ Uniform(0, 1)
6:          α ← min{1, [π(x*) / π(x_{t−1}^(i))]^(1/T_i)}
7:          if u < α then x_t^(i) ← x* else x_t^(i) ← x_{t−1}^(i) end if   ▷ Metropolis acceptance rule
8:      end for
9:      repeat floor(m/2) times                                      ▷ swapping scheme for Markov chains
10:          Randomly select two chains i, j without repetition
11:          Sample u ~ Uniform(0, 1)
12:          r ← [π(x_t^(j))^(1/T_i) π(x_t^(i))^(1/T_j)] / [π(x_t^(i))^(1/T_i) π(x_t^(j))^(1/T_j)]
13:          if u < r then swap(x_t^(i), x_t^(j)) end if             ▷ Metropolis-coupled swapping rule
14:      end repeat
15: end for
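The pseudocode can be made concrete in a few lines. The sketch below is a minimal 1D Python implementation, not the authors' code: the temperature ladder, step size, and bimodal test target are illustrative assumptions, and it proposes a single swap per iteration rather than floor(m/2).

```python
import math
import random

def mc3_sample(log_p, n_iter, n_chains=4, step=1.0, x0=0.0, seed=0):
    """Metropolis-coupled MCMC in 1D; returns the cold chain's samples."""
    rng = random.Random(seed)
    temps = [2.0 ** i for i in range(n_chains)]   # T_1 = 1 is the chain of interest
    x = [x0] * n_chains
    cold = []
    for _ in range(n_iter):
        # Metropolis update of every chain against its heated target p(x)^(1/T_i)
        for i in range(n_chains):
            prop = x[i] + rng.gauss(0.0, step)
            log_a = (log_p(prop) - log_p(x[i])) / temps[i]
            if rng.random() < math.exp(min(0.0, log_a)):
                x[i] = prop
        # propose swapping the states of two randomly chosen chains (Metropolis rule)
        i, j = rng.sample(range(n_chains), 2)
        log_r = ((log_p(x[j]) - log_p(x[i])) / temps[i]
                 + (log_p(x[i]) - log_p(x[j])) / temps[j])
        if rng.random() < math.exp(min(0.0, log_r)):
            x[i], x[j] = x[j], x[i]
        cold.append(x[0])
    return cold

def log_p(z):
    """Bimodal target: equal mixture of N(-5, 1) and N(5, 1), unnormalised."""
    return math.log(math.exp(-0.5 * (z + 5.0) ** 2)
                    + math.exp(-0.5 * (z - 5.0) ** 2) + 1e-300)

samples = mc3_sample(log_p, n_iter=20_000, seed=42)
```

On this bimodal target, the heated chains cross the low-probability valley between the modes and hand good states down to the cold chain through swaps; with n_chains = 1 the loop reduces to plain RwM.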

3 Results

In this section, we evaluate whether the two key empirical effects of Lévy flights and 1/f autocorrelations can be produced by the Direct Sampling (DS), Random walk Metropolis (RwM), and Metropolis-coupled MCMC (MC^3) algorithms.

3.1 Lévy flight

We simulated a 2D patchy environment as a mixture of 15 Gaussian components whose means were generated uniformly at random in both dimensions and whose covariance matrices were fixed to the identity for all components. This method produces a patchy environment (see, for example, the top panel of Figure 1). We ran DS, RwM, and MC^3 on this multimodal probability landscape; the first 100 positions for each algorithm can be found in the top panel of Figure 1. The empirical flight distances were obtained by calculating the Euclidean distance between two consecutive positions of the sampler. For MC^3, only the positions of the cold chain (T_1 = 1) were used.
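A target of this form is easy to construct. The Python sketch below is illustrative only: the number of components matches the 15-component environment described here, but the extent of the square holding the means is an assumed value, since the range used in the simulation is not specified.

```python
import math
import random

def make_patchy_logp(n_modes=15, extent=40.0, seed=1):
    """Unnormalised log density of a mixture of identity-covariance 2D Gaussians.

    n_modes matches the 15-component environment described in the text;
    extent (the side of the square holding the means) is an assumed value.
    """
    rng = random.Random(seed)
    means = [(rng.uniform(0.0, extent), rng.uniform(0.0, extent))
             for _ in range(n_modes)]

    def logp(x, y):
        total = sum(math.exp(-0.5 * ((x - mx) ** 2 + (y - my) ** 2))
                    for mx, my in means)
        return math.log(total + 1e-300)   # guard against underflow far from modes

    return logp, means

logp, means = make_patchy_logp()
```

With unit-width peaks scattered over a large square, most of the plane carries near-zero probability: the "gaps" between patches that simple MCMC struggles to cross.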

Power-law distributions produce straight lines in a log-log plot. The power-law exponents were therefore fitted by linear regression on the window-averaged log-binned flight distance data rhodes2007 . We used 10 non-overlapping windows that evenly split the x-axis, and cell means are shown as the yellow filled dots in the bottom panel of Figure 1. Fitting the cell means provides a lower-variance estimate of the slope than fitting the log-binned data directly. Figure 1 (bottom panel) shows that only MC^3 reproduces the distributional property of flight distance as a Lévy flight, with an estimated power-law exponent inside the Lévy range 1 < μ ≤ 3; both DS and RwM produced values outside that range. (The log-binned data for RwM show that the algorithm is certainly not producing a power law, as its empirical flight distance distribution is not a straight line in the log-log plot.)
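One way to implement the described fitting procedure is sketched below in Python (a reconstruction, not the authors' code): log-bin the distances into 10 equal windows, average the log distances within each window, and regress log density on the window means.

```python
import math
import random

def fit_power_law_exponent(distances, n_windows=10):
    """Estimate mu in P(d) ~ d^(-mu) by linear regression on window-averaged,
    log-binned distance data."""
    logs = sorted(math.log(d) for d in distances)
    lo, hi = logs[0], logs[-1] + 1e-9
    width = (hi - lo) / n_windows
    xs, ys = [], []
    for k in range(n_windows):
        in_bin = [v for v in logs if lo + k * width <= v < lo + (k + 1) * width]
        if not in_bin:
            continue                      # skip empty windows in the far tail
        a, b = math.exp(lo + k * width), math.exp(lo + (k + 1) * width)
        xs.append(sum(in_bin) / len(in_bin))                   # mean log distance
        ys.append(math.log(len(in_bin) / (len(distances) * (b - a))))  # log density
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```

Applied to distances drawn from a Pareto law with μ = 2, the estimate lands close to 2, inside the Lévy range 1 < μ ≤ 3.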

Figure 1: Searching behavior in a simulated 2D patchy environment of a 15-component Gaussian mixture. (Left Panel) the trajectory of the first 100 positions (red dots in top panel) and the log-log plot of flight distance (bottom panel) for DS. The best-fitted lines used to estimate the Lévy flight exponent (bottom panel, red dashed lines) were based on cell means over non-overlapping windows (yellow filled dots) of the log-binned data (blue dots). (Middle Panel) the same plots for the RwM algorithm; its Gaussian proposal distribution had an identity covariance matrix. (Right Panel) the same plots for the MC^3 algorithm with 8 parallel chains, with only the cold chain shown; the Gaussian proposal distributions for all 8 chains had the same identity covariance matrix. For all algorithms, only the first 1024 samples were used.

We then investigated the impact of spatial sparsity on the estimated power-law exponents. In this simulation, the same number of Gaussian components was used but the range over which their means were placed was varied. Spatial sparsity was computed as the mean distance between Gaussian modes. With small or moderate spatial sparsity we found a positive relationship between spatial sparsity and the estimated power-law exponents for both DS and MC^3 (see Figure 2, left). In this range, only MC^3 produced power-law exponents in the range reported in human mental foraging studies (see Table 1), while both DS and RwM failed to do so. For all three algorithms, once spatial sparsity was too great only a single mode was explored and no large jumps were made.

We also checked whether MC^3 really is better suited than RwM to exploring patchy mental representations. In our simulated patchy environment, which used 15 identical Gaussian components with identity covariance matrices, an optimal sampling algorithm should visit each mode equally often, and hence produce a uniform distribution of visit frequencies over all the modes. The effectiveness of exploring the representation was therefore examined by computing the Kullback-Leibler (KL) divergence mackay2003 between a uniform distribution over all modes and the relative frequency with which an algorithm visited each mode:

KL_t = Σ_{k=1}^{K} U(k) log [ U(k) / P_t(k) ] (5)

where U is a discrete uniform distribution, K is the number of identical Gaussian components, and P_t is the empirical frequency of visited modes up to time t. Samples were assigned to the closest mode when determining these empirical frequencies. The faster the KL divergence for an algorithm reaches zero, the more effective the algorithm is at exploring the underlying environment; the DS algorithm serves as a benchmark for the other two. As shown in Figure 2 (middle), MC^3 quickly catches up to DS, while RwM lags far behind in exploring this patchy environment.
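Equation (5) is straightforward to compute from mode-visit counts; a minimal Python sketch:

```python
import math

def kl_from_uniform(visit_counts):
    """D_KL(U || P_t): uniform target over K modes vs. empirical visit frequencies.

    Returns 0 when every mode is visited equally often, and infinity while
    any mode is still unvisited.
    """
    k = len(visit_counts)
    total = sum(visit_counts)
    kl = 0.0
    for c in visit_counts:
        if c == 0:
            return math.inf          # an unvisited mode gives infinite divergence
        kl += (1.0 / k) * math.log((1.0 / k) / (c / total))
    return kl
```

For example, kl_from_uniform([25, 25, 25, 25]) is 0, while kl_from_uniform([97, 1, 1, 1]) is large: the signature of a sampler stuck in one patch.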

Figure 2: (Left) Estimated power-law exponents of the flight distance distributions for the three sampling algorithms, manipulating the spatial sparsity of the Gaussian mixture environment. Spatial sparsity was defined as the mean distance between modes. All three algorithms used the same settings as in Figure 1. The dashed lines show the range of human data. (Middle) KL divergence of mode visiting from the true distribution for the three sampling algorithms. The underlying patchy environments are the same for all three algorithms. (Right) Simulated RwM with a Lévy flight proposal distribution. Darker colors represent higher spatial sparsity. The dashed lines show the range of human data.

Of course, it may seem that we were simply using the wrong proposal distribution for RwM. Instead of a Gaussian proposal distribution we can use a Lévy flight proposal distribution, which will straightforwardly produce Lévy flights if the posterior distribution is uniform over the entire space (i.e., every proposed flight will be accepted). However, in a patchy environment a Lévy flight proposal distribution will not typically produce a Lévy flight distribution of distances between samples with estimated power-law exponents in the range of human data, as can be seen in Figure 2 (right) for different spatial sparsities. The reason is that the long jumps in the proposal distribution are unlikely to be successful: they often propose new states that lie in regions of nearly zero posterior probability.

3.2 1/f noise

A typical interval estimation task asks participants to repeatedly produce an estimate of the same target interval over many trials gilden1995 ; gilden1997 . For instance, participants were first given an example of a target interval (e.g., a 1-second time interval or a 1-inch spatial interval) and then repeated the judgment again and again without feedback for up to 1000 trials. The time series produced by human participants showed 1/f noise, with an exponent α close to 1. However, the log-log plot of human data is typically observed to flatten out at the highest frequencies gilden1995 . This effect has been explained as the result of two processes: fractional Brownian motion combined with white noise at the highest frequencies gilden1995 .

Figure 3 shows example time series for the first 1024 samples generated by DS (left), RwM (middle), and MC^3 (right). We used a simple Gaussian target distribution in this simulation because the distribution of responses produced by participants was indistinguishable from a Gaussian gilden1995 . Note that RwM and MC^3 were initiated at the mode of the Gaussian distribution, and there was no burn-in period in our simulation. The results show that only MC^3 produces 1/f noise (α ≈ 1), whereas DS produces white noise (α ≈ 0) and RwM is closest to brown noise (α ≈ 2). RwM tends to generate brown noise because, if every proposed sample is accepted, the algorithm reduces to a first-order autoregressive process (i.e., AR(1)) xu2009 . This is shown through simulation in Figure 4: when the width of the Gaussian target becomes much greater than the width of the Gaussian proposal distribution, RwM produces brown noise.
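The AR(1) intuition can be checked directly: when the proposal is much narrower than the target, nearly every proposal is accepted and the chain behaves like a random walk, with lag-1 autocorrelation near 1 (brown-like noise). A small Python sketch with illustrative parameters:

```python
import math
import random

def rwm_gaussian(sigma_target, sigma_prop, n, seed=0):
    """Random walk Metropolis on a N(0, sigma_target^2) target."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, sigma_prop)
        # log acceptance ratio for a zero-mean Gaussian target
        log_a = (x * x - prop * prop) / (2.0 * sigma_target ** 2)
        if rng.random() < math.exp(min(0.0, log_a)):
            x = prop
        out.append(x)
    return out

def lag1_autocorr(xs):
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    return num / sum((v - m) ** 2 for v in xs)

# wide target, small steps: almost every proposal is accepted, so the chain
# is close to an AR(1) process with coefficient near 1
trace = rwm_gaussian(sigma_target=100.0, sigma_prop=1.0, n=5000)
```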

Figure 3: Sampling from a unimodal Gaussian distribution. (Left Panel) the power spectrum, traceplot, and sample distribution for DS, from top to bottom. The best-fitted lines in the power spectra (top panel, purple dashed lines) were estimated from block-averaged periodograms (green filled dots; thornton2005 ). (Middle Panel) the same plots for the RwM algorithm. (Right Panel) the same plots for the MC^3 algorithm with 2 parallel chains, with only the cold chain shown; the result would be similar with 8 parallel chains if swapping were restricted to neighboring chains only. For all algorithms, the first 1024 samples were used.

In contrast, MC^3 has a tendency to produce 1/f noise when the acceptance rate is high. It has been shown that the sum of as few as three AR(1) processes with widely distributed autoregressive coefficients produces an approximation to 1/f noise ward2002 . As the higher-temperature chains can be thought of as very roughly similar to AR(1) processes with lower autoregressive coefficients, this may explain why the asymptotic behavior of MC^3 is 1/f noise.

What is also interesting about MC^3 is that it is a single process that is able to produce both the 1/f slope at lower frequencies and the flattening of the slope at higher frequencies, which were ascribed to two different processes by gilden1995 . The reason MC^3 produces this result appears to be that when two chains with similar temperatures find states with similar posterior probability, they will repeatedly swap back and forth, which can produce high-frequency oscillations in the coldest chain.

Figure 4: Estimated slopes of the power spectra are related to the ratio between the Gaussian target width and the proposal step size. When the ratio is low, the acceptance rate of proposed samples is low; the opposite holds for a high ratio. The asymptotic behavior of MC^3 is 1/f noise, of RwM brown noise, and of DS white noise.

4 Discussion

Lévy flights are advantageous in a patchy world, and have been observed in many foraging tasks with humans and other animals. A random walk with Gaussian steps does not produce the occasional long-distance jump that a Lévy flight does. However, the swapping scheme between the parallel chains of MC^3 enables it to produce Lévy-like scaling in the flight distance distribution. Additionally, MC^3 produces the long-range, slowly decaying temporal correlations of 1/f scaling. This long-range dependence rules out any sampling algorithm that draws independent samples from the posterior distribution, such as DS, since the sample sequence would have no serial correlation (i.e., white noise). It also rules out RwM, because the current sample depends solely on the previous sample. Both of these results suggest that the algorithms people use to sample mental representations are more complex than DS or RwM and, like MC^3, are instead adapted to sampling from multimodal distributions.

However, if people are adapted to multimodal distributions, their behavior appears to show the same temporal pattern even when they are actually sampling from a unimodal distribution. In Gilden’s experiments, the overall distribution of estimated intervals (i.e., ignoring serial order) was not multimodal; instead, it was indistinguishable from a Gaussian distribution gilden1995 . Assuming that the posterior distribution in the hypothesis space is also unimodal, it is somewhat inefficient to use MC^3 rather than simple MCMC. Potentially the brain is hardwired to use particular algorithms, or it is slow to adapt to unimodal representations because it is very difficult to know that a distribution is unimodal rather than just a single mode in a patchy space.

Previous explanations of scale-free phenomena in human cognition, such as self-organized criticality, argue that 1/f noise is generated from the interactions of many simple processes that produce such hallmarks of complexity vanorden2003 . Other explanations assume that it is due to a mixture of scaled processes, such as noise in attention or in our ability to perform cognitive tasks wagenmakers2004 . These approaches treat 1/f noise as a general property of cognition and do not tie it to other empirical effects. Our explanation of this scale-free process is more mechanistic, assuming that it reflects the cognitive need to gather vital resources in a multimodal world. While 1/f autocorrelations make samplers less effective when sampling from simple distributions, they may need to be tolerated in a multimodal world in order to sample other isolated modes.

Of course, we do not claim that MC^3 is the only sampling algorithm able to produce both 1/f noise and Lévy flights. It is possible that other algorithms that deal with multimodality better than simple MCMC, such as running multiple non-random-walk Markov chains in parallel or Hamiltonian Monte Carlo, could produce similar results. Future work will explore which algorithms can match these key human data.

References

  • [1] J. T. Abbott, J. L. Austerweil, and T. L. Griffiths. Human memory search as a random walk in a semantic network. In Advances in Neural Information Processing Systems, pages 3050–3058, 2012.
  • [2] J. R. Anderson. The adaptive nature of human categorization. Psychological Review, 98(3):409, 1991.
  • [3] P. W. Battaglia, J. B. Hamrick, and J. B. Tenenbaum.

    Simulation as an engine of physical scene understanding.

    Proceedings of the National Academy of Sciences, 110(45):18327–18332, 2013.
  • [4] G. Berkolaiko, S. Havlin, H. Larralde, and G. Weiss. Expected number of distinct sites visited by n Lévy flights on a one-dimensional lattice. Physical Review E, 53(6):5774, 1996.
  • [5] N. Chater and C. D. Manning. Probabilistic models of language processing and acquisition. Trends in Cognitive Sciences, 10(7):335–344, 2006.
  • [6] N. Chater, J. B. Tenenbaum, and A. Yuille. Probabilistic models of cognition: Conceptual foundations. Trends in Cognitive Sciences, 10(7):287–291, 2006.
  • [7] I. Dasgupta, E. Schulz, and S. J. Gershman. Where do hypotheses come from? Technical report, Center for Brains, Minds and Machines (CBMM), 2016.
  • [8] J. Gao, V. A. Billock, I. Merk, W. Tung, K. D. White, J. Harris, and V. P. Roychowdhury. Inertia and memory in ambiguous visual perception. Cognitive Processing, 7(2):105–112, 2006.
  • [9] S. J. Gershman, E. Vul, and J. B. Tenenbaum. Multistability and perceptual inference. Neural Computation, 24(1):1–24, 2012.
  • [10] C. J. Geyer. Markov chain Monte Carlo maximum likelihood. In Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface, 1991.
  • [11] D. L. Gilden. Fluctuations in the time required for elementary decisions. Psychological Science, 8(4):296–301, 1997.
  • [12] D. L. Gilden, T. Thornton, and M. W. Mallon. 1/f noise in human cognition. Science, 267(5205):1837, 1995.
  • [13] M. C. Gonzalez, C. A. Hidalgo, and A.-L. Barabasi. Understanding individual human mobility patterns. Nature, 453(7196):779–782, 2008.
  • [14] T. L. Griffiths and J. B. Tenenbaum. Predicting the future as bayesian inference: people combine prior knowledge with observations when estimating duration and extent. Journal of Experimental Psychology: General, 140(4):725, 2011.
  • [15] W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, 1970.
  • [16] C. T. Kello, G. D. Brown, R. Ferrer-i Cancho, J. G. Holden, K. Linkenkaer-Hansen, T. Rhodes, and G. C. Van Orden. Scaling laws in cognitive sciences. Trends in Cognitive Sciences, 14(5):223–232, 2010.
  • [17] C. Kemp and J. B. Tenenbaum. Structured statistical models of inductive reasoning. Psychological Review, 116(1):20, 2009.
  • [18] F. Lieder, T. Griffiths, and N. Goodman. Burn-in, bias, and the rationality of anchoring. In Advances in Neural Information Processing Systems, pages 2690–2798, 2012.
  • [19] D. J. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
  • [20] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087–1092, 1953.
  • [21] G. Ramos-Fernández, J. L. Mateos, O. Miramontes, G. Cocho, H. Larralde, and B. Ayala-Orozco. Lévy walk patterns in the foraging movements of spider monkeys (ateles geoffroyi). Behavioral Ecology and Sociobiology, 55(3):223–230, 2004.
  • [22] T. Rhodes, C. T. Kello, and B. Kerster. Distributional and temporal properties of eye movement trajectories in scene perception. In 33rd Annual Meeting of the Cognitive Science Society, 2011.
  • [23] T. Rhodes and M. T. Turvey. Human memory retrieval as lévy foraging. Physica A: Statistical Mechanics and its Applications, 385(1):255–260, 2007.
  • [24] A. N. Sanborn and N. Chater. Bayesian brains without probabilities. Trends in Cognitive Sciences, 20(12):883–893, 2016.
  • [25] A. N. Sanborn, V. K. Mansinghka, and T. L. Griffiths. Reconciling intuitive physics and newtonian mechanics for colliding objects. Psychological Review, 120(2):411, 2013.
  • [26] M. F. Shlesinger, G. M. Zaslavsky, and U. Frisch. Lévy flights and related topics in physics. Lecture Notes in Physics, 450:52, 1995.
  • [27] D. W. Sims, E. J. Southall, N. E. Humphries, G. C. Hays, C. J. Bradshaw, J. W. Pitchford, A. James, M. Z. Ahmed, A. S. Brierley, M. A. Hindell, et al. Scaling laws of marine predator search behaviour. Nature, 451(7182):1098–1102, 2008.
  • [28] R. H. Swendsen and J.-S. Wang. Replica monte carlo simulation of spin-glasses. Physical Review Letters, 57(21):2607, 1986.
  • [29] T. L. Thornton and D. L. Gilden. Provenance of correlations in psychological data. Psychonomic Bulletin & Review, 12(3):409–441, 2005.
  • [30] A. K. Troyer, M. Moscovitch, and G. Winocur. Clustering and switching as two components of verbal fluency: evidence from younger and older healthy adults. Neuropsychology, 11(1):138, 1997.
  • [31] G. C. Van Orden, J. G. Holden, and M. T. Turvey. Self-organization of cognitive performance. Journal of Experimental Psychology: General, 132(3):331, 2003.
  • [32] G. M. Viswanathan, V. Afanasyev, S. Buldyrev, E. Murphy, et al. Lévy flight search patterns of wandering albatrosses. Nature, 381(6581):413, 1996.
  • [33] G. M. Viswanathan, S. V. Buldyrev, S. Havlin, M. Da Luz, E. Raposo, and H. E. Stanley. Optimizing the success of random searches. Nature, 401(6756):911–914, 1999.
  • [34] E. Vul, N. Goodman, T. L. Griffiths, and J. B. Tenenbaum. One and done? optimal decisions from very few samples. Cognitive science, 38(4):599–637, 2014.
  • [35] E.-J. Wagenmakers, S. Farrell, and R. Ratcliff. Estimation and interpretation of 1/f^α noise in human cognition. Psychonomic Bulletin & Review, 11(4):579–615, 2004.
  • [36] L. M. Ward. Dynamical Cognitive Science. MIT press, 2002.
  • [37] D. M. Wolpert. Probabilistic models in human sensorimotor control. Human Movement Science, 26(4):511–524, 2007.
  • [38] J. Xu and T. L. Griffiths. How memory biases affect information transmission: A rational analysis of serial reproduction. In Advances in Neural Information Processing Systems, pages 1809–1816, 2009.
  • [39] A. Yuille and D. Kersten. Vision as bayesian inference: analysis by synthesis? Trends in Cognitive Sciences, 10(7):301–308, 2006.