Neural Implementation of Probabilistic Models of Cognition

01/13/2015 ∙ by Milad Kharratzadeh, et al. ∙ McGill University

Bayesian models of cognition hypothesize that human brains make sense of data by representing probability distributions and applying Bayes' rule to find the best explanation for available data. Understanding the neural mechanisms underlying probabilistic models remains important because Bayesian models provide a computational framework rather than specifying mechanistic processes. Here, we propose a deterministic neural-network model which estimates and represents probability distributions from observable events --- a phenomenon related to the concept of probability matching. Our model learns to represent probabilities without receiving any representation of them from the external world, but rather by experiencing the occurrence patterns of individual events. Our neural implementation of probability matching is paired with a neural module applying Bayes' rule, forming a comprehensive neural scheme to simulate human Bayesian learning and inference. Our model also provides novel explanations of base-rate neglect, a notable deviation from Bayes' rule.


1 Introduction

Bayesian models are becoming prominent across a wide range of problems in cognitive science including inductive learning (Tenenbaum et al., 2006), language acquisition (Chater and Manning, 2006), and vision (Yuille and Kersten, 2006). These models characterize a rational solution to problems in cognition and perception in which inferences about different hypotheses are made with limited data under uncertainty. In Bayesian models, beliefs are represented by probability distributions and are updated by Bayesian inference as additional data become available. For example, we generally assume that the probability that someone has cancer is very low because only a small portion of people have cancer. On the other hand, a lot more people have heartburn or catch colds. These beliefs are represented by assigning high prior probabilities to cold and heartburn and low prior probabilities to cancer. Now, imagine that we see someone coughing and want to infer which of the three mentioned diseases she most probably has. Coughing is most likely caused by cancer or cold rather than heartburn. Thus, cold is the most probable candidate as it has high probability (belief) assigned to it both before and after the observation of coughing. Bayesian models of cognition state that humans make inferences in a similar fashion. More formally, these models hypothesize that humans make sense of data by representing probability distributions and applying Bayes' rule to find the best explanation for any given data.
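As a minimal numeric illustration of this kind of inference, the cough example can be written out with hypothetical priors and likelihoods (the numbers below are our own, chosen only for illustration, not taken from any dataset):

```python
# Hypothetical priors over three diseases (illustrative values only).
priors = {"cold": 0.25, "heartburn": 0.20, "cancer": 0.005}

# Hypothetical likelihoods of observing a cough under each disease.
likelihoods = {"cold": 0.60, "heartburn": 0.01, "cancer": 0.50}

# Bayes' rule: posterior(h) is proportional to likelihood(data | h) * prior(h).
unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
evidence = sum(unnormalized.values())
posterior = {h: unnormalized[h] / evidence for h in unnormalized}

# Cold wins: it has high probability both before and after the observation.
best = max(posterior, key=posterior.get)
```

With these illustrative numbers, cold dominates the posterior because it is the only hypothesis with both a high prior and a high likelihood of causing the cough.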

Forming internal representations of probabilities of different hypotheses (as a measure of belief) is one of the most important components of several explanatory frameworks. For example, in decision theory, many experiments show that participants select alternatives proportional to their reward frequency. This means that in many scenarios, instead of maximizing their utility by always choosing the alternative with the higher chance of reward, they match the underlying probabilities of different alternatives. For a review, see (Vulkan, 2000).

There are several challenges for Bayesian models of cognition, as suggested by recent critiques (Jones and Love, 2011; Eberhardt and Danks, 2011; Bowers and Davis, 2012; Marcus and Davis, 2013). First, these models mainly operate at Marr's computational level (Marr, 1982), with no account of the mechanisms underlying behaviour. That is, they are not concerned with how people actually learn and represent the underlying probabilities. Jones and Love characterize this neglect of mechanism as “the most radical aspect of Bayesian Fundamentalism” (Jones and Love, 2011, p. 175). Second, in current Bayesian models, it is typical for cognitive structures and hypotheses to be designed by human programmers, and for Bayes' rule to select the best hypothesis or structure to explain the available evidence (Shultz, 2007). Such models often do not explain or provide insight into the origin of such hypotheses and structures. Bayesian models are also under-constrained in the sense that they can predict multiple outcomes depending on the assumed priors and likelihoods (Bowers and Davis, 2012). Finally, it has been shown that people can be rather poor Bayesians, deviating from the optimal Bayes' rule due to biases such as base-rate neglect, the representativeness heuristic, and confusion about the direction of conditional probabilities (Kahneman and Tversky, 1996; Eberhardt and Danks, 2011; Marcus and Davis, 2013).

In this paper, we address these challenges by providing a psychologically plausible neural framework to explain probabilistic models of cognition at Marr’s implementation level. First, we introduce an artificial neural network framework which can be used to explain how the brain could learn to represent probability distributions in neural circuitry, even without receiving any direct representations of these probabilities from the external world. We offer an explanation of how the brain is able to estimate and represent probabilities solely from observing the occurrence patterns of events, in a manner suggesting probability matching. In the context of Bayesian models of cognition, such probability-matching processes could explain the origin of the prior and likelihood probability distributions that are currently assumed or constructed by modelers. Probability matching also addresses the issue of under-constrained models by providing a neural mechanism for learning the probability distributions from examples. In contrast to current literature that proposes probability matching as an alternative to Bayesian models (Bowers and Davis, 2012; Eberhardt and Danks, 2011), we use probability matching as part of a larger Bayesian framework to learn prior and likelihood distributions which can then be used in Bayesian inference and learning of posterior distributions.

The question of how people can perform any kind of Bayesian computation (including probability representation) can be answered at two levels (Marr, 1982). First, it can be explained at the level of psychological processes, showing that Bayesian computations can be carried out by modules similar to the ones used in other psychological process models (Kruschke, 2006). Second, probabilistic computations can also be treated at a neural level, explaining how these computations could be performed by a population of connected neurons (Ma et al., 2006). Our artificial neural network framework combines these two approaches. It provides a neurally based model of Bayesian inference and learning that can be used to simulate and explain a variety of psychological phenomena.

We use this comprehensive modular neural implementation of Bayesian learning and inference to explain some of the well-known deviations from Bayes’ rule, such as base-rate neglect, in a neurally plausible fashion. In sum, by providing a psychologically plausible implementation-level explanation of probabilistic models of cognition, we integrate some seemingly opposite accounts within a unified framework.

The paper is organized as follows. First, we review necessary background material and introduce the problem’s setup and notation. Then, we introduce our proposed framework for realizing probability matching with neural networks. Next, we present empirical results and discuss some relevant phenomena often observed in human and animal learning. Finally, we propose a neural implementation of Bayesian learning and inference, and show that base-rate neglect can be implemented by a weight-disruption mechanism.

2 Probability Matching with Neural Networks

2.1 Probability Matching

The first objective of this paper is to provide a neural-network framework capable of implementing probability matching, by which we mean learning the underlying probabilities of possible outcomes. (We discuss the relation of our work to the probability matching literature in a later section.) The goal of probability-matching neural networks is to learn a probability distribution function over a set of inputs from observations. Although an observer does not receive any direct representation of these probabilities from the external world, the probabilities can be estimated from input instances occurring at various frequencies. For example, for a stimulus s reinforced on k out of its n total presentations in the training set, probability matching yields a response probability of k/n.

We assume the task of learning a probability mass function p(h), where h ranges over a discrete hypothesis space H. In a realistic probability matching problem, the training set consists of a collection of input instances reinforced with a frequency proportional to an underlying probability function; i.e., the 0/1 observations for hypothesis h are sampled with probability p(h). Then, the problem of probability matching reduces to estimating the actual probabilities from these 0 or 1 observations.
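This setup can be sketched in a few lines of Python (the hypothesis names and probability values are our own illustrative choices): the learner only ever sees 0/1 reinforcements, and the empirical reinforcement rate per hypothesis is the quantity that probability matching recovers.

```python
import random

random.seed(0)

# Hypothetical underlying distribution; the learner never observes it directly.
p = {"h1": 0.1, "h2": 0.3, "h3": 0.6}
n = 1000  # presentations per hypothesis

# Each presentation of h is reinforced (t = 1) with probability p[h], else t = 0.
train = [(h, int(random.random() < p[h])) for h in p for _ in range(n)]

# The empirical reinforcement rate approximates the underlying probability.
rate = {h: sum(t for hh, t in train if hh == h) / n for h in p}
```

With enough presentations, `rate` is close to `p` even though no probability value was ever shown to the learner.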

2.2 Artificial Neural Networks

Artificial neurons are the constitutive units in artificial neural networks. In essence, they are mathematical functions conceived as an abstract model of biological neurons. In a network, each unit takes a weighted sum of inputs from some other units and, using its internal activation function, computes its output. These outputs are propagated through the network until the network's final outputs are computed in the last layer. Classical feed-forward neural-network models in artificial intelligence are mathematical models implementing a mapping from inputs x to outputs y = f(x). To achieve this goal, a learning algorithm modifies the network's connection weights (synapses) to reduce an error metric. This optimization is based on a training set consisting of sample inputs paired with their correct outputs.

More specifically, through a supervised learning procedure, the input/output pairs are presented to the network and the network's connection weights are modified in order to reduce the sum-of-squared error:

E = Σ_i (o(x_i) − t_i)²,   (1)

where o(x_i) is the network's output when x_i is presented at the input layer, and t_i is the correct output.

In classical artificial neural networks, the target values are fixed and deterministically derived from the underlying function and the corresponding inputs. However, in probability matching, we do not have access to the final, fixed targets (i.e., the actual probabilities). Instead, the training set is composed of input instances that are reinforced with various frequencies. In the next section, we propose a new mechanism for neural networks which addresses this challenge and successfully implements probability matching.

2.3 Neural Networks with Probabilistic Targets

In real-world scenarios, observations are in the form of events which can occur or not (represented by outputs of 1 and 0, respectively), and the learner does not have access to the actual probabilities of those events. An important question is whether a network can learn the underlying probability distributions from such 0/1 observations, and if so, how. First, we show that the answer to the first question is positive, and then explain how it is done.

A unit's activation function is an abstraction computing a neuron's average firing rate. The most commonly used activation function in artificial neural networks is the sigmoid function, σ(z) = 1/(1 + e^(−z)). The output of this differentiable function lies in the range (0, 1), similar to probability values. We train the network with realistic observations. In each training epoch, we present a sample input, h, to the network and then probabilistically set the target output to either 1 (positive reinforcement) or 0 (negative reinforcement). The frequency of the reinforcement (outputs of 1) is determined by the underlying probability distribution. We show that after this kind of training, the network learns the underlying distribution: if we present a sample input, the output of the network will be its probability of being reinforced. Note that we never present this probability explicitly to the network. This means that the network learns and represents probability distributions from observing patterns of events.
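The training regime just described can be sketched with deterministic units, assuming a drastically reduced "network" of one sigmoid output per hypothesis (the names, probabilities, and constants below are our illustrative choices, not the paper's architecture):

```python
import math
import random

random.seed(1)

# Hypothetical underlying probabilities; the network never sees these directly.
p_true = {"h1": 0.2, "h2": 0.8}
n = 500  # presentations per hypothesis

# Count how often each presentation of h is reinforced (target 1).
reinforced = {h: sum(random.random() < p for _ in range(n))
              for h, p in p_true.items()}

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One deterministic sigmoid output per hypothesis, trained by batch gradient
# descent on the sum-of-squared error against the 0/1 targets.
w = {h: 0.0 for h in p_true}
lr = 0.1
for _ in range(20000):
    for h, k in reinforced.items():
        o = sigmoid(w[h])
        # Gradient of [k*(o-1)^2 + (n-k)*o^2] / n with respect to w.
        w[h] -= lr * 2 * (o - k / n) * o * (1 - o)

learned = {h: sigmoid(w[h]) for h in p_true}
```

Minimizing squared error against the 0/1 targets drives each output to the empirical reinforcement rate k/n, never to 0 or 1 themselves; this is exactly probability matching, with no probability ever presented to the unit.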

2.4 Comparison with Other Neural Network Models

Our proposed scheme differs from the classical approach to neural networks in that there is no one-to-one relationship between inputs and outputs. Instead of being paired with one fixed output, each input is here paired with a series of 1s and 0s presented separately at the output unit. Moreover, in our framework, the actual targets (underlying probabilities) are hidden from the network and, in the training phase, the network is presented only with inputs and their probabilistically varying outputs.

The relationship between neural network learning and probabilistic inference has been studied previously. One approach is to use networks with stochastic units that fire with particular probabilities. Boltzmann machines (Ackley et al., 1985) and their various derivatives, including Deep Learning in hierarchical restricted Boltzmann machines (RBMs) (Hinton and Osindero, 2006), have been proposed to learn a probability distribution over a set of inputs. An RBM tries to maximize the likelihood of the data using a particular graphical model. In an approach similar to Boltzmann machines, Movellan and McClelland introduced a class of stochastic networks called Symmetric Diffusion Networks (SDNs) to reproduce an entire probability distribution (rather than a point estimate of the expected value) on the output layer (Movellan and McClelland, 1993). In their model, unit activations are probabilistic functions evolving from a system of stochastic differential equations. McClelland (1998) showed that a network of stochastic units can estimate likelihoods and posteriors and make “quasi-optimal” probabilistic inferences. More recently, it has been shown that a multinomial interactive activation and competition (mIAC) network, which has stochastic units, can correctly sample from the posterior distribution and thus implement optimal Bayesian inference (McClelland et al., 2014). However, the presented mIAC model is specially designed for a restricted version of the word recognition problem and is highly engineered, with preset biases and weights and a preset organization of units into multiple pools.

Instead of assuming stochastic units, we show how probabilistic representations can be constructed with deterministic units, where probabilities are represented as the output of a population of units. In contrast to the work reviewed in the last paragraph, we show that producing probability distributions on the output can be done by units with fixed, deterministic activations. In our model, the representation of probability distributions emerges as a property of a network of deterministic units, rather than from individual units with activations governed by some probability distribution. Moreover, models with stochastic units such as RBMs “require a certain amount of practical experience to decide how to set the values of numerical meta-parameters” (Hinton, 2010), which makes them neurally and psychologically implausible for modeling probability matching in the relatively autonomous learning of humans or animals. On the other hand, as we show later, our model implements probability matching in a relatively autonomous, neurally plausible fashion, by using simple deterministic units and learning biases, weights, and the network topology from data.

Probabilistic interpretations of deterministic back–propagation (BP) learning have also been studied (Rumelhart et al., 1995). Under certain restrictions, BP can be viewed as learning to produce the most likely output, given a particular input. To achieve this goal, different cost functions (for BP to minimize) are introduced for different distributions (McClelland, 1998). This limits the plausibility of this model in realistic scenarios, where the underlying distribution might not be known in advance, and hence the appropriate cost function for BP cannot be chosen a priori. Moreover, the ability to learn probabilistic observations is only shown for members of the exponential family where the distribution has a specific form. In contrast, our model is not restricted to any particular type of probability distribution, and there is no need to adjust the cost function to the underlying distribution in advance. Also, unlike BP, where the structure of the network is fixed in advance, our constructive network learns both weights and the structure of the network in a more autonomous fashion, resulting in a psychologically plausible model.

Neural networks with simple, specific structures have been proposed for specific tasks (Shanks, 1990, 1991; Lopez et al., 1998; Dawson et al., 2009; Griffiths et al., 2012a; McClelland et al., 2014). For instance, Griffiths et al. (2012a) considered a specific model of property induction and observed that for certain distributions, a linear neural network shows performance similar to Bayesian inference with a particular prior. Dawson et al. (2009) proposed a neural network to learn probabilities for a multiarm bandit problem. The structure of these neural networks is engineered and depends on the structure of the problem at hand. In contrast, our model is general in that it can perform probability matching for any problem structure. Also, unlike previous models proposing neural networks to estimate the posterior probabilities (Hampshire and Pearlmutter, 1990), our model does not require explicit representations of the probabilities as inputs. Instead, it constructs an internal representation based on reinforced observations.

2.5 Theoretical Analysis

The statistical properties of feed-forward neural networks with deterministic units have been studied as non-parametric density estimators. Denote the inputs of a network by x and the outputs by y (both can be vectors). In a probabilistic setting, the relationship between x and y is determined by the conditional probability p(y|x). White (1989) and Geman et al. (1992) showed that under certain assumptions, feed-forward neural networks with a single hidden layer can consistently learn the conditional expectation function E[y|x]. However, as White mentions, his analyses “do not provide more than very general guidance on how this can be done” and suggest that “such learning will be hard” (White, 1989, p. 454). Moreover, these analyses “say nothing about how to determine adequate network complexity in any specific application with a given training set of size n” (White, 1989, p. 455). In our work, we first extend these results to a more general case with no restrictive assumptions about the structure of the network and learning algorithm. Then, we propose a learning algorithm that automatically determines the adequate network complexity in any specific application.

In the following, we state the theorem and our learning technique for the case where y ∈ {0, 1}, since in this case E[y|x] = p(y = 1 | x). Thus, learning E[y|x] results in representing the underlying probabilities in the output unit. The extension of the theorem and learning algorithm to more general cases is straightforward.

Theorem 1. Assume that p(h) is a probability mass function on a hypothesis space H, and for each hypothesis h we have n observations t_1(h), …, t_n(h) ∈ {0, 1}. Define the network error as the sum-of-squared error at the output:

E = Σ_{h∈H} Σ_{i=1}^{n} (o(h) − t_i(h))²,   (2)

where o(h) is the network's output when h is presented at the input, and t_i(h) is the probabilistic output determining whether the hypothesis is reinforced (t_i(h) = 1) or not (t_i(h) = 0). Then, any learning algorithm that successfully trains the network to minimize the output sum-of-squared error yields probability matching (i.e., reproduces p(h) in the output).

Proof. Minimizing the error with respect to each output o(h), we have:

∂E/∂o(h) = 2 Σ_{i=1}^{n} (o(h) − t_i(h)) = 0,   (3)

o*(h) = (1/n) Σ_{i=1}^{n} t_i(h).   (4)

According to the strong law of large numbers, (1/n) Σ_{i=1}^{n} t_i(h) converges almost surely to E[t(h)] = p(h). Therefore, the network's output converges to the underlying probability distribution, p(h), at all points. Although the theorem is presented only for a discrete probability measure, it can easily be extended to the continuous case by defining discrete hypotheses as narrow slices of the continuous space.
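The proof's conclusion admits a quick numeric check (the probability value and sample size below are illustrative assumptions): the output minimizing the sum-of-squared error over 0/1 observations is their sample mean, which approaches the underlying probability as the sample grows.

```python
import random

random.seed(2)
p = 0.3  # hypothetical underlying reinforcement probability
t = [int(random.random() < p) for _ in range(20000)]  # 0/1 observations

# Sum-of-squared error E(o) = sum_i (o - t_i)^2 for a candidate output o.
def sse(o):
    return sum((o - ti) ** 2 for ti in t)

# The minimizer is the sample mean, which approaches p for large n.
o_star = sum(t) / len(t)

# The SSE is convex in o, so no nearby candidate beats the sample mean.
assert sse(o_star) <= min(sse(o_star - 0.01), sse(o_star + 0.01))
```

This mirrors the argument above: the squared error is a convex function of the output, its minimizer is the empirical reinforcement rate, and the strong law of large numbers carries that rate to p.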

Theorem 1 shows the important point that neural networks with deterministic units are able to asymptotically estimate an underlying probability distribution solely on the basis of observable reinforcement rates. Unlike previous similar results in the literature (White, 1989; Geman et al., 1992; Rumelhart et al., 1995), Theorem 1 does not impose any constraint on the network structure, the learning algorithm, or the distribution being learned. However, an important assumption in this theorem is the successful minimization of the error by the learning algorithm. As pointed out earlier, two important questions remain to be answered: (i) how can this learning be done? and (ii) how can adequate network complexity be automatically identified for a given training set? In the next two subsections we address these problems and propose a learning framework that successfully minimizes the output error. We consider a neural network with a single input unit (taking the hypothesis h as input) and a single output unit. Thus, our goal is to learn a network that outputs p(h) when h is presented at the input.

2.6 Learning Cessation

In artificial neural networks, learning normally continues until the error metric falls below a fixed small threshold. However, that approach may lead to overfitting, and it would not work here because in the probability matching problem the least possible error is a positive constant rather than zero. We use the idea of learning cessation to overcome these limitations (Shultz et al., 2012). The learning cessation method monitors learning progress in order to autonomously abandon unproductive learning. It checks the absolute difference of consecutive errors, and if this value is less than a fixed threshold multiplied by the current error for a fixed number of consecutive learning phases (called patience), learning is abandoned. This technique for stopping deterministic learning of stochastic patterns does not require a psychologically unrealistic validation set of training patterns (Prechelt, 1998; Wang et al., 1993).

Our method (along with the learning cessation mechanism) is presented in Algorithm 1. In this algorithm, we represent the whole network (units and connections) by the variable N. Also, the learning algorithm we use to train our network is represented by the operator train_one_epoch, where an epoch is a pass through all of the training patterns. We can use any algorithm to train our network, as long as it successfully minimizes the error term in (2). We discuss the details of the learning algorithm in the following.

Input: Training set T;
   Cessation threshold ε; Cessation patience P
Output: Learned network N
E_old ← ∞; c ← 0
while true do
      N ← train_one_epoch(N, T)   ▹ Updating the network
      E_new ← error(N, T)   ▹ Computing the updated error
     if |E_new − E_old| > ε · E_new then   ▹ Checking the learning progress
          c ← 0
     else
          c ← c + 1
          if c = P then
               break
          end if
     end if
     E_old ← E_new
end while
Algorithm 1 Probability matching with neural networks and learning cessation
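In Python, the cessation loop might look as follows (the function name and the toy error model in the usage note are our own; `train_one_epoch` and `error` stand in for whatever learner and error metric are in use):

```python
def train_with_cessation(network, train_one_epoch, error, data,
                         threshold=0.01, patience=10):
    """Train until the absolute error change stays below threshold * current
    error for `patience` consecutive epochs (a sketch of Algorithm 1)."""
    e_old = float("inf")
    counter = 0
    while True:
        network = train_one_epoch(network, data)  # update the network
        e_new = error(network, data)              # compute the updated error
        if abs(e_new - e_old) > threshold * e_new:
            counter = 0                           # still making progress
        else:
            counter += 1                          # unproductive epoch
            if counter >= patience:
                break                             # cease learning
        e_old = e_new
    return network
```

For example, with a toy "network" represented by a single number that halves each epoch while its error approaches a positive floor, the loop stops shortly after progress flattens out, without ever requiring the error to reach zero.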

2.7 The Learning Algorithm

Theorem 1 proves that minimization of the output sum-of-squared error yields probability matching. However, the unusual properties of the training set we employ (such as the probabilistic nature of input/output relations), as well as the fact that we do not specify the complexity of the underlying distribution in advance, may cause problems for some neural learning algorithms. The most widely used learning algorithm for neural networks is back-propagation (BP), also used by Dawson et al. (2009) in the context of probability matching. In BP, the output error is propagated backward and the connection weights are individually adjusted to minimize this error. Despite its many successes in cognitive modeling, we do not recommend using BP in our scheme, for two important reasons. First, when using BP, the network's structure must be fixed in advance (mainly heuristically). This makes it impossible for the learning algorithm to automatically adjust the network complexity to the problem at hand (White, 1989). This property limits the generalizability and autonomy of BP and, along with the back-propagation of error signals, makes it psychologically implausible. Second, due to their fixed design, BP networks are not suitable for cases where the underlying distribution changes over time. For instance, if the distribution over the hypothesis space becomes much more complicated over time, the initial network's complexity (i.e., number of hidden units) would fall short of the required computational power.

Instead of BP, we use a variant of the cascade correlation (CC) method called sibling-descendant cascade correlation (SDCC) which is a constructive method for learning in multi-layer artificial neural networks (Baluja and Fahlman, 1994). SDCC learns both the network’s structure and the connection weights; it starts with a minimal network, then automatically trains new hidden units and adds them to the active network, one at a time. Each new unit is employed at the current or a new highest layer and is the best of several candidates at tracking current network error.

SDCC offers two major advantages over BP. First, it constructs the network in an autonomous fashion (i.e., a user does not have to design the topology of the network, and the network can adapt to environmental changes). Second, its greedy learning mechanism can be orders of magnitude faster than the standard BP algorithm (Fahlman and Lebiere, 1990). SDCC's relative autonomy in learning is similar to humans' developmental, autonomous learning (Shultz, 2012). With SDCC, our method implements psychologically realistic learning of probability distributions, without any preset topological design. The psychological and neurological validity of cascade-correlation and SDCC has been well documented in many publications (Shultz, 2003, 2013). These algorithms have been shown to accurately simulate a wide variety of psychological phenomena in learning and psychological development. Like all useful computational models of learning, they abstract away from neurological details, many of which are still unknown. Among the principled similarities with known brain functions, SDCC exhibits distributed representation, activation modulation via integration of neural inputs, an S-shaped activation function, layered hierarchical topologies, both cascaded and direct pathways, long-term potentiation, self-organization of network topology, pruning, growth at the newer end of the network via synaptogenesis or neurogenesis, weight freezing, and no need to back-propagate error signals.

3 Empirical Results and Applications

3.1 Probability Matching

Through simulations, we show that our proposed framework is indeed capable of learning the underlying distributions. We consider two cases here, but similar results are observed for a wide range of distributions. First, we consider a case of four hypotheses with fixed probability values. Second, we consider a Normal probability distribution where the hypotheses correspond to small intervals on the real line, extending up to 4. For each input sample, we consider several randomly selected instances in each training epoch. As before, these instances are positively or negatively reinforced independently and with a probability equal to the actual underlying probability of that input. We use SDCC with learning cessation to train our networks. Fig. 1, plotted as the average and standard deviation of the results over multiple networks, demonstrates that for both discrete and continuous probability distributions, the network outputs are close to the actual distribution. Although, to save space, we show the results for only two sample distributions, our experiments show that our model is able to learn a wide range of distributions including Binomial, Poisson, Gaussian, and Gamma (Kharratzadeh and Shultz, 2013). Replication of the original probability distribution by our model is important because, contrary to previous models, it is done without stochastic neurons, without any explicit information about the actual distribution, and without fitting any parameter or structure in advance. Moreover, it is based solely on observable information in the form of positive and negative reinforcements.

(a) Discrete distribution
(b) Continuous distribution (Normal)
Figure 1: Replication of the underlying probability distribution by our SDCC model. The results (mean and standard deviation) are averaged over different networks.

3.2 Overweighting Rare Events

A common phenomenon observed in both the discrete and continuous cases in Fig. 1 is the overweighting of rare events. In Fig. 1(a), the probability assigned to the least probable hypothesis is overweighted. The same is true for the tails of the Normal distribution in Fig. 1(b), where the learned probabilities are higher than the actual ones. This, in fact, is one of the known results in the context of probability matching by humans. Psychological studies have shown that while making decisions based on descriptions (similar to our case), “people make choices as if they overweight the probability of rare events” (Hertwig et al., 2004, p. 534). The fact that our model captures these phenomena suggests that neural networks (at least SDCC) could form suitable implementation-level models of probabilistic computations. It is not clear that other learning algorithms would naturally make such errors while being generally successful.

Examining the reason for this behaviour in neural networks reveals some interesting insights. As mentioned earlier, we employ a learning cessation mechanism, which ensures that unproductive learning is autonomously stopped. We show that this is the main cause of assigning higher probabilities to events with low probabilities. In Fig. 2, we present the results of learning with no cessation mechanism for both the discrete and continuous examples described above. In this case, learning continues for a longer time; 2000 epochs is long enough for these examples (learning with cessation takes less than 1000 epochs in most cases). We observe that the networks successfully learn the probabilities whether they are small or large. Therefore, the phenomenon of overweighting rare events disappears when the learning cessation mechanism is removed, a prediction that could be tested with biological learners.

(a) Discrete distribution
(b) Continuous distribution (Normal)
Figure 2: Networks’ outputs when learning cessation is removed. Learning is continued for a long time (2000 epochs). This eliminates the overweighting of rare events.

In sum, we can explain the overweighting of rare events as follows. Because of employing the learning cessation mechanism, our probability–matching neural networks form a satisficing representation of the underlying probabilities by stopping the learning before completely learning the input patterns. In the learning process, they initially and mainly focus on capturing the probabilistic behavior of more frequent phenomena. For events with low probability, a rough representation of the probabilities is made and because of the network’s generalizations from high-frequency events to low-frequency events, these low probabilities are generally overweighted (i.e., closer to the probabilities of more frequent events).

3.3 Adapting to Changing Environments

In many naturally occurring environments, the underlying reward patterns change over time. For example, in a Bayesian context, the likelihood of an event can change as the underlying conditions change. Because humans are able to adapt to such changes and update their internal representations of probabilities, successful models should have this property as well. We examine this property in the following example experiment. Assume we have a binary distribution in which the possible outcomes have fixed probabilities, and these probabilities change after a number of training epochs. In Fig. 3(a), we show the network's outputs for this scenario. We perform a similar simulation for the continuous case, where the underlying distribution is Gaussian and we change its mean partway through training; the network's outputs are shown in Fig. 3(b). We observe that in both cases, the network successfully updates and matches the new probabilities.

We also observe that adapting to the changes takes less time than the initial learning. For example, in the discrete case, it takes 400 epochs to learn the initial probabilities but only around 70 epochs to adapt to the new probabilities. The reason is that during initial learning, constructive learning has to grow the network until it is complex enough to represent the probability distribution. Once the environment changes, however, the network already has enough computational capability to adapt quickly, with only a few internal changes (in weights and/or structure). We verify this in our experiments. For instance, in the Gaussian example, all 20 networks recruited 5 hidden units before the change; afterwards, 11 of these networks recruited 1 additional hidden unit and the other 9 recruited 2. We know of no precise psychological evidence for this reduction in learning time, but our results serve as a prediction that could be tested with biological learners. This would seem to be an example of the beneficial effect of relevant existing knowledge on new learning.
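The tracking behaviour can be sketched with a much simpler stand-in for our networks: a delta-rule estimator that nudges its probability estimate toward each observed outcome. This is an illustrative toy (the learning rate, random seed, and switch point are assumed, and it omits constructive learning entirely), but it shows the same qualitative ability to re-match a changed outcome probability:

```python
import random

random.seed(0)

def track_probability(outcomes, lr=0.05):
    """Delta-rule estimate of P(outcome = 1), updated after every observation."""
    p = 0.5  # uninformative starting estimate
    history = []
    for o in outcomes:
        p += lr * (o - p)  # move the estimate toward each observed outcome
        history.append(p)
    return history

# Environment: P(1) = 0.8 for 1000 trials, then switches to P(1) = 0.2.
stream = [1 if random.random() < 0.8 else 0 for _ in range(1000)]
stream += [1 if random.random() < 0.2 else 0 for _ in range(1000)]
est = track_probability(stream)
print(round(est[999], 2), round(est[-1], 2))  # ≈0.8 before the switch, ≈0.2 after
```

Smaller learning rates give smoother but slower tracking; our actual simulations instead use SDCC networks trained on reinforcements.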

(a) Discrete case
(b) Continuous case
Figure 3: Reaction of the network to the changes in target probabilities. Our networks can adapt successfully.

In summary, we propose that many of the hypotheses and structures currently designed by Bayesian modelers could be autonomously built by constructive artificial networks that learn by observing the occurrence patterns of discrete events.

3.4 Discussion on Probability Matching

So far, we have shown that our neural-network framework is capable of learning the underlying distributions of a sequence of observations. The main point is to provide an explanation of how the prior and likelihood probability distributions required for Bayesian inference and learning can be formed. (More on this in the next section.) This learning of probability distributions is closely related to the phenomenon of probability matching. The matching law states that the rate of a response is proportional to its rate of observed reinforcement and has been applied to many problems in psychology and economics (Herrnstein, 1961, 2000). A closely related empirical phenomenon is probability matching where the predictive probability of an event is matched with the underlying probability of its outcome (Vulkan, 2000). This is in contrast with the reward-maximizing strategy of always choosing the most probable outcome. This apparently suboptimal behaviour is a long-standing puzzle in the study of decision making under uncertainty and has been studied extensively.
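The gap between matching and maximizing can be quantified with a short simulation (illustrative; the outcome probability 0.7 and the trial count are assumed). A matcher's expected accuracy is p² + (1 − p)², which is below the maximizer's accuracy of p whenever the outcomes are not equiprobable:

```python
import random

random.seed(1)

p = 0.7           # true probability that outcome A occurs
trials = 10000
outcomes = [random.random() < p for _ in range(trials)]

# Maximizing: always predict the more probable outcome A.
max_correct = sum(outcomes) / trials  # accuracy ≈ p
# Matching: predict A with probability p, B otherwise.
match_correct = sum((random.random() < p) == o for o in outcomes) / trials
# accuracy ≈ p**2 + (1 - p)**2 = 0.58 < 0.7
print(round(max_correct, 2), round(match_correct, 2))
```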

There are numerous, and sometimes contradictory, attempts to explain this choice anomaly. Some suggest that probability matching is a cognitive shortcut driven by cognitive limitations (Vulkan, 2000; West and Stanovich, 2003). Others assume that matching is the outcome of misperceived randomness which leads to searching for patterns even in random sequences (Wolford et al., 2004, 2000). It is shown that as long as people do not believe in the randomness of a sequence, they try to discover regularities in it to improve accuracy (Unturbe and Corominas, 2007; Yellott Jr, 1969). It is also shown that some of those who perform probability matching in random settings have a higher chance of finding a pattern when one exists (Gaissmaier and Schooler, 2008). In contrast to this line of work, some researchers argue that probability matching reflects a mistaken intuition and can be overridden by deliberate consideration of alternative choice strategies (Koehler and James, 2009). In (James and Koehler, 2011), the authors suggest that a sequence-wide expectation regarding aggregate outcomes might be a source of the intuitive appeal of matching. It is also shown that people adopt an optimal response strategy if provided with (i) large financial incentives, (ii) meaningful and regular feedback, or (iii) extensive training (Shanks et al., 2002).

We believe that our neural-network framework is compatible with all these accounts of probability matching. Firstly, in many settings probability matching is the norm; for instance, among many other examples, in animals (Behrend and Bitterman, 1961; Kirk and Bitterman, 1965; Greggers and Menzel, 1993) or in human perception (Wozny et al., 2010). It is clear that in these settings agents who match probabilities form an internal representation of the outcome probabilities. Even for particular circumstances where a maximizing strategy is prominent (Gaissmaier and Schooler, 2008; Shanks et al., 2002), it is necessary to have some knowledge of the distribution to produce optimal-point responses. Having a sense of the distribution provides the flexibility to focus on the most probable point (maximizing), sample in proportion to probabilities (matching), or even generate expectations regarding aggregate outcomes (expectation generation), all of which are evident in psychology experiments.

4 Bayesian Learning and Inference

4.1 The Basics

The Bayesian framework addresses the problem of updating beliefs in a hypothesis in light of observed data, enabling new inferences. Assume we have a set of mutually exclusive and exhaustive hypotheses, $h_1, \ldots, h_n$, and want to infer which of these hypotheses best explains the observed data, $d$. In the Bayesian setting, the degrees of belief in different hypotheses are represented by probabilities. A simple formula known as Bayes’ rule governs Bayesian inference. This rule specifies how the posterior probability of a hypothesis (the probability that the hypothesis is true given the observed data) can be computed as the product of the data likelihood and the prior probability:

$$P(h_i \mid d) = \frac{P(d \mid h_i)\,P(h_i)}{\sum_{j=1}^{n} P(d \mid h_j)\,P(h_j)} \qquad (5)$$

The probability with which we would expect to observe the data if a hypothesis were true is specified by the likelihood, $P(d \mid h_i)$. The prior, $P(h_i)$, represents our degree of belief in a hypothesis before observing the data. The denominator in (5) is called the marginal probability of the data; it is a normalizing sum which ensures that the posteriors over all hypotheses are between 0 and 1 and sum to 1.

In the Bayesian framework, we assume there is an underlying mechanism to generate the observed data. The role of inference is to evaluate various hypotheses about this mechanism and choose the most likely mechanism responsible for generating the data. In this setting, the generative processes are specified by probabilistic models (i.e., probability densities or mass functions).
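As a concrete illustration, Bayes' rule in (5) can be applied to the coughing example of the introduction. The numbers below are invented for illustration and the hypothesis space is restricted to the three named diseases:

```python
def posterior(priors, likelihoods):
    """Apply Bayes' rule: P(h|d) ∝ P(d|h) P(h), normalized over the hypotheses."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())  # marginal probability of the data
    return {h: v / z for h, v in unnorm.items()}

# Illustrative (assumed) numbers for the coughing example:
priors = {"cold": 0.30, "heartburn": 0.20, "cancer": 0.01}
likelihoods = {"cold": 0.6, "heartburn": 0.01, "cancer": 0.5}  # P(cough | h)
post = posterior(priors, likelihoods)
best = max(post, key=post.get)
print(best)  # "cold": high prior combined with high likelihood
```

As in the introduction, cancer has a high likelihood of producing a cough but a very low prior, so cold dominates the posterior.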

4.2 Modular Neural-network Implementation of Bayesian Learning and Inference

Here, we use our probability matching module to model uncertainty over the hypotheses space and eventually aid Bayesian inference and learning. Bayesian models of cognition hypothesize that human brains make sense of data by representing probability distributions and applying Bayes’ rule to find the best explanation for any given data. One of the main challenges for Bayesian modellers is to explain how these two tasks (representing probabilities and applying Bayes’ rule) are implemented in the brain’s neural circuitry (Perfors et al., 2011). We address this challenge by introducing a two–module artificial neural system to implement Bayesian learning and inference. The first module estimates and represents the underlying probabilities (priors and likelihoods) based on experienced positive or negative reinforcements as described earlier (i.e., probability matching). Given these internally-represented probabilities, the second module reproduces the posterior distribution by applying Bayes’ rule.

Assume that we have two mutually exclusive and exhaustive hypotheses, $h_1$ and $h_2$, and want to infer which of them better explains the observed data, $d$. Extending this problem to any finite number of hypotheses is straightforward. Module 1 (i.e., the probability-matching module) forms an internal representation of the likelihoods, $P(d \mid h_i)$, and priors, $P(h_i)$, based on previously observed data (i.e., experienced reinforcements). For instance, in a coin-flip example, assume $h_1$ is the hypothesis that a typical coin is fair. A person has seen many coins and observed the results of flipping them. Because most coins are fair, hypothesis $h_1$ is positively reinforced most of the time in those experiences (and only rarely negatively reinforced). Therefore, based on this binary feedback on the fairness of coins, our probability-matching module forms a high prior (close to 1) for $h_1$. This accords with the human assumption (prior) that a typical coin is most probably fair. Likelihood representations can be formed in a similar fashion from binary feedback; in the coin example, the likelihood under $h_1$ is positively reinforced if the numbers of heads and tails observed in small batches of coin flips (available in short-term memory) are approximately equal, and negatively reinforced otherwise.

Then, module 2, shown in Fig. 4(a), takes these distributions as inputs, applies Bayes’ rule, and produces the posterior distribution, $P(h_i \mid d)$, as the output. When data are observed in consecutive rounds, the posteriors from one round are used as priors for the next round. In this way, beliefs are continually updated in light of new observed data. We model this by mapping the output of module 2, the posterior $P(h_i \mid d)$, back to the input corresponding to the prior, $P(h_i)$.
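A minimal sketch of module 2's function and the posterior-to-prior feedback loop, for the two-hypothesis coin example (all numbers are illustrative assumptions, and the closed-form function stands in for the trained network):

```python
def bayes_update(prior_h1, lik_h1, lik_h2):
    """Posterior of h1 for two exhaustive hypotheses (the module-2 function)."""
    num = lik_h1 * prior_h1
    return num / (num + lik_h2 * (1.0 - prior_h1))

# h1 = "the coin is fair". Each round, the batch in short-term memory is
# either roughly balanced (heads ≈ tails) or skewed. Numbers are assumed.
p_fair = 0.9  # strong prior that a typical coin is fair
lik_balanced = {"fair": 0.8, "biased": 0.3}  # P(balanced batch | h)
lik_skewed   = {"fair": 0.2, "biased": 0.7}  # P(skewed batch | h)
for batch in ["skewed", "skewed", "skewed"]:
    lik = lik_skewed if batch == "skewed" else lik_balanced
    # the posterior from this round becomes the prior for the next round
    p_fair = bayes_update(p_fair, lik["fair"], lik["biased"])
print(round(p_fair, 3))  # belief in fairness drops after repeated skewed batches
```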

(a) Module 2 structure
(b) Outputs of module 2 plotted against true values.
Figure 4: Module 2 applies Bayes’ rule and reproduces the posterior distribution based on the likelihoods and priors provided by the probability matching module.

The representation of uncertainty over the parameter space lies at the heart of Bayesian inference. So far, we have analysed the first module and explained how it can represent this uncertainty by discovering underlying probability distributions from observed data. What remains for a fully Bayesian inference framework is the application of Bayes’ rule. Here, we show that neural networks can in fact learn Bayes’ rule (implemented in module 2). In essence, Bayes’ rule (for the case of two hypotheses) is a non-linear function with three inputs (the likelihoods of each of the two hypotheses producing the newly observed data, and the prior of hypothesis 1) and one output (the posterior probability of hypothesis 1). As with other functions, a neural network can learn to implement it. Here, the training is done using a set of sample inputs paired with their correct probabilistic outputs. After training, we examine the performance of the network by comparing its outputs on a test set with the correct outputs given by Bayes’ rule. In Fig. 4(b), we plot the network’s outputs against the correct outputs. The high correlation between these two values, as well as the close-to-one slope and close-to-zero y-intercept of the fitted line, shows that module 2 is successful in implementing Bayes’ rule. By combining modules 1 and 2, we get a comprehensive neural-network system for Bayesian learning and inference.

Unlike the training set of module 1, which has a plausible psychological interpretation as observed reinforcements, the training set of module 2 might seem unrealistic and without any specific interpretation. We are currently agnostic about the origin of the units and weights of module 2 in human brains. Here, for convenience, we train them with examples, but it is conceivable that Bayes’ rule evolved in some species, including humans, and is therefore an innate construct. For present purposes, we need a neural representation of Bayesian inference, and this is conveniently supplied by training constructive neural networks on examples of Bayes’ rule. This may not be how it happens in biological learners, but for now it suffices to show that neural networks can represent Bayes’ rule. We can illuminate this issue regarding the origin of Bayesian competencies by agent-based simulations of the evolution of Bayesian inference and learning. Preliminary results in the context of social learning strategies show that evolution favours Bayesian learning, based on passing posteriors, over imitation and environment sampling (Montrey and Shultz, 2010). In-progress results suggest that a combination of environment sampling and theory passing of posterior distributions is particularly favoured in evolution. More precise details of the possible evolution of Bayes’ rule need to be worked out in future research.

The idea of introducing a modular neural network implementing Bayesian learning and inference has two important benefits. First, it is an initial step towards addressing the implementation of the Bayesian competencies in the brain. Our model is built in a constructive and autonomous fashion in accordance with accounts of psychological development (Shultz, 2012). It uses realistic input in the form of reinforcements, and it successfully explains some phenomena often observed in human and animal learning (probability matching, adapting to environmental changes, and overweighting the probability of rare events).

The second benefit of our modular neural network is that it provides a framework that unifies the Bayesian accounts and some of the well-known deviations from it, such as base-rate neglect. In the next section, we show how base-rate neglect can be explained naturally as a property of our neural implementation of Bayesian inference.

4.3 Base-rate Neglect as Weight Disruption

Given likelihood and prior distributions, the Bayesian framework finds the precise form of the posterior distribution and uses it to make inferences. In contemporary cognitive science, this framework is used to define rationality in learning and inference, which is frequently measured in terms of conformity to Bayes’ rule (Tenenbaum et al., 2006). However, this appears to conflict with the Nobel-prize-winning work showing that people are rather poor Bayesians because of biases such as base-rate neglect, the representativeness heuristic, and confusing the direction of conditional probabilities (Kahneman and Tversky, 1996). For example, by not considering priors (such as the frequency of a disease), even experienced medical professionals deviate from optimal Bayesian inference and make major errors in their probabilistic reasoning (Eddy, 1982). More recently, it has been suggested that base rates (i.e., priors) may not be entirely ignored but merely de-emphasized (Prime and Shultz, 2011; Evans et al., 2002).

In this section, we show that base-rate neglect can be explained within our neural implementation of the Bayesian framework. First, we show how base-rate neglect can be interpreted in terms of Bayes’ rule. Then, we show that this neglect can result from neurally plausible weight disruption in a neural network representing priors. Our weight-disruption idea covers several ways of neglecting base rates: immediate effects such as deliberate neglect (as when priors are judged irrelevant) (Bar-Hillel, 1980), failure to recall, partial use or partial neglect, preference for specific (likelihood) information over general (prior) information (McClelland and Rumelhart, 1985), and decline in some cognitive functions (such as memory loss) as a result of long-term synaptic decay or interference (Hardt et al., 2013).

Base-rate neglect is a Bayesian error in which the posterior probability of a hypothesis is computed without taking full account of the priors. We argue that completely ignoring the priors is equivalent to assigning equal prior probabilities to all the hypotheses, which gives:

$$P(h_i \mid d) = \frac{P(d \mid h_i)}{\sum_{j=1}^{n} P(d \mid h_j)} \qquad (6)$$

This equation can be interpreted as follows. We can assume that in the original Bayes’ rule all the hypotheses have equal priors, and that these priors cancel out of the numerator and denominator to give equation (6). Therefore, in the Bayesian framework, complete base-rate neglect translates into assuming equal priors (i.e., equiprobable hypotheses). This means that the more the true prior probabilities (base rates) are averaged out and approach the uniform distribution, the more they are neglected in Bayesian inference. A more formal way to explain this phenomenon uses the notion of entropy, defined in information theory as a measure of uncertainty. Given a discrete hypothesis space $\{h_1, \ldots, h_n\}$ with probability mass function $p$, its entropy is defined as:

$$H(p) = -\sum_{i=1}^{n} p(h_i) \log p(h_i) \qquad (7)$$

In our setting, $p$ represents the prior distribution. Entropy quantifies the expected amount of information contained in a distribution. It is easy to show that the uniform distribution has maximum entropy, equal to $\log n$ for $n$ hypotheses, among all discrete distributions over the hypothesis set (Cover and Thomas, 2006). We can conclude that, in the Bayesian framework, base-rate neglect is equivalent to ignoring the priors by averaging them out toward a uniform distribution, or equivalently, by maximizing their entropy.
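For example, the entropy of equation (7) can be computed directly; a uniform prior over n hypotheses attains the maximum value log n (computed here in bits, with an assumed 4-hypothesis space and an invented informative prior):

```python
import math

def entropy(p):
    """Shannon entropy (base 2) of a probability mass function."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

n = 4
uniform = [1.0 / n] * n             # complete base-rate neglect
peaked = [0.85, 0.05, 0.05, 0.05]   # informative prior (illustrative)
print(entropy(peaked), entropy(uniform))  # the uniform attains the maximum, log2(n)
```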

We model the effects of attention, memory indexing, and relevance by a weight-disruption mechanism. An attention unit in our model applies specific weight factors to the various probability-matching modules. This is shown in Fig. 5 for the case of two hypotheses. The attention module multiplies all the connection weights of a module by an attention factor between 0 and 1. (Note that the disruption is applied to the connections in the network and not directly to the output.) A factor of 1 leaves the weights of a module unchanged, while a factor of 0 sets all the weights to zero, producing a flat output (see equation (8) below). This weight-disruption factor reflects the strength of memory indexing or lack of relevance in a specific instance of inference, without permanently affecting the weights. It can also simulate long-term synaptic decay or interference, which disrupts the weights of a neural network more permanently. In our model, the attention factor can be 1 for likelihoods because they are learned from new evidence in an inference task and hence most noticed. For priors, we can allocate an attention factor less than 1 to reflect complete or partial neglect. Therefore, in Fig. 5, the likelihood modules receive a factor of 1 and the prior module a factor less than 1.

Figure 5: The effects of attention, memory indexing, and relevance are modelled by an attention module imposing weight disruption on probability matching modules.

In the next section, we describe the mathematical details of this weight disruption and show that its application to prior probability–matching modules in our model results in anything from full use to partial use to complete neglect of the priors. This means that after the probability matching network learns the prior distributions from input reinforcements, weight disruption of its connections—caused by the attention module—results in averaging out these learned probabilities and therefore causes base-rate neglect. In other words, we can take priors as the states of a learning and inference system. As weights are modulated by the attention factor, the system can move towards higher entropy.

We conclude that we can model base-rate neglect in the Bayesian framework with an attention module imposing weight disruption in our brain-like network, after probability matching has constructed the priors and likelihoods. Note that weight disruption in our neural system could potentially simulate a range of biological and cognitive phenomena, such as decline in attention or memory (partial use), deliberate neglect, or other ways of undermining the relevance of priors (Bar-Hillel, 1980). The weight-disruption effects can occur all at once, as when a prior network is not recalled or is judged irrelevant, or can unfold over a long time, reflecting synaptic decay caused by the passage of time or disuse. Interference, the other main mechanism of memory decline, could likewise be examined within our neural-network system to implement and explain psychological interpretations of base-rate neglect.

4.4 Results

As mentioned earlier, given a set of hypotheses, the probability matching module can form an internal representation of the priors and likelihoods based on previously experienced reinforcements. To model the effects of the attention module, after the learning process, we update a prior or likelihood network’s connection weights as follows:

$$w_i \leftarrow \rho^{n}\, w_i \qquad (8)$$

where the $w_i$'s are the connection weights, $\rho$ is the attention factor imposed by the attention module, and $n$ is the number of times it is applied. For instantaneous disruptions, such as cases where a prior network is not recalled or is judged irrelevant, $n = 1$ and $\rho$ is considerably less than 1. For long-term decay, $\rho$ is only slightly less than 1 while $n$ is large (modeling slow synaptic decay over a long time). Higher values of $n$ and lower values of $\rho$ produce more severe weight disruption; with $\rho = 1$, the weights remain unchanged, while with $\rho = 0$, they are set to zero.
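The disruption rule of equation (8) is simple to state in code (a sketch, where `rho` is the attention factor and `n` the number of times it is applied):

```python
def disrupt(weights, rho, n):
    """Scale every connection weight by the attention factor rho, applied n times."""
    factor = rho ** n
    return [w * factor for w in weights]

w = [1.5, -0.7, 2.1]            # illustrative connection weights
print(disrupt(w, 1.0, 10))      # rho = 1: weights unchanged (full use of priors)
print(disrupt(w, 0.0, 1))       # rho = 0: weights zeroed (complete neglect)
print(disrupt(w, 0.99, 200))    # rho slightly below 1, large n: slow long-term decay
```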

To examine the effects of weight disruption, we performed a set of simulations in which the network learned various probability distributions (Gaussian, Beta, Gamma, Binomial, etc.). As mentioned before, the probability-matching module successfully learns and represents these distributions from observed reinforcements. We then applied the weight-disruption process of equation (8) to the learned network. Results for the Binomial distribution are shown in Fig. 6(a); the results for the other distributions are very similar and hence are not included here. Although we consider a large space of 400 hypotheses to better analyze the effect of disruption, the results are similar with smaller, more realistic hypothesis spaces. Fig. 6(a) shows that for larger disruptions (due either to a lower attention factor or to more frequent application), entropy is higher: the priors approach a uniform distribution and depart farther from the original Binomial distribution (the limiting entropy, $\log 400$, corresponds to the uniform distribution). Fig. 6(b) likewise shows that as disruption increases (with the attention factor fixed and the number of applications increasing), the output distribution approaches uniformity. This implements base-rate neglect as described in the previous section: for large enough disruptions, the entropy reaches its maximum and the prior distribution becomes uniform, equivalent to complete base-rate neglect.
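The flattening effect can be illustrated with a toy stand-in for a prior network whose output is a softmax of learned values: scaling the underlying parameters toward zero, as repeated disruption does, drives the output toward the uniform distribution and its entropy toward the maximum. All numbers are invented for illustration, and the softmax output is an assumption of this sketch, not our network's actual output function:

```python
import math

def softmax(z):
    """Convert a vector of values into a probability distribution."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def entropy(p):
    """Shannon entropy (base 2) of a probability mass function."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

logits = [2.0, 1.0, 0.0, -1.0]   # illustrative learned values for 4 hypotheses
factors = [1.0, 0.5, 0.1, 0.0]   # overall disruption factor (rho ** n)
ents = [entropy(softmax([f * z for z in logits])) for f in factors]
for f, h in zip(factors, ents):
    print(f, round(h, 3))
# entropy rises toward the maximum, log2(4) = 2, as disruption grows
```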

(a) The entropy of the prior distribution increases, approaching that of the uniform distribution, as disruption gets larger.
(b) The distribution of the priors approaches uniform as disruption increases (with the attention factor fixed and the number of applications increasing). Because the prior probabilities must sum to 1 and there are 400 hypotheses, the final uniform distribution has very low probabilities (1/400). This panel shows the effects of long-term weight decay.
Figure 6: The effects of weight disruption on the output of probability matching module.

In conclusion, the proposed neural-network model contributes to resolving the discrepancy between demonstrated Bayesian successes and failures by modeling base-rate neglect as weight disruption, imposed by an attention module, in a connectionist network implementing Bayesian inference. We show that, as the weights are more disrupted, the prior distribution approaches uniformity and its entropy increases. Thus, variation in the attention parameters can represent anything from complete use through partial use to complete neglect of priors.

5 Discussion

In a recent debate between critics (Bowers and Davis, 2012) and supporters (Griffiths et al., 2012b) of Bayesian models of cognition, probability matching became one of the points of discussion. Griffiths et al. mention that probability matching phenomena have a “key role in explorations of possible mechanisms for approximating Bayesian inference” (Griffiths et al., 2012b, p. 420). Bowers and Davis, on the other hand, consider probability matching non-Bayesian, and propose an adaptive network that matches the posteriors as an alternative to the “ad hoc and unparsimonious” Bayesian account.

We propose a framework which integrates these two seemingly opposing ideas. Instead of a network that matches the posterior probabilities, as Bowers and Davis suggest, we use probability matching to construct the prior and likelihood distributions, which are later used to infer posteriors. Probability matching is therefore a module within, and part of, the whole Bayesian framework. We show that our constructive neural network performs probability matching naturally and in a psychologically realistic fashion, through observable reinforcement rates rather than by being provided with explicit probabilities or stochastic units. We argue that probability matching with constructive neural networks provides a natural, autonomous way of introducing hypotheses and structures into Bayesian models. Recent demonstrations suggest that the fit of Bayes to human data depends crucially on assumptions about prior, and presumably likelihood, probability distributions (Marcus and Davis, 2013; Bowers and Davis, 2012). Bayesian simulations would be less ad hoc if these probability distributions could be independently identified in human subjects rather than assumed by the modelers. The ability of neural networks to construct probability distributions from realistic observations of discrete events could likewise serve to constrain prior and likelihood distributions in simulations. Whether the full range of relevant hypotheses and structures can be constructed in this way deserves further exploration. The importance of our model is that, at the computational level, it accords with Bayesian accounts of cognition, while at the implementation level it provides a psychologically realistic account of learning and inference in humans. To the best of our knowledge, this is a novel way of integrating these opposing accounts.

The question of the origins of Bayes’ rule in biological learners remains unresolved. Future work on origins will undoubtedly examine the usual suspects of learning and evolution. Here we show that constructive neural networks can learn Bayes’ rule from examples, the main point being that this rule can be implemented in a plausible neural format. Our other in-progress work shows that simulated natural selection often favors a combination of individual learning and a Bayesian cultural ratchet in which a teacher’s theory (represented as a distribution of posterior probabilities) serves as priors for a learner. Thus, both learning and evolution are still viable suspects, but many details of how they might act, alone or in concert, to produce Bayesian inference and learning are yet to be worked out.

The question of which kinds of neural networks could support Bayesian processing is an interesting one that should be further explored. Here, we found that the popular and often successful BP algorithm had difficulty converging on probability matching. Similar difficulties of BP convergence have been noted before, both in deterministic (Shultz, 2006) and stochastic (Berthiaume et al., 2013) problems. On probability matching problems, BP often gets stuck in local error minima or in oscillation patterns across a local error minimum because of its static, pre–set structure. In contrast, SDCC and other members of the CC–algorithm family are able to escape from these difficulties by recruiting a useful hidden unit which effectively adds another dimension in connection-weight space, re-enabling gradient descent and hence error reduction.

In this introduction of our model, we deal with only a few Bayesian phenomena: probability matching, Bayes’ rule, base-rate neglect, overweighting of rare events, and relatively quick adaptation to changing probabilities in the environment. A rapidly increasing number of other Bayesian phenomena could provide interesting challenges to our neural model. So far, we are encouraged to see that the model can cover both Bayesian solutions and deviations from Bayes, promising a possible theoretical integration of disparate trends in the psychological literature. A number of apparent deviations from Bayesian optimality are listed elsewhere (Marcus and Davis, 2013). In the cases we have examined so far, deeper learning can convert deviations into something close to the Bayesian ideal, again suggesting the possibility of a unified account.

Without doubt, Bayesian models provide powerful analytical tools for rigorously studying deep questions of human cognition that have not previously been subject to formal analysis. These Bayesian ideas, providing computational-level models, are becoming prominent across a wide range of problems in cognitive science. The heuristic value of the Bayesian framework in providing insights into a wide range of psychological phenomena has been substantial, and in many cases unique. Our neural implementation of Bayes addresses a number of recent challenges by allowing for the constrained construction of prior and likelihood distributions and greater generality in accounting for deviations from Bayesian ideals. As well, connectionist models offer an implementation-level framework for modeling mental phenomena in a more biologically plausible fashion. Providing network algorithms with the tools for Bayesian inference and learning can only enhance their power and utility. We present this work in the spirit of theoretical unification and mutual enhancement of the two approaches. We do not advocate replacing one approach with the other, but rather view them as operating at different and complementary levels.

Acknowledgement

This work was supported by a McGill Engineering Doctoral Award to MK and an operating grant to TS from the Natural Sciences and Engineering Research Council of Canada. Mark Coates, Deniz Üstebay, and Peter Helfer contributed thoughtful comments on an earlier draft.

References

  • Ackley et al. (1985) Ackley, D.H., Hinton, G.E., Sejnowski, T.J.. A learning algorithm for Boltzmann machines. Cognitive Science 1985;9(1):147–169.
  • Baluja and Fahlman (1994) Baluja, S., Fahlman, S.E.. Reducing Network Depth in the Cascade-Correlation Learning Architecture. Technical Report; Carnegie Mellon University, School of Computer Science; 1994.
  • Bar-Hillel (1980) Bar-Hillel, M.. The base-rate fallacy in probability judgments. Acta Psychologica 1980;44(3):211 – 233.
  • Behrend and Bitterman (1961) Behrend, E.R., Bitterman, M.. Probability-matching in the fish. The American Journal of Psychology 1961;:542–551.
  • Berthiaume et al. (2013) Berthiaume, V.G., Shultz, T., Onishi, K.H.. A constructivist connectionist model of transitions on false-belief tasks. Cognition 2013;126(3):441–458.
  • Bowers and Davis (2012) Bowers, J.S., Davis, C.J.. Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin 2012;138(3):389–414.
  • Chater and Manning (2006) Chater, N., Manning, C.D.. Probabilistic models of language processing and acquisition. Trends in Cognitive Sciences 2006;10(7):335–344.
  • Cover and Thomas (2006) Cover, T.M., Thomas, J.A.. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, 2006.
  • Dawson et al. (2009) Dawson, M., Dupuis, B., Spetch, M., Kelly, D.. Simple artificial neural networks that match probability and exploit and explore when confronting a multiarmed bandit. IEEE Transactions on Neural Networks 2009;20(8):1368–1371.
  • Eberhardt and Danks (2011) Eberhardt, F., Danks, D.. Confirmation in the cognitive sciences: The problematic case of Bayesian models. Minds and Machines 2011;21(3):389–410.
  • Eddy (1982) Eddy, D.M.. Probabilistic reasoning in clinical medicine: problems and opportunities. In: Kahneman, D., Slovic, P., Tversky, A., editors. Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press; 1982.
  • Evans et al. (2002) Evans, J.S.B., Handley, S.J., Over, D.E., Perham, N.. Background beliefs in Bayesian inference. Memory & Cognition 2002;30(2):179–190.
  • Fahlman and Lebiere (1990) Fahlman, S.E., Lebiere, C.. The cascade-correlation learning architecture. In: Advances in Neural Information Processing Systems 2. Los Altos, CA: Morgan Kaufmann; 1990. p. 524–532.
  • Gaissmaier and Schooler (2008) Gaissmaier, W., Schooler, L.J.. The smart potential behind probability matching. Cognition 2008;109(3):416–422.
  • Geman et al. (1992) Geman, S., Bienenstock, E., Doursat, R.. Neural networks and the bias/variance dilemma. Neural Computation 1992;4(1):1–58.
  • Greggers and Menzel (1993) Greggers, U., Menzel, R.. Memory dynamics and foraging strategies of honeybees. Behavioral Ecology and Sociobiology 1993;32(1):17–29.
  • Griffiths et al. (2012a) Griffiths, T.L., Austerweil, J.L., Berthiaume, V.G.. Comparing the inductive biases of simple neural networks and Bayesian models. In: Proceedings of the 34th Annual Conference of the Cognitive Science Society. 2012a.
  • Griffiths et al. (2012b) Griffiths, T.L., Chater, N., Norris, D., Pouget, A.. How the Bayesians got their beliefs (and what those beliefs actually are). Psychological Bulletin 2012b;138(3):415–422.
  • Hampshire and Pearlmutter (1990) Hampshire, J., Pearlmutter, B.. Equivalence proofs for multi-layer perceptron classifiers and the Bayesian discriminant function. In: Connectionist Models Summer School. 1990.
  • Hardt et al. (2013) Hardt, O., Nader, K., Nadel, L.. Decay happens: the role of active forgetting in memory. Trends in Cognitive Sciences 2013;17(3):111–120.
  • Herrnstein (1961) Herrnstein, R.J.. Relative and absolute strength of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior 1961;4:267–272.
  • Herrnstein (2000) Herrnstein, R.J.. The Matching Law: Papers on Psychology and Economics. Cambridge, MA: Harvard University Press, 2000.
  • Hertwig et al. (2004) Hertwig, R., Barron, G., Weber, E.U., Erev, I.. Decisions from experience and the effect of rare events in risky choice. Psychological Science 2004;15(8):534–539.
  • Hinton (2010) Hinton, G.. A practical guide to training restricted Boltzmann machines. Technical Report UTML TR 2010-003, University of Toronto; 2010.
  • Hinton and Osindero (2006) Hinton, G., Osindero, S.. A fast learning algorithm for deep belief nets. Neural Computation 2006;18:1527–1554.
  • James and Koehler (2011) James, G., Koehler, D.J.. Banking on a bad bet: Probability matching in risky choice is linked to expectation generation. Psychological Science 2011;22(6):707–711.
  • Jones and Love (2011) Jones, M., Love, B.C.. Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences 2011;34:169–188.
  • Kahneman and Tversky (1996) Kahneman, D., Tversky, A.. On the reality of cognitive illusions. Psychological Review 1996;103:582–591.
  • Kharratzadeh and Shultz (2013) Kharratzadeh, M., Shultz, T.. Neural-network modelling of Bayesian learning and inference. In: Proceedings of the 35th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2013. p. 2686–2691.
  • Kirk and Bitterman (1965) Kirk, K.L., Bitterman, M.. Probability-learning by the turtle. Science 1965;148(3676):1484–1485.
  • Koehler and James (2009) Koehler, D.J., James, G.. Probability matching in choice under uncertainty: Intuition versus deliberation. Cognition 2009;113(1):123–127.
  • Kruschke (2006) Kruschke, J.K.. Locally Bayesian learning with applications to retrospective revaluation and highlighting. Psychological Review 2006;:677–699.
  • Lopez et al. (1998) Lopez, F.J., Shanks, D.R., Almaraz, J., Fernandez, P.. Effects of trial order on contingency judgments: A comparison of associative and probabilistic contrast accounts. Journal of Experimental Psychology: Learning, Memory, and Cognition 1998;24(3):672.
  • Ma et al. (2006) Ma, W.J., Beck, J.M., Latham, P.E., Pouget, A.. Bayesian inference with probabilistic population codes. Nature Neuroscience 2006;(11):1432–1438.
  • Marcus and Davis (2013) Marcus, G.F., Davis, E.. How robust are probabilistic models of higher-level cognition? Psychological Science 2013;24(12):2351–2360.
  • Marr (1982) Marr, D.. Vision. San Francisco, CA: W. H. Freeman, 1982.
  • McClelland (1998) McClelland, J.L.. Connectionist models and Bayesian inference. Rational Models of Cognition 1998;:21–53.
  • McClelland et al. (2014) McClelland, J.L., Mirman, D., Bolger, D.J., Khaitan, P.. Interactive activation and mutual constraint satisfaction in perception and cognition. Cognitive Science 2014;38(6):1139–1189.
  • McClelland and Rumelhart (1985) McClelland, J.L., Rumelhart, D.E.. Distributed memory and the representation of general and specific information. Journal of Experimental Psychology: General 1985;114:159–188.
  • Montrey and Shultz (2010) Montrey, M., Shultz, T.R.. Evolution of social learning strategies. In: Proceedings of IEEE 9th Int. Conf. on Development and Learning. 2010. p. 95–100.
  • Movellan and McClelland (1993) Movellan, J., McClelland, J.L.. Learning continuous probability distributions with symmetric diffusion networks. Cognitive Science 1993;17:463–496.
  • Perfors et al. (2011) Perfors, A., Tenenbaum, J.B., Griffiths, T.L., Xu, F.. A tutorial introduction to Bayesian models of cognitive development. Cognition 2011;120(3):302–321.
  • Prechelt (1998) Prechelt, L.. Early stopping - but when? In: Orr, G., Muller, K.R., editors. Neural Networks: Tricks of the Trade. Berlin: Springer; volume 1524 of Lecture Notes in Computer Science; 1998. p. 55–69.
  • Prime and Shultz (2011) Prime, H., Shultz, T.R.. Explicit Bayesian reasoning with frequencies, probabilities, and surprisals. In: Proceedings of the 33rd Annual Conference of the Cognitive Science Society. 2011.
  • Rumelhart et al. (1995) Rumelhart, D.E., Durbin, R., Golden, R., Chauvin, Y.. Backpropagation: The basic theory. In: Chauvin, Y., Rumelhart, D.E., editors. Backpropagation: Theory, Architectures, and Applications. Hillsdale, NJ, USA; 1995. p. 1–34.
  • Shanks (1990) Shanks, D.R.. Connectionism and the learning of probabilistic concepts. The Quarterly Journal of Experimental Psychology 1990;42(2):209–237.
  • Shanks (1991) Shanks, D.R.. A connectionist account of base-rate biases in categorization. Connection Science 1991;3(2):143–162.
  • Shanks et al. (2002) Shanks, D.R., Tunney, R.J., McCarthy, J.D.. A re-examination of probability matching and rational choice. Journal of Behavioral Decision Making 2002;15(3):233–250.
  • Shultz (2003) Shultz, T.. Computational Developmental Psychology. Cambridge, MA: MIT Press, 2003.
  • Shultz (2006) Shultz, T.. Constructive learning in the modeling of psychological development. In: Processes of change in brain and cognitive development: Attention and performance XXI. Oxford: Oxford University Press; 2006. p. 61–86.
  • Shultz (2007) Shultz, T.. The Bayesian revolution approaches psychological development. Developmental Science 2007;10(3):357–364.
  • Shultz (2012) Shultz, T.. A constructive neural-network approach to modeling psychological development. Cognitive Development 2012;27:383–400.
  • Shultz (2013) Shultz, T.. Computational models of developmental psychology. In: Zelazo, P.D., editor. Oxford Handbook of Developmental Psychology, Vol. 1: Body and Mind. New York: Oxford University Press; 2013.
  • Shultz et al. (2012) Shultz, T., Doty, E., Dandurand, F.. Knowing when to abandon unproductive learning. In: Proceedings of the 34th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2012. p. 2327–2332.
  • Tenenbaum et al. (2006) Tenenbaum, J.B., Kemp, C., Shafto, P.. Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences 2006;10(7):309–318.
  • Unturbe and Corominas (2007) Unturbe, J., Corominas, J.. Probability matching involves rule-generating ability: A neuropsychological mechanism dealing with probabilities. Neuropsychology 2007;21(5):621.
  • Vulkan (2000) Vulkan, N.. An economist’s perspective on probability matching. Journal of Economic Surveys 2000;14(1):101–118.
  • Wang et al. (1993) Wang, C., Venkatesh, S.S., Judd, J.S.. Optimal stopping and effective machine complexity in learning. In: Advances in Neural Information Processing Systems 6. Morgan Kaufmann; 1993. p. 303–310.
  • West and Stanovich (2003) West, R.F., Stanovich, K.E.. Is probability matching smart? associations between probabilistic choices and cognitive ability. Memory & Cognition 2003;31(2):243–251.
  • White (1989) White, H.. Learning in artificial neural networks: A statistical perspective. Neural Computation 1989;1(4):425–464.
  • Wolford et al. (2000) Wolford, G., Miller, M.B., Gazzaniga, M.. The left hemisphere’s role in hypothesis formation. The Journal of Neuroscience 2000;.
  • Wolford et al. (2004) Wolford, G., Newman, S.E., Miller, M.B., Wig, G.S.. Searching for patterns in random sequences. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale 2004;58(4):221.
  • Wozny et al. (2010) Wozny, D.R., Beierholm, U.R., Shams, L.. Probability matching as a computational strategy used in perception. PLoS Computational Biology 2010;6(8):e1000871.
  • Yellott Jr (1969) Yellott Jr, J.I.. Probability learning with noncontingent success. Journal of Mathematical Psychology 1969;6(3):541–575.
  • Yuille and Kersten (2006) Yuille, A., Kersten, D.. Vision as Bayesian inference: analysis by synthesis? Trends in Cognitive Sciences 2006;10(7):301–308.