1 Introduction
Pattern recognition is a task primates are generally very good at, while machines are not. Examples are the recognition of human faces or of handwritten characters. The scientific disciplines of machine learning and computational learning theory have taken on the challenge of pattern recognition since the early days of modern computer science, and a wide variety of sophisticated and powerful algorithms and tools currently exist (Bishop, 2006). In this paper we go back to some of the roots and address the challenge of learning with networks of simple Boolean logic gates. To the best of our knowledge, Alan Turing was the first to explore the possibility of learning with simple NAND gates, in his long-forgotten 1948 paper, which was published much later (Turing, 1969; Teuscher, 2002). One of the earliest attempts to classify patterns by machine came from Selfridge (1958) and Selfridge and Neisser (1960). Later, many explored random logical nets made up of Boolean or threshold (McCulloch-Pitts) neurons (Rozonoér, 1969; Amari, 1971; Aleksander, 1973; Aleksander et al., 1984; Aleksander, 1998). Martland (1987a) showed that it is possible to predict the activity of a Boolean network with randomly connected inputs if the characteristics of the Boolean neurons can be described probabilistically. In a second paper, Martland (1987b) illustrated how Boolean networks can be used to store and retrieve patterns, and even pattern sequences, autoassociatively. Seminal contributions on random Boolean networks came from Kauffman (1968, 1984, 1993) and Weisbuch (1989, 1991). In 1987, Carnevali and Patarnello (Patarnello and Carnevali, 1987; Carnevali and Patarnello, 1987) used simulated annealing, and in 1989 also genetic algorithms (Patarnello and Carnevali, 1989), as global stochastic optimization techniques to train feedforward Boolean networks to solve computational tasks. They showed that such networks can indeed be trained to recognize and generalize patterns. Van den Broeck and Kawai (1990) also investigated the learning process in feedforward Boolean networks and discovered their remarkable ability to generalize.
Teuscher et al. (2007) presented preliminary results showing that true RBNs, i.e., Boolean networks with recurrent connections, can also be trained to learn and generalize computational tasks. They further hypothesized that the performance is best around the critical connectivity $K_c = 2$.
In the current paper, we extend and generalize Patarnello and Carnevali's results to random Boolean networks (RBNs) and use genetic algorithms to evolve both the network topology and the node transfer functions to solve a simple task. Our work is mainly motivated by the application of RBNs in the context of emerging nanoscale electronics (Teuscher et al., 2009). Such networks are particularly appealing for that application because of their simplicity. However, what is lacking is a solid approach that allows such systems to be trained to perform specific operations. Similar ideas have been explored with non-RBN building blocks by Tour et al. (2002) and by Lawson and Wolpert (2006). One of our broader goals is to systematically explore the relationship between generalization and learning (or memorization) as a function of the system size $N$, the connectivity $K$, the size of the input space, the size of the training sample, and the type of the problem to be solved. In the current paper, we restrict ourselves to the influence of the system size and of the connectivity on the learning and generalization capabilities. In the case of emerging electronics, such as self-assembled nanowire networks used to compute simple functions, we are interested in finding the smallest network with the lowest connectivity that can learn to solve the task with the fewest patterns presented.
2 Random Boolean Networks
A random Boolean network (RBN) (Kauffman, 1968, 1984, 1993) is a discrete dynamical system composed of $N$ nodes, also called automata, elements, or cells. Each automaton is a Boolean variable with two possible states, $\sigma_i \in \{0, 1\}$, and the dynamics is such that

$$\mathbf{F} : \{0,1\}^N \mapsto \{0,1\}^N, \tag{1}$$

where $\mathbf{F} = (f_1, \ldots, f_N)$, and each $f_i$ is represented by a lookup table of $K_i$ inputs randomly chosen from the set of $N$ nodes. Initially, $K_i$ neighbors and a lookup table are assigned to each node at random. Note that $K_i$ (i.e., the fan-in) can refer to the exact or to the average number of incoming connections per node. In this paper we use $K$ to refer to the average connectivity.

A node state $\sigma_i(t)$ is updated using its corresponding Boolean function:

$$\sigma_i(t+1) = f_i\bigl(\sigma_{i_1}(t), \sigma_{i_2}(t), \ldots, \sigma_{i_{K_i}}(t)\bigr). \tag{2}$$
These Boolean functions are commonly represented by lookup tables (LUTs), which associate a one-bit output (the node's future state) with each possible input configuration. The table's out-column is called the rule of the node. Note that even though the LUTs of an RBN map well onto an FPGA or other memory-based architectures, the random interconnect in general does not.
We randomly initialize the states of the nodes (initial condition of the RBN). The nodes are updated synchronously using their corresponding Boolean functions. Other updating schemes exist; see for example (Gershenson, 2003) for an overview. Synchronous random Boolean networks as introduced by Kauffman are commonly called $NK$ networks or Kauffman models.
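The synchronous, LUT-based update described above can be sketched in a few lines of Python (the network size, connectivity, and random seed below are arbitrary choices for illustration, not values from the paper):

```python
import random

def make_rbn(n_nodes, k, rng):
    """Assign each node k random input nodes and a random lookup table
    over its 2**k possible input configurations."""
    inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
    luts = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n_nodes)]
    return inputs, luts

def step(state, inputs, luts):
    """Synchronous update: every node reads its inputs from the current
    state and looks up its next state in its LUT."""
    nxt = []
    for i in range(len(state)):
        idx = 0
        for src in inputs[i]:
            idx = (idx << 1) | state[src]  # pack the input bits into a LUT index
        nxt.append(luts[i][idx])
    return nxt

rng = random.Random(42)
inputs, luts = make_rbn(n_nodes=8, k=2, rng=rng)
state = [rng.randint(0, 1) for _ in range(8)]  # random initial condition
for _ in range(10):
    state = step(state, inputs, luts)
```

Because the state space is finite and the update is deterministic, iterating `step` long enough necessarily drives the network into a fixed-point or periodic attractor.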
The "classical" RBN is a closed system without explicit inputs and outputs. In order to solve tasks that involve inputs and outputs, we modify the classical model by adding input nodes and designating some nodes as output nodes. The input nodes have no logical function and simply serve to distribute the input signals to a number of randomly chosen nodes in the network. The output nodes, on the other hand, are just like any other network node, i.e., they have a Boolean transfer function, except that their state can be read from outside the network. The network is constructed in a random, unbiased process in which we pick pairs of source and destination nodes from the network and connect them with a fixed probability. This construction results in a binomial in-degree distribution in the initial network population (Erdös and Rényi, 1959). The source nodes can be any of the input nodes, compute nodes, or output nodes, and the destination nodes can be chosen only from the compute nodes and output nodes. Figure 1 shows an example RBN with input nodes and an output node.

3 Functional Entropy
Any given network in the space of all possible networks processes information and realizes a particular function. Naturally, the task of the GA (or any other search technique) is to search the space of possible networks and to find networks that realize a desired function, such as the even-odd task. The learning capability with respect to an entire class of functions can therefore be interpreted as the frequency of realization of all possible functions; in our case, the class of Boolean functions with three inputs, realized by a class of "computers," i.e., the Boolean networks. Van den Broeck and Kawai (1990) and Amirikian and Nishimura (1994) investigated the phase volume of a function, which they defined as the number of networks that realize a given function. Thus, the entropy of the functions realized by all possible networks is an indicator of the richness of the computational power of the networks. We extend this concept to the class of random automata networks characterized by two parameters: the size of the network $N$ and the average connectivity $K$. We call this the functional entropy of the landscape.
Figure 2 shows the landscape of the functional entropy for networks of different sizes $N$ and average connectivities $K$. To calculate the functional entropy, we create sample networks with a given $N$ and $K$. We then simulate the networks to determine the function each of the networks computes. The entropy can then simply be calculated using:

$$H = -\sum_i p_i \log_2 p_i. \tag{3}$$
Here, $p_i$ is the probability of function $i$ being realized by a network of $N$ nodes and connectivity $K$. For three inputs, there are $2^{2^3} = 256$ different Boolean functions; thus, the maximum entropy of the space is $\log_2 256 = 8$ bits. This maximum entropy is achievable only if all functions are realized with equal probability. This is, however, not the case, because the distribution of the functions is in general not uniform. Also, the space of possible networks cannot be adequately represented by our samples. Nevertheless, our sampling is good enough to estimate the comparative richness of the functional entropy of different classes of networks. For example, the peaks of the entropy for three-input and five-input functions lie at different values of $K$ (Figures 2(a) and 2(b)), and the same holds for the larger networks (Figures 2(c) and 2(d)). The lower values of the maximum entropy for larger networks suggest that as $N$ increases, the networks have their highest capacity in a lower connectivity range. Due to the exponential probability distribution and the inadequacy of sampling over the space of networks, the measured entropy values are much lower than the theoretical values. However, the position of the maximum empirical entropy as a function of $K$ is valid due to the unbiased sampling of the space.

To study how the maximum attainable functional entropy changes as a function of $N$, we created networks of several sizes and determined the maximum of the entropy landscape as a function of $K$. Figures 3(a) and 3(b) show the scaling of the maximum functional entropy as a function of $N$ on linear and log-log scales, respectively. As one can see, the data points from the simulations follow a power law of the form:
(4) 
where, , , and . The solid line in the plots shows the fitted powerlaw equation. In Figure 3(b), the straight line is the result of subtracting from the equation and from the data points.
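The empirical functional entropy computation can be illustrated with a toy ensemble. The sketch below samples small random gate compositions as a stand-in for full Boolean networks (the gate set, sample count, and seed are our own illustrative choices) and applies Equation (3) to the empirical function frequencies:

```python
import math
import random
from collections import Counter

def shannon_entropy(counts):
    """Equation (3): H = -sum_i p_i log2 p_i over empirical frequencies."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def realized_function(net, n_inputs):
    """The truth table (as a tuple) that a network computes."""
    return tuple(net(x) for x in range(2 ** n_inputs))

# Toy ensemble: each "network" is a random two-gate composition over the
# three input bits -- a stand-in for sampling full random Boolean networks.
GATES = [lambda a, b: 1 - (a & b),  # NAND
         lambda a, b: a | b,        # OR
         lambda a, b: a & b]        # AND

def random_net(rng):
    g1, g2 = rng.choice(GATES), rng.choice(GATES)
    i, j, k = rng.randrange(3), rng.randrange(3), rng.randrange(3)
    def net(x):
        bits = [(x >> s) & 1 for s in range(3)]
        return g2(g1(bits[i], bits[j]), bits[k])
    return net

rng = random.Random(0)
counts = Counter(realized_function(random_net(rng), 3) for _ in range(5000))
H = shannon_entropy(counts)  # empirical functional entropy, in bits
```

For three-input functions, `H` is bounded above by the theoretical maximum of 8 bits; the bound is reached only if every function were realized equally often.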
Studying the functional entropy of the network ensembles reveals features of the network fitness landscape in the context of task solving. In Section 9, we will see how the functional entropy explains the results of the cumulative performance measures.
4 Experimental Setup
We use genetic algorithms (GAs) to train the RBNs to solve the even-odd task, the mapping task, and the bitwise AND task. The even-odd task consists of determining whether the input has an even or an odd number of 1s. If the number of 1s is odd, the output of the network must be 1, and 0 otherwise. This task is admittedly rather trivial if one allows counting the number of 1s. Also, if enough links are assigned to a single RBN node, the task can be solved by a single node, since all input combinations can be enumerated in the lookup table. However, we are not interested in finding such trivial solutions; instead, we look for networks that are able to generalize well when only a subset of the input patterns is presented during the training phase. In Section 6 we also use the bitwise AND task, which does exactly what its name suggests, i.e., it forms the logical AND operation bit by bit over two $n$-bit inputs, producing one $n$-bit output. The mapping task, used in Section 7, consists of an $n$-bit input and an $n$-bit output; the output must have the same number of bits as the input, but not necessarily in the same order. Throughout the rest of the paper, the input size refers to the total number of input bits to the network. For example, the bitwise AND over two $n$-bit inputs is a problem with an input size of $2n$.
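As reference implementations of the target functions, the three tasks can be written down directly. The bit-permutation reading of the mapping task, and the `perm` argument below, are our interpretation for illustration, not a specification from the paper:

```python
def even_odd(x):
    """Even-odd (parity) target: 1 if x has an odd number of 1-bits, else 0."""
    return bin(x).count("1") % 2

def bitwise_and(a, b):
    """Bitwise AND target over two equal-width inputs."""
    return a & b

def mapping(x, perm, n_bits):
    """Mapping target: output the n input bits reordered according to perm
    (a fixed, hypothetical example permutation)."""
    bits = [(x >> i) & 1 for i in range(n_bits)]
    out = 0
    for i, p in enumerate(perm):
        out |= bits[p] << i
    return out
```

A trained network is compared bit-for-bit against these targets over the training sample or the full input space.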
To apply GAs, we encode the network into a bitstream that consists of both the network's adjacency matrix and the Boolean transfer functions for each node. We represent the adjacency matrix as a list of the source and destination node IDs of each link. We then append to this list the lookup tables for each node's transfer function. Note that the indices of the beginning and the end of the lookup table for each node can be calculated from the node index and the node in-degree. The genetic operators consist of a mutation and a one-point crossover operator that are applied to the genotypes in the network population. The mutation operator picks a random location in the genome and performs either of the following two operations, depending on the content of that location:

If the location points to a source or a destination node of a link, we randomly replace it with a pointer to a new node in the network.

If the location contains a bit in the LUT, we flip that bit.
We perform crossover by choosing a random location in the two genomes and then exchanging the contents of the two genomes split at that point. We further define a fitness function $F$ and a generalization function $G$. For an input space of size $T$ and an input sample of size $s$ we write $F = 1 - \frac{1}{s}\sum_{j=1}^{s} d_H(j)$, with $F \in [0, 1]$, where $d_H(j)$ is the normalized Hamming distance between the network output for input $j$ in the random sample from the input space and the expected network output for that input. Similarly, we write $G = 1 - \frac{1}{T}\sum_{j=1}^{T} d_H(j)$, with $G \in [0, 1]$, where $d_H(j)$ is the normalized Hamming distance between the network output for input $j$ from the entire input space and the expected network output for that input.
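A minimal sketch of such Hamming-distance-based scoring, under our assumption that scores are normalized to [0, 1] by the total number of output bits (the exact normalization is not spelled out above):

```python
def hamming(a, b):
    """Number of differing bits between two equal-width integers."""
    return bin(a ^ b).count("1")

def fitness(net, sample, target, out_bits=1):
    """Fraction of output bits the network gets right over a training
    sample; 1.0 means perfect fitness. Normalizing by the total number
    of output bits is our assumption."""
    errors = sum(hamming(net(x), target(x)) for x in sample)
    return 1.0 - errors / (len(sample) * out_bits)

def generalization(net, input_space, target, out_bits=1):
    """Same score, but evaluated over the entire input space."""
    return fitness(net, input_space, target, out_bits)
```

With this convention, a network matching the target on every sampled input scores exactly 1.0, and a network wrong on every output bit scores 0.0.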
The simple genetic algorithm we use is as follows:

Create a random initial population of networks.

Evaluate the performance of the networks on a random sample of the input space.

Apply the genetic operators to obtain a new population.

For the selection, we use a deterministic tournament in which pairs of individuals are selected randomly and the better of the two will make it into the offspring population.

Continue with steps 2 and 3 until at least one of the networks achieves a perfect fitness or a maximum number of generations is reached.
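The steps above can be sketched generically. The toy problem at the bottom (maximizing the number of 1-bits in a genome) only stands in for the network genotypes, and the population size, mutation rate, and generation limit are arbitrary illustrative values:

```python
import random

def evolve(pop, fitness_fn, mutate, crossover, rng, max_gens=100):
    """Deterministic-tournament GA: evaluate, keep the better of random
    pairs, apply crossover and mutation, and repeat until a perfect
    individual appears or max_gens is reached."""
    for _ in range(max_gens):
        scores = [fitness_fn(g) for g in pop]
        if max(scores) >= 1.0:
            break
        offspring = []
        while len(offspring) < len(pop):
            a, b = rng.sample(range(len(pop)), 2)
            offspring.append(pop[a] if scores[a] >= scores[b] else pop[b])
        for i in range(0, len(offspring) - 1, 2):  # pairwise one-point crossover
            offspring[i], offspring[i + 1] = crossover(offspring[i], offspring[i + 1], rng)
        pop = [mutate(g, rng) for g in offspring]
    return pop

# Toy check: evolve 16-bit genomes toward all ones.
rng = random.Random(1)
pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(40)]
fit = lambda g: sum(g) / len(g)
mut = lambda g, r: [b ^ 1 if r.random() < 0.05 else b for b in g]
def xover(a, b, r):
    p = r.randrange(1, len(a))
    return a[:p] + b[p:], b[:p] + a[p:]
pop = evolve(pop, fit, mut, xover, rng, max_gens=200)
```

In the actual experiments, the genome would be the bitstream encoding of the network (link list plus LUTs), and the mutation operator would respect the two cases listed earlier.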
To optimize feedforward networks (see Section 6), we have to make sure that the mutation and crossover operators do not violate the feedforward topology of the network. We therefore add an order attribute to each node of the network, and nodes accept connections only from lower-order nodes.
Since RBNs have recurrent connections, their rich dynamics need to be taken into account when solving tasks, and in particular when interpreting output signals. Their finite size and deterministic behavior guarantee that a network will fall into a (periodic or fixed-point) attractor after a finite number of steps. The transient length depends on the network's average connectivity $K$ and the network size $N$ (Kauffman, 1993). For our simulations, based on (Kauffman, 1993), we run the networks for enough time steps to reach an attractor. However, due to potentially ambiguous outputs on periodic attractors, we further calculate the average activation of the output nodes over a number of time steps equal to the size $N$ of the network, and we consider the activity level to be 1 if the output is 1 at least half of the time, and 0 otherwise. A similar technique was used successfully in (Teuscher, 2002).

5 Training and Network Performance Definitions
Patarnello and Carnevali (1987) introduced the notion of learning probability as a way of describing the learning and generalization capability of their feedforward networks. They defined the learning probability as the probability of the training process yielding a network with perfect generalization, given that the training achieves perfect fitness on a sample of the input space.
The learning probability is expressed as a function of the fraction of the input space used during the training. To calculate this measure in a robust way, we run the training process repeatedly and store both the fitness and the generalization values. We then take the learning probability to be the probability of perfect generalization given perfect training, i.e., the ratio of the probability of obtaining a perfect fitness in generalization to the perfect training likelihood, which is the probability of achieving a perfect fitness after training. In the following sections, we will define new measures to evaluate the network performance more effectively.
One can argue that probabilistic measures such as the learning probability described above focus only on the perfect cases and hence describe the performance of the training process rather than the effect of the training on the network performance. Thus, we define the mean training score as $\bar{F} = \frac{1}{R}\sum_{r=1}^{R} F_r$ and the mean generalization score as $\bar{G} = \frac{1}{R}\sum_{r=1}^{R} G_r$, where $R$ is the number of evolutionary runs, and $F_r$ and $G_r$ are the training fitness and the generalization fitness of the best network in run $r$ at the end of training.
To compare the overall network performance for different training sample sizes, we introduce a cumulative measure for each of the four measures defined above. The cumulative measure is obtained by simple trapezoidal integration (Whittaker and Robinson, 1969) of the area under the curve for the learning probability, the perfect training likelihood, the mean generalization score, and the mean training score.
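The cumulative measure reduces each performance curve to a single number via the trapezoidal rule; a minimal sketch:

```python
def cumulative_measure(fractions, scores):
    """Area under a performance curve (score vs. fraction of the input
    space used for training), computed by the trapezoidal rule."""
    area = 0.0
    for k in range(1, len(fractions)):
        area += 0.5 * (scores[k - 1] + scores[k]) * (fractions[k] - fractions[k - 1])
    return area
```

A curve that sits at a perfect score of 1.0 for every training fraction between 0 and 1 thus yields a cumulative measure of 1.0, which is the upper bound for all four measures.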
6 Learning in Feedforward Boolean Networks
The goal of this first experiment was simply to replicate the results reported in (Patarnello and Carnevali, 1989) with feedforward Boolean networks. Figure 4 shows the learning probability of such networks on the even-odd (RIGHT) and the bitwise AND (LEFT) tasks. We observe that as the size of the input space increases, the training process requires a smaller number of training examples to achieve a perfect learning probability. For the smallest input sizes, some of the networks can solve a significant number of patterns without training because the task is too easy. We initially determined the GA parameters (see figure legends), such as the mutation rate and the maximum number of generations, experimentally, depending on how quickly we achieved perfect fitness on average. We found the GA to be very robust against parameter variations for our tasks. The results shown in Figure 4 directly confirm Patarnello and Carnevali's (1989) experiments.
7 Learning in RBNs
Next, we trained recurrent RBNs on the even-odd and the mapping tasks. Figure 5 (LEFT) shows the learning probability of the networks on the even-odd task with different input sizes. While the problem size increases exponentially with the number of input bits, we observe that despite this state-space explosion, a higher number of inputs requires a smaller fraction of the input space for training the networks to achieve a high learning probability. Figure 5 (RIGHT) shows the same behavior for the mapping task; however, since the task is more difficult, we observe worse generalization behavior. Also, compared to Figure 4, we observe in both cases that the generalization for recurrent networks is not as good as for feedforward Boolean networks. In fact, for the studied input sizes, none of the networks reaches a learning probability of 1 without being trained on all the patterns. The lower learning probability in RBNs is mainly due to the larger search space and the recurrent connections, which lead to long transients and bistable outputs that need to be interpreted in a particular way. Nevertheless, studying adaptation and learning in plain RBNs, i.e., with no constraints on the network connectivity, keeps our approach as generic as possible.
To investigate the effect of the average connectivity $K$ on the learning probability, we repeated the even-odd task for networks with different values of $K$, while the network size $N$ was held constant. In order to describe the training performance, we use the perfect training likelihood, i.e., the probability that the algorithm is able to train the network perfectly with the given fraction of the input space (see Section 5 for the definition).
Considering the perfect training likelihood, the results in Figure 6 (RIGHT) show that for networks with subcritical connectivity ($K < 2$), the patterns are harder to learn than with supercritical connectivity ($K > 2$). Close to the "edge of chaos," i.e., for $K$ around the critical value $K_c = 2$, we see an interesting behavior: above a certain sample size, the perfect training likelihood increases again. This transition may be related to changes in the information capacity of the network at $K_c$ and needs further investigation with different tasks.
The significant difference between the learning probability and the perfect training likelihood for small training fractions in Figure 6 is due to the small sample size: it is very easy for a network to solve the training sample correctly, but over all runs of the experiment, there is no network that generalizes successfully despite achieving a perfect training score. Also, according to the definitions in Section 5, it is not surprising that when the full input space is used for training, i.e., all patterns are presented, the learning probability and the perfect training likelihood differ. The GA did not find perfect networks in all runs, but if a network solves the training inputs perfectly, it will also generalize perfectly, because in this case the training sample includes all possible patterns.
8 Mean Generalization and Training Score
Figure 7 shows the learning probability (LEFT) and the perfect training likelihood (RIGHT) measured as Patarnello and Carnevali did, i.e., by counting only the networks with perfect generalization scores (see Section 5). Thus, if a network generalizes only a fraction of the patterns, it is not counted. This means that the probabilistic performance measures we have used so far have the drawback of describing the fitness landscape of the space of possible networks rather than the performance of a particular network, which is what we are more interested in. To address this issue, we introduce a new way of measuring both the learning and the generalization capability. We define both measures as the average of the generalization and training fitness over all runs (see Section 5).
Figure 8 shows the generalization score (LEFT) and the training score (RIGHT) with this new measure. As opposed to Carnevali and Patarnello's work, where a higher $K$ led to a lower learning probability, our results with the new measures show that a higher $K$ leads to a higher performance, with better generalization and training scores. Our measures therefore better represent the performance of the networks with regard to a given task, because they also include networks that can partially solve the task.
9 Cumulative Measures
In all the previous generalization figures, the question arises which networks are "better" than others, in particular if they do not reach a maximal generalization score when fewer than all of the patterns are presented. This behavior can be observed in Figure 6 (LEFT) for the even-odd task.
Figure 9 shows the cumulative learning probability (LEFT) and the cumulative training likelihood (RIGHT), determined by numerically integrating (see Section 5 for definitions) the area under the curves of Figure 7. Figure 9 (LEFT) shows that $K$ has no effect on the generalization and that the generalization capability is very low. Figure 9 (RIGHT) shows that a higher $K$ increases the chance of perfect training, i.e., the network can be trained to memorize all training patterns. Each cluster of connectivities in Figure 7 (RIGHT) corresponds to a "step" in the curves of Figure 9 (RIGHT).

Figure 10 shows the cumulative generalization score (LEFT) and the cumulative training score (RIGHT) based on the new measures introduced in Section 8. We used the even-odd task for two input sizes. We observe that $K$ now has a significant effect on the generalization score: the higher $K$, the better the generalization. Moreover, different intervals of $K$ result in a step-wise increase of the generalization score. Figure 10 (RIGHT) shows that a higher $K$ also increases the cumulative training score, i.e., the chance that the network can be trained to memorize all training patterns. Also, the larger the input size, the better the generalization, which was already observed by Patarnello and Carnevali (see also Section 6).
In Section 3 we introduced the functional entropy as a measure of the computational richness of a network ensemble. Higher functional entropy implies that the probabilities of the functions being realized by a network ensemble are more evenly distributed. Consequently, even if the target function of the training has a very low probability of realization, in an ensemble with high functional entropy the evolutionary process can easily find functions close to the target function. Therefore, higher functional entropy lends itself to a higher generalization score. This can be observed by comparing Figures 10 and 2.
In summary, we have seen so far that, according to our new measures, higher-$K$ networks both generalize and memorize better, but they achieve perfect generalization less often. The picture is a bit more complicated, however. Our data also show that for networks around the critical connectivity, there are more networks in the space of all possible networks that can generalize perfectly. For higher $K$, the networks have a higher generalization score on average, but there is a lower number of networks with perfect generalization, because the fraction of networks with perfect generalization is too small with respect to the space of all networks. For subcritical $K$, the networks are hard to train, but if we manage to train them, they also generalize well.
Figure 11 shows the complete cumulative learning probability (LEFT) and cumulative training likelihood (RIGHT) landscapes as a function of $N$ and $K$. We observe that, according to these measures, neither the system size nor the connectivity affects the learning probability. Also, the networks have a very low learning probability, as seen in Figure 9. This means that the performance of the training method does not depend on the system size and the connectivity, and it confirms our hypothesis that Carnevali and Patarnello's measure says more about the method than about the network's performance.
Finally, Figure 12 shows the same data as presented in Figure 11, but with our own score measures. For both the cumulative generalization score and the cumulative training score, the network size has no effect on the generalization and the training, at least for this task. However, we see that for the cumulative generalization score, the higher $K$, the higher the score, and the same applies to the cumulative training score. This contrasts with what we have seen in Figure 11.
10 Discussion
We have seen that Patarnello and Carnevali's measure quantifies the fitness landscape of the networks rather than the network's performance. Our newly defined measures applied to RBNs have shown that higher-$K$ networks both generalize and memorize better. However, our results suggest that for large input spaces and for certain network sizes and connectivities, the space of possible networks changes in a way that makes it difficult to find perfect networks (see Figures 6 and 7), whereas for other connectivities, finding perfect networks is significantly easier. This is a direct result of the change in the number of possible networks and in the number of networks that realize a particular task as a function of $K$.
In (Lizier et al., 2008), Lizier et al. investigated information-theoretic aspects of phase transitions in RBNs and concluded that subcritical networks ($K < 2$) are more suitable for computational tasks that require more information storage, while supercritical networks ($K > 2$) are more suitable for computations that require more information transfer. Networks at the critical connectivity ($K = 2$) showed a balance between information transfer and information storage. This finding is purely information-theoretic and considers neither inputs and outputs nor actual computational tasks. In our case, solving the tasks depends on the stable network states and their interpretation. The results in (Lizier et al., 2008) therefore do not apply directly to the performance of our networks, but we believe the findings can be linked in future work. Compared to Lizier et al., our experiments show that supercritical networks do a better job at both memorizing and generalizing. However, from the point of view of the learning probability, we also observe that for networks near the critical connectivity, we are more likely to find perfect networks for our specific computational tasks.

We measured the computational richness of a network ensemble by its functional entropy (Section 3). In Section 9, we explained how higher functional entropy for a network ensemble results in a higher generalization score. In addition, higher functional entropy improves the performance of an evolutionary search because it naturally results in higher fitness diversity in the evolving population (Figure 13). With a more evenly distributed probability of the functions realized by the individual networks in the ensemble, it is more likely that individuals in the population realize different functions, thus diversifying the fitness of the population. This fitness diversity creates a higher gradient that increases the rate of fitness improvement during the evolution (Price, 1972).
11 Conclusion
In this paper we empirically showed that random Boolean networks can be evolved to solve simple computational tasks. We investigated the learning and generalization capabilities of such networks as a function of the system size $N$, the average connectivity $K$, the problem size, and the task. We have seen that the learning probability measure used by Patarnello and Carnevali (1987) is of limited use and have thus introduced new measures that better describe what the networks are doing during the training and generalization phases. The results presented in this paper are invariant to the training parameters and are intrinsic to both the learning capability of dynamical automata networks and the complexity of the computational task. Future work will focus on understanding the Boolean function space, in particular the function bias.
Acknowledgments
This work was partly funded by NSF grant # 1028120.
References
 Aleksander [1973] I. Aleksander. 'Random logic nets: Stability and adaptation'. International Journal of Man-Machine Studies, 5:115–131, 1973.
 Aleksander [1998] I. Aleksander. 'From WISARD to MAGNUS: A family of weightless virtual neural machines'. In J. Austin, editor, RAM-Based Neural Networks, volume 9 of Progress in Neural Processing. World Scientific, 1998.
 Aleksander et al. [1984] I. Aleksander, W. V. Thomas, and P. A. Bowden. 'WISARD: A radical step forward in image recognition'. Sensor Review, 4:120–124, July 1984.
 Amari [1971] S. I. Amari. 'Characteristics of randomly connected threshold-element networks and network systems'. Proceedings of the IEEE, 59(1):35–47, January 1971.
 Amirikian and Nishimura [1994] B. Amirikian and H. Nishimura. 'What size network is good for generalization of a specific task of interest?'. Neural Networks, 7(2):321–329, 1994.
 Bishop [2006] C. M. Bishop. 'Pattern Recognition and Machine Learning'. Springer-Verlag, New York, NY, 2006.
 Carnevali and Patarnello [1987] P. Carnevali and S. Patarnello. 'Exhaustive thermodynamical analysis of Boolean learning networks'. Europhysics Letters, 4(10):1199–1204, November 1987.
 Erdös and Rényi [1959] P. Erdös and A. Rényi. 'On random graphs'. Publ. Math. Debrecen, 6:290–297, 1959. URL http://www.citeulike.org/user/kb/article/2547689.
 Gershenson [2003] C. Gershenson. 'Classification of random Boolean networks'. In R. K. Standish, M. A. Bedau, and H. A. Abbass, editors, Artificial Life VIII: Proceedings of the Eighth International Conference on Artificial Life, pages 1–8, Cambridge, MA, 2003. A Bradford Book, MIT Press.
 Kauffman [1968] S. A. Kauffman. 'Metabolic stability and epigenesis in randomly connected genetic nets'. Journal of Theoretical Biology, 22:437–467, 1968.
 Kauffman [1984] S. A. Kauffman. 'Emergent properties in random complex automata'. Physica D, 10(1–2):145–156, January 1984.
 Kauffman [1993] S. A. Kauffman. 'The Origins of Order: Self-Organization and Selection in Evolution'. Oxford University Press, New York; Oxford, 1993.
 Lawson and Wolpert [2006] J. Lawson and D. H. Wolpert. 'Adaptive programming of unconventional nanoarchitectures'. Journal of Computational and Theoretical Nanoscience, 3:272–279, 2006.
 Lizier et al. [2008] J. Lizier, M. Prokopenko, and A. Zomaya. 'The information dynamics of phase transitions in random Boolean networks'. In S. Bullock, J. Noble, R. Watson, and M. A. Bedau, editors, Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, pages 374–381. MIT Press, Cambridge, MA, 2008.
 Martland [1987a] D. Martland. 'Behaviour of autonomous (synchronous) Boolean networks'. In Proceedings of the First IEEE International Conference on Neural Networks, volume II, pages 243–250, San Diego, CA, 1987a.
 Martland [1987b] D. Martland. 'Auto-associative pattern storage using synchronous Boolean networks'. In Proceedings of the First IEEE International Conference on Neural Networks, volume III, pages 355–366, San Diego, CA, 1987b.
 Patarnello and Carnevali [1987] S. Patarnello and P. Carnevali. 'Learning networks of neurons with Boolean logic'. Europhysics Letters, 4(4):503–508, August 1987.
 Patarnello and Carnevali [1989] S. Patarnello and P. Carnevali. 'Learning capabilities of Boolean networks'. In I. Aleksander, editor, Neural Computing Architectures: The Design of Brain-Like Machines, chapter 7, pages 117–129. North Oxford Academic, London, 1989.
 Price [1972] G. R. Price. 'Fisher's "fundamental theorem" made clear'. Annals of Human Genetics, 36(2):129–140, 1972.
 Rozonoér [1969] L. I. Rozonoér. 'Random logical nets I'. Automation and Remote Control, 5:773–781, 1969. Translation of Avtomatika i Telemekhanika.
 Selfridge [1958] O. G. Selfridge. 'Pandemonium: A paradigm for learning'. In Mechanisation of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory, pages 513–526, 1958.
 Selfridge and Neisser [1960] O. G. Selfridge and U. Neisser. 'Pattern recognition by machine'. Scientific American, 203(2):60–68, 1960.
 Teuscher [2002] C. Teuscher. 'Turing's Connectionism: An Investigation of Neural Network Architectures'. Springer-Verlag, London, September 2002. ISBN 1852334754.
 Teuscher et al. [2007] C. Teuscher, N. Gulbahce, and T. Rohlf. 'Learning and generalization in random Boolean networks'. In Dynamics Days 2007: International Conference on Chaos and Nonlinear Dynamics, Boston, MA, January 3–6, 2007.
 Teuscher et al. [2009] C. Teuscher, N. Gulbahce, and T. Rohlf. 'An assessment of random dynamical network automata for nanoelectronics'. International Journal of Nanotechnology and Molecular Computation, 1(4):39–57, 2009.
 Tour et al. [2002] J. Tour, W. L. Van Zandt, C. P. Husband, S. M. Husband, L. S. Wilson, P. D. Franzon, and D. P. Nackashi. 'Nanocell logic gates for molecular computing'. IEEE Transactions on Nanotechnology, 1(2):100–109, 2002.
 Turing [1969] A. M. Turing. 'Intelligent machinery'. In B. Meltzer and D. Michie, editors, Machine Intelligence, volume 5, pages 3–23. Edinburgh University Press, Edinburgh, 1969.
 Van den Broeck and Kawai [1990] C. Van den Broeck and R. Kawai. 'Learning in feedforward Boolean networks'. Physical Review A, 42(10):6210–6218, November 1990.
 Weisbuch [1989] G. Weisbuch. 'Dynamique des systèmes complexes: Une introduction aux réseaux d'automates'. InterEditions, France, 1989.
 Weisbuch [1991] G. Weisbuch. 'Complex Systems Dynamics: An Introduction to Automata Networks', volume 2 of Lecture Notes, Santa Fe Institute Studies in the Sciences of Complexity. Addison-Wesley, Redwood City, CA, 1991.
 Whittaker and Robinson [1969] E. T. Whittaker and G. Robinson. 'The trapezoidal and parabolic rules'. In The Calculus of Observations: A Treatise on Numerical Mathematics, pages 156–158. Dover, New York, NY, 1969.