Parallel Computation Is ESS

by   Nabarun Mondal, et al.
D. E. Shaw & Co., L.P.

There are numerous examples of computation in nature, exemplified across multiple species in biology. One crucial aim of these computations, across all life forms, is the ability to learn and thereby increase the chance of survival. In the current paper a formal definition of autonomous learning is proposed. From that definition we establish a Turing Machine model for learning, in which rule tables can be added or deleted, but cannot be modified. Sequential and parallel implementations of this model are discussed. It is found that for general purpose learning based on this model, the implementations capable of parallel execution would be evolutionarily stable. This is proposed to be one of the reasons why parallelism in computation is found in abundance in Nature.




1. Introduction

Computations abound in nature, and it is not hard to fathom that one of the purposes of natural computation is to learn. A more learned individual would thereby gain an advantage over others [17], and survive. How learning evolved in nature remains a crucial question to be answered.

The extended Church-Turing Thesis [14][15][2][3][4], normally stated in the form "any computation happening in nature must also have a Turing Machine analog", suggests that no computation in nature can exceed the power of a Turing Machine [6]. This intriguing thought of simulating nature on Turing Machines provoked the study of machine learning and artificial intelligence [7][5].

One school of thought about machine learning follows the sequential way of creating machines that are capable of learning. In fact, scholars from the sequential school argue [14][15] that parallel systems won’t have any added advantage over sequential ones, because both are reducible to Turing Machines.

Another school of thought [1] questions the sequential learning strategies, as nature seems to be inherently running in parallel. The massively parallel structure of neurons in the brain prompted the artificial neural network studies [9][10], and it is now known that bacterial colonies learn by pooling their individual (bacterium) resources [12], which makes the learning strategy parallel by definition. Scholars from the parallel school ask the question:-

Given that Nature is inherently parallel, how useful is the sequential way of learning going to be?

It is well known that computationally both of these models yield the same power [4][8]: every parallel system must have a sequential analog. The sequential models take more time but less computing power and less space, while the parallel ones take less time but more computing power and more space. Clearly then, the answer to “why is there parallelism in nature?” lies not in the computing power of the abstract Turing Machine, but has to do with how each model has evolved in nature with no supervision (a.k.a. a blind designoid) [16].

Can one then decisively argue why parallel computation prevails in Nature? This question begs an answer because parallelism in execution is harder to attain in the artificial setting of man-made computing machines. Artificial parallel systems are faster, but too simple when compared against natural systems like the human brain, which are slower but far more complex [14].

In the present paper we take up this exact problem. We notice that the real problem about systems evolving in nature is:-

Given that redesign is not possible in nature, what type of computation circuitry would evolutionarily arise and become dominant?

We discuss the abstract learning procedure in Section 2. We establish that there is an abstract autonomous (blind, directionless) system capable of learning.

Section 3 discusses sequential and parallel designs (Dawkins' designoids, as any evolved, blind organization is not a design in the true sense [17]). In Section 4 we compare these two models from an optimality and resources standpoint. We find that for general purpose learning, parallelism has an evolutionary advantage: not only because it is faster, but also because no computationally better organization is possible, and it cannot be improved upon. Finally, in Section 5 we put all these results together and show that once the parallel strategy (in the sense of game theory) has invaded the population, it would become dominant, and stay dominant, because it is an evolutionarily stable strategy. Therefore, we conclude that the theory of computation and game theory together can explain the prevalence of parallelism in computational circuitry in nature.

2. Learning

Everyone has an intuitive idea about what learning is. But that idea relies upon another intuitive idea of what knowledge is. For example: learning is knowing what one did not know earlier.

In this sense, there are some properties of learning one can summarize:-

Definition 2.1.

Informal Description of Learning.

After a system has learned, at some time t the following properties hold:-

  1. The system can achieve something at t which was unachievable at any earlier time t′ < t.

  2. The system continues to achieve everything it could achieve before t.

  3. The system should be autonomous, devoid of supervision by any intelligent agent.

The third point needs elaboration. Once the system is set in motion, no tweaking should be done with the system afterwards. Specifically, getting out of the system to perform system optimization is not allowed at all for autonomous learning. This "jumping out of the system" idea is elaborated heavily in [11].

We are not getting into the philosophy of understanding. Searle’s Chinese room argument [5] and related arguments, both in favour and against, elaborate on the lack of formalism in understanding.

In a formal sense then, without getting into understanding, if there is a set of knowledge associated with the system which can somehow be identified as the set K(t1) at time t1, and at a later time t2 as the set K(t2), then the statement "learning took place between t1 and t2" is equivalent to the formal statement:-

K(t1) ⊂ K(t2)

The learned knowledge ΔK can be modelled by:-

ΔK = K(t2) ∖ K(t1)

But these semi-formal definitions would not really formalize learning given that the set K was not formally defined. Can we formally quantify the set K?

At this point, we argue that the existence of K cannot be measured without the effect of K exhibited by the system, which should only be identified by experimentation. If the Extended Church-Turing thesis [14][15][4] is true, that is, any model of computation has an analogous Turing Machine model, then the effect of K can be found in a straightforward manner. Considering a system that can recognize certain input sets, we can define knowledge as the input set the system recognises.

Definition 2.2.


Let Σ be an alphabet, and s ∈ Σ*. Let a system S be defined by a “black box” taking as input a string s ∈ Σ* and emitting an output symbol from {w, a, r}.

Taking input sets the system's output to w (the ‘working’ symbol). Then later the system can either accept the input string by outputting a, or reject it by outputting r, or do neither. The set of all strings the system “accepts” is called L(S), the language class accepted by S. Hence,

L(S) = { s ∈ Σ* : S outputs a on input s }

To give an example, take a bacterial colony B which can produce glucose from some set of chemical soups. The set of all chemical soups from which the colony can produce glucose can be called the language L(B), while the presence of glucose after some time can be taken as outputting the symbol a.
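The black-box system of definition 2.2 can be sketched in code. The sketch below is purely illustrative: the output symbols `w`, `a`, `r` follow the definition, while the class name `BlackBoxSystem` and the idea of abstracting the hidden mechanism as a predicate are our own assumptions.

```python
from typing import Callable, Iterable, Set

WORKING, ACCEPT, REJECT = "w", "a", "r"  # output symbols of definition 2.2

class BlackBoxSystem:
    """A system S: reads a string over an alphabet and outputs a symbol."""

    def __init__(self, recognizer: Callable[[str], bool]):
        self._recognizer = recognizer  # the hidden mechanism of the box

    def run(self, s: str) -> str:
        # Taking input first sets the output to the 'working' symbol;
        # the final verdict then replaces it.
        out = WORKING
        out = ACCEPT if self._recognizer(s) else REJECT
        return out

    def language(self, universe: Iterable[str]) -> Set[str]:
        """L(S), restricted to a finite sample of Sigma*."""
        return {s for s in universe if self.run(s) == ACCEPT}

# The bacterial colony example: 'accepting a soup' = producing glucose.
colony = BlackBoxSystem(lambda soup: "starch" in soup)
print(colony.language({"starch+water", "salt+water"}))  # {'starch+water'}
```

Note that the sketch sidesteps the "can do neither" case; a real simulation may simply never return, which is exactly the issue taken up in Section 3.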

Definition 2.3.

Formal Definition of Learning.

Let a system S (definition 2.2) at time t accept the language class L(S,t). The system has learned within the time interval (t1, t2] iff:-

L(S,t1) ⊂ L(S,t2)

The knowledge of the system is K(t) = L(S,t), and learning is precisely defined as:-

ΔK = L(S,t2) ∖ L(S,t1) ≠ ∅

Comparing definition (2.1) with (2.3), we can see that (2.3) is just a type of (2.1), which matches the intuition. Knowledge accumulation is now experimentally verifiable: to show that a system has learned within the time interval (t1, t2], one has to find a string s such that s ∈ L(S,t2) ∖ L(S,t1). We also note that the strict subset inclusion for L(S,t) makes learning a filtration [19][20] process.
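On finite samples of the language classes, this verification procedure is directly computable. A minimal sketch (the function names are our own):

```python
def has_learned(L_t1: set, L_t2: set) -> bool:
    """Learning occurred in (t1, t2] iff L(S,t1) is a *proper* subset of L(S,t2)."""
    return L_t1 < L_t2  # strict inclusion: one filtration step

def witness(L_t1: set, L_t2: set):
    """A string certifying learning: some s in L(S,t2) \\ L(S,t1), else None."""
    return next(iter(L_t2 - L_t1), None)

L1, L2 = {"soup1"}, {"soup1", "soup2"}    # the colony now digests one more soup
assert has_learned(L1, L2) and witness(L1, L2) == "soup2"
assert not has_learned(L2, L2)            # no new string, no learning
```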

Comparing with the earlier example: given that the colony could produce glucose from at least one additional chemical soup, call it c, that it was not able to use earlier, this would indeed qualify as learning.

But what is the natural mechanism of this recognition or acceptance? If the strong Church-Turing Thesis is correct, then the recognition would have an equivalent Turing Machine model. From that perspective, a system can be defined as follows:-

Definition 2.4.

System Modelled by Universal Turing Machine.

A system S modelled by a Universal Turing machine is a mechanism comprising a set of rule-tables R = {r1, r2, …, rn}, which can be used as programs for the universal Turing machine. The set changes over time, denoting the time evolution of the system as R(t). The universal Turing machine can nondeterministically choose a rule-table (or rule-book) to recognise a string. The language class is entirely determined by R(t). Let the language class accepted by simulating rule table ri be L(ri). Then,

L(S,t) = ⋃_{ri ∈ R(t)} L(ri)

Now, this modelling has interesting consequences. The time evolution of R(t) has to be such that the language class stays a filtration, to suggest that the system S is learning. What would be the possible mechanism of the time evolution of R(t)?

The only way to change the language class is to change the set of rule-books. But what kind of change would be allowed? The rule-books are programs; therefore a random change won’t always work, and there is no guarantee that the result would remain a filtration after a change, that is, that the relation L(S,t1) ⊆ L(S,t2) will hold.

However, there is another way which always keeps the filtration condition satisfied: introducing new rule-tables into the set R, instead of modifying any of them. This solves the problem of how a modification of the rule-books can always work.

How a new rule table initially gets created is a different problem, and is not being considered here. But given that a new rule table r somehow gets inserted into R at time t2 > t1, we must have:-

L(S,t1) ⊆ L(S,t2)

with equality holding only when L(r) ⊆ L(S,t1).

This becomes a crucial hypothesis: new rule-books (programs) get created and added to the existing set, instead of modification of the existing rule-books, which we hypothesize to be rare.

These ideas are formalized in the next set of axioms.

Axiom 2.1.

Natural Learning Systems Axioms.

  1. Learning is as defined in definition 2.3 and is a filtration process.

  2. The only permissible operation ⊕ to modify the system is the addition of a new rule-book r ∈ 𝒫 to the existing rule set R, where 𝒫 is the set of all rule-books :-

    ⊕(R, r) = R ∪ {r}

    such that :-

    L(S,t1) ⊆ L(S,t2) for t1 < t2.

We now show that a model exists which is capable of autonomous learning as in definition 2.3. It adheres to the axioms of (2.1) and learns without any help from a supervisor. Simply speaking, we show that a designoid [17] can learn. Again, we won't be discussing understanding here. Learning here is a synonym for recognizing strings (information content) which were not recognized earlier.

Theorem 2.1.

There is a learning process following Axiom (2.1).

Let S be a system at time t1. Let a new rule-book r be added using operation ⊕ at time t2 such that:-

L(r) ∖ L(S,t1) ≠ ∅

then the operation induced learning on the underlying system S.

Proof of the theorem 2.1 .

Obvious from the discussion in the previous paragraphs. Given this condition is met, the operation ⊕ will induce learning in the system by definition. ∎

So how does r get created? Anything random might work, but it can take many steps, with lucky (in a statistical sense) breaks. In this paper we assume that some process induces the creation of the newer rule-book r, but the mechanism is not what we are interested in. We should note that adding a rule-set would not always induce learning; that is why the condition of theorem (2.1) is crucial.

The point to be noted here is that the system which started learning and the system after some time evolution can be fundamentally different, due to the nature of the rule-books being used. To call S(t1) and S(t2) different systems is analogous to the Ship of Theseus, or Theseus' Paradox: the question of whether an object which has had some or all of its components replaced remains fundamentally the same object; albeit in our case L(S,t1) ⊆ L(S,t2), due to the filtration nature of learning.

3. Practical Models Of The Learner

We have established that there exists a system, depicted in Theorem 2.1, which is capable of learning. Systems which are capable of learning will subsequently be called learners.

It is easy to notice that, by Theorem 2.1, the way a learner really learns is by adding a new rule table to its repository of existing rule tables. So, at every step of learning, formally, one new rule table gets added, and the learner must conveniently switch from one table to another to accept a string. Note that no internal shuffling of the rules is allowed; the rule tables are atomic building blocks by axiom (2.1). With this idea in mind we can now formally define a physically possible learner as follows:-

Definition 3.1.

A Learning System : Learner.

A learner is a system comprising a set of rule tables R, which can be simulated by Universal Turing Machine(s). By simulating a rule table ri it can accept strings s ∈ L(ri). It is capable of adding a new rule table to R via a process called learning. To learn, the system can add a new rule-book r to R:-

R → R ∪ {r}

The language class accepted by a learner is bound by the relation :-

L(learner) ⊆ ⋃_{ri ∈ R} L(ri)

Definition 3.2.

Complete Learner.

Let’s assume a learner has a set of rule tables R = {r1, …, rn}. Let the individual language class for each ri be L(ri). Let the language class accepted by the learner be L. The learner is said to be complete iff:-

L = ⋃_{i=1}^{n} L(ri)

As for the mechanism of implementation of a learner, it can belong to two general classes: the tandem, or sequential, class and the parallel class.

Definition 3.3.

Sequential Learner.

To accept a string, a sequential learner sequentially picks one rule table at a time from R and simulates it using the Universal Turing Machine, until either no unused rule table exists, or there is an accept.

Which rule table is to be used after which rule table now becomes important. There is no obvious answer to that. In fact, this inherently sequential nature limits the language class the sequential system can accept. This is the subject matter of the next theorem.

Theorem 3.1.

Language Class Accepted by the Sequential Learner.

The sequential learner accepts a language class which is generally a strict subset of the union of the language classes of all the rule tables ri.


We construct the exact language class accepted by the sequential learner. We begin by constructing the class of strings which would be accepted by the learner. Let’s define H(ri, s) as a function (a Turing Oracle) telling whether the underlying universal Turing machine, while simulating rule-book ri, will or will not halt on input string s. So, if H(ri, s) = 1, the Turing machine halts; if H(ri, s) = 0, it does not halt.

Then, the language class accepted by the sequential learner, with the rule tables tried in the order r1, r2, …, rn, is precisely :-

L_seq = { s : ∃k, s ∈ L(rk) and H(rj, s) = 1 for all j < k }

It is obvious that in general L_seq ⊆ ⋃_i L(ri), and therefore, whenever there exists at least one string s ∈ L(rk) which makes the underlying Universal Turing machine simulating an earlier rule table rj (j < k) get into an infinite loop (H(rj, s) = 0), we would have:-

L_seq ⊂ ⋃_i L(ri)

That completes the proof. ∎
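The obstruction in the proof can be made concrete. Since H(ri, s) is not computable, the sketch below simply marks a hanging simulation with an explicit sentinel; the names and the toy rule tables are our own assumptions, used only to show a string in ⋃ L(ri) that the tandem order never accepts.

```python
LOOPS = None  # sentinel: simulating this table on this string never halts

def sequential_accepts(rule_tables, s):
    """Tandem learner: try the tables in order.  True = accepted,
    False = every table halted and rejected, None = a table hung first,
    so the learner is stuck forever and later tables are never reached."""
    for table in rule_tables:
        verdict = table(s)
        if verdict is LOOPS:
            return None
        if verdict:
            return True
    return False

# Toy tables: r1 hangs on "bb"; r2 would accept "bb".
r1 = lambda s: LOOPS if s == "bb" else s.startswith("a")
r2 = lambda s: s == "bb"

assert sequential_accepts([r1, r2], "ab") is True   # accepted by r1
assert sequential_accepts([r1, r2], "bb") is None   # in L(r2), never accepted
```

Reordering to [r2, r1] rescues "bb" here, but Theorem 3.2 shows no algorithm can discover such an ordering in general.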

It can be argued, however, that by cleverly ordering the selection of the rule tables one could possibly complete (definition 3.2) the learner. But in general that is impossible. The next theorem establishes this fact.

Theorem 3.2.

Sequential learner can not be completed.

Algorithmic modification of a sequential learner into completeness is impossible.


Let r1, r2, …, rn be rule tables.

We demonstrate using the worst case scenario, which is such that for every pair of rule tables ri, rj there is some string s ∈ L(rj) for which simulating ri with input s gets the universal Turing machine into an infinite loop.

Obviously, it is impossible to reduce such a system into a complete system. Now, assume instead that there is only one pair ri, rj and one string s ∈ L(rj) such that simulating ri with input s gets the universal Turing machine into an infinite loop.

But that pair is unknown unless we are looking at the system from outside of the system. We must then already have a table which shows which strings from which rule table's class produce an infinite loop in which rule table.

The question is: can we create such a table algorithmically? The answer is no, because we would never know which string would get the simulation into an infinite loop. The function H(ri, s) is not computable, by Turing's halting theorem [6]. Therefore, automatic creation of such a table is not possible.

Hence, automatic completion of the sequential system is not possible either, as was stated. ∎

Therefore, it is now established by Theorem 3.1 that a sequential learner nevertheless does not have closure over the language classes for which it possesses the rule tables. Also, it is not possible to automatically complete it, as stated by Theorem 3.2.

Although the rules in the rule table ri let a universal Turing machine precisely accept the language class L(ri), the learner might still not accept all the strings from that class. This is obviously not efficient.

The next learner does not have that inefficiency, but it achieves that goal with extra processing units and space. The next learner, of course, is the parallel learner. It utilizes the concept of parallel running Turing Machines [8], each having a dedicated tape, but communicating with a monitoring Turing Machine using a shared tape.

Definition 3.4.

Parallel Learner.

Let |R| = n. Then the parallel learner has n + 1 Universal Turing Machines: n of them each have a single dedicated tape, plus a shared tape which everyone shares.

To accept an input, the string is supplied on the shared tape, from where all the Turing Machines copy the string onto their own dedicated tapes and start running their simulations. If one of them accepts the string, it writes back a symbol a to the leftmost cell of the shared tape.

The last, (n+1)-th, Turing Machine only reads the leftmost cell of the shared tape, looking for the symbol a in a loop. If it has found the symbol, it accepts the string, as the system has accepted it, and halts.

Theorem 3.3.

Language Class Accepted by the Parallel Learner.

The parallel learner accepts a language class which is strictly the union of the language classes of all the rule tables ri.


This is elementary. The problem of a simulated system getting into an infinite loop is solved by the Turing Machines running in parallel without affecting each other's progress. Also, the last Turing machine, monitoring all of the simulation outputs, ensures that if any Turing machine halts with an accept, the learner halts. Therefore, the parallel system is strictly complete over the rules. ∎
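The monitor-and-shared-tape construction can be sketched by interleaving one step of each machine at a time (dovetailing), which is acceptance-equivalent to true parallel execution. Everything below is a toy model under our own assumptions: rule tables are generators, the step budget only stands in for "run as long as needed".

```python
def make_table(pred, hangs_on=()):
    """A toy rule table: a generator that loops forever on some inputs
    or eventually yields its verdict."""
    def table(s):
        while s in hangs_on:
            yield None           # infinite loop: never a verdict
        yield pred(s)            # the verdict
        while True:
            yield False          # halted with reject; stays silent
    return table

def parallel_accepts(rule_tables, s, max_steps=1000):
    """Parallel learner: n machines advance in lockstep, so a hanging
    machine cannot block the others; the monitor accepts as soon as
    any machine writes the accept symbol."""
    machines = [table(s) for table in rule_tables]
    for _ in range(max_steps):
        for m in machines:
            if next(m) is True:  # accept symbol on the shared tape
                return True
    return False                 # budget exhausted (stand-in for 'still running')

r1 = make_table(lambda s: s.startswith("a"), hangs_on={"bb"})
r2 = make_table(lambda s: s == "bb")
assert parallel_accepts([r1, r2], "bb") is True  # r1 hangs, r2 still accepts
```

The same input "bb" that trapped the sequential learner is accepted here, illustrating completeness over the rule tables.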

4. Comparisons of Learning Strategies

In the previous section, we have established two learning strategies: one in tandem, the sequential strategy (definition 3.3), and another, the parallel strategy (definition 3.4). In the current section we establish the pros and cons of each strategy.

Note that we are discussing a blind design, that is, no conscious optimization, ever. No one is looking at the system from the outside and improving the design or the wiring of the system.

Theorem 4.1.

Completeness of the Learning Models.

Parallel Learners are complete (definition 3.2) while Sequential Learners are not.


The proof is a direct consequence of theorems 3.3 and 3.1. ∎

By this theorem, one clear thing we have established is that the parallel system is, in general, more optimal than the sequential system. That is, a parallel system can use all its resources to gain maximum coverage of the strings that are to be accepted, while a sequential learner cannot. Also, by Theorem 3.2 we have established that there is no autonomous way to complete the sequential system. So, parallel systems have an inherent evolutionary advantage.

But this is not the only metric by which the learners are to be measured for optimality. In computer science, the time and space complexity are of utmost importance.

Formally, the time complexity question is: how much time does it take to accept a string?

It is not too hard to show that the parallel strategy wins here. We show it in the next theorem:-

Theorem 4.2.

Time Complexity Comparison of the Learners.

If the time taken to accept a string by the sequential learner is T_seq and by the parallel learner is T_par, then

T_par ≤ T_seq

given both use the same rule tables and the same Universal Turing Machines.


This is pretty elementary. As the sequential learner acts in tandem, the time taken to accept a string is precisely the time taken by all the previous simulations to reject the string, plus the time taken to accept it. That gives:-

T_seq = Σ_{j=1}^{k−1} t_j + t_k

where the string gets accepted using the k-th rule table, t_j (j < k) is the time taken to reject the string by the j-th rule table, and t_k is the time taken to accept the string by simulating the k-th rule table.

However, for the parallel case :-

T_par = t_k ≤ T_seq

which clearly establishes the theorem. ∎
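The relation between the two times can be checked with illustrative numbers (the per-table running times below are made up for the sake of the example):

```python
# Hypothetical per-table times on one string: three rejections, then an accept.
reject_times = [7, 3, 5]   # times to reject by the tables tried before table k
accept_time = 2            # time for the accepting k-th table

T_seq = sum(reject_times) + accept_time  # tandem pays for every rejection first
T_par = accept_time                      # in parallel only the accepting clock matters

assert T_par <= T_seq
print(T_seq, T_par)  # 17 2
```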

We should also ask about the storage complexity of the simulations. That is, to set up the sequential learner and the parallel learner, how much storage space is needed?

We note that the storage space in either case is a function of the number of rule tables n, because one needs to incorporate the newly added rule tables. Given that the storage required to store the rule table ri is mi, and that of a Universal Turing Machine is u, we have the precise relation as the next theorem:-

Theorem 4.3.

Storage Space Comparison of the Learners.

If the storage space of the sequential learner is M_seq and of the parallel learner is M_par, then

M_par = Θ(M_seq)

given both use the same rule tables and the same Universal Turing Machines.


We note that:-

M_seq = u + Σ_{i=1}^{n} mi

The parallel machine needs to have n + 1 Turing machines, but the rule table of the universal Turing machine itself is the same for every one of them, and that is never going to change. Hence only one copy of the rule table of the Universal Turing machine itself suffices. Then

M_par = u + Σ_{i=1}^{n} mi + O(n)

which would imply that M_par = Θ(M_seq). ∎

Now we ask the space complexity of the simulations done both by the sequential and the parallel learner.

Theorem 4.4.

Space Complexity Comparison of the Learners.

If the space used to accept a string by the sequential learner is Z_seq and by the parallel learner is Z_par, then

Z_seq ≤ Z_par

given both use the same rule tables and the same Universal Turing Machines.


The proof is again elementary; we establish the precise relation between the two. Assume that the space required to reject the string by the j-th simulation is z_j, and to accept it by the k-th simulation is z_k. The sequential learner can reuse the same tape for every simulation, while the parallel learner runs all its tapes at once. Then,

Z_seq = max{ z_1, …, z_n } ≤ Σ_{j=1}^{n} z_j = Z_par

which immediately establishes the theorem. ∎

These theorems clearly establish that when space is not at a premium, the parallel learner is the more optimal strategy for learning. Specifically, if accuracy and completeness are needed, then parallel learners are better than sequential ones.

But that is not all. The crux of this lies in the fact that autonomous learning in nature has to be inherently blind, triggered by chance mutations. Given that, there is no way to allow an out-of-the-system decision maker to further optimize the design of the resulting system.

Due to the nature of blind learning, even if optimization is possible, there has to be parallelism to complete the learner. Such an intelligent ordering can evolve in nature, and is discussed in the next theorem.

Theorem 4.5.

Existence of a Hybrid Learning System

There exists a hybrid model of learning which uses both the parallel and the serial model, and is complete.


We prove it by constructing a hybrid system. We note that the halting problem establishes a relation over the rule tables. It can be stated as :-

ri ≺ rj iff some string s ∈ L(rj) hangs the simulation of ri.

Given this relation we can create sets C1, C2, …, Cm of mutually unrelated rule tables, so that :-

ri, rj ∈ Ck ⟹ ri ⊀ rj and rj ⊀ ri.

Now, the rules within an individual set Ck can be run sequentially. But cross-set rules ri ∈ Ca and rj ∈ Cb (a ≠ b) cannot be run sequentially.

So, if a system is capable of running parallel execution, we can make a complete hybrid system that runs the sets in parallel, while the rules inside each set run sequentially.

This completes the proof. ∎
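The grouping step of this construction can be sketched, assuming the hangs relation is simply handed to us; as noted below, computing it is impossible, so here it plays the role of the chance mutation. The greedy strategy and all names are our own assumptions.

```python
def group_tables(n, hangs):
    """Greedily place each rule table into the first group none of whose
    members is related to it by the given 'hangs' relation, in either
    direction.  Tables inside a group may run sequentially; distinct
    groups must run in parallel."""
    groups = []
    for i in range(n):
        for g in groups:
            if all((i, j) not in hangs and (j, i) not in hangs for j in g):
                g.append(i)
                break
        else:
            groups.append([i])       # i is related to every existing group
    return groups

# Hypothetical relation: strings of table 1 hang table 0; table 2's hang table 1.
hangs = {(0, 1), (1, 2)}
print(group_tables(3, hangs))  # [[0, 2], [1]]
```

With this grouping, only two parallel tracks are needed instead of three, while completeness is preserved.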

We note that the construction of such a system is not algorithmic, that is, not computationally possible: the step of finding out which rule table's strings hang which other rule table is not computable. Hence, only blind mutation, that is chance, can create such a system. However, up to this section we have established that parallelism, in general, is inherent: even if a hybrid system evolves, there is parallelism inherent in it. In the next section we show why the parallel learning strategy dominates nature.

5. Parallel Strategies and Evolution

In this section we finally ask the following question :

Given both the sequential and parallel strategies available in Nature, which strategy would eventually dominate?

This obviously depends upon which strategy pays off more for survival. But payoffs like this are in the realm of Game Theory [21]. In the context of Game Theory, payoffs are represented as numbers which capture the motivations of the players. Payoffs may represent profit, quantity, or any other “utility”.

As resources are limited in nature, gaining or losing advantage can be modelled by a fixed sum game [21], where both players are competing for a fixed sum of reward. However, we can generalize any fixed sum game into a zero sum game by setting the fixed amount to 0 [21][17][18].

Given that accepting any string has the same utility, we can take the payoffs to be proportional to the cardinality of the language class accepted by the strategy. In general, if learner A plays with learning strategy a, and learner B plays with learning strategy b, then, intuitively, player A is at an advantage if |L_a| > |L_b|, and for B it is vice versa, with |X| denoting the cardinality of set X. Hence, the payoff of a against b, denoted by E(a,b), can be calculated as:-

E(a,b) = |L_a| − |L_b|

When the language classes become infinite, to define the payoff formally we have to resort to measure theory [19].

Definition 5.1.

Utility Measure.

Let X, Y ⊆ Σ* be sets of strings. A measure μ is a utility measure iff:-

X ⊂ Y ⟹ μ(X) < μ(Y)

where μ(X) < μ(Y) implies that the set Y has more utility than the set X.

Note that with this measure in place, we do not need any assumptions about the utilities of different strings. It also glorifies the age-old saying “no learning is useless”, only this time more formally.

Definition 5.2.

Payoff between Learning Strategies.

Let a, b be two learning strategies. Let L_x denote the language class accepted by strategy x. Let U = L_a ∪ L_b. Let μ be a utility measure (5.1) defined over U. Then, the payoff of strategy a against b is given by:-

E(a,b) = μ(L_a) − μ(L_b)

In particular, if L_a ⊂ L_b then,

E(a,b) < 0 < E(b,a)
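On finite samples, a concrete utility measure and this payoff can be sketched as follows. The weighting 2^-(|s|+1) is our own choice of a strictly monotone, additive measure; any other such measure would do.

```python
def mu(language):
    """A utility measure: additive, and strictly larger on proper supersets."""
    return sum(2.0 ** -(len(s) + 1) for s in language)

def payoff(L_a, L_b):
    """Zero-sum payoff sketch of strategy a against b: E(a,b) = mu(L_a) - mu(L_b)."""
    return mu(L_a) - mu(L_b)

L_seq = {"a", "ab"}              # an incomplete tandem learner
L_par = {"a", "ab", "bb"}        # the complete learner: a proper superset
assert payoff(L_par, L_seq) > 0 > payoff(L_seq, L_par)
assert payoff(L_par, L_par) == 0  # zero sum against itself
```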

Now we make the bold claim that the parallel strategy is evolutionarily stable. To do so we need to state the definition of an evolutionarily stable strategy [16][17][21][18].

Definition 5.3.

Evolutionarily stable strategy (ESS). Let 𝒮 be a set of strategies. Let E(a,b) represent the payoff for playing strategy a against strategy b. The strategy x ∈ 𝒮 is an ESS iff, for every strategy y ≠ x, one of the following conditions holds :-

  1. Strict Nash Equilibrium : E(x,x) > E(y,x)

  2. Maynard Smith’s second condition : E(x,x) = E(y,x) and E(x,y) > E(y,y)
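This definition can be turned into a direct check. Below, strategies are identified by the (finite sample) language classes they accept, and cardinality serves as the utility measure; all names are hypothetical.

```python
def is_ess(x, strategies, E):
    """x is an ESS iff for every mutant y != x either E(x,x) > E(y,x)
    (strict Nash), or E(x,x) == E(y,x) and E(x,y) > E(y,y)."""
    for y in strategies:
        if y == x:
            continue
        if E(x, x) > E(y, x):
            continue
        if E(x, x) == E(y, x) and E(x, y) > E(y, y):
            continue
        return False
    return True

langs = {"seq1": {"a"}, "seq2": {"b"}, "par": {"a", "b", "c"}}
E = lambda p, q: len(langs[p]) - len(langs[q])  # cardinality payoff

assert is_ess("par", langs, E) is True    # strict Nash against every mutant
assert is_ess("seq1", langs, E) is False  # "par" invades it
```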

Now with this definition in hand, we establish the criterion for ESS in the evolution of learning.

Theorem 5.1.

ESS for Learning Strategies.

Let 𝒮 be a set of learning strategies. Let p ∈ 𝒮 be a learning strategy with:-

L_s ⊂ L_p for all s ∈ 𝒮, s ≠ p

where L_s is the language class accepted by strategy s. Then, the strategy p is an ESS.


We know that for every s ≠ p there is a nonempty set D_s = L_p ∖ L_s such that:-

L_p = L_s ∪ D_s

because L_s ⊂ L_p, a proper subset. As the measure μ is additive, the above relation implies:-

μ(L_p) = μ(L_s) + μ(D_s)

Hence, we note that :-

μ(L_s) < μ(L_p)

This implies:-

μ(L_s) − μ(L_p) < 0

implying :-

E(s,p) < 0

By definition we have E(p,p) = μ(L_p) − μ(L_p) = 0 and therefore,

E(p,p) > E(s,p)

Comparing with definition (5.3, rule 1), p is an ESS. In fact we note that this ESS is a strict Nash Equilibrium. ∎

Now we establish that the parallel strategies are ESS.

Theorem 5.2.

Parallel learning strategies are ESS.

Let 𝒮 be a set of strategies over rule table sets such that no complete tandem learner is possible. Let p ∈ 𝒮 be a parallel strategy. Then p is an ESS.


We note that the language class accepted by any incomplete tandem learner is a strict subset of the language class accepted by the parallel learner, by theorems (3.1, 3.3, 4.1). That is, if L_s is the language class of a sequential learner, and L_p is the language class of the parallel learner, then

L_s ⊂ L_p

Now using theorem 5.1 we can immediately deduce that parallel strategies are ESS. ∎

6. Summary

In this paper we have demonstrated why, in nature, parallel strategies are the optimally suited ones. We established this fact using the Church-Turing Thesis and a utility measure which tallies with common sense. This demonstrates the power of computer science as a proper science, fully capable of describing natural phenomena outside the realm of the rather artificial settings of man-made computers. The bottleneck in nature starts with the impossibility of algorithmically completing a tandem learner. Then, if every rule-book has strings which make the simulation of the others run into an infinite loop, ordering won't solve the problem of completion. Parallel computation is the only way out then. This is the reason why parallel strategies, once evolved, would dominate nature.


  • [1] Boucher, Andrew. Parallel Machines. Minds and Machines, Volume 7, 1997, pp. 543-551.
  • [2] Cohen, Daniel I. A. Introduction to Computer Theory. Wiley, 2nd edition, 1996.
  • [3] Hopcroft; Motwani; Ullman. Introduction to Automata Theory, Languages, and Computation. Prentice Hall, 3rd edition, 2006.
  • [4] Sipser, Michael. Introduction to the Theory of Computation. Course Technology, 3rd edition, 2012.
  • [5] Searle, John. Is the Brain’s Mind a Computer Program? Scientific American, Jan. 1990, pp. 20-25.
  • [6] Turing, Alan M. On Computable Numbers, With an Application to the Entscheidungsproblem. Proc. London Math. Soc., Ser. 2, 42, 1936, pp. 230-265.
  • [7] Turing, Alan M. Computing Machinery and Intelligence. Mind 59, 1950, pp. 433-460.
  • [8] Wiedermann, Juraj. Parallel Turing Machines. 1984.
  • [9] Rosenblatt, F. The Perceptron: A Probabilistic Model For Information Storage And Organization In The Brain. Psychological Review 65(6), 1958, pp. 386-408.
  • [10] Rumelhart, D.E.; McClelland, James. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge: MIT Press, 1986.
  • [11] Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 20th Anniversary edition, 1999.
  • [12] Ben-Jacob, E. Learning from bacteria about natural information processing. Ann. N. Y. Acad. Sci., 2009 Oct; 1178:78-90. doi: 10.1111/j.1749-6632.2009.05022.x.
  • [13] Ramachandran, V.S. Phantoms in the Brain. HarperCollins, New Ed edition, 1999.
  • [14] Penrose, Roger. The Emperor’s New Mind. Random House UK, 1994.
  • [15] Penrose, Roger. Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford University Press, USA, Reprint edition, 1996.
  • [16] Dawkins, Richard. The Selfish Gene. Oxford University Press, 1976.
  • [17] Dawkins, Richard. Climbing Mount Improbable. W. W. Norton & Company, 1997.
  • [18] Maynard Smith, J.; Price, G.R. The Logic of Animal Conflict. Nature 246, 2 November 1973, pp. 15-18.
  • [19] Doob, J.L. Measure Theory. Springer, 1993.
  • [20] Chung, K.L.; Farid, A. Elementary Probability Theory with Stochastic Processes and an Introduction to Mathematical Finance. Springer, 4th edition, 2003.
  • [21] Barron, E.N. Game Theory: An Introduction. Wiley India Pvt Ltd, 2009.