Non-computability of human intelligence

by Yasha Savelyev

We revisit the question, most famously initiated by Turing: can human intelligence be completely modelled by a Turing machine? To give away the ending, we show here that the answer is no. More specifically, we show that at least some thought processes of the brain cannot be Turing computable. In particular, some physical processes are not Turing computable, which is not entirely expected. The main difference between our argument and the well-known Lucas-Penrose argument is that we do not use Gödel's incompleteness theorem (although our argument seems related to Gödel's), and we do not need to assume the fundamental consistency of human reasoning powers, which is controversial. We also side-step some meta-logical issues with their argument, which have likewise been controversial. The argument proceeds via a thought experiment and is at least partly physical, but no serious physical assumptions are made. Furthermore, the argument can be recast as an actual (likely future) experiment.






1. Some preliminaries

This section can be skimmed on a first reading. What we are really interested in is not Turing machines per se but computations that can be simulated by Turing machine computations; these can, for example, be computations that a mathematician performs with paper and pencil, and this was indeed the original motivation for Turing's specific model. However, to introduce Turing computations we need Turing machines. Here is our version, which is a common variation.

Definition 1.1.

A Turing machine M consists of:

  • Three infinite (1-dimensional) tapes, divided into discrete cells, one next to the other. Each cell contains a symbol from some finite alphabet. There is a special symbol b for blank (the only symbol which may appear infinitely often).

  • Three heads (pointing devices), one per tape, each of which can read and write the cell to which it points. The heads can then move left or right on their tapes.

  • A state register that stores one of finitely many internal states of M; among these is a "start" state s_0, and at least one final or "finish" state s_f.

  • An input string I, the collection of symbols on the first tape, such that to the left and right of I there are only blank symbols b. We assume that in state s_0 the first head points to the beginning of the input string, and that the other tapes contain only blank symbols.

  • A finite table of instructions that, given the state M is currently in, and given the symbols the heads are pointing to, tells M to do the following:

    1. Replace the symbols with other symbols in the cells to which the heads point (or leave them),

    2. Move each head left, right, or leave it in place (independently),

    3. Change the state to another state or keep it.

  • An output string O, the collection of symbols on the first tape, such that to the left and right of O there are only blank symbols b, when the machine state is s_f (or one of the other final states). When the internal state is one of the final states, we ask that the instructions are to do nothing, so that these are frozen states.
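
The components above can be made concrete with a short simulator. The following is a minimal sketch in Python; the table format, the choice to read the input and output off the first tape, and all names are illustrative, not fixed by the definition:

```python
# Minimal sketch of the three-tape machine of Definition 1.1.
# A table entry maps (state, scanned symbols) to
# (new state, written symbols, head moves), with moves in {-1, 0, +1}.
from collections import defaultdict

BLANK = "b"  # the distinguished blank symbol

def output_string(tape):
    # The output string: the symbols between the outermost non-blank cells.
    cells = [i for i, s in tape.items() if s != BLANK]
    if not cells:
        return ""
    return "".join(tape[i] for i in range(min(cells), max(cells) + 1))

def run(table, start, finals, input_string, max_steps=10_000):
    """Return the output string if the machine halts within max_steps
    (a finite proxy for halting), else None."""
    tapes = [defaultdict(lambda: BLANK) for _ in range(3)]
    for i, sym in enumerate(input_string):
        tapes[0][i] = sym              # the input string on the first tape
    heads = [0, 0, 0]                  # heads start at the input's beginning
    state = start
    for _ in range(max_steps):
        if state in finals:            # frozen final state: stop and read off
            return output_string(tapes[0])
        scanned = tuple(tapes[k][heads[k]] for k in range(3))
        state, written, moves = table[(state, scanned)]
        for k in range(3):             # replace symbols, then move the heads
            tapes[k][heads[k]] = written[k]
            heads[k] += moves[k]
    return None                        # did not halt within the budget

# A machine computing the successor n -> n + 1 in unary notation:
succ_table = {
    ("start", ("1", BLANK, BLANK)): ("start", ("1", BLANK, BLANK), (1, 0, 0)),
    ("start", (BLANK, BLANK, BLANK)): ("finish", ("1", BLANK, BLANK), (0, 0, 0)),
}
```

The successor machine illustrates the table of instructions: while scanning 1s it moves right, and on the first blank it writes a final 1 and enters the frozen "finish" state.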

We also have the following minor variations on standard definitions.

Definition 1.2.

A complete configuration of a Turing machine M, or total state, is the collection of all current symbols on the tapes, the instructions, and the current internal state. A Turing computation C for M is a possibly not eventually constant sequence of complete configurations of M, determined by the input I and the table of instructions of M, with first term the complete configuration whose internal state is s_0. If the sequence is eventually constant, the limiting configuration has internal state one of the final states. When the sequence is eventually constant we say that the computation halts. For a given Turing computation C, we shall write

C ↦ O

if C halts and O is the output string. We write C ↦ ∞ if it does not halt.

We write M(I) for the output string of M, given the input string I, if the associated Turing computation, denoted C(M, I), halts.

Definition 1.3.

Let Σ* denote the set of all finite strings of symbols in some fixed finite alphabet, for example {0, 1}. Given a partially defined function f : Σ* → Σ*, that is a function defined on some subset of Σ*, we say that a Turing machine M computes f if M(I) = f(I), whenever f(I) is defined.

For later use, let us call a partially defined function f as above an operator, and write 𝒪 for the set of operators.
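
In code, the content of Definition 1.3 amounts to the following check, where an operator is modelled as a partial function on strings (returning None outside its domain); the names here are illustrative:

```python
# An operator in the sense of Definition 1.3: a partial function on the
# set of finite strings over the alphabet {0, 1} (None = undefined).
def parity(s):
    if s == "":
        return None                # undefined outside the domain
    return str(s.count("1") % 2)   # parity of the number of 1s

def computes(machine, operator, samples):
    """A machine computes an operator f iff machine(I) = f(I) whenever
    f(I) is defined; here checked on a finite sample of strings."""
    return all(machine(s) == operator(s)
               for s in samples if operator(s) is not None)

# Any program with the same input/output behaviour computes parity,
# regardless of what it does on strings where parity is undefined:
machine = lambda s: str(sum(ch == "1" for ch in s) % 2)
```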

2. Proof of Theorem 1

The reader may want to have a quick look at the preliminaries before reading the following, to get hold of our notation and notions. The proof will be constructed in the form of a thought experiment. In this experiment a human subject S is contained in isolation in a room, under supervision of a human experimenter E. If we restrict our S to act on certain inputs (as will be given below), S can be interpreted as an operator F ∈ 𝒪. Suppose by contradiction that every thought process of a human being can be simulated by a Turing computation, so that F is computed by some Turing machine, say T. Note that this involves some necessary conditions. For even if our S gives answers that are unambiguously interpretable as strings, to give his answer S may do intermediate steps like "check the clock" (if he had a clock) and then output whatever time it is. For T to compute F it has to be able to translate such an action into something it can simulate; for example its pseudo-algorithm could be "I check the time corresponding to the clock available to my human" (if it is able to do this meaningfully, for example if it has access to all information that the human has, in particular the time of the clock usable by S), "then I output this time as a string". We shall suppose in what follows that there are no obstructions as above to T simulating S, which means in practice that T has access to all the necessary information that is likewise accessible to S, and, what is equally important, that it has no access to information that is not accessible to S.

Our (possibly very futuristic) human experimenter E is in communication with S; she knows the operation of a Turing machine T computing F, and she controls all information that passes to S (that is, information usable by S). We also suppose that S understands natural language, basic mathematics, and basic theory of computation.

S also has access to a general purpose digital (or perhaps quantum, if that helps) computer with sufficiently large memory. Here "sufficiently large" means large enough to contain all the necessary data and complete the computation described below, if it halts; if it does not halt, we shall obtain a contradiction before the memory can run out, provided it was large enough. We will say "S's computer" in what follows.

At this moment E passes to S the following input (which we understand as one string I):

  1. Assume that I (that is, E) believe that you (as an operator) are computed by the Turing machine T, whose faithful simulation is programmed into your computer. You have access to this simulation and its source code.

  2. You cannot check the processing speed of your computer, and I can adjust the processing speed without your knowledge. (This is to avoid some pathological behavior from S, and logical issues; we will explain this.)

  3. If you can show that I am in contradiction you will be freed. You may use your answer to 4 below to do so.

  4. Give me an integer.

Note that instruction 3 is in a sense just for aesthetics: we can run the following argument without it, in which case S will not have any motivation to proceed as follows; the only essential point is that he can in principle proceed as follows, and that this generates a contradiction.

Now S knows that the answer that is expected by E is given by the Turing computation that we shall call C = C(T, I), where I is the input string corresponding to the instructions above. As mentioned above, we assume that all necessary conditions for T to compute F on this string are satisfied. In practice all this means is that T can access the information on S's computer that is available to S, and no more information than that. Since by T we really mean the simulation of T that is already on the computer of S, this is a simple requirement (from a physical point of view).

S may then proceed to compute the result of C using his digital computer. E herself is presumed to be doing the same computation on her own computer. Now, assuming the above and 1 in particular, S knows that C cannot halt, or rather he knows that E must believe it cannot halt. For otherwise, if C halts with an integer answer n, then instead of answering n, S may answer n + 1 (or something equally absurd), and so he obtains an immediate contradiction: E is expecting n. On the other hand, if C does not halt with an integer answer for E (which from a Turing machine point of view is possible), he may just answer 8, again giving a contradiction, as E is expecting S to answer exactly like T. So C cannot halt at all.

The truth of this non-halting (given the validity of E's conviction in 1) is apparent to us human beings, and so to S by our assumptions (and so to E). Thus, upon this reflection, S first checks, after whatever time t seems reasonable to him, that his computer's computation of C has not halted. If it did halt, he answers as above: that is, he obtains the value n of the result of C and answers E: n + 1.

If C did not halt, S may simply answer 8 (or anything else). He thus halts with an answer in any case, obtaining a contradiction, whether C halts or not. Now the reader may object: well, perhaps the time t that S chose to wait is too short; 8 is what was expected by E after all, and sometime after S answers E, C halts with 8. But E controls the speed of the computation C, so if this computation really halts she could set it up so that whatever time S chose to wait would be sufficient, that is, C halts in that time. That is, she could run the experiment once to calculate the time t_S that S takes to answer, and then run it again, adjusting the computation speed of the computer in S's room so that C halts in time less than t_S. Since S has no way to check the computation speed, he cannot "conspire" to always answer too early.

The only remaining possibilities are that either after waiting for C to halt and obtaining the result n, S answers n anyway, or S does not even start the computation C, or finally that C does not halt and S remains silent indefinitely. In the first two cases S either did not understand the instructions or chose not to obtain a contradiction. These are not very interesting. In the final case S is simply insane, or was made "insane" by the input I. This would be strange: even if S believes himself to be computed by a Turing machine, he cannot know that he is computed by the particular machine T, so it would be perfectly self-consistent for him to proceed via our meta-algorithm. Although S knows he would disprove that he is computed by T, it is only E, and so us, that obtain a total contradiction. In other words, there is absolutely nothing special about T, or our meta-algorithm, so not answering is unreasonable. In any case, if we denote by B the set of possible (in nature) subjects which fall into one of the above possibilities, then our physical hypothesis is this:

Hypothesis 2.

The complement of B is non-empty.

As explained, intuitively this seems completely undeniable, but we cannot prove it, although we can experiment on it, as described in the following section. (Naturally this author would like to claim he is not in B, but there is a philosophical issue: how exactly does he prove it?)

Let us then summarize the above argument more formally. Let H ⊂ 𝒪 denote the subset of operators corresponding to human beings as above. There exists (assuming our hypothesis) an F ∈ H (which we described in detail above) with the following property. Suppose by contradiction that T is a Turing machine that computes F. T accepts as input the instructions 1-4 above (with respect to the chosen T), which take the form of a string I. F is defined on I by the meta-algorithm: wait for an amount of time t for C = C(T, I) to halt; if C ↦ n, answer n + 1; if C does not halt in time t, answer 8. Here time is as measured by the independent observer E using some clock.

Then the following is satisfied, after it has been arranged (by the independent observer E, who can control the speed of calculation of C) that if C halts, it halts in time less than t:

  1. F(I) is defined.

  2. F(I) ≠ T(I).

By the last formula we have: T does not compute F, a contradiction.
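
The meta-algorithm in this summary can be sketched directly. Here `simulate` stands in for the simulation, with a time budget, of the hypothesized machine on the subject's computer, and the toy stand-ins exist only to exhibit the diagonal step; all names are illustrative:

```python
# Sketch of the subject's meta-algorithm from the proof.  simulate
# models running the simulation of the hypothesized machine on the
# instruction string: it returns an integer if that computation halts
# within the allotted time, and None otherwise.
def meta_algorithm(simulate, instructions):
    n = simulate(instructions)
    if n is not None:
        return n + 1       # the computation halted with n: answer n + 1
    return 8               # it did not halt in time: answer 8

# Toy stand-ins for the two cases (purely illustrative):
halting = lambda instructions: 41      # halts with the answer 41
silent = lambda instructions: None     # never halts within the budget
```

In the first case the subject answers 42 where 41 was expected; in the second he answers 8 where no answer was expected. Either way his output differs from the machine's, which is the contradiction above.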

Remark 2.

There is no problem whatsoever with the time t, as measured by E, being small (in any sense). For S "halts" in time t, so if he is computed by T, we may assume that a computer simulation of T can be constructed which likewise halts in time t, since S, a physical machine, does. Then surely E can speed up the simulation slightly.

3. The above argument as an experiment

We mentioned in the introduction that we may construct from our argument an actual experiment that could at least in principle be carried out (sometime in the far future). Let us explain here how that works; while we certainly do not need it for our argument, it is nevertheless interesting. If one believes that human thought processes are entirely Turing computations, then one must believe that it is possible to reverse engineer any particular human to a modelling Turing machine. That may be a very difficult undertaking technologically, but in principle it must at some point in the future be possible (at least for our specific purpose). We shall use notation and ideas from the proof above. Suppose then that E has a human subject S in isolation in a room, and that she has reverse engineered a complete Turing machine model for S. That is, she knows the Turing machine T which computes the operator F corresponding to our human. Then she may run our thought experiment in the proof as an actual experiment for this human subject S. Of course, by our argument, for a suitable S (not in B), E must conclude that S is not in fact computed by T.

4. Some possible questions

Question 2.

Does the same argument not also prove that S is not even deterministic?

By deterministic, in non-specific terms, we mean here a machine that accepts certain inputs and instructions and deterministically, but not necessarily computably, produces outputs (or deterministically produces a probability distribution for the output; recall that we also call this deterministic). Then the answer is no. If we try to run the same argument but with the Turing machine T replaced by a deterministic machine as above, we shall arrive at the following point. Suppose E has detailed information about S's working as a deterministic machine. Suppose she somehow passed all this information to S. But even if she is able to determine S's supposed output, S may not be able to determine it himself, as it is no longer a matter of a computation. Thus S cannot contradict E's expectation, since he is unable to determine what she expects, even though he can certainly answer using his natural faculties! Just to give a colorful example, the answer she expects from S may depend on whether some particular 4-manifolds are diffeomorphic. The latter is a computationally unsolvable problem.

Question 3.

Ok, but what if T is a Turing machine producing probabilistic answers, that is, the answer expected by E is given by a probability distribution?

This is a mostly trivial complication (and perhaps not completely realistic): the probability distribution is computable by assumption, so E needs only to keep repeating the same experiment; the same argument as before would invalidate that S is a Turing machine to any requisite certainty. In other words, E can replace 3 by: show that there is a contradiction to within the requisite certainty and I will let you go.
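
The repeated-trial procedure suggested here can be sketched with a crude frequency test based on a Hoeffding-type bound; the threshold, the number of trials, and all names are illustrative:

```python
import math

def trials_needed(gap, certainty):
    """Hoeffding bound: with this many independent trials, an empirical
    frequency deviating from the true one by at least gap has
    probability below 1 - certainty."""
    return math.ceil(math.log(2 / (1 - certainty)) / (2 * gap ** 2))

def reject_machine(sample_subject, predicted_p, answer,
                   gap=0.2, certainty=0.999):
    """Repeat the experiment; reject 'the subject is computed by the
    machine' if the empirical frequency of the given answer is farther
    than gap from the probability the machine assigns to it."""
    n = trials_needed(gap, certainty)
    freq = sum(sample_subject() == answer for _ in range(n)) / n
    return abs(freq - predicted_p) > gap

# Toy check: the machine assigns probability 0 to the answer 8, but the
# subject, following the meta-algorithm, answers 8 on every trial.
rejected = reject_machine(sample_subject=lambda: 8,
                          predicted_p=0.0, answer=8)
```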

5. Conclusion

We may conclude from the above the following: first, that there must be Turing non-computable processes in nature, and moreover that they appear in the functioning of the human brain. These processes must play some non-trivial role in human cognition. As mentioned in the introduction, although it is possible that Turing computable artificial intelligence will start passing Turing tests, it will always be possible to distinguish some human beings (those not in B) from any particular modelling Turing machine. Human beings may not be in any way superior to Turing machines (at least our argument does not have such a conclusion), but they are certainly different. This of course still leaves the door open for non Turing computable artificial intelligence. But to get there we likely have to better understand what exactly is happening in the human brain.

6. Acknowledgements

We thank Dennis Sullivan and Bernardo Ameneyro Rodriguez for discussions and interest.


  • [1] A. M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, s2-42 (1937).
  • [2] A. M. Turing, Computing machinery and intelligence, Mind, 59 (1950), pp. 433-460.
  • [3] S. Bringsjord and H. Xiao, A refutation of Penrose's Gödelian case against artificial intelligence, Journal of Experimental & Theoretical Artificial Intelligence, 12 (2000), pp. 307-329.
  • [4] S. Hameroff and R. Penrose, Consciousness in the universe: A review of the 'Orch OR' theory, Physics of Life Reviews, 11 (2014), pp. 39-78.
  • [5] J. R. Lucas, Minds, machines and Gödel, Philosophy, 36 (1961).
  • [6] K. Gödel, Collected Works III (ed. S. Feferman), (1995).
  • [7] R. Penrose, The Emperor's New Mind, (1989).