Introduction
Nobel-winning biologist Sydney Brenner recently suggested "researchers in both artificial intelligence and neuroscience might be getting overwhelmed with surface details rather than seeking the bigger questions underneath …", indicating he was "worried that neuro and cognitive scientists were being 'overzealous' in these attempts". He recommended a "refocus on higher level problems instead" [1].
A gap^* exists between Cognitive Science and mathematical results within Computer Science whose application to humans would require extremely brittle idealizations and would also contradict other well-known empirical results, as summarized by Philip Johnson-Laird in PNAS [2]. (^*Readers of this prepublication draft are invited to email comments and questions to the author. Each superscript number within the text has a corresponding NOTE in the Appendix. After the article gives relevant background information, NOTE 18 sketches the history mentioned in the abstract.) That gap is analogous to the gap between Cognitive Science, in which the ordering of events affects the context of human reasoning, and results of Probability Theory that assume such ordering is irrelevant
[3]. Also analogous is the gap between Cognitive Science and results within Mathematical Economics that assume idealizations about humans, a gap bridged by the Nobel-prize-winning work of Kahneman in 2002, Shiller in 2013, and Thaler in 2017. A reason why bridging such gaps in transdisciplinary research might be difficult is that doing so requires "tolerance for ideas breaking with traditions" PNAS [4], p. 6; also see [5], p. 33.

We obtain a result that helps bridge the gap between Cognitive Science and Computer Science. It is important and traditional for nearly all research in Cognitive Science to be measured in terms of high statistical significance within the context of natural randomness affecting experiments. But sometimes gaps are bridged by breaking with tradition. The Cognitive Science result obtained here has definitive certitude, not just high statistical significance; the related natural randomness is addressed in our Implications section. To be readable by specialists from either side of the gap, we provide background information well-known within Cognitive Science but not within Computer Science, and vice versa. Scientists across a variety of fields may thus find this article accessible.
The result obtained is limitative: either cognition is non-computational or there is a metacognitive comprehensibility-related "blind spot", or both. In contrast to perception-related blindness – such as the spot where light hits the optic nerve, or inattentional blindness [6], or attentional blink [7], or what has recently been termed "introspection's blind spot" [8] – the metacognitive limitation applies to a human even when perceiving with both eyes and vigilantly attending to relevant input. More broadly, the limitation is applicable to a real-world entity regardless of whether its cognition (if any) is that of a single individual or is based on a neural architecture. The Implications section discusses the application of the limitation to any finite set of humans, as well as to that set's understanding of an Artificial Intelligence (AI) system that would accurately simulate that set. The limitation is also very specific for each entity, as explained in Open Questions.
Our Main Question is: Could human cognition ever fully comprehend the input/output of human cognition? Our investigation of that question uses Donald Knuth's strong computation-related criterion for "understanding", explained after the next paragraph.
Sometimes applying Kurt Gödel's Second Incompleteness Theorem (related to Computer Science) is suggested as a way to answer that kind of question. We review how the requirements for applying that theorem to human cognition are extremely brittle and contradict well-known empirical evidence, and why it follows that such an application cannot succeed to even the smallest extent. We also investigate how the recent COAT Theorem [9] is applicable to humans and indicates a negative answer to the Main Question. (COAT is an acronym for Computationalism-impossible Or "Absolute" Truth.)
A fundamental goal of science is to define principles of interaction among lower-level phenomena that help explain higher-level phenomena; e.g., the classic Atomic Theory [10], Ch. 1, [11], p. 162. Research investigating human cognition can seek to create and understand simulations based on the lower-level principles [12, 13, 14]. Forty-six years ago, computer scientist Donald Knuth argued in an American Scientist article that fully achieving that goal requires expressing the relevant knowledge algorithmically:
Actually a person does not really understand something until after teaching it to a computer, i.e., expressing it as an algorithm … An attempt to formalize things as algorithms leads to a much deeper understanding than if we simply try to comprehend things in the traditional way; [15], emphasis in original, based on [16].
Recently DeepMind's Demis Hassabis argued similarly.^1 (NOTE 1: In the Nature article "Is the brain a good model for machine intelligence?" [77], Hassabis states "To advance AI [Artificial Intelligence], we need to better understand the brain's workings at the algorithmic level … Conversely, from a neuroscience perspective, attempting to distil intelligence into an algorithmic construct may prove to be the best path to understanding some of the enduring mysteries of our minds." That article includes contrasting opinions by Dennis Bray, Rodney Brooks, and Amnon Shashua. A review, by Hassabis and three of his colleagues, of the effect of neuroscience on AI is in [78].) Applying Knuth's criterion^2 (NOTE 2: Notice that Knuth's criterion is not satisfied by a person who develops a Machine Learning program to teach itself something from data. Often Machine Learning programs are developed because the programmers realize they themselves do not fully understand something, such as exactly how they make decisions when classifying photographic images.) to the "understanding" of human cognition requires programming and comprehending algorithms that accurately simulate the input/output of actual human cognition; cf. [12, 14, 17, 18].

A prominent attempt to show that Gödel's well-known theorem is applicable to human cognition was made by Gödel himself [19]. We review why such an attempt is unsuccessful because it fails to satisfy (I) applicability to actual human cognition and (II) definitive and robust rigor. The article also explains how the recent COAT Theorem achieves applicability to actual humans and robust rigor by using a novel conceptualization to recast a result of Gödel, without requiring new results in computability theory. After Background, we review an intriguing relevance of a conjecture of Gödel and explicate two paths toward a sharper account. The first led to Reinhardt's recasting of a result of Gödel, the second to the recasting by the COAT Theorem mentioned above. We end with Summary, Implications, Open Questions, and Conclusions.
Background
As mentioned above, requirements for a definitive answer to the Main Question include (I) applicability to actual human cognition and (II) definitive and robust rigor.
Clarifying requirement (I)
It is recognized that, when actively making decisions, human logical reasoning is not infallible. For decades cognitive scientists have used the prevalent fallibility of human cognition as a tool to investigate human cognitive strategies; e.g., [20, 21, 22, 23, 24]. Human mathematical errors can be notoriously large; in experiments requesting magnitude estimation, a third of participants gave answers roughly 30,000% different from the correct answer [25, 26].

In addition to the recognized lack of human infallibility, Johnson-Laird's research team [24, 27] obtained empirical evidence strongly suggesting a "mental models" account of human deduction. That team showed how such an account is predictably different from the "mental logic" account [21, 22] of using the kind of logical inference rules studied within the computability theory of Computer Science. Johnson-Laird summarizes, in part, as follows:
Human reasoning is not simple, neat, and impeccable. It is not akin to a proof in logic … Reasoning is more a simulation of the world fleshed out with all our relevant knowledge than a formal manipulation of the logical skeletons of sentences PNAS [2], p. 18249.
Although human cognitive abilities are diverse [28], fallibility extends to those with impressive abilities: Alan Turing, Gödel, and Albert Einstein made mathematical errors in their individual research,^3 (NOTE 3: As the main representatives for the fields of computer science, mathematics, and physics, the Time magazine issue on "The Century's Greatest Minds" (Mar. 29, 1999) chose Turing, Gödel, and Einstein, respectively. Here are examples of errors by each in their individual research. Turing's seminal article in computer science [38] had technical errors; see [62], p. 546. A Gödel biography discusses incorrect mathematics by him [79], pp. 235–236. Einstein erroneously calculated the extent to which the sun would bend a star's light due to General Relativity; the fame he received from the 1919 observation of a solar eclipse might have been diminished had it not been for his correction, made during a delay caused by the first World War [80], p. 133.) and serious concern about the fallibility of mathematicians is summarized in [29]. For other observations emphasizing the prevalence of serious fallibility of human cognition, see [9], pp. 211–212.
Clarifying requirement (II)
Neither abundant evidence for a result nor the result being intuitively obvious is sufficient for the result to have definitive rigor. "Definitive rigor" requires theorems about mathematically precise concepts that in principle can be defined entirely in terms of sets [30]. Daily evidence for millennia supported the intuitively obvious claim that the Sun revolves around the Earth. The theorems of Newton's Theory of Gravitation^4 (NOTE 4: Newton's Theory of Gravitation received a rigorous foundation in the 19th century [81], was improved by Einstein's General Relativity, and would benefit from further understanding of additional lower-level phenomena [10], Ch. 7.) show how a Sun-centered model provides a simpler explanation of that high-level phenomenon – as well as supporting Kepler's assertions^5 (NOTE 5: In 1609 Kepler published a table to support his assertion that a planet moves in an elliptical orbit. In 1988 very strong evidence was presented indicating he obtained the table entries from his assertion itself, rather than from astronomical observations. See [82, 83, 84].) about planetary motion – via reduction to the physics of precisely defined lower-level point masses. Another example: Kepler in 1611, inspired by his search for a reductionist explanation of the shape of snow crystals [31], conjectured that the densest ways to stack equal-sized spheres were the intuitively sufficient arrangements ubiquitously used for stacking cannonballs. Despite abundant evidence, Kepler's Conjecture remained unsettled for 394 years, until a recent, celebrated theorem [32, 33]. Intuition also strongly supports the claim: any non-self-intersecting curve within the plane that starts and ends at the same point – like a circle, or the wobbly wall of an amoeba pressed into the nearly planar-thin space within a microscope slide – divides the entire plane into two regions, exactly one of which has finite area.

Yet that claim was not definitive until a surprisingly difficult proof by Camille Jordan in 1893; that proof is "a benchmark of mathematical rigor [that] has continued to our day" [34], p. 882, and the resulting Jordan Curve Theorem has "fundamental importance … to geometry" [35], p. 46. Likewise, the failure for decades to obtain a quick ("polynomial-time") algorithm for solving any of a well-known set of practical problems gives abundant evidence for a conjecture. Paraphrased, it states that not every problem having a corresponding algorithm that can quickly check the correctness of a potential solution also has a corresponding algorithm that can quickly find a correct solution. Some today might view that conjecture – when it is formulated precisely – as being intuitively obvious. Yet the lack of a theorem definitively settling it (one way or the other) has been considered the leading open question within computer science [36], p. 253.
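The verify-versus-find asymmetry behind that conjecture can be illustrated with the well-known subset-sum problem. The sketch below is our own illustration, not from the article: checking a proposed solution takes time linear in its length, while the only obvious way to find one examines exponentially many subsets.

```python
# Illustrating the asymmetry behind the P vs NP question with subset-sum:
# verifying a proposed certificate is fast; the naive search is exponential.
from itertools import combinations

def verify(nums, target, certificate):
    """Check a proposed solution in time linear in its length."""
    return sum(certificate) == target and all(x in nums for x in certificate)

def search(nums, target):
    """Brute force: try all 2**n subsets -- exponential in len(nums)."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = search(nums, 9)        # brute-force search finds [4, 5]
print(verify(nums, 9, cert))  # True: quick to check, slow to find
```

No known algorithm removes the exponential gap in general, and no theorem yet rules such an algorithm out; that is exactly the open question described above.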
Requirement (II) also requires reasonably accurate robustness of the real-world application of a theorem. Fig. 1 gives a standard diagram relating applied and pure math. For example, the calculation of the length of a (perfectly) circular planetary orbit would give reasonably accurate results when applied to a non-circular elliptical orbit whose eccentricity fails to be zero by one part in 10^30. A more general example of the robustness of many mathematical theorems is the use of differentiable functions. Theorems about such functions typically have an underlying assumption that the set of decimal numbers occurring in measurements is infinitely divisible and "topologically complete". Yet such theorems can give reasonably accurate results when applied to the real world, even though the underlying assumption might not be perfectly satisfied because of possible discretization of space on the smallest scale [37].
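As a hedged numerical sketch of such robustness (our own illustration, using Ramanujan's well-known approximation for an ellipse's perimeter, which is not part of the article), the relative error from treating a slightly eccentric orbit as circular shrinks roughly like the square of the eccentricity:

```python
import math

# Compare the circumference of a circle of radius a with the perimeter of a
# nearly circular ellipse, using Ramanujan's approximation for the latter.
def ellipse_perimeter(a, e):
    """Ramanujan's approximation; e is the eccentricity (0 gives a circle)."""
    b = a * math.sqrt(1 - e * e)          # semi-minor axis
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

a = 1.0
circle = 2 * math.pi * a
for e in (1e-3, 1e-6):
    rel = abs(ellipse_perimeter(a, e) - circle) / circle
    print(f"e = {e:g}: relative error {rel:.2e}")  # shrinks like e**2 / 4
```

For an eccentricity of one part in 10^30, the relative error would be far below any measurable precision, which is the sense in which the circular idealization is robust.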
The robust nature of hypotheses of many theorems contrasts sharply with the (perfectly) infallible hypothesis within theorems about mathematical logic obtained by Gödel. We shall use the redundant adjective “(perfectly)” to emphasize the extreme brittleness of such an infallibility hypothesis within such logic. We explain the nature of that brittleness after explicating an argument by Gödel to support a conjecture he made.
Gödel’s Gibbs Conjecture
As we soon explain, a conjecture made by Gödel has intriguing relevance to the Main Question. Gödel asserted his conjecture in his 1951 Gibbs Lecture to the American Mathematical Society.^6 (NOTE 6: Within his published Gibbs Lecture, here is the statement of Gödel's Gibbs Conjecture: "Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified …" [19], p. 310, italics in original. Gödel's lecture makes clear that by a "finite machine" Gödel means a Turing machine or its equivalent [19], pp. 304–305. A "diophantine problem" is an arithmetical problem related to the existence of a solution in integers to one or more polynomial equations having integer-valued coefficients and one or more variables. Earlier, Gödel proved that determining the truth value of the statement mentioned in Fig. 3 is equivalent to a diophantine problem.) Gödel's Gibbs Conjecture, abbreviated GC in this article, is paraphrased in Fig. 2.
There are deep, wellknown, relationships between the concepts of computer programs and arithmetic mentioned in Fig. 2. Given sufficient computer memory, any program – in programming languages like Fortran, C, C++, Java, and Python – is theoretically equivalent to a “Turing machine”, a preciselydefined mathematical notion investigated by Turing in 1936 [38]. The description of any Turing machine can be computationally encoded as a corresponding natural number, and arithmetical statements are sufficient for expressing fundamental properties of computable functions; e.g., see [39], p. 387. (The natural numbers are the nonnegative integers: 0, 1, 2, etc.) An “arithmetical statement” is a statement expressed in “formal” Peano Arithmetic (PA), a logical system whose language syntax is as precisely defined as that of a programming language, and which permits just a single variable type, intended to mean “natural number”. The syntax of PA includes standard symbols for logical operations and for zero, addition, multiplication, successor, and lessthan. The actual possible meanings of the symbols within PA are constrained by the axioms and the “inference rules” of PA. When this article mentions the meaning of a statement of PA without mentioning an interpretation, we intend the meaning according to the standard interpretation of the symbols of PA, so that the symbol + of PA is interpreted as addition of natural numbers, and so forth. Interpreted as such, each single statement of PA is exactly one of true or false, by conventional mathematics.
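As a concrete, deliberately minimal illustration of the machine-as-data idea underlying those relationships, the following sketch – our own, with an illustrative transition-table format, not from the article – interprets a Turing-machine description supplied as ordinary data:

```python
# A minimal Turing-machine interpreter, sketched to illustrate that a
# machine's description is just finite data (and hence codable as a number).
# The transition-table format and example machine are illustrative choices.
def run(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions: (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))   # tape as a sparse mapping position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape)).strip(blank)
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    raise RuntimeError("no halt within step budget")

# Example machine: append one '1' to a block of 1s (a unary successor).
succ = {
    ("start", "1"): ("start", "1", +1),  # scan right over the 1s
    ("start", "_"): ("halt", "1", 0),    # write a final 1, then halt
}
print(run(succ, "111"))  # '1111'
```

Since the dictionary `succ` is a finite syntactic object, it can itself be encoded as a natural number in the manner described above, which is the sense in which machines become inputs for other machines.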
Intriguing Relevance of GC
We now explain that, if one were willing to ignore the failures related to both requirements (I) and (II) mentioned in the Introduction, GC suggests a "no" answer to the Main Question because of Knuth's criterion. On the one hand, if GC's Assertion 1 holds, then it is impossible for a computer program to exist that accurately simulates human cognition, and so (by Knuth's criterion) it is impossible for human cognition to be fully understood. On the other hand, if Assertion 1 fails to hold, then by the last paragraph of Fig. 2 it is impossible for human cognition to correctly master an understanding related to such a simulation program itself.
Importantly, that last paragraph of the paraphrased GC is more relevant to the Main Question than a claim that human cognition cannot correctly master all true arithmetical statements. That weaker claim would hold if the truth status of any specific arithmetical assertion, regardless of its relevance to mastering any simulation program, should elude humans. One candidate is an assertion about the digit occurring at a specific position to the right of the decimal point in the decimal expansion of a particular real number, where that position is a specific huge number [40], p. 149. The position might be taken to be a number concisely denoted in Knuth's arrow notation, which Knuth suggested to scientists is, in terms of magnitude, "so large as to be beyond human comprehension" Science [41], p. 1236.
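For readers unfamiliar with Knuth's arrow notation, here is a small sketch of its definition (our own implementation, not from the article); even tiny arguments already produce explosive growth, and slightly larger ones are astronomically beyond direct computation:

```python
# Knuth's up-arrow notation: a ↑ b is a**b, and the n-arrow operation
# iterates the (n-1)-arrow operation. up(a, n, b) computes a ↑^n b.
def up(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1          # conventional base case: a ↑^n 0 = 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 1, 3))  # 8      (2^3)
print(up(2, 2, 3))  # 16     (2 ↑↑ 3 = 2^(2^2))
print(up(2, 3, 3))  # 65536  (2 ↑↑↑ 3 = 2 ↑↑ 4)
```

Already `up(3, 3, 3)` (3 ↑↑↑ 3) is a power tower of 3s more than seven trillion levels high, far beyond any physical computation; the huge numbers mentioned above are of this character.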
But what about requirements (I) and (II)? To see that Gödel’s argument for GC satisfies neither, we consider his argument.
Gödel’s Argument for GC
Gödel's argument attempted to take his Second Incompleteness Theorem about formal axiomatic systems, summarized in Fig. 3, and apply it to humans. Gödel's rigorous theorem would fit on the right of Fig. 1, and his inherently non-rigorous GC on its left [40], pp. 141–142.
Gödel’s Theorem is widely viewed as having a satisfiable hypothesis, since nearly all mathematicians consider the relatively simple arithmetical axioms and rules of inference of PA itself as providing a formal system having the property of being (perfectly) consistent. That is altogether different from asserting that human mathematical reasoning itself is (perfectly) consistent. To see the distinction, notice analogously that on the one hand a person might assert that the rules of the board game of chess ensure certain perfect properties of the game when those rules are followed, such as that no two pieces ever occupy the same board position after a legal move in the game. On the other hand, it would be altogether different to assert that humans who play chess never make mistakes; cf. [44], p. 466.
Gödel's argument for GC assumed that, when actively making decisions, human logical reasoning is (perfectly) infallible.^7 (NOTE 7: Despite his Gibbs Conjecture, Gödel did recognize the fallibility of actual humans, according to Hao Wang, perhaps his leading interpreter, who knew him personally: "… contrary to the general impression, Gödel affirms the fallibility of our mathematical intuition and investigates the different degrees of clarity and certainty that exist within mathematics" [85], p. 5.) We shall call that assumption the Infallibility Hypothesis about human cognition. Gödel's argument also assumed such reasoning was solely via a formal axiomatic system F (of a kind which includes a system similar to PA) all of whose axioms are true and all of whose rules of inference preserve truth. The argument is that if such a system F were not computably axiomatizable then Assertion 1 within Fig. 2 would be true, so GC would hold. Otherwise, GC would hold for the following reason. Since F would be consistent (by the Infallibility Hypothesis), it would follow from the Second Incompleteness Theorem that the arithmetical statement Con(F) mentioned in Fig. 3 would not be deducible within the system F. Thus, by the "solely via" assumption earlier in this paragraph, Assertion 2 within Fig. 2 would be true. (The important last paragraph of Fig. 2 follows from the fact – known in the 1930s – that an appropriate Turing machine can take as input the mathematical description of any computably axiomatizable formal axiomatic system F and then output the statement Con(F).)
Inapplicability of Gödel’s Theorem to Human Cognition
The following is well-known about the kind of system F that Gödel's argument assumes underlies human cognition: if F fails to be consistent, then there is no statement within F unprovable within F. For it follows from the definition of "inconsistent" that, if F were inconsistent, there would be a statement S within F such that S and its negation each have formal proofs within F. One could then choose any arithmetical statement – such as the PA statement whose standard interpretation is the true assertion that 0 = 0 (or even the PA statement whose standard interpretation is the false assertion that 0 = 1) – and within F prove the chosen statement from S and the negation of S via a well-known logical method; e.g., see [44], p. 467. To see the extremely brittle nature of the situation, suppose human cognition used a system similar to the kind Gödel is assuming, except that a single pair of inconsistent assertions is discovered among the hugely many assertions of that system. Not only would such a discovery falsify the Infallibility Hypothesis, it would imply human cognition is totally incoherent^8 (NOTE 8: The fact that inconsistency of a standard logical system enables deduction of all statements within such a system is called the "principle of explosion". The danger of an inconsistency within software such as databases and Artificial Intelligence programs has stimulated research into many kinds of "paraconsistent" logics [86, 87]. To avoid the principle of explosion's effect, a paraconsistent logic is designed to be weaker than the logical systems relevant to Gödel's Theorem. For example – using boldface English here rather than formal logical notation, and letting a, b, and c denote natural numbers – within some paraconsistent logics it is not possible to deduce the otherwise-logical conclusion (a equals b) from the combination of (a equals c) or (a equals b) with (a is not equal to c).) in arithmetic, deducing both all false statements and all true statements!
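The "well-known logical method" behind that explosion can be made concrete in a four-line derivation (a standard textbook derivation, not one appearing in the article):

```latex
% Ex falso quodlibet: from a statement S and its negation, any statement Q
% follows within standard logic.
\begin{align*}
&1.\quad S          && \text{(first half of the inconsistent pair)}\\
&2.\quad \neg S     && \text{(second half of the pair)}\\
&3.\quad S \lor Q   && \text{(disjunction introduction, from 1)}\\
&4.\quad Q          && \text{(disjunctive syllogism, from 2 and 3)}
\end{align*}
```

Since Q is arbitrary, a single inconsistent pair suffices to derive every statement, which is exactly the brittleness described above.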
Because Gödel's argument crucially depends on Con(F) not being deducible within F, his argument cannot succeed to even the smallest extent unless the Infallibility Hypothesis holds. Also, as indicated in the preceding paragraph, any attempt to apply Gödel's Theorem to human cognition is highly brittle. That Gödel's Theorem is not applicable to human cognition then follows from the abundant empirical evidence contradicting the Infallibility Hypothesis; recall the section Clarifying requirement (I). The problem is not just with Gödel's argument, but with any attempt to apply the incompleteness result within Gödel's Theorem to human cognition.
Such an application also requires the additional assumption that human cognition pursues mathematical truth solely via the kind of inference rules studied within the computability area of Computer Science. That assumption is questionable; recall the quotation from PNAS in our section Clarifying requirement (I). Also, Cognitive Science experiments often reveal a lack of infallibility of human cognition even when the participants within an experiment are given just a few short phrases to consider per trial. There is an infinite variety of arithmetical statements. It follows that a cognitive system with the computational power and inference rules of a system like PA could, for any prespecified natural number n, deduce any selected one of the infinitely many formal theorems whose shortest proof requires more than n applications of the symbolic inference rules. Here n could be Knuth's enormous number mentioned earlier. Such processing exceeds the capacity of human brains; e.g., see [2], p. 18243, and [23], p. 249.
“Human cognition” above means the cognition of a human being. Extending the concept to encompass the collective cognition of any particular finite set of humans does not undermine the essential point made in the preceding paragraph.
We explain two paths toward a sharper account of the Main Question than that given by GC.
One Path: Extending Gödel’s Methodology
We will return to our focus on actual – rather than idealized – humans after this section. In 1986, an implication proved by William N. Reinhardt [45] recast the inclusive-or within Gödel's GC. We state Reinhardt's implication after giving background information that is also needed after this section.
Five years after Gödel's 1931 incompleteness theorems identified a definitive limitation of infallible logic, a theorem by Turing (stated in our next paragraph) identified a definitive limitation of infallible computation [38]. Turing's 1936 limitative theorem used his mathematical definition of the theoretical machine now named after him, and he gave an argument claiming that the definition rigorously captures the intuitive concept of computation. That claim, now called the Church-Turing Thesis, is widely accepted among mathematicians and computer scientists. His 1936 article implies the existence of an algorithm that takes as input the description of any Turing machine M (with any stated input for it) and produces as output a statement of PA whose meaning is: M run with its stated input halts. (To "halt" means not to run forever.) Also, Gödel showed how to computationally encode as a natural number any syntactically defined finite concept (like a specific statement of PA, a specific formal proof within PA, or a specific Turing machine), with decoding also being computational. Henceforth, "coding" and "code" refer to any preselected method of such coding.
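Such coding can be sketched concretely with the classical Cantor pairing function – one possible scheme among many; the article does not commit to a particular one:

```python
import math

# Cantor pairing: a computable bijection between pairs of naturals and
# naturals, with a computable inverse. Iterating it codes finite tuples.
def pair(x, y):
    """Bijection from N x N onto N."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Inverse of pair."""
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

# Any finite syntactic object (e.g. a Turing machine's transition table,
# flattened to a tuple of numbers) can be coded by iterating pair().
code = pair(3, pair(1, 4))
print(unpair(code))  # (3, 19), where 19 == pair(1, 4)
```

Because both directions are computable, a Turing machine can operate on coded descriptions of other machines or of PA statements, which is all the later sections require of "coding".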
Turing's theorem, called the Unsolvability of the Halting Problem, indicates: It is impossible for a Turing machine P to take as input the code for any given Turing machine M (together with input for M) and infallibly produce corresponding output that is the code for the correct yes-no answer to the question "Would M halt when run with the given input for M?".
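The diagonal construction behind Turing's theorem can be sketched in a few lines (our own illustrative sketch; `halts` stands for a hypothetical decider, which the theorem shows cannot actually exist):

```python
# Turing's diagonal argument as executable pseudocode: given any claimed
# halting decider, build the self-referential program that defeats it.
def make_contrarian(halts):
    """Given a claimed decider halts(program, arg), build its refuter."""
    def contrarian(program):
        if halts(program, program):  # decider claims program(program) halts
            while True:              # ...so do the opposite: loop forever
                pass
        return "halted"              # decider claims it loops, so halt now
    return contrarian

# Every candidate decider is wrong on its own contrarian. For example, the
# (obviously incorrect) decider that always answers "halts":
always_yes = lambda program, arg: True
c = make_contrarian(always_yes)
# c(c) would loop forever, yet always_yes(c, c) claimed it halts:
print(always_yes(c, c))  # True -- precisely the claim the contrarian falsifies
```

The same construction refutes any candidate decider, not just `always_yes`; that universality is the content of the Unsolvability of the Halting Problem.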
The following result about arithmetical truth is conventionally called Tarski's Undefinability Theorem:^9 (NOTE 9: Gödel and Turing used different conceptualizations, and there are additional related conceptualizations such as those of Church, Kleene, Markov, Post, and Tarski. Often a result obtained using one such conceptualization can be obtained using an alternative one. Tarski made major discoveries related to the interpretation of formal languages. By convention, he is associated with what this article calls "Tarski's Undefinability Theorem"; e.g., see [39], pp. 354, 390. Gödel actually discovered that result in 1930 (a few years before Tarski) and mentioned it in a 1931 letter he sent to Zermelo [88], p. 90. Our section title One Path: Extending Gödel's Methodology can be viewed as referring to that Gödel discovery, and also as referring to Gödel's 1933 methodology of using a symbol like Bw [89].) No Turing machine has the property that the set of single natural number inputs for which it halts is exactly the set of codes of true statements of PA. Reinhardt's implication is in Fig. 4.
It is correct for the following reason. Suppose its hypothesis Bw ⊆ Tr held while its conclusion did not. Negating the conclusion's inclusive-or would give both that Bw is the set of inputs for which some Turing machine halts and that there is no n with n ∈ Tr and n ∉ Bw, i.e., Tr ⊆ Bw. Combining Tr ⊆ Bw with Bw ⊆ Tr would produce Bw = Tr; hence Tr would be the halting set of a Turing machine, contradicting Tarski's Undefinability Theorem.
Thus the conclusion of Reinhardt's implication holds for any subset Bw of Tr. Reinhardt wanted to interpret Bw as symbolizing the set of codes of arithmetical statements that are 'provable by the human mind' [45]; henceforth, we put vague terms in single quotes. Taking Bw to symbolize that well-known vague concept,^10 (NOTE 10: The concept of proof, within the context of a formal axiomatic system as discussed in Fig. 3, can be formulated in a rigorous way. But it is notoriously vague outside such a context (such as in Reinhardt's phrase 'provable by the human mind'). Outside such a context, perhaps the only clearly stated, desirable consequence of an arithmetical statement being 'provable' is that the arithmetical statement would be true (according to the standard interpretation of the symbols of PA). In 1933, Gödel himself used a symbol like Bw and in the 1930s explained that such a vague 'provable' concept was "not accessible for mathematical treatment" [89, 90]. By that he presumably meant the concept could not be formulated in a way that would satisfy the standard criterion for rigor [30]. Use of 'provable' outside the context of a formal axiomatic system can quickly lead to incoherent paradox. That is demonstrated (using a "diagonal argument") on pp. 276–277 of [50], which concludes as follows: "This paradox arises once we are willing to accept the idea that it is actually meaningful to talk about 'the method mathematicians use to correctly decide problems'. Like 'truth', this phrase simply cannot be given an exact meaning which mirrors in every way its informal [i.e., non-rigorous] use, and assuming that it can leads one directly into the kind of incoherence familiar from the liar paradox." The liar paradox occurs, for instance, when one realizes that the following English phrase, which uses self-reference in a non-rigorous way, is true if and only if it is false: "This phrase is false".) the inclusive-or in the second and third lines of his implication resembles the inclusive-or part of GC, in the same order. We return to Reinhardt's implication in our Summary.
Another Path: Extending Turing’s Methodology
Over a half-dozen conceptualizations related to computability have been introduced; see Appendix NOTE 9. We now describe an extension of Turing's conceptualization that, as we explain, eliminates any need for the Infallibility Hypothesis.
The Conceptualization
The following novel multipart conceptualization assumes the widely accepted Church-Turing Thesis. First, notice that the Main Question would be settled (with a "no" answer) if the entity of 'human cognition' fails to satisfy this finiteness property: each single input and output can be coded using finitely many bits. That is because an entity failing to satisfy that property^11 (NOTE 11: Our approach of showing how the Main Question is easily answered if 'human cognition' fails to satisfy the finiteness property suffices for handling that case. But here is broader background. First, some question whether it is possible for any algorithms to fully capture some internal brain processes essential for the I/O of 'human cognition'; see the opinions of Bray and Brooks in [77]. Second, certain "neuromorphic" chips for building some massively parallel systems use energy-efficient analog neuron-simulators rather than simulating neurons via algorithms in the Turing-machine sense [91]. Third, some view the human brain as a continuous dynamical system [92, 93, 94]. (Accurately modeling such a system can require infinitely many bits to represent some internal variable values.) Fourth, that dynamical-systems view has been criticized [52, 95]. Fifth, that view might not be incompatible with requiring that each single input and output be specifiable using finitely many bits; see [96], p. 106. (Also, for each input and output that will be most important to the COAT Theorem's assertion (2) – presented in Fig. 5 – there exists a simple known encoding using finitely many bits.)) cannot be understood (according to Knuth's criterion), since it cannot be accurately simulated by a Turing machine [46], p. 323. Thus we can restrict our further consideration of real-world entities to those satisfying that finiteness property. Next, define an "agent" in pure mathematics simply as "a function from a subset of the set of natural numbers to the set of natural numbers". Natural numbers are sufficient for coding not just Turing machines and arithmetical statements but (by the finiteness property) each single input or output of a real-world entity. Consider any real-world entity making decisions about inputs, such as a chess player deciding on the next move, where – like a chess player in an official match – it is permitted no outside help.^12 (NOTE 12: The entity examined during a Turing Test is also permitted no outside help. Turing devised that well-known test – calling it the "imitation game" – as an entirely I/O-based approach for investigating whether an entity could 'think' [97]. Whether passing such a test would be a sufficient demonstration is controversial [98].) The single entity could be a set of one or more real-world humans.
Let “I/O” abbreviate “input/output, without regard to the timing of inputs and outputs”, except for stipulating that – as in chess – the entity’s first output for a given input counts as the entity’s official output for that input [9], pp. 461–464. For that I/O concept, the (external) I/O by even a huge number of interacting internal algorithmic processes is known to be achievable by a single corresponding sequential algorithmic process [47]. The above agent concept can model a real-world entity’s (coded) I/O behavior without requiring a mathematical definition of ‘mind’ or assumptions about the entity like any of these: it satisfies the Infallibility Hypothesis, it makes decisions solely via attempts to apply formal inference rules, it uses a ‘brain’ having unchanging physical architecture while making a decision, or it has ‘conscious awareness’. Such a real-world entity can make a mistake without being logically required to make further mistakes, since such an entity need not use formal inference rules like those studied within the computability area of Computer Science. Brittleness is not required. That is a key advantage of using a Turing-machine approach in our conceptualization, rather than a formal axiomatic-system approach (although the two approaches are often viewed as being equivalent). We also emphasize that a real-world entity modeled by a mathematical agent could output opposite assertions about exactly the same Turing machine when given two separate inputs that are different codes for the machine; a typical Turing machine has more than a single code [46], p. 369.
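The last point can be made concrete. The sketch below is our own illustration (the dictionary, the two codes, and the 1/0 output convention are all hypothetical): an agent is just a partial function from natural numbers to natural numbers, and nothing forces its values at two different codes of one machine to agree.

```python
# Hedged illustration: an agent as a partial function on natural numbers,
# represented here by a dict. Suppose (hypothetically) that 12 and 97 are
# two distinct codes for the very same Turing machine, and that outputs
# 1 and 0 code the assertions "halts" and "does not halt".

agent = {
    12: 1,   # given code 12, the entity asserts the machine halts
    97: 0,   # given code 97 (same machine!), it asserts the opposite
    5:  1,   # defined on some other inputs ...
}            # ... and simply undefined elsewhere: a partial function

print(agent[12] != agent[97])  # True: opposite outputs, no brittleness
print(44 in agent)             # False: the agent need not answer everything
```

No logical collapse follows from the disagreement, precisely because the agent is a bare function rather than a deductively closed formal system.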
The Strategy
Since the above conceptualization avoids requiring the Infallibility Hypothesis, it might help answer the Main Question if it supports a theorem analogous to GC. But there is an impasse, serious enough to consider abandoning the quest for such a theorem, as explained in the rest of this paragraph. A goal is to apply the agent mentioned in such a theorem to model a real-world entity’s I/O without requiring that the entity use consistent reasoning. A straightforward analogue to GC would be for the second half of its inclusive-or to assert the existence of a particular true (coded) arithmetical statement input to the agent for which the agent could not correctly “decide” its truth value by outputting (the code for) true. But an agent not required to be logically consistent might “decide” that each (coded) arithmetical statement (including each such false statement) is true. Also, the first half of the inclusive-or in a straightforward analogue to GC would claim the impossibility of accurately simulating the agent computationally. But there is a simple counterexample: a computational agent which outputs the code for true given any single natural number input. Thus such a straightforward strategy fails.
How to overcome that impasse? In essence, a successful strategy for achieving the desired kind of theorem is to ask an entity to output one of two opposite assertions, where (unlike “true” or “false”) each assertion mentions the same Turing machine, using the same code for that machine. As explained in The Conceptualization, it is the first output for a given input that counts, and one must emphasize the specific natural number code for a Turing machine when in a context that avoids assuming an agent is consistent. The following additional explanation about coding is needed before stating the resulting theorem. It is straightforward to define a computational way to obtain, from a number encoding a Turing machine (and its input), three additional numbers. They are: the code of a statement of PA asserting that the Turing machine with that specific code halts; the code of the negation of that statement; and the code of the “binding” of the pair of codes just mentioned, which we call the “specific halting problem”, asking whether or not the Turing machine with that specific code halts. By conventional mathematics, exactly one of that statement and its negation is a true statement. For simplicity, henceforth we often avoid explicit mention of the coding by natural numbers when mentioning a specific halting problem or arithmetical statement.
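The computational passage from one code to three codes can be sketched with a standard pairing function. The encodings below (string-based codes for the two statements, and Cantor pairing for the “binding”) are our own illustrative choices, not the article’s:

```python
# Hedged sketch of the coding step: from a number m coding a Turing
# machine (with its input), compute codes for (a) a PA-style statement
# asserting halting, (b) its negation, and (c) the "binding" of the pair.
# Cantor's pairing function is a standard reversible pairing of naturals.

def cantor_pair(a: int, b: int) -> int:
    return (a + b) * (a + b + 1) // 2 + b

def statement_code(m: int) -> int:
    # Illustrative: code the statement "machine m halts" via its text.
    return int.from_bytes(f"Halts({m})".encode(), "big")

def negation_code(m: int) -> int:
    return int.from_bytes(f"~Halts({m})".encode(), "big")

def binding_code(m: int) -> int:
    # The "specific halting problem" for m binds the opposite pair.
    return cantor_pair(statement_code(m), negation_code(m))

m = 12345
codes = (statement_code(m), negation_code(m), binding_code(m))
print(len(set(codes)) == 3)  # True: three distinct natural numbers
```

Each step is a total computable function of m, which is all the strategy requires.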
The COAT Theorem
Fig. 5 states the resulting recent COAT Theorem, whose applicability to actual humans is further explained in our next section. The above conceptualization and “binding” strategy enable its proof [9] to follow easily from Turing’s Unsolvability of the Halting Problem, which itself is easily proved when expressed in terms of computer programs; e.g., [48]. Because it is achieved by recasting Gödel’s Second Incompleteness Theorem using a result of Turing, without requiring new conventional theorems in computability theory, one might call it the “Gödel–Turing COAT Theorem”.
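Turing’s Unsolvability of the Halting Problem is indeed easy to see in programming terms. The following sketch is our own phrasing, with programs as Python callables: from any candidate halting decider it builds a “diagonal” program on which that decider must be wrong.

```python
# Hedged sketch of Turing's diagonal argument with programs as callables.

def make_diagonal(halts):
    """Given a claimed total decider halts(p) -> bool, return a program
    that does the opposite of whatever the decider predicts for it."""
    def diag():
        if halts(diag):
            while True:   # predicted to halt -> loop forever
                pass
        return "halted"   # predicted to loop -> halt immediately
    return diag

def decider_fails_on_diagonal(halts) -> bool:
    diag = make_diagonal(halts)
    if halts(diag):
        # Running diag would loop forever here, contradicting the
        # prediction; we report the contradiction without running it.
        return True
    # Safe to run: diag halts immediately, contradicting the prediction.
    return diag() == "halted"

# Every candidate decider fails on its own diagonal program:
print(decider_fails_on_diagonal(lambda p: True))   # True
print(decider_fails_on_diagonal(lambda p: False))  # True
```

The two trivial deciders shown are only examples; the same construction defeats any computable decider whatsoever.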
But the discussion of a 1947 Turing quote in our upcoming Robustness section indicates a past lack of awareness – by Gödel, Turing, and other mathematicians and computability specialists – of the possibility of such a theorem. Analogous to the importance of the last paragraph of GC, the COAT Theorem’s last paragraph is related to self-comprehensibility. The treatment in [9] does not require an entity to support a decision with a proof attempt, unlike [44, 49]; answers to questions about the conceptualization appear in those articles and corresponding electronic supplements. Here are three such questions. Why is successfully proving – alternatively, disproving – any particular mathematical conjecture^{13} [endnote: For instance, the conjecture could be at any level of Kleene’s arithmetical hierarchy [54], p. 49; it need not be at the low level that contains the halting problems themselves. That is explained in Section 9 of the online Supplementary Material for [44].] according to the usual requirement for mathematical rigor [30] equivalent to successfully solving a halting problem^{14} [endnote: In the context of the standard criterion for mathematical rigor [30], the broad importance of halting problems is observed and explained in the 2014 article [44], p. 446. The breadth of such problems in that context might not be widely known, even among experts. For example, an article by other researchers in 2016 gives a similar observation and explanation, and suggests that the observation is contrary to the intuition of mathematicians and computer scientists [63], p. 298.] directly related to the conjecture? Why are “reflection principles” not relevant here? How can non-rigorous use of self-reference be highly unreliable? Respectively, see [44] p. 451, [50] pp. 276–277, and Russell’s paradox [39] p. 15.
It follows from our discussion of an apparent impasse that the following is not impossible for an agent satisfying the COAT Theorem’s assertion (2): Given as input just the true arithmetical statement mentioned in assertion (2) for which the agent cannot demonstrate full mastery, the agent would output (the code for) true.
Applicability of COAT Theorem
Recall that our section The Conceptualization justifies restricting consideration in the rest of the article to real-world entities satisfying the finiteness property. As also explained in that section, our model stipulates an entity’s first output for a given input to be that entity’s (sole) official output for that input. For generality, we use the word “entity” rather than “thing”, since the latter often refers just to inanimate objects [51]. When the COAT Theorem’s agent – which is merely required to be a function from a subset of the natural numbers to the set of natural numbers – is applied to a real-world entity, that entity E need not satisfy any infallibility property.
Here are illustrative examples. First, E could be the empty set or another entity that never produces output (a particular rock or a comatose human might be examples). For such an entity the COAT Theorem’s assertion (2) holds, and its assertion (1) fails to hold because the entity’s I/O can be simulated by a Turing machine producing no output. Second, E could be a particular human who incorrectly responds “false” when asked to give the truth value of a true arithmetical statement. The function corresponding to E would map the code for that arithmetical statement to the numerical code, say 0, for “false”. There need be no brittleness because there is no restriction on any other value of that same function, if indeed the function is defined for other numeric codes. For example, it is possible that the same E – and the corresponding function – outputs only correct true/false responses for many other individual arithmetical statements. Third, E could be a particular (nonempty) finite set of humans who work together to correctly give the truth value of a simple arithmetical statement for a single choice of a triple of natural numbers whose numerals have ten digits each, but who make an error for a single choice of a triple whose numerals have a thousand digits each. Fourth, E could be a computational AI system that is probabilistic, using pseudorandom numbers; then assertion (1) of the COAT Theorem fails to hold; thus assertion (2) holds, and the probability is zero that the AI system would produce correct output for the input mentioned in assertion (2), because of the impossibility of its doing so. As a prelude to our fifth example, we note that according to cognitive neuroscientist Stanislas Dehaene “Some degree of chance may enter in a voluntary [human] choice, but this is not an essential feature” [11], p. 264, and according to Chris Eliasmith “Information processing in the brain … can be equally well described as continuous and noisy, or discrete” [52], pp. 423–424, emphasis in original.
Fifth, if there is a human whose mathematical cognition depends so crucially on genuine randomness of physical neural, synaptic, and/or neurotransmitter processing that the resulting I/O – for which the finiteness property holds – cannot possibly be accurately simulated using any sophisticated form of computational pseudorandomness, then E could be such a human and assertion (1) would hold for E. Sixth (mentioned again in the next section), E could be a particular human who gives correct answers to many mathematical problems but who, after much deliberation, incorrectly asserts that a certain complicated uncolored map lying on a flat surface would require more than four colors to enable adjacent map regions to have different colors. Any of the above examples that mentions one or more humans could be replaced by similar examples phrased in terms of other entities, including physically distant entities on which scientific experiments might never be feasible.
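The fourth example above turns on the fact that a pseudorandom system is still computational: rerunning it with the same seed reproduces its I/O exactly, so a Turing machine can simulate it. A minimal sketch (the toy “AI” below is hypothetical):

```python
import random

# Hedged illustration: a "probabilistic" answerer driven by a pseudorandom
# generator. With the seed fixed, its entire I/O is exactly reproducible,
# so a conventional program can simulate it and assertion (1) of the
# COAT Theorem cannot hold for it.

def toy_ai(seed: int, n_queries: int) -> list[str]:
    rng = random.Random(seed)
    return [rng.choice(["true", "false"]) for _ in range(n_queries)]

run1 = toy_ai(seed=42, n_queries=5)
run2 = toy_ai(seed=42, n_queries=5)
print(run1 == run2)  # True: same seed, identical behavior -> simulable
```

Only genuine (non-computational) randomness, as in the fifth example, could escape this reproducibility.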
Robustness
Gödel’s handwritten 1951 Gibbs Lecture was edited and published after his 1978 death, with Stanford mathematician Solomon Feferman as Editor-in-Chief [19]. Feferman’s Rolf Schock Prize lecture to the Swedish Academy of Sciences in 2003, published in [40], emphasized serious concerns about GC. The recasting by the COAT Theorem avoids all such concerns, including: using a “highly idealized concept of the human mind” [emphasis as in Feferman’s published lecture], containing an assertion that the human mind “infinitely surpasses” Turing machines, assuming human cognition pursues mathematical truth solely by applying inference rules, and assuming human cognition uses “evident” axioms that are not prespecified.
Concerning idealizations, Turing (the founder of AI) asserted in 1947 that
… if a machine is expected to be infallible, it cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility [53], emphasis added.
The most notable additional theorems in 1947 related to Gödel’s Second Incompleteness Theorem and Turing’s Unsolvability of the Halting Problem were published by Kleene in 1943 and 1936 [54, 55]. Like Turing, Feferman had expert familiarity with such notable theorems; see [56], Section 2.3. Neither his 2003 lecture nor Turing’s 1947 lecture suggested that any such theorem could overcome the “highly idealized” or “infallibility” concern. The same can be said for books on computability theory [57, 58, 59, 60], and for numerous articles and books expressing concerns about how applying relevant theorems to actual humans requires assuming infallibility of the latter.^{15} [endnote: Seven separate quotes – by Davis, Dennett, Lutz, McDermott, Minsky, Barrow, and Russell and Norvig – on pp. 211–212 of [9] question the basic assumption of the consistency of ‘human cognition’ that is crucial for applying Gödel’s Theorem to ‘human cognition’; also see Note 8. Feferman’s lecture expressed that same concern. Much of the published concern about inappropriate idealizations was stimulated by assumptions – most notably by J. R. Lucas and Roger Penrose – even stronger than that basic consistency assumption. That includes the seven quotes mentioned above, the view that “the concept of an idealized human mind” is “problematic” in [99], p. 154, and the analysis of different ways to categorize idealizations of human cognition in [100]. (Among the strongest of such idealization assumptions is that an actual human correctly ‘knows’ that her/his reasoning is consistent; alternatively, correctly ‘knows’ that her/his reasoning about halting problems is correct.) Results by Reinhardt, and in 2016 by Peter Koellner, show that even if – solely as an investigative technique – one were to assume relevant idealization assumptions related to the claims of Lucas and Penrose, concerns about such claims can be demonstrated using logic itself [99].]
Some claims by Lucas and Penrose are discussed briefly in the current author’s [49] pp. 591593; also see [50]. The focus of the current article is how all idealization and infallibility assumptions about ‘human cognition’ can be avoided in a result similar to GC, so we only briefly mention stronger idealization assumptions. In dozens of such insightful commentaries, including Feferman’s about GC, there is no discussion about whether the significant concern in the commentaries over infallibility assumptions about human cognition is avoidable using any theorem applicable to an entity making (as Turing put it) “no pretence at infallibility”.
That aligns with the absence of awareness, among specialists in computability theory, of even the possibility of such a theorem. For instance, there appears to be no consideration of a question like: Could the need for infallibility assumptions in a particular such claim be removed by replacing the statement involved with a true arithmetical statement like the one related to assertion (2) of the COAT Theorem? We defer further explanation of the latter arithmetical statement until the upcoming section Open Questions.
Perhaps the most relevant generalization of Gödel’s Theorem is Kleene’s Theorem VIII [57], p. 303, which Feferman emphasized in 1995 [56], Section 2.3. When stated in terms of Turing machines, Kleene’s Theorem contains an implication whose application requires the following “soundness” assumption: Each conclusion reached within a given system, about the non-halting of a Turing machine, is a true arithmetical statement supported by a correct justification, where that justification can be checked computationally by a single such checking program for the entire system.^{16} [endnote: Although in general soundness is an even stronger assumption than consistency, it is known (see [56], Section 2.8) that the consistency assumption on the system mentioned in Fig. 3 is equivalent to the assumption that such a system satisfies soundness for its results about the non-halting of Turing machines. Also, each statement asserting the non-halting of a Turing machine is expressible in PA’s formal language, hence also in the formal language of the system mentioned in Fig. 3.] The system in Kleene’s Theorem need not include PA, need only relate to the non-halting of Turing machines, and the checking program need not relate to the specific details of PA. But the system of Kleene’s Theorem lacks applicability to ‘human cognition’. Such applicability would require the assumption that the I/O of ‘human cognition’ is correct whenever asserting the truth of what mathematicians call a Π₁ statement, which is a claim about the non-halting of a particular Turing machine. That infallibility assumption – “soundness for Π₁ statements” – conflicts with the empirical evidence about the fallibility of ‘human cognition’; recall section Clarifying requirement (I). Such an assumption contrasts with the lack of any infallibility assumption on the agent mentioned in the COAT Theorem.
It is well known that correctly deciding whether or not a particular Π₁ statement is true can be highly challenging. One such statement is the unsolved Riemann Hypothesis, conjectured in 1859, which “has been the Holy Grail of mathematics for a century and a half” [61]. Settling it earns a million-dollar award from the Clay Mathematics Institute [36]. Others include the unsolved Goldbach Conjecture – over 275 years old – as well as Fermat’s Last Theorem (FLT) and the Four Color Theorem (4CT) (“four colors always suffice”, related to the sixth example immediately preceding the current section). That each of those three is equivalent to a corresponding Π₁ statement is easily seen by considering, for each, whether a corresponding program that exhaustively seeks a counterexample would fail to halt [44], p. 443. Turing himself ran programs that systematically sought a counterexample to the Riemann Hypothesis [62], pp. 408, 409, 411. Turing presumably knew that the truth of that conjecture is equivalent to the non-halting of such a corresponding program; also see [63], Section 5. FLT and the 4CT were perhaps the most widely publicized new theorems of the past fifty years. Their full proofs greatly exceed their published proofs in [64, 65, 66]; see the related observation in [67], p. 359. The assertion of FLT – based on the claimed existence of a proof too long to fit into the margin of a book – was made in 1637, yet its truth was not accepted until a proof by Andrew Wiles 358 years later [68]. The 4CT was asserted in 1879 and 1880 in separate journal articles (by different mathematicians), whose corresponding published proofs were each separately shown, more than a decade after publication, to be incorrect [69]. There is now a (computer-assisted) proof of the 4CT satisfying the kind of computably-checkable criterion that Kleene’s Theorem requires [70].
That contrasts with the current situation regarding the proof of FLT; see the discussion of that proof, and of PA and Zermelo–Fraenkel set theory (ZF), in [67]. Books like [39, 42] give solid introductions to the important example ZF of the kind of system described in Fig. 3; brief information about ZF is in the present author’s [44], pp. 445–447.
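The counterexample-search equivalence described above can be made concrete for the Goldbach Conjecture: the conjecture is true exactly when a program that exhaustively seeks a counterexample never halts. A sketch (our own code, given a bound here only so the example terminates):

```python
# Hedged sketch: Goldbach's Conjecture holds iff an unbounded version of
# this search never finds (i.e., never halts on) a counterexample. The
# `limit` parameter is added only so the illustration can be run.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_counterexample(limit: int):
    # Return the first even n >= 4 up to `limit` that is NOT a sum of
    # two primes; return None if no counterexample exists in range.
    for n in range(4, limit + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
            return n
    return None

print(goldbach_counterexample(2000))  # None: no counterexample below 2000
```

Analogous exhaustive searches exist for FLT and the 4CT, which is what places all three conjectures among the non-halting claims Kleene’s Theorem addresses.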
The examples given above are just four of the infinitely many conjectures of the kind Kleene’s Theorem is about. The COAT Theorem mentions just a single such conjecture, where the conjecture is related to the agent mentioned in that theorem; further clarification is in our Open Questions section. When applying the COAT Theorem to a real-world entity, any other possible attempts by the entity to respond to inputs are irrelevant.^{17} [endnote: Also, it is well known in computability theory that computable checks are only important when there are multiple decisions (or proofs) to check. A computable check on the correctness of any single correct decision is trivially (and correctly, but vacuously) achieved by a computer program that ignores its input and prints “Correct!”.]
Summary
Reinhardt’s recasting has a rigorous proof and resembles the non-rigorous inclusive-or part of GC. But, when interpreted in the real world (recall Fig. 1), the interpretation ‘provable by the human mind’ [45] for its symbol Bw is recognized as being fatally flawed; see Appendix NOTE 10. Thus Reinhardt’s implication is not successful in the context of this article.
The recasting by the COAT Theorem satisfies requirement (I) mentioned in the Introduction. That is because, as illustrated in the section Applicability of COAT Theorem, its agent concept is not required to satisfy the Infallibility Hypothesis or any correctness property, and is not required to use the kind of inference rules studied within the computability area of Computer Science. (A general result, it is also compatible with such unsupported assumptions about ‘human cognition’, and in that context it makes rigorous the version of Gödel’s non-rigorous Gibbs Conjecture paraphrased as GC in Fig. 2.) The COAT Theorem also satisfies requirement (II), because its statement uses conventional mathematical concepts (in a novel way), its proof uses known mathematical results (in a novel way), and its robustness is illustrated in the Applicability of COAT Theorem section.
The COAT Theorem bridges a gap between Cognitive Science and Computer Science. Mathematical logic books that laid a foundation for computability explain how the concept of rigor within mathematics was sharply enhanced in reaction to discoveries in the early twentieth century about human fallibility, particularly mistakes in logic and set theory by leading mathematicians. Chapter III of Kleene’s book [57] lucidly describes that history. But the rest of his book (like other such books) is focused on idealized mathematics, without evident concern for providing a rigorous result that could be applied to not-necessarily-infallible humans.^{18} [endnote: Here is a sketch of relevant history. In 1854, George Boole published a logic book, The Laws of Thought, showing how symbolic manipulations similar to those of current high-school algebra could be used within logic. In 1879, Gottlob Frege published another logic book, part of whose German title translated to English could be paraphrased as A Formal Language for Pure Thought Modeled on Arithmetic. Although both titles mention [human] thought, those authors lacked knowledge of the more recent cognitive science research mentioned in our section Clarifying requirement (I). Bertrand Russell discovered a devastating inconsistency (not a mere typo) in the 1903 preprint of the second edition of that Frege book. That inconsistency, now known as Russell’s Paradox, is mentioned shortly before our section Applicability of COAT Theorem. At least four other leading mathematicians in the early twentieth century also separately proposed serious systems of logic having devastating inconsistency; see the Davis quote on pp. 211–212 of [9]. That is why our abstract begins with “A century ago, discoveries of a serious kind of logical error made separately by several leading mathematicians led to acceptance of a sharply enhanced standard for rigor …”.]
That “enhanced standard for rigor”, based in set theory [30], would be used on the right side of Fig. 1. The relation between applied math and pure math depicted in that figure can be viewed as a highly successful revolution in the early twentieth century, similar in some ways to – but less widely known than – revolutions within the physical sciences in the late nineteenth and early twentieth century [101]. In the decades after Frege, mathematicians who carried out research in logic – which led to the modern foundation for both mathematics and computer science – had two primary interests. First, protecting the field of mathematics from collapse caused by discovery at some time of an inconsistency within the field. Second, proving properties related to systems like the one in Fig. 3, including properties of various ways of interpreting the symbols in such a system. Such mathematicians seem not to have been concerned with obtaining a rigorous result that could be applied to actual human cognition (including the cognition of the leading mathematicians who had themselves committed serious errors of inconsistent reasoning). Also see Note 8. (Recall the analogous distinction made between chess and chess players in our section Intriguing Relevance of GC.) Had such a mathematician had that concern – along with concern for obtaining a perspicuous result that scientists could apply to ‘human cognition’ without having to make highly questionable assumptions about humans – he would have needed to aim for a theorem that achieves such applicability. (Our Robustness section discusses the absence of awareness of even the possibility of such a result among specialists in computability theory.) Doing so would also have required him to answer relevant questions. (See the questions answered in [44], including its section Applying the Comprehensibility Theorem to Real-World Agents and its online supplementary material.)
When expressing concern about infallibility assumptions on human cognition in claims like GC, mathematicians and computability specialists – including Turing and Feferman – have been correct in not suggesting any (previous) theorem provides an analogue to GC that avoids dependence on such assumptions. Conventional computability theorems lack the applicability of the conceptualization supporting the recasting by the COAT Theorem, and the rigorous version of GC obtained via its “binding” strategy and “mastery” concept.
Implications
Concerning the COAT Theorem’s assertion (1), notice that if everything within an entity (see Appendix NOTE 11), including the timings of internal events, can be accurately simulated by a Turing machine, then so can its I/O. It follows – using the contrapositive^{19} [endnote: The implication “p implies q” can be written “if p, then q”; that implication is logically equivalent to its contrapositive “not q implies not p”. The following two implications are not logically equivalent: “p implies q” and its converse “q implies p”.] of the just-stated implication – that if it is impossible for any Turing machine to accurately simulate the entity’s I/O, then it is impossible for any Turing machine to accurately simulate the entity itself.^{20} [endnote: At this point we can make these observations. First, the COAT Theorem can be applied to an agent that is the (external) I/O function of a real-world entity composed of one or more humans and one or more computer programs. The computer programs might be used, for instance, to handle coding related to natural numbers, and to assist with mathematical proofs [29] when such proofs are possible, to help ensure many of the entity’s output assertions are correct. Second, if assertion (1) holds for such an agent, then the ‘human cognition’ part of the entity could not be accurately simulated computationally, since the other part(s) of the entity could be accurately simulated computationally. Third, if assertion (1) does not hold, then assertion (2) holds, so even using the computer programs (and correct hardware) ‘human cognition’ cannot give the correct output for the input mentioned in that assertion. (The converse need not hold; analogously, a human chess player could use a memorized algorithm, a fact independent of whether ‘human cognition’ can be accurately simulated computationally.)]
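The contrapositive-versus-converse distinction used above can be checked mechanically with a truth table; the small self-contained check below is our own illustration:

```python
from itertools import product

# Verify over all truth assignments: (p -> q) is equivalent to its
# contrapositive (not q -> not p), but NOT to its converse (q -> p).

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

contrapositive_ok = all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([True, False], repeat=2)
)
converse_ok = all(
    implies(p, q) == implies(q, p)
    for p, q in product([True, False], repeat=2)
)
print(contrapositive_ok)  # True: equivalent under every assignment
print(converse_ok)        # False: fails when p and q differ
```

This is exactly the equivalence the argument relies on when passing from “simulable internals imply simulable I/O” to “unsimulable I/O implies unsimulable internals”.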
Due to the universal nature of the Main Question, a single situation can justify a “no” answer. To see how the COAT Theorem indicates such an answer, let E be any real-world entity and let the COAT Theorem’s agent be E’s I/O function. On the one hand, if assertion (1) holds, then by the preceding paragraph (and Knuth’s criterion) it is impossible for E to be fully comprehensible by any entity. Among other possibilities, relevant here is a possible assertion that genuine randomness within neural circuitry – impossible to simulate even using extremely sophisticated computational pseudorandomness – is essential for human cognition. Some, such as Dehaene and Eliasmith, would question that assertion, as mentioned in the discussion preceding the fifth example in section Applicability of COAT Theorem. On the other hand, if assertion (1) does not hold, then E is not fully comprehensible by E because, by the COAT Theorem’s last paragraph, there is a specific true arithmetical statement A related to E for which E cannot have full mastery. In either case E cannot fully comprehend E.^{21} [endnote: The following two comments are relevant here. First, whenever that particular application of E is made, there is of course an implicit assumption that at least one actual human exists; otherwise the Main Question is unimportant. Second, for an interesting treatment of the Main Question, the entity E1 that is to be understood should have a high level of ‘human cognition’; as a simple extreme contrast, it might be easy to build and understand a computer simulation of the I/O of a human if that human were comatose. Likewise the entity E2 attempting to do the understanding should have a high level of ‘human cognition’; as a simple extreme contrast, it might be unimportant to say that a comatose human might not be able to demonstrate an understanding. Also, it is simplifying to focus on the case in which entity E1 and entity E2 are the same entity, E.]
Doing so is sufficient, since a “no” answer to the Main Question when E is used twice in that way implies that ‘human cognition’ cannot fully comprehend ‘human cognition’. There is no need to ignore Feferman’s concerns about GC or abundant evidence contradicting the Infallibility Hypothesis.
Here are three additional observations about a situation when the COAT Theorem’s assertion (2) holds and the agent in the theorem is the I/O function of a real-world entity E. First, assertion (2) indicates E has a “blind spot” for recognizing a specific arithmetical truth related to E itself, thus a metacognitive blind spot more fundamental than the perception-related blindness of humans mentioned in the Introduction. Cognitive Science research on metacognition was stimulated by a 1970 Science article about the first nonhuman animal passing the mirror self-recognition test [71], and now extends beyond investigating self-recognition.^{22} [endnote: Research on metacognition includes investigating an entity’s own confidence in its answers [11], p. 244ff, even when such confidence requires no separate mechanism. There is experimental evidence that neurons in the parietal cortex of rhesus monkeys encode confidence level as an integral part of decision making [102]. We also note that a 1990 mathematical technique by J. S. Bridle can be viewed as encoding confidence level as an integral part of the decision making of artificial neural networks, by using “softmax” (also called “the normalized exponential”) to construct the activation function for each output-layer unit [103].] Second, phrasing the first observation somewhat differently: the truth of the arithmetical statement in assertion (2) is an example of an observer-independent truth, if one considers the observer to be E. Third – related to neuroscience when E is human – if E’s output for the input mentioned in assertion (2) were fully determined by some internal state of E, such a state could not produce an output that is correct; that would mean the comprehensibility limitation given by assertion (2) would not be just a limitation on the external behavior of E.^{23} [endnote: In 2005, the year before he died, the logician Torkel Franzén published the generally insightful book Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse [104]. It has two passages we realize should have been written differently. First, an argument on its p. 125 applies self-reference in a non-rigorous way to intuition about human behavior, without reminding the reader that on its p. 86 the book had pointed out the highly unreliable nature of arguments using self-reference in a non-rigorous way. (See the related discussion shortly before our Applicability of COAT Theorem section, and also see the end of Appendix NOTE 10. Also recall from our Clarifying requirement (II) section that strong intuition is insufficient for achieving a definitive result.) Second, the middle of its p. 126 implicitly mentions both the assumption of the Infallibility Hypothesis and the assumption that humans reason solely via formal inference rules, without pointing out that, within empirical cognitive science, the first assumption is well known to be false and there is strong empirical evidence questioning the second assumption. That passage is further evidence of the gap between Cognitive Science and Computer Science mentioned in the Introduction and explicated in our Robustness section.] Here is a summary, when the COAT Theorem is applied to humans.
Summary: Attempts at computationally-reductionist accounts of ‘human cognition’ have an inherent specific limitation in which one or both of the following hold: such simulation software must always fall short of full accuracy; ‘human cognition’ could not fully master a comprehension of such simulation software, regardless of how such software was obtained^{24} and – by the Church-Turing Thesis – regardless of advances in programming languages and software engineering.
The limitation does not just apply to separate individuals. For example, the limitation is applicable to an entity consisting of any particular finite set – say H – of humans, which could include specialists in Cognitive Science and Computer Science and other disciplines, with the restriction that H (like a set of chess players on a single chess team) is not permitted external help. (Also, in such a set’s attempt to master an understanding of software that accurately simulates the set’s collective cognition – if such software should exist – a computer system within the set could help the humans by carrying out lengthy and tedious details; see Appendix Note 20.) It might not be surprising if H could not understand a computational AI system superior to H’s cognition. But note that it follows from the Summary that H could not master an understanding of an AI system that accurately simulates the cognition of H itself.^{25}
The COAT Theorem, like Gödel’s Theorem, is on the rigorous pure mathematics side of Fig. 1. The above Summary is on the applied mathematics side, because the Summary lacks full precision due to its use of such concepts as ‘human cognition’. The same lack of full precision occurs with other applications of pure mathematics, such as “2 golfers plus 2 golfers equals 4 golfers” (since ‘golfer’ lacks full precision). That assertion about a number of golfers, like the Summary, is intended to be taken literally rather than metaphorically.
Occasionally one might see a suggestion that Gödel’s Second Incompleteness Theorem provides an adequate applicable foundation (in the context of interest to this article). But such a suggestion is known to be erroneous. Douglas Hofstadter’s 1980 Pulitzer Prize-winning book Gödel, Escher, Bach introduced many readers to Gödel’s Theorem. Just before discussing whether Gödel’s Theorem has relevance to minds and brains, that author writes “it can have suggestive value to translate Gödel’s Theorem into other domains, provided one specifies in advance that the translations are metaphorical and are not intended to be taken literally” [72], emphasis added. See our section Inapplicability of Gödel’s Theorem to Human Cognition.
Open Questions
As a prelude to mentioning open questions, we give brief relevant history and also compare GC with its recasting by the COAT Theorem. Merely for expository simplicity, to avoid repetition throughout this explanation we assume – as do most mathematicians – the consistency of PA. Gödel’s Second Incompleteness Theorem was successful in showing that a goal (proving the consistency of mathematics) announced in the 1920s by the prominent mathematician David Hilbert could not be achieved using a method similar to the highly reliable one prescribed by Hilbert. Assuming the Church-Turing Thesis, Turing’s Unsolvability of the Halting Problem was also successful, in showing that another goal announced in the 1920s by Hilbert and his colleague Wilhelm Ackermann could not be achieved (that of finding a computational way to correctly decide whether or not any given statement within a formal system like PA is a formal theorem). Although both results were fully successful, when initially obtained they both might have seemed narrowly limited to self-reference decisions about what can be viewed as halting problems (our next paragraph explains how a consistency statement can be viewed as such a problem). But surprising generalities related to both have since been discovered. It suffices here to focus just on Turing’s halting problem result. One surprise: Rice’s Theorem [73] identifies a huge variety of algorithmically unsolvable (and non-self-reference) decision problems; it is proved using the fact that programs (in any standard programming language, including that of Turing machines) can be written that contain infinite loops [46], p. 389. Another surprise: theorems in 2002 prove the existence of important algorithmically unsolvable (non-self-reference) decisions about programs whose programming language syntax makes infinite loops impossible to write [74]. Other historical surprises include the infinitely many different levels of the arithmetical hierarchy [54], p. 49, and the extreme complexity of the uncountably large partially ordered set of Turing degrees [59]. See descriptions of additional algorithmically unsolvable problems in [75, 76] and in computability books; e.g., [58, 60].
Now we compare the non-rigorous GC with the COAT Theorem. Assertion 2 of GC refers to an arithmetical statement. In Gödel’s argument, that statement is the consistency statement for the (perfectly) consistent formal system presumed in his argument to underlie ‘human cognition’. Equivalently, that statement asserts that the following algorithm would fail to halt: an exhaustive search for two blatantly contradictory formal theorems (i.e., a formal theorem and also its formal negation) within that formal system. Because of its supporting conceptualization, including its “binding” strategy and “mastery” concept, the COAT Theorem’s last paragraph gives the following analogue to GC’s last paragraph, without needing any correctness assumption about a corresponding entity. If its assertion (1) is false, then the true arithmetical statement mentioned in its assertion (2) can be constructed to state exactly one of the following about a specific algorithmically-obtainable variant of the software simulating the entity (with corresponding specific input): it halts, it does not halt. [Correctness of the preceding sentence is easily seen by specialists in computability theory; e.g., there is an easy proof of Lemma 7.15 on p. 601 of [49], which is restated more simply on p. 455 of [44] and used near the end of the COAT Theorem’s proof atop p. 220 of [9].]
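As a toy illustration (ours, not the construction used in the theorem), the exhaustive search just described can be sketched in Python. A deliberately inconsistent finite list of strings stands in for the infinite enumeration of a formal system’s theorems; the search halts exactly when it finds a theorem together with its negation:

```python
def theorems():
    """Toy stand-in for enumerating all formal theorems of a system.
    A real enumerator would generate infinitely many theorems; this
    deliberately inconsistent toy list is finite so the demo terminates."""
    yield from ["p", "q", "r", "~q"]

def find_contradiction():
    """Exhaustive search for a blatant contradiction: a theorem together
    with its formal negation. For a consistent system this loop would
    never halt; halting is exactly a witness of inconsistency."""
    seen = set()
    for t in theorems():
        negation = t[1:] if t.startswith("~") else "~" + t
        if negation in seen:
            return tuple(sorted((t, negation)))  # contradictory pair found
        seen.add(t)
    return None  # only reachable because the toy enumeration is finite

print(find_contradiction())  # ('q', '~q')
```

For this toy list the search halts, returning the contradictory pair; replacing `theorems()` with a genuine theorem enumerator for a consistent system would turn the loop into a non-halting computation, which is the equivalence used above.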
The true arithmetical statements mentioned in both Assertion (2) of GC and assertion (2) of the COAT Theorem are very specific. Although the second paragraph of our Implications section shows the COAT Theorem is successful in indicating a “no” answer to the Main Question, there are open questions about the breadth of the phenomenon it identifies, just as there would have been such questions about Gödel’s Theorem and Turing’s Unsolvability of the Halting Problem when they first appeared. One open question is: What additional theorems – like the COAT Theorem – can be discovered that are about an agent, are applicable to real-world entities, and permit avoiding all infallibility requirements on the agent? Another open question: What theorems can be discovered if one is willing to place some limited correctness requirements on the agent? One answer to that question is the Comprehensibility Theorem [44], p. 460.
Conclusions
Rather than a definitive limitation like Gödel’s Theorem about infallible logic, or a definitive limitation like Turing’s Unsolvability of the Halting Problem about infallible computation, the COAT Theorem is a definitive limitation about not-necessarily-infallible and not-necessarily-purely-deductive comprehensibility. Applicable to any real-world entity, it was obtained by recasting computability theory within Computer Science. The Implications section explains how the limitation is related to AI, to Cognitive Science, and to neuroscience.
Identifying limitations applicable in the natural world clarifies science’s boundaries. Significant such limitations include Heisenberg’s Uncertainty Principle and the speed-of-light limit given by Special Relativity. But, just as that speed limit never diminished support for faster particle accelerators, the COAT Theorem does not diminish the importance of continuing scientific progress in cognitive neuroscience and in simulations of the mind and brain [12, 14, 17, 18]. An open-ended question is whether additional applicable and definitive results can be obtained about not-necessarily-infallible human cognition that is conceptually above the level of neural interactions.
Appendix
Notes
 ^{1} In the Nature article “Is the brain a good model for machine intelligence?” [77], Hassabis states “To advance AI [Artificial Intelligence], we need to better understand the brain’s workings at the algorithmic level … Conversely, from a neuroscience perspective, attempting to distil intelligence into an algorithmic construct may prove to be the best path to understanding some of the enduring mysteries of our minds.” That article includes contrasting opinions by Dennis Bray, Rodney Brooks, and Amnon Shashua. A review, by Hassabis and three of his colleagues, of the effect of neuroscience on AI is in [78].
 ^{2} Notice that Knuth’s criterion is not satisfied by a person who develops a Machine Learning program to teach itself something from data. Often the reason Machine Learning programs are developed is because programmers realize they themselves do not fully understand something, such as exactly how they make decisions when classifying photographic images.
 ^{3} As the main representatives for the fields of computer science, mathematics, and physics, the Time magazine issue on “The Century’s Greatest Minds” (Mar. 29, 1999) chose Turing, Gödel, and Einstein respectively. Here are examples of errors by each in their individual research. Turing’s seminal article in computer science [38] had technical errors; see [62], p. 546. A Gödel biography discusses incorrect mathematics by him [79], pp. 235–236. Einstein erroneously calculated the extent to which the sun would bend a star’s light due to General Relativity; the fame he received from the 1919 observation of a solar eclipse might have been diminished had it not been for his correction, made during a delay caused by the First World War [80], p. 133.
 ^{4} Newton’s Theory of Gravitation received a rigorous foundation in the 19th century [81], was improved by Einstein’s General Relativity, and would benefit from further understanding of additional lower-level phenomena [10], Ch. 7.
 ^{5} In 1609 Kepler published a table to support his assertion that a planet moves in an elliptical orbit. In 1988 very strong evidence was presented indicating he obtained the table entries from his assertion itself, rather than from astronomical observations. See [82, 83, 84].
 ^{6} Within his published Gibbs Lecture, here is the statement of Gödel’s Gibbs Conjecture: “Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified …” [19], p. 310, italics in original. Gödel’s lecture makes clear that by a “finite machine” Gödel means a Turing machine or its equivalent [19], pp. 304–305. A “diophantine problem” is an arithmetical problem related to the existence of a solution in integers to one or more polynomial equations having integer-valued coefficients and one or more variables. Earlier, Gödel proved that determining the truth value of the consistency statement mentioned in Fig. 3 is equivalent to a diophantine problem.
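To make the notion of a diophantine problem concrete, here is a small Python sketch of ours (the function names are illustrative, not from the article): a bounded brute-force search over positive integers for a zero of one polynomial. Finding a solution settles such a problem positively, but exhausting a finite bound proves nothing about nonexistence, which hints at why diophantine problems can be algorithmically hard.

```python
from itertools import product

def search_solution(poly, bound):
    """Search positive integers up to `bound` for a zero of `poly`
    (a polynomial in three variables, given as a Python function).
    Returns the first solution found, or None if none exists in range."""
    for vals in product(range(1, bound + 1), repeat=3):
        if poly(*vals) == 0:
            return vals
    return None

# Does x^2 + y^2 - z^2 = 0 have a positive integer solution?
print(search_solution(lambda x, y, z: x*x + y*y - z*z, 5))  # (3, 4, 5)
```

Note the asymmetry: the search can confirm existence (here, the Pythagorean triple), but a `None` result only says no solution exists within the bound.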
 ^{7} Despite his Gibbs Conjecture, Gödel did recognize the fallibility of actual humans, according to perhaps his leading interpreter who knew him personally, Hao Wang: “… contrary to the general impression, Gödel affirms the fallibility of our mathematical intuition and investigates the different degrees of clarity and certainty that exist within mathematics” [85], p. 5.
 ^{8} The fact that inconsistency of a standard logical system enables deduction of all statements within such a system is called the “principle of explosion”. The danger of an inconsistency within software such as databases and Artificial Intelligence programs has stimulated research into many kinds of “paraconsistent” logics [86, 87]. To avoid the principle of explosion’s effect, a paraconsistent logic is designed to be weaker than logical systems relevant to Gödel’s Theorem. For example – using boldface English here rather than formal logical notation and letting a, b, and c denote natural numbers – within some paraconsistent logics it is not possible to deduce the otherwise-logical conclusion (a equals b) from the combination of (a equals c) or (a equals b) with (a is not equal to c).
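The blocked inference just described is an instance of disjunctive syllogism, which is invalid in Priest’s well-known paraconsistent Logic of Paradox (LP). A short Python check over LP’s three truth values – true, both (true-and-false), and false, with “both” counting as designated – can illustrate this; the numeric encoding below is our own sketch:

```python
from itertools import product

T, B, F = 1.0, 0.5, 0.0         # true, both (true-and-false), false
NEG = lambda v: 1.0 - v         # LP negation flips true and false, fixes "both"
OR = max                        # LP disjunction takes the stronger value
designated = lambda v: v >= B   # T and B are designated (assertable)

def ds_valid(values):
    """Is disjunctive syllogism (from p-or-q and not-p, infer q)
    truth-preserving over the given set of truth values?"""
    return all(
        designated(q)
        for p, q in product(values, repeat=2)
        if designated(OR(p, q)) and designated(NEG(p))
    )

print(ds_valid([T, F]))     # True: classically valid over {true, false}
print(ds_valid([T, B, F]))  # False: fails in LP when p is "both" and q is false
```

The counterexample found in LP (p taking the value “both”, q false) is exactly how such logics escape the principle of explosion while remaining weaker than classical logic.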
 ^{9} Gödel and Turing used different conceptualizations, and there are additional related conceptualizations such as those of Church, Kleene, Markov, Post, and Tarski. Often a result obtained using one such conceptualization can be obtained using an alternative one. Tarski made major discoveries related to the interpretation of formal languages. By convention, he is associated with what this article calls “Tarski’s Undefinability Theorem”; e.g., see [39], pp. 354, 390. Gödel actually discovered that result in 1930 (a few years before Tarski) and mentioned it in a 1931 letter he sent to Zermelo [88], p. 90. Our section title One Path: Extending Gödel’s Methodology can be viewed as referring to that Gödel discovery and also as referring to Gödel’s 1933 methodology of using a symbol like Bw [89].
 ^{10} The concept of proof, within the context of a formal axiomatic system as discussed in Fig. 3, can be formulated in a rigorous way. But it is notoriously vague outside such a context (such as in Reinhardt’s phrase ‘provable by the human mind’). Outside such a context, perhaps the only clearly stated, desirable consequence of an arithmetical statement being ‘provable’ is that the arithmetical statement would be true (according to the standard interpretation of the symbols of PA). In 1933, Gödel himself used a symbol like Bw, and in the 1930s explained that such a vague ‘provable’ concept was “not accessible for mathematical treatment” [89, 90]. By that he presumably meant the concept could not be formulated in a way that would satisfy the standard criterion for rigor [30]. Use of ‘provable’ outside the context of a formal axiomatic system can quickly lead to incoherent paradox. That is demonstrated (using a “diagonal argument”) on pp. 276–277 of [50], which concludes as follows: “This paradox arises once we are willing to accept the idea that it is actually meaningful to talk about ‘the method mathematicians use to correctly decide problems’. Like ‘truth’, this phrase simply cannot be given an exact meaning which mirrors in every way its informal [i.e., nonrigorous] use, and assuming that it can leads one directly into the kind of incoherence familiar from the liar paradox.” The liar paradox occurs, for instance, when one realizes that the following English phrase, which uses self-reference in a non-rigorous way, is true if and only if it is false: “This phrase is false”.
 ^{11} Our approach of showing how the Main Question is easily answered if ‘human cognition’ fails to satisfy the finiteness property suffices for handling that case. But here is broader background. First, some question whether it is possible for any algorithms to fully capture some internal brain processes essential for the I/O of ‘human cognition’; see opinions of Bray and Brooks in [77]. Second, certain “neuromorphic” chips for building some massively parallel systems use energy-efficient analog neuron-simulators rather than simulating neurons via algorithms in the Turing-machine sense [91]. Third, some view the human brain as a continuous dynamical system [92, 93, 94]. (Accurately modeling such a system can require infinitely many bits to represent some internal variable values.) Fourth, that dynamical systems view has been criticized [52, 95]. Fifth, that view might not be incompatible with requiring that each single input and output be specifiable using finitely many bits; see [96], p. 106. (Also, for each input and output that will be most important to the COAT Theorem’s assertion (2) – presented in Fig. 5 – there exists a simple known encoding using finitely many bits.)
 ^{12} The entity examined during a Turing Test is also permitted no outside help. Turing devised that well-known test – calling it the “imitation game” – as an entirely I/O-based approach for investigating whether an entity could ‘think’ [97]. Whether passing such a test would be a sufficient demonstration is controversial [98].
 ^{13} For instance, the conjecture could be at any level of Kleene’s arithmetical hierarchy [54], p. 49; it need not be at the low level that contains the halting problems themselves. That is explained in Section 9 of the online Supplementary Material for [44].
 ^{14} In the context of the standard criterion for mathematical rigor [30], the broad importance of halting problems is observed and explained in the 2014 article [44], p. 446. The breadth of such problems in that context might not be widely known, even among experts. For example, an article by other researchers in 2016 gives a similar observation and explanation, and suggests that the observation is contrary to the intuition of mathematicians and computer scientists [63], p. 298.
 ^{15} Seven separate quotes – by Davis, Dennett, Lutz, McDermott, Minsky, Barrow, and Russell and Norvig – on pp. 211–212 of [9] question the basic assumption of the consistency of ‘human cognition’ that is crucial for applying Gödel’s Theorem to ‘human cognition’; also see Note 8. Feferman’s lecture expressed that same concern. Much of the published concern about inappropriate idealizations was stimulated by assumptions – most notably by J. R. Lucas and Roger Penrose – even stronger than that basic consistency assumption. That concern includes the seven quotes mentioned above; it also appears in the observation that “the concept of an idealized human mind” is “problematic” ([99], p. 154) and in the analysis of different ways to categorize idealizations of human cognition in [100]. (Among the strongest of such idealization assumptions is that an actual human correctly ‘knows’ that her/his reasoning is consistent; alternatively, correctly ‘knows’ that her/his reasoning about halting problems is correct.) Results by Reinhardt, and in 2016 by Peter Koellner, show that even if – solely as an investigative technique – one were to assume relevant idealization assumptions related to the claims of Lucas and Penrose, concerns about such claims can be demonstrated using logic itself [99]. Some claims by Lucas and Penrose are discussed briefly in the current author’s [49], pp. 591–593; also see [50]. The focus of the current article is how all idealization and infallibility assumptions about ‘human cognition’ can be avoided in a result similar to GC, so we only briefly mention stronger idealization assumptions.
 ^{16} Although in general soundness is an even stronger assumption than consistency, it is known (see [56], Section 2.8) that the consistency assumption on the formal system mentioned in Fig. 3 is equivalent to the assumption that such a system satisfies soundness for its results about the non-halting of Turing machines. Also, each statement asserting the non-halting of a Turing machine is expressible in PA’s formal language, hence also in the formal language of the system mentioned in Fig. 3.
 ^{17} Also, it is well-known in computability theory that computable checks are only important when there are multiple decisions (or proofs) to check. A computable check on the correctness of any single correct decision is trivially (and correctly, but vacuously) achieved by a computer program that ignores its input and prints “Correct!”.
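That trivial check can be written out in full; here is a Python version (the function name and the sample inputs are ours):

```python
def check_single_decision(decision):
    """A 'computable check' for one fixed correct decision: it ignores
    its input entirely, so it is correct on that decision, but vacuously
    so -- it would equally approve any incorrect decision."""
    return "Correct!"

print(check_single_decision("the machine halts"))          # Correct!
print(check_single_decision("the machine does not halt"))  # Correct!
```

Since the program approves every input, it carries no information, which is exactly why computable checks matter only when multiple decisions must be distinguished.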
 ^{18} Here is a sketch of relevant history. In 1854, George Boole published a logic book, The Laws of Thought, showing how symbolic manipulations similar to those of current high-school algebra could be used within logic. In 1879, Gottlob Frege published another logic book, part of whose German title, translated to English, could be paraphrased as A Formal Language for Pure Thought Modeled on Arithmetic. Although both titles mention [human] thought, those authors lacked knowledge of the more recent cognitive science research mentioned in our section Clarifying requirement (I). Bertrand Russell discovered a devastating inconsistency (not a mere typo) in the 1903 preprint of the second edition of that Frege book. That inconsistency, now known as Russell’s Paradox, is mentioned shortly before our section Applicability of COAT Theorem. At least four other leading mathematicians in the early twentieth century also separately proposed serious systems of logic having devastating inconsistency; see the Davis quote on pp. 211–212 of [9]. That is why our abstract begins with “A century ago, discoveries of a serious kind of logical error made separately by several leading mathematicians led to acceptance of a sharply enhanced standard for rigor …”. That “enhanced standard for rigor”, based in set theory [30], would be used on the right side of Fig. 1. The relation between applied math and pure math depicted in that figure can be viewed as a highly successful revolution in the early twentieth century, similar in some ways to – though less widely known than – revolutions within the physical sciences in the late nineteenth and early twentieth century [101]. In the decades after Frege, mathematicians who carried out research in logic – which led to the modern foundation for both mathematics and computer science – had two primary interests. First, protecting the field of mathematics from collapse caused by discovery at some time of an inconsistency within the field. Second, proving properties related to systems like the one in Fig. 3, including properties of various ways of interpreting the symbols in such a system. Such mathematicians seem not to have been concerned about obtaining a rigorous result that could be applied to actual human cognition (including the cognition of the leading mathematicians who had themselves committed serious errors of inconsistent reasoning). Also see Note 8.
 ^{19} The implication “p implies q” can be written “if p, then q”; that implication is logically equivalent to its contrapositive “not q implies not p”. The following two implications are not logically equivalent: “p implies q” and its converse “q implies p”.
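A mechanical truth-table check (our own illustration) confirms both claims: the contrapositive agrees with the implication on every truth assignment, while the converse does not.

```python
from itertools import product

def implies(p, q):
    """Material implication: 'p implies q' is false only when p is true and q is false."""
    return (not p) or q

# Over all truth assignments, "p implies q" matches its contrapositive
# "not q implies not p", but not its converse "q implies p".
contrapositive_ok = all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([True, False], repeat=2)
)
converse_ok = all(
    implies(p, q) == implies(q, p)
    for p, q in product([True, False], repeat=2)
)
print(contrapositive_ok)  # True
print(converse_ok)        # False (they differ whenever p and q differ)
```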
 ^{20} At this point we can make these observations: First, the COAT Theorem can be applied to an agent that is the (external) I/O function of a real-world entity composed of one or more humans and one or more computer programs. The computer programs might be used, for instance, to handle coding related to natural numbers, and to assist with mathematical proofs [29] when such proofs are possible, to help ensure many of the entity’s output assertions are correct. Second, if assertion (1) holds for such an agent, then the ‘human cognition’ part of the entity could not be accurately simulated computationally, since the other part(s) of the entity could be accurately simulated computationally. Third, if assertion (1) does not hold, then assertion (2) holds, so even using the computer programs (and correct hardware) ‘human cognition’ cannot give the correct output for the input mentioned in that assertion.
 ^{21} The following two comments are relevant here. First, whenever that particular application of E is made, there is of course an implicit assumption that at least one actual human exists; otherwise the Main Question is unimportant. Second, for an interesting treatment of the Main Question, the entity E1 that is to be understood should have a high level of ‘human cognition’; as a simple extreme contrast, it might be easy to build and understand a computer simulation of the I/O of a human if that human were comatose. Likewise the entity E2 attempting to do the understanding should have a high level of ‘human cognition’; as a simple extreme contrast, it might be unimportant to say that a comatose human might not be able to demonstrate an understanding. Also, it is simplifying to focus on the case in which entity E1 and entity E2 are the same entity, E. Doing so is sufficient, since a “no” answer to the Main Question when E is used twice in that way implies that ‘human cognition’ cannot fully comprehend ‘human cognition’.
 ^{22} Research on metacognition includes investigating an entity’s own confidence in its answers [11] p. 244ff, even when such confidence requires no separate mechanism. There is experimental evidence that neurons in the parietal cortex of rhesus monkeys encode confidence level as an integral part of decision making [102]. We also note that a 1990 mathematical technique by J. S. Bridle can be viewed as encoding confidence level as an integral part of the decision making of artificial neural networks, by using “softmax” (also called “the normalized exponential”) to construct the activation function for each output layer unit [103].
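Bridle’s normalized exponential is simple to state; here is a minimal Python sketch of ours (using only the standard library, with the customary max-subtraction for numerical stability, which leaves the result unchanged):

```python
import math

def softmax(z):
    """Normalized exponential: maps a list of raw scores to positive
    values summing to 1, interpretable as the unit activations'
    confidence levels over the output classes."""
    m = max(z)                              # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]
probs = softmax(scores)
print(probs)  # roughly [0.659, 0.242, 0.099]; the values sum to 1
```

The ordering of the inputs is preserved (larger scores get larger shares), so the output can serve both as a decision and as a graded confidence in that decision.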
 ^{23} In 2005, the year before he died, the logician Torkel Franzén published the generally insightful book Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse [104]. It has two passages we realize should have been written differently. First, an argument on its p. 125 applies self-reference in a non-rigorous way to intuition about human behavior, without reminding the reader that on its p. 86 the book had pointed out the highly unreliable nature of arguments using self-reference in a non-rigorous way. (See the related discussion, shortly before our Applicability of COAT Theorem section, and also see the end of Appendix NOTE 10. Also recall from our Clarifying requirement (II) section that strong intuition is insufficient for achieving a definitive result.) Second, the middle of its p. 126 implicitly mentions both the assumption of the Infallibility Hypothesis and the assumption that humans reason solely via formal inference rules, without pointing out that, within empirical cognitive science, the first assumption is well-known to be false and there is strong empirical evidence questioning the second assumption. That passage is further evidence of the gap between Cognitive Science and Computer Science mentioned in the Introduction and explicated in our Robustness section.
 ^{24} Here, “regardless of how such software was obtained” means even if the software were (somehow!) obtained from an infallible intelligence. In recent decades, the discipline of Software Engineering has been substantially revised to better accommodate the repeated fallibility of human software developers that is widely recognized by such developers themselves; e.g., see [105].
 ^{25} It is recognized that specialists today lack satisfactory understanding of systems substantially weaker than a highly nontrivial fully-human-level AI. In 2018, Science journal’s Web page reported strong support at a recent AI research conference for a speaker’s comparison of Machine Learning research with alchemy [106]. In a 2011 panel discussion at MIT, Noam Chomsky “derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world” but do not “try to understand the meaning of that behavior”, asserting he does not “know of anything like it in the history of science” [1]. Also see [107].
References
 [1] S. Cass, Unthinking machines: Artificial intelligence needs a reboot, say experts. MIT Technology Review (2011).
 [2] P. N. Johnson-Laird, Mental models and human reasoning. Proceedings of the National Academy of Sciences 107, 18243 (2010).
 [3] J. R. Busemeyer, Z. Wang, Hilbert space multidimensional theory. Psychological Review 125, 572 (2018).
 [4] W. Bruine de Bruin, B. Fischhoff, Eliciting probabilistic expectations: Collaborations between psychologists and economists. Proceedings of the National Academy of Sciences 114, 3297 (2017).
 [5] N.E. Sahlin, Trust and Confidence in Scientific Research, H. Göran, K. Sahlin, N.E. Sahlin, eds. (Stockholm: The Royal Swedish Academy of Letters, History and Antiquities, 2013), pp. 80–87.
 [6] A. Mack, I. Rock, Inattentional Blindness (Cambridge, MA: MIT Press, 1998).
 [7] J. E. Raymond, K. L. Shapiro, K. M. Arnell, Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance 18, 849 (1992).
 [8] S. Marti, J. Sackur, M. Sigman, S. Dehaene, Mapping introspection’s blind spot: Reconstruction of dual-task phenomenology using quantified introspection. Cognition 115, 303 (2010).

 [9] A. Charlesworth, A theorem about computationalism and “absolute” truth. Minds and Machines 26, 205 (2016). Selected for ACM (Association for Computing Machinery) Computing Reviews 21st Annual Best of Computing: Notable Books and Articles; freely accessible within the Computing Methodologies section of www.computingreviews.com/recommend/bestof/notableitems.cfm?bestYear=2016.
 [10] R. P. Feynman, R. B. Leighton, M. Sands, The Feynman Lectures on Physics, Vol. I (Reading, MA: Addison-Wesley, 1963).
 [11] S. Dehaene, Consciousness and the Brain (New York: Viking, 2014).
 [12] J. R. Anderson, How Can the Human Mind Occur in the Physical Universe? (New York: Oxford University Press, 2009).
 [13] C. Eliasmith, et al., A large-scale model of the functioning brain. Science 338, 1202 (2012).
 [14] G. Marcus, A. Marblestone, T. Dean, The atoms of neural computation. Science 346, 551 (2014).
 [15] D. E. Knuth, Selected Papers on Computer Science (New York: Cambridge University Press, 1996). See pp. 9–10.
 [16] D. E. Knuth, Computer Science and Mathematics: How a new discipline presently interacts with an old one, and what we may expect in the future. American Scientist 61, 707 (1973).
 [17] S. Theil, Why the Human Brain Project went wrong – and how to fix it. Scientific American (2015).
 [18] J. E. Laird, C. Lebiere, P. S. Rosenbloom, A Standard Model of the Mind: Toward a common computational framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine 38, 13 (2017).
 [19] K. Gödel, Kurt Gödel’s Collected Works, Vol. III, S. Feferman, ed. (New York: Oxford University Press, 1995), pp. 304–323.
 [20] A. Newell, H. A. Simon, Human Problem Solving (Englewood Cliffs, NJ: Prentice-Hall, 1972).
 [21] M. D. S. Braine, On the relation between the natural logic of reasoning and standard logic. Psychological Review 85, 1 (1978).
 [22] L. J. Rips, The Psychology of Proof (Cambridge, MA: MIT Press, 1994).
 [23] S. Dehaene, The Number Sense (New York: Oxford University Press, 1997).
 [24] P. N. Johnson-Laird, How We Reason (New York: Oxford University Press, 2006).
 [25] D. Landy, N. Silbert, A. Goldin, Estimating large numbers. Cognitive Science 37, 775 (2013).
 [26] D. Landy, A. Charlesworth, E. Ottmar, Categories of large numbers in line estimation. Cognitive Science 41, 326 (2017).
 [27] P. N. Johnson-Laird, P. Legrenzi, V. Girotto, M. S. Legrenzi, Illusions in reasoning about consistency. Science 288, 531 (2000).
 [28] S. C. Levinson, The original sin of cognitive science. Topics in Cognitive Science 4, 396 (2012).
 [29] J. Avigad, J. Harrison, Formally verified mathematics. Communications of the ACM 57, 66 (2014). See “Correctness Concerns” section.
 [30] S. Mac Lane, Mathematics: Form and Function (New York: Springer-Verlag, 1986). See p. 377.
 [31] J. Kepler, The Six-Cornered Snowflake (translated by Colin Hardie) (New York: Oxford University Press, 2014).
 [32] T. C. Hales, A proof of the Kepler conjecture. Annals of Mathematics. Second Series 162, 1065 (2005).
 [33] T. Hales, et al., A formal proof of the Kepler conjecture. Forum of Mathematics, Pi 5 (2017).
 [34] T. C. Hales, The Jordan Curve Theorem, formally and informally. American Mathematical Monthly 114, 882 (2007).
 [35] T. Hales, Jordan’s proof of the Jordan Curve Theorem. Studies in Logic, Grammar, and Rhetoric 10, 45 (2007).
 [36] A. M. Jaffe, The millennium grand challenge in mathematics. Notices of the American Mathematical Society 53, 652 (2006).
 [37] A. Ashtekar, J. Lewandowski, Background independent quantum gravity: A status report. Classical and Quantum Gravity 21 (2004).
 [38] A. M. Turing, On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42, Ser. 2, 230 (1936). A correction: 43 (1937), 544–546.
 [39] R. E. Hodel, An Introduction to Mathematical Logic (Boston: PWS Publishing Company, 1995).
 [40] S. Feferman, Are there absolutely unsolvable problems? Gödel’s dichotomy. Philosophia Mathematica 14, 134 (2006). Page 134 of article is page 1 of free preprint at: doi=10.1.1.156.6181.
 [41] D. E. Knuth, Mathematics and computer science: Coping with finiteness. Science 194, 1235 (1976).
 [42] J. R. Shoenfield, Mathematical Logic (Reading, MA: Addison-Wesley, 1967).
 [43] A. Charlesworth, A proof of Gödel’s Theorem using computer programs. Mathematics Magazine 54, 109 (1981).
 [44] A. Charlesworth, The Comprehensibility Theorem and the foundations of Artificial Intelligence. Minds and Machines 24, 439 (2014). Freely accessible within Computing Methodologies section of www.computingreviews.com/recommend/bestof/notableitems.cfm?bestYear=2014 due to its selection for ACM Computing Reviews 19th Annual Best of Computing: Notable Books and Articles.
 [45] W. N. Reinhardt, Epistemic theories and the interpretation of Gödel’s incompleteness theorem. Journal of Philosophical Logic 15, 427 (1986). See §5.
 [46] J. E. Hopcroft, R. Motwani, J. D. Ullman, Introduction to Automata Theory, Languages, and Computation, Third Ed. (Boston: Addison-Wesley, 2007).
 [47] S. Aaronson, Electronic Colloquium on Computational Complexity, Report No. 26 (Weizmann Institute of Science Computational Complexity Foundation, 2005). See p. 5.
 [48] A. Charlesworth, Mathematics: People, Problems, Results, Vol. III Computer Science, D. M. Campbell, J. C. Higgins, eds. (Belmont, CA: Wadsworth International, 1984), pp. 15–23.
 [49] A. Charlesworth, Comprehending software correctness implies comprehending an intelligence-related limitation. ACM Transactions on Computational Logic 7, 590 (2006).
 [50] G. LaForte, P. J. Hayes, K. M. Ford, Why Gödel’s Theorem cannot refute computationalism. Artificial Intelligence 104, 265 (1998).
 [51] W. Morris, ed., The American Heritage Dictionary of the English Language: New College Edition (Boston: Houghton Mifflin, 1976).
 [52] C. Eliasmith, Attractive and indiscrete. Minds and Machines 11, 417 (2001).
 [53] A. M. Turing, A. M. Turing’s ACE Report of 1946 and Other Papers (Cambridge, MA: MIT Press, 1986), pp. 106–124.
 [54] S. C. Kleene, Recursive predicates and quantifiers. Transactions of the American Mathematical Society 53, 41 (1943).
 [55] S. C. Kleene, General recursive functions of natural numbers. Mathematische Annalen 112, 727 (1936).
 [56] S. Feferman, Penrose’s Gödelian argument. Psyche 2 (1995).
 [57] S. C. Kleene, Introduction to Metamathematics (Princeton, NJ: Van Nostrand, 1952).
 [58] P. G. Odifreddi, Classical Recursion Theory (New York: Elsevier, 1992).
 [59] P. G. Odifreddi, Classical Recursion Theory Volume II (New York: Elsevier, 1999).
 [60] R. I. Soare, Turing Computability: Theory and Applications (Berlin: Springer-Verlag, 2016).
 [61] P. Borwein, S. Choi, B. Rooney, A. Weirathmueller, The Riemann Hypothesis: A Resource for the Afficionado and Virtuoso Alike (New York: Canadian Mathematical Society, 2008). See p. 3.
 [62] A. Hodges, Alan Turing: The Enigma (New York: Simon and Schuster, 1983).
 [63] A. Yedidia, S. Aaronson, A relatively small Turing machine whose behavior is independent of set theory. Complex Systems 25 (2016).
 [64] A. J. Wiles, Modular elliptic curves and Fermat’s Last Theorem. Annals of Mathematics 141, 443 (1995).
 [65] K. Appel, W. Haken, Every planar map is four colorable. Part I: Discharging. Illinois Journal of Mathematics 21, 429 (1977).
 [66] K. Appel, W. Haken, J. Koch, Every planar map is four colorable. Part II: Reducibility. Illinois Journal of Mathematics 21, 491 (1977).
 [67] C. McLarty, What does it take to prove Fermat’s Last Theorem? Grothendieck and the logic of number theory. Bulletin of Symbolic Logic 16, 359 (2010).
 [68] S. Singh, Fermat’s Enigma (New York: Walker and Company, 1997).
 [69] R. Thomas, An update on the Four-Color Theorem. Notices of the American Mathematical Society 45, 848 (1998).
 [70] G. Gonthier, Formal proof – the Four-Color Theorem. Notices of the American Mathematical Society 55, 1382 (2008).
 [71] G. G. Gallup Jr., Chimpanzees: Self-recognition. Science 167, 86 (1970).
 [72] D. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979). See p. 696.
 [73] H. G. Rice, Classes of recursively enumerable sets and their decision problems. Transactions of the American Mathematical Society 74, 358 (1953).
 [74] A. Charlesworth, The undecidability of associativity and commutativity analysis. ACM Transactions on Programming Languages and Systems 24, 554 (2002).
 [75] M. Davis, Handbook of Mathematical Logic, J. Barwise, ed. (Amsterdam: North Holland, 1977), pp. 567–594.
 [76] B. Poonen, Interpreting Gödel: Critical Essays, J. Kennedy, ed. (Cambridge UK: Cambridge University Press, 2014), pp. 211–241.
 [77] R. Brooks, D. Hassabis, D. Bray, A. Shashua, Is the brain a good model for machine intelligence? Nature 482, 462 (2012).
 [78] D. Hassabis, D. Kumaran, C. Summerfield, M. Botvinick, Neuroscience-inspired artificial intelligence. Neuron 95, 245 (2017).
 [79] J. W. Dawson Jr., Logical Dilemmas: The Life and Work of Kurt Gödel (Wellesley, MA: A. K. Peters, 1997).
 [80] B. Hoffmann, Albert Einstein: Creator and Rebel (New York: Viking, 1972).
 [81] C. B. Boyer, The History of the Calculus and its Conceptual Development (New York: Dover, 1949).
 [82] W. H. Donahue, Kepler’s fabricated figures: Covering up the mess in the new astronomy. Journal for the History of Astronomy 19 (1988).
 [83] C. Wilson, Kepler’s laws, so-called. Newsletter of the Historical Astronomy Division of the American Astronomical Society 31, 1 (1994).
 [84] W. L. Vanderburg, Empirical equivalence and approximate methods in the new astronomy: A defence of Kepler against the charge of fraud. Journal for the History of Astronomy 28 (1997).
 [85] H. Wang, A Logical Journey (Cambridge, MA: MIT Press, 2001).
 [86] G. Priest, K. Tanaka, Z. Weber, Paraconsistent logic. Stanford Encyclopedia of Philosophy (2018). Access is provided at http://plato.stanford.edu/entries/logic-paraconsistent.
 [87] O. Arieli, A. Avron, A. Zamansky, Ideal paraconsistent logics. Studia Logica 99, 31 (2011).
 [88] H. Wang, Reflections on Kurt Gödel (MIT Press, 1987).
 [89] K. Gödel, Kurt Gödel’s Collected Works, Vol. I, S. Feferman, ed. (New York: Oxford University Press, 1986), pp. 301–303.
 [90] K. Gödel, Kurt Gödel’s Collected Works, Vol. III, S. Feferman, ed. (New York: Oxford University Press, 1995), pp. 164–175.
 [91] D. Monroe, Neuromorphic computing gets ready for the (really) big time. Communications of the ACM 57, 13 (2014). doi:10.1145/2601069.
 [92] T. Van Gelder, Cognitive Architectures in Artificial Intelligence: The Evolution of Research Programs, A. Clark, J. Toribio, eds. (New York: Garland, 1998), pp. 257–381.
 [93] H. Siegelmann, Computation beyond the Turing limit. Science 268, 545 (1995).
 [94] R. Poznanski, The importance of continuity: A reply to Chris Eliasmith. Minds and Machines 12, 435 (2002).
 [95] C. Eliasmith, Discreteness and relevance: A reply to Roman Poznanski. Minds and Machines 12, 437 (2002).
 [96] H. T. Siegelmann, Neural and super-Turing computing. Minds and Machines 13, 103 (2003).
 [97] A. M. Turing, Computing machinery and intelligence. Mind 59, 433 (1950).
 [98] J. Searle, Is the brain’s mind a computer program? Scientific American 262, 26 (1990).
 [99] P. Koellner, Gödel’s Disjunction: The Scope and Limits of Mathematical Knowledge, L. Horsten, P. Welch, eds. (New York: Oxford University Press, 2016), pp. 148–188.
 [100] S. Shapiro, Gödel’s Disjunction: The Scope and Limits of Mathematical Knowledge, L. Horsten, P. Welch, eds. (New York: Oxford University Press, 2016), pp. 189–207.
 [101] F. Quinn, A revolution in mathematics? What really happened a century ago and why it matters today. Notices of the American Mathematical Society 59, 31 (2012).
 [102] R. Kiani, M. N. Shadlen, Representation of confidence associated with decisions by neurons in the parietal cortex. Science 324, 759 (2009).
 [103] C. M. Bishop, Neural Networks for Pattern Recognition (New York: Oxford University Press, 1995).
 [104] T. Franzén, Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse (Wellesley, MA: A. K. Peters, 2005).
 [105] S. R. Schach, Object-Oriented and Classical Software Engineering, Eighth Ed. (New York: McGraw-Hill, 2011).
 [106] M. Hutson, AI researchers allege that machine learning is alchemy. Science (2018). doi:10.1126/science.aau0577.
 [107] A. Charlesworth, Neural networking (letter). American Scientist 102, 323 (2014). Access is provided at https://www.americanscientist.org/article/neuralnetworking.
Acknowledgments
The author is grateful to Jonathan D. Charlesworth, James A. Davis, Gary R. Greenfield, Peter Koellner, Donald W. Loveland, William T. Ross, and Jack E. Singal for helping to improve the exposition. This work is dedicated to the author’s Duke University graduate school professors Richard E. Hodel and the late Joseph R. Shoenfield.