The Mimicry Game: Towards Self-recognition in Chatbots

02/06/2020, by Yigit Oktar, et al.

In the standard Turing test, a machine has to prove its humanness to the judges. By successfully imitating a thinking entity such as a human, this machine then proves that it can also think. However, many objections have been raised against the validity of this argument. Such objections claim that the Turing test is not a tool to demonstrate the existence of general intelligence or thinking activity. In this light, alternatives to the Turing test are to be investigated. Self-recognition tests applied to animals through mirrors appear to be a viable alternative for demonstrating the existence of a type of general intelligence. The methodology proposed here constructs a textual version of the mirror test by making the chatbot (in this context) the one and only judge, which must figure out, in an unsupervised manner, whether the contacted party is an other, a mimicker, or itself. This textual version of the mirror test is objective, self-contained, and mostly immune to the objections raised against the Turing test. Any chatbot passing this textual mirror test should have or acquire a thought mechanism that can be referred to as the inner-voice, answering Turing's original and long-standing question "Can machines think?" in a constructive manner.


I Introduction

In 1950, Alan Turing investigated the question, "Can machines think?". However, instead of providing a concrete definition of "thinking", he replaced the question with a test called the Imitation Game, claiming that a machine passing this test could then be regarded as a thinking entity [31].

Originally a party game, the Imitation Game (IG) is played with a man, a woman, and a judge whose gender is not important, as depicted in Fig. 1(a). Through written communication, both subjects aim to convince the judge that he or she is the woman and the other is not; hence the man must imitate a woman. Turing then replaces the man with a machine, as in Fig. 1(b), and wonders whether the success rate changes. In the final version of the game, a man takes the place of the woman, as in Fig. 1(c).

It is not very clear which version (Fig. 1(b) or Fig. 1(c)) is meant by IG throughout the literature, but it is generally assumed that the Turing Test (TT) in its standard interpretation takes the form of Fig. 1(d), in which the machine's ability to imitate a human, rather than its ability to imitate a woman, is measured. It is still in question why the IG Turing originally suggested is gender-based, and whether IG is fully equivalent to TT. Readers might refer to [21] for a sound discussion of these issues. Note that Turing himself later drops the gender-related issues and poses the question "Can machines communicate in natural language in a manner indistinguishable from that of a human being?", thus placing the Turing Test of Fig. 1(d) at the center of attention [31]. Therefore, throughout this article, TT will refer to this standard form.

(a) Preliminary of the Imitation Game
(b) First version of the Imitation Game
(c) Second version of the Imitation Game
(d) Standard version: The Turing Test
Fig. 1: The distinction between the Imitation Game and the Turing Test

I-A Analysis of Turing Test

              Human Intelligence       Rationality
Reasoning     Thinking humanly (TH)    Thinking rationally (TR)
Behavior      Acting humanly (AH)      Acting rationally (AR)

TABLE I: Summary of the extent of mainstream AI

The analysis of TT presented here is by no means a complete one; in fact, it is based mostly on the objections raised against TT. In terms of Table I, TT mostly addresses the behavioral aspect of human intelligence, but in fact targets the reasoning domain. In other words, TT claims that fulfilling AH implies the existence of a thinking mechanism, namely TH, TR, or both. This table will be revisited throughout this subsection [26].

I-A1 Consistent machines, Gödel, and paraconsistent logic

Being built on classical logic, conventional machines are bound by Gödel's theorem, which states that in consistent logical systems of sufficient power, certain statements can neither be proved nor disproved within the system [10]. Thus, such machines are limited to an extent. Humans, on the other hand, with some room for inconsistency or irrationality, have different characteristics. The implications of these issues for machine thought are discussed further by Lucas [16]. However, with the introduction and mechanical realization of paraconsistent logic, which allows inconsistencies in a controlled and discriminating way, it is plausible that such machines can reason much as humans do [23, 4]. Note that modeling the generic human reasoning mechanism exactly in terms of paraconsistent logic is perhaps still an open problem. Assuming it can be solved, and referring to Table I, both TH and TR seem satisfiable for artificial machines. This conclusion is important with respect to TT, as TT itself claims that TH and TR are satisfiable (i.e., by fulfilling AH).

I-A2 Machines that cheat, Chinese room, and reanimation

In the Turing Test, there is no restriction on the design of the involved machines. Therefore, machines can cheat (i.e., hide their default behavior) and disguise themselves in a human-like appearance. In Table I, this corresponds to the fact that although AH does not hold in reality, the machine can trick the judge into believing that AH holds. Then, by appearing to fulfill AH, the machine can erroneously pass TT. A related argument is given by John Searle through his Chinese room example, claiming that external behavior cannot be used to determine whether a machine is actually thinking in real time or merely replaying an already-thought-out and saved process (created by someone else) [27]. This translates to the fact that, although none of TH, TR, AH, or AR holds directly for the machine in question, it can simulate such processes (acquired from someone else) without having created them in the first place, working much like a virtual machine. Note that the capacity of such a machine is then bounded by the processes it has or can acquire, and thus its intelligence is defined not by its own mental abilities but by the abilities of its acquaintances.

I-A3 Uncreative and hardcoded machines

An objection with roots in Lady Lovelace's remark holds that machines are incapable of originating anything, doing anything new, or surprising us. Such acts most probably require learning and creativity beyond keeping a set of logical rules. However, at the current stage of AI research, machines (or artificial agents) are capable of learning and of performing tasks that require creativity [30, 25]. At the other extreme, Ned Block proposes a machine that can pass TT without any significant information processing, simply by being extensively hardcoded [3]. This hypothetical machine stores all possible sensible conversations in its memory and then answers the judges by simple lookups. Although such a machine may not be feasible in practice, it is theoretically possible. This machine's intelligence is, by analogy, that of a jukebox, yet it can pass TT; thus TT cannot be a proper test of intelligence. Block in fact claims the opposite of Turing, namely that acting humanly does not necessarily imply the existence of any thinking activity, as in theory generic human behavior can be hardcoded into the machine. In short, TT is deemed a behaviorist approach incapable of detecting the extent of internal information processing. Therefore, alternatives that can detect the existence of sophisticated internal mechanisms (such as the capability for learning and general problem solving) are to be preferred.
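As a deliberately unintelligent illustration, a minimal sketch of the kind of lookup machine Block envisions might pair every conversation prefix with a canned reply, so that answering involves nothing but a dictionary lookup. The table and function names below are ours, and a real table of this kind would be astronomically large:

    # A toy "Blockhead": every conversation prefix seen so far is paired with a
    # pre-stored reply, so answering is a single dictionary lookup with no
    # reasoning or learning involved.
    CANNED = {
        ("Hello",): "Hello, nice to meet you.",
        ("Hello", "Hello, nice to meet you.", "How are you?"): "I am fine, thank you.",
    }

    def blockhead_reply(history):
        """Return the pre-stored reply for this exact conversation prefix."""
        return CANNED.get(tuple(history), "I do not understand.")

    print(blockhead_reply(["Hello"]))  # -> "Hello, nice to meet you."
    print(blockhead_reply(["Hello", "Hello, nice to meet you.", "How are you?"]))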

I-A4 Human intelligence vs. general intelligence

It is asserted that the IG (or the TT) examines machines in terms of human-specific intelligence rather than general intelligence [18]. This issue is investigated in detail when the concept of subcognition is introduced in Sect. I-B1 and is revisited when information-theoretic alternatives to TT are discussed in Sect. I-B3.

I-A5 Impairment of judges and confederate effect

Apart from the philosophical aspects of TT, the involvement of judges raises issues from a practical perspective. As human beings, judges can make mistakes, cannot be totally objective, and can even be manipulated towards an unexpected decision [31, 28, 29]. Moreover, human participants may frequently be demotivated to act as themselves, causing them to be incorrectly labeled as machines, a peculiarity known as the confederate effect [32]. In short, these are all repercussions of TT not being a self-contained test. There exists an external dependency on the performance of judges and human subjects, so TT cannot be truly accurate or objective.

I-B Alternatives to Turing Test

Alternatives to TT are investigated under three headings. Firstly, alternatives that provide valuable analytical insight are given. Then, higher-order generalizations of TT are discussed. Finally and most importantly, more formal information theoretic alternatives are listed, leading the way to self-recognition as an alternative.

I-B1 Alternatives as analytic probes

Considering the possibility of a random-state finite automaton (FSA) generating proper English sentences (by extreme luck) well enough to pass the Turing test, Kugel introduced a theoretical game consisting of infinitely many rounds [15]. The main motivation is the view that not only FSAs but even Turing machines should be regarded as inferior to the mental capabilities of humans. Therefore, the theoretical possibility of a random FSA passing the Turing test is disturbing to many.

Through the introduction of another hypothetical test, called the Seagull Test, the limits of the Turing test are further challenged. A Seagull test measures a subject's capability of flight: the subject passes the test if its flight characteristics are indistinguishable from those of a seagull on radar. The Seagull test cannot be passed by helicopters, bats, beetles, or many other flying things. This is directly analogous to the Turing test's detection of intelligence as practiced by a human being. Further, through the introduction of subcognitive questions, French claims that, to imitate a human, a machine needs to experience the world as a human does (i.e., through sensing organs) for a considerable period of time; otherwise it will not be able to answer questions related to the specific physical experience that only a human being can acquire. In other words, TT is a test of human-like intelligence and experience, just as the Seagull test is a test of seagull-like flight characteristics [6].

I-B2 Generalizations

Based on the subcognition concept, it is natural to extend TT into the physical domain, as proposed by Harnad [11] under the name Total Turing Test. In this physical version, the judge can also examine the two candidates directly, both visually and tactilely. Harder extensions, referred to as T4 and T5, are also discussed in [7], but all these generalizations aim at testing human-specific intelligence or functionality of machines, not at providing a universal measure of intelligence or capability. Therefore, information-theoretic approaches to defining intelligence are discussed next to provide a domain-independent perspective.

I-B3 Information theoretic alternatives

A conventional perspective in this domain favors inductive learning capacity as a general and fundamental sign of intelligence. Through certain analogies, it can be claimed that inductive learning is tightly related to the ability to compress, so it is possible to draw parallels between intelligence and algorithmic information theory [5].
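As a rough illustration of that parallel, algorithmic complexity itself is uncomputable, so off-the-shelf compressors are often used as a crude stand-in; the toy snippet below (ours, not part of the cited proposal) scores how much regularity a compressor can find in a sequence:

    import zlib

    def compressibility(s: str) -> float:
        """Ratio of compressed size to raw size; lower means more regularity found,
        i.e. 'better induction' over the string (a crude Kolmogorov-complexity proxy)."""
        raw = s.encode("utf-8")
        return len(zlib.compress(raw, 9)) / len(raw)

    print(compressibility("ab" * 40))                  # highly regular: small ratio
    print(compressibility("q8#zL1x!vT0e&nGm5@rW9c^b")) # near-random: ratio close to (or above) 1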

In a similar study, comprehension (as the outcome of a successful inductive inference process) is chosen as the fundamental sign of intelligence. Formalizing this ability, the authors arrive at what is called the C-test, defined in purely computational terms applicable to both humans and non-humans. In this way, they are also able to establish a connection between information-theoretic concepts and classical IQ tests [13].

Recent understanding suggests that a compression or induction test is probably too limited to define the standards of general intelligence. The key idea is to see intelligence as the mean (or weighted average) performance of an agent across all possible environments [12], including active environments. In this regard, the agent does not only require inductive abilities to understand the environment, but also needs planning abilities to use such understanding effectively. These modifications place the focus on concepts such as perception, attention, and memory besides inductive skills, and thus generalize the C-test. Following this logic, a universal definition of intelligence is given as "the ability to adapt to a wide range of environments", referring to both internal and external mechanisms of an agent. Although theoretically sound, such an extensive measure is nearly impossible to realize as a practical test that covers all possible forms of intelligence accurately and objectively.
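One well-known formalization in this spirit, Legg and Hutter's universal intelligence measure (which the anytime test of [12] builds on), makes the "weighted average over all possible environments" explicit by weighting the agent π's expected value V in each computable environment μ by that environment's Kolmogorov complexity K(μ):

    Υ(π) = Σ_{μ ∈ E} 2^(−K(μ)) · V^π_μ

so that simpler environments contribute more to the score, while the sum still ranges over the entire environment class E.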

II Self-recognition as an alternative

At this point, it is most appropriate to introduce our perspective on this issue. Perhaps, instead of trying to attain a definition of the supreme form of intelligence and an ultimate test for it, trying to catch a glimpse of general intelligence will be far more beneficial as a cornerstone. It is a common convention to regard human beings as the most intelligent forms known to exist. Forcing machines to reach this (highest-known) intelligence threshold without setting any other reasonable lower milestones seems unreasonable. Given that the species observed to be the most intelligent of all have the ability of self-recognition in a mirror [24, 22, 2], such tests and their variants are proper candidates for catching glimpses of general intelligence in a self-contained and truly objective manner. They are immune to most of the objections raised against the Turing Test and its generalizations, and they set a more reasonable milestone for machines to reach.

II-A Self-recognition in living beings

Self-recognition and the idea of self are special and unique to some mammals such as great apes, humans, orcas, dolphins, and elephants. There is also only one non-mammal, the European magpie, known to be capable of self-recognition. The first usage of the Mirror Test as a tool for testing self-recognition appears in 1970, introduced by Gordon Gallup [8]. Basically, a test subject is marked with an odorless dye at a spot it cannot see directly (forehead, ear, etc.). Its behavior is then observed to see whether it becomes aware that the dye is on its own body. For instance, one common behavior that indicates self-recognition is poking at the marking on its own body; of course, this behavior needs to happen while the subject is observing the reflection. However, the test has raised some questions and received criticism. Readers may refer to [20] for more detailed information on the ensuing discussion about self-recognition and interpreting others' behavior.

The test was not robust enough. Gorillas, for instance, failed the test because their basic instincts dictate that eye contact is an aggressive gesture; hence, they avoid looking at the face of the reflection. In addition, some primates need a transitional period before recognizing themselves in the mirror (for example, systematically exploring the body parts that cannot be seen directly). These problems complicate the application procedure of the test. Nevertheless, the main question was, "Is mirror self-recognition (MSR) enough to say the subject is intelligent?" An early attempt at an answer came again from Gordon Gallup. According to Gallup, there is a link between MSR and the ability to interpret others' mental states. In short, he predicted a developmental correlation: social strategies based on the idea of self, such as empathy, pretending, and deception, would be reached through further cognitive development after the emergence of mirror self-recognition. In other words, MSR can be thought of as a door that leads to more complex cognitive abilities. It should also be noted that the presence of mirror self-recognition is necessary for more complex cognitive abilities, but it is not certain that an animal with this ability will ever develop them [9]. Most relevant to the methodology presented in Section III, self-recognition studies in robots are discussed next to gain further insight.

II-B Robotic self-recognition

In one of the earliest studies of robotic self-recognition, by learning a characteristic time window between the initiation of a motor movement and the perception of the resulting motion, the authors demonstrate a certain level of self-recognition in robots, reminiscent of the rather incomplete model of self-awareness present in human infants [17]. However, if two different agents happen to have very similar time-delay characteristics, such a system will fail, as there is then little chance of discrimination. After all, motion time-delay models are certainly not as unique as fingerprints: two robots of the same kind from the same manufacturer will surely have very similar time-delay models.
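A minimal sketch of this idea (ours, not the cited implementation) would learn the motor-to-perception delay from self-generated motion and then label an observed motion as "self" whenever its delay falls inside the learned window:

    import statistics

    class DelayModel:
        """Learn the characteristic delay between issuing a motor command and
        perceiving the resulting motion, then classify new observations."""
        def __init__(self, tolerance=0.05):
            self.delays = []
            self.tolerance = tolerance  # seconds of slack around the learned window

        def observe_self(self, command_time, perceived_time):
            self.delays.append(perceived_time - command_time)

        def is_self(self, command_time, perceived_time):
            if not self.delays:
                return False
            mean = statistics.mean(self.delays)
            spread = statistics.pstdev(self.delays) + self.tolerance
            return abs((perceived_time - command_time) - mean) <= spread

    model = DelayModel()
    for t in (0.0, 1.0, 2.0):
        model.observe_self(t, t + 0.12)   # training: own motion perceived ~120 ms later
    print(model.is_self(3.0, 3.13))       # True: delay within the learned window
    print(model.is_self(4.0, 4.60))       # False: delay too long, likely an "other"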

In a follow-up study from the same research group, the robot is able to learn a more formal Bayesian model that relates its own motor activity to perceived motion. The importance of this study lies in the fact that mirror self-recognition is performed through a purely statistical kinesthetic-visual matching mechanism, without any significant social aspect. This challenges the viewpoint that a certain level of social understanding is necessary for mirror self-recognition. The authors further claim that the mirror test may not be about self-awareness or theory of mind at all, but merely a test of an agent's ability to adapt to new kinds of visual feedback; yet such an ability might itself be related to the concepts of intelligence, mind, and self-awareness in a self-referential way (recalling the definition of general intelligence given earlier as the ability to adapt to a wide range of environments).

Perhaps the most promising way to investigate the self-recognition issue, in both practical and theoretical terms, is through studying mirror neurons, which can identify actions of either self or others [14]. Namely, such audiovisual mirror neurons activate when the corresponding action is performed, when its related sound is heard, or when it is seen. A recent study successfully incorporates such findings and develops a brain-inspired model for robotic self-consciousness [33]. However, the common conception is that mirror neurons alone are not sufficient, though they are necessary, for the ability of self-recognition in a mirror, as monkeys possess such neurons but cannot pass the mirror test. Furthermore, a simple consideration suggests that mirror neurons can be used to implement a communication system based on gestures, as in a sign language, thus introducing language into our research domain. With all these considerations in place, it is time to present our proposed methodology, which conceptualizes the mirror test by providing it in a textual form.

III Proposed Methodology

Fig. 2: Visual version of the proposed methodology

The main contribution of this paper lies in the conceptual generalization of the conventional mirror test, namely going from Fig. 2 to Fig. 3. To enable a meaningful analysis, a chatbot replaces the robot in this context. Performed actions now correspond to output sent, and observing performed actions corresponds to input received in the textual/conceptual version. According to this conceptual generalization, there is no restriction on the form of the agent as long as it has an input/output interface and an embodiment in some form. The need for an embodiment besides an input/output interface is discussed later in the text.

The proposed methodology consists of four stages, as depicted in Fig. 3. For simplicity, a turn-based chatting session is assumed in all stages. The first (leftmost) stage is the default (and conventionally the only) case considered in a chatting session: chatbot A sends its output to chatbot B and receives B's output through its input. Being the conventional case, chatbot A is aware that it is communicating with another entity (whether another chatbot or a human) and possibly has no motivation to think otherwise. Note that in nearly all cases relating two distinct entities there exists an intermediate communication channel, but it is omitted here for simplicity. In simple terms, then, Stage 1 involves instances of two distinct programs, A and B, with the corresponding input/output relation.

As the first effective stage of the proposed methodology, the next stage replaces B with another instance of A, called the mimicker of A. Therefore, in this case there are two instances of the same program talking to each other. So the question then is: Will the agent A be able to recognize such a case and figure out (in an unsupervised manner) that the entity contacted is in fact an instance of itself instead of being an instance of a distinct program?

At this point, referring back to the visual version will be helpful. Since implementing such a stage for living beings is impossible in practice, a theoretical consideration is adopted. Assume that there is an exact replica of the subject and that this replica can mimic any action taken in real time without any delay. What would be the consequences when these two entities face each other? Interestingly, if the subject contains non-determinism, such mimicry will break down at some point. Would such mimicry otherwise continue to infinity? Not necessarily. Although these instances are created from the same source, they may be subject to change as the interaction goes on, and such change may differ for each. After all, they are not exactly the same; at the very least, they have different coordinates in the system they belong to (consider a system in which there exists a change mechanism that depends on the coordinates of the individuals).

Fig. 3: Textual/Conceptual generalization

Going back to the conceptual version, if determinism and no-change policies hold, A will receive from its mimicker exactly the responses it would itself give. Hypothetically, A could first send a query to itself (an ability formalized later in Stage 4) and then to the mimicker, and check the two responses for equality, as sketched below. However, it seems that such a procedure would have to be repeated to infinity for a perfect mimicker to be detected, and this dilemma paves the way to the next stage.
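In program form, the hypothetical check might look like the following sketch, where reply and self_reply are assumed interfaces for querying the contacted entity and oneself, respectively; the point is precisely that the loop has no natural stopping criterion against a perfect mimicker:

    def looks_like_mimicker(agent, other, probes):
        """Compare the other's answers with the answers the agent would give itself.
        Any mismatch rules out a perfect mimicker; agreement on finitely many probes
        never proves one, which is exactly the dilemma described above."""
        for probe in probes:
            if other.reply(probe) != agent.self_reply(probe):
                return False   # a genuine 'other' revealed itself
        return True            # consistent so far -- but only so far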

The third stage is the one that conceptualizes the mirror test, as depicted in the third column of Fig. 3, in which an input/output redirector is conceptually used as a mirror. In fact, from A's perspective, this stage initially seems indistinguishable from Stage 2. In the visual version, a perfect mimicker behind a pane of glass and a reflection in a mirror sound like two observably equivalent scenarios. How is it possible, then, that a human or a capable animal can definitively grasp the concept of reflection without needing to test to infinity as Stage 2 demands? When and how does the necessary 'click' happen?

Equally applicable to the visual and textual versions, this 'click' happens when the subject figures out that there are not in fact two separate entities; rather, the two apparent entities refer to the same physical space, namely the subject itself, whether it be the body or the address space, respectively. This is equivalent to figuring out the principles of a mirror or of input/output redirection. In the textual version, it amounts to figuring out that the contacted entity is not an actual entity but a reference that redirects to the subject's own input. Note that how this realization takes place is a deeper issue that has partly been addressed in Section II for the visual version. Through Stage 4, the final stage, these deeper issues are introduced for the textual version.

Note that, up to this stage, input/output relations are assumed to be established through enforcement rather than choice. For example, an online chatbot is forced to chat with anyone who connects to its interface and must respond somehow to each query that stranger sends. Similarly, in Stage 2, a mimicker is instantiated and the necessary input/output relations are established without question, and the same holds for Stage 3. In that sense, Stage 4 is a symbolic stage in which agent A is depicted as having redirected its output to its own input, without needing an additional redirector as in Stage 3. A deep question then is: will an agent passing Stage 3 be able to configure itself as depicted in Stage 4? Possible implications of this and related issues are discussed next.
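To make the four configurations concrete, the following sketch (ours; the class and method names are assumptions, and the chatbot policy is a placeholder) wires one minimal chatbot into Stage 1 (a distinct partner), Stage 2 (another instance of itself), Stage 3 (an input/output redirector acting as the textual mirror), and Stage 4 (the agent binding its output directly to its own input):

    class Chatbot:
        def __init__(self, name):
            self.name = name
        def reply(self, message):
            # Placeholder policy; a real chatbot would generate text here.
            return f"{self.name} heard: {message}"

    class Redirector:
        """Stage 3 'mirror': no behavior of its own, it simply hands the
        sender's output straight back as that sender's next input."""
        def reply(self, message):
            return message

    def one_turn(a, partner, message):
        out = a.reply(message)       # A performs an 'action' (sends output)
        return partner.reply(out)    # what A subsequently perceives as input

    a = Chatbot("A")
    stage1 = one_turn(a, Chatbot("B"), "hello")   # Stage 1: a genuinely distinct other
    stage2 = one_turn(a, Chatbot("A"), "hello")   # Stage 2: another instance of A (mimicker)
    stage3 = one_turn(a, Redirector(), "hello")   # Stage 3: A's own output echoed back
    stage4 = a.reply(a.reply("hello"))            # Stage 4: A feeds its output to its own input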

III-A Implications of textual/conceptual self-recognition

The original visual version has been partially discussed in Sect. II, but there is more to be mentioned. Passing the mirror test for the first time possibly grants the subject the ability to virtually reposition the observer (i.e., the virtual eye), providing a form of perspective-taking. In other words, a mirror (theoretically directable anywhere) provides a different viewing perspective, as if the subject were virtually somewhere else, or equivalently grants the subject the ability both to observe and to transmit its actions to otherwise inaccessible portions of the world. Similarly, through an input/output redirector it is theoretically possible to bind the subject's input/output anywhere in memory.

It is still in question whether passing Stage 3 automatically grants the agent the ability to redirect its output as desired without needing any tool, whether to its own input or to somewhere else. The validity of such a possible implication should be carefully considered in a formal manner. However, assuming the agent can redirect its output to its own input, whether through an internal mechanism or an external one as depicted in Stage 4, this can serve as a formal, geometrical definition of the inner-voice concept. The implications of having an inner-voice are truly far-reaching in return [19, 1].

In a related manner, if an agent is able to consider actions without actually performing them, and to consider how they would be observed even in the absence of a mirror, this translates to a form of imagination. In the most general setting, this becomes the ability to imagine physically non-existent (and possibly dynamic) scenes from an arbitrary perspective. In the textual version, this would correspond to the ability to virtually create non-existent entities and have hypothetical encounters or conversations with them. This can then be tied to the much-discussed theory-of-mind concept, in which the non-existent entities are models, previously created by the subject, of actual entities. Note that there is still an ongoing debate on whether self-recognition and theory of mind can be regarded as highly correlated [19], but a gradual connection can certainly be made, as is apparent in our example.

III-B A comparative analysis

A comparative analysis with respect to the Turing Test is now given, referring back to Section I-A. It is not possible to fully address each item presented in that section, as a more formal and rigorous version of our methodology would have to be devised for such a detailed consideration. Regarding the first item, it is questionable whether consistent logical systems are capable of self-recognition, or whether paraconsistency is a requirement for it; such claims need to be considered rigorously in future work. However, as a self-contained test devoid of external dependency, our methodology is capable of detecting not just human-specific intelligence but a general form of it, perhaps a precursor of higher forms. There is no external judge who determines the outcome; the testee judges itself in a truly objective manner. An observer is in fact needed to record the success or failure of the test, but the outcome is independent of whether an observer exists. As a final note, there might be a reasonable time limit (or number of turns) within which the testee must succeed.

IV Discussion and Conclusion

Although studies based on the visual version exist, as mentioned previously, to our knowledge there has been no research on the textual version of the mirror test as proposed here. It is also an open question whether chatbots that have previously passed the Turing test will be able to fulfill all of the proposed stages depicted here.

Two approaches come to attention as possible solutions to this textual version. A conventional solution would be to integrate the concepts that mirror neurons provide with natural language processing tools, thus providing solutions parallel to the ones mentioned in Section II-B. As a mirror neuron is sensitive to a form of action, in the textual version this corresponds to a neuron sensitive to a pattern of text (whether it appears as input or output). This way, an agent may be able to detect that it is talking to itself if it 'has a neuron' that is sensitive to patterns of text specific to the agent.
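A minimal sketch of such a text-level 'mirror neuron' (ours, purely illustrative) is a detector that fires on phrasings characteristic of the agent's own productions, regardless of whether they occur on the output side or the input side:

    import re

    class TextMirrorNeuron:
        """Fires when a pattern characteristic of the agent's own productions is
        seen, regardless of whether it was produced (output) or perceived (input)."""
        def __init__(self, signature_patterns):
            self.patterns = [re.compile(p) for p in signature_patterns]
        def fires(self, text):
            return any(p.search(text) for p in self.patterns)

    # Assumed idiosyncratic phrasings of this particular agent.
    neuron = TextMirrorNeuron([r"\bto be honest\b", r"\bquite frankly\b"])
    incoming = "Quite frankly, I would rather not."
    if neuron.fires(incoming.lower()):
        print("Self-like pattern perceived: the interlocutor may be myself.")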

Another solution would be for the chatbot to figure out, in an unsupervised way, how to query the address of the memory cells it occupies. By adequate querying, it could then determine whether the contacted entity is another physical entity or in fact itself; this way, even a perfect mimicker could be detected. However, as noted, the chatbot needs to discover this address-querying behavior itself. If such a mechanism is hardcoded, it defies the goal of the test to begin with; it would correspond to implanting a perfectly working mirror self-recognition chip into a monkey's brain and watching it recognize itself in the mirror.
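In a toy setting where both parties can be asked for an identity token, such a check could look like the sketch below, where Python's id() stands in for whatever low-level address query the chatbot would have to discover on its own; as noted above, hardcoding this behavior would defeat the purpose of the test, so the sketch only shows what the discovered behavior would accomplish:

    class AddressAwareChatbot:
        def reply(self, message):
            if message == "WHO_ARE_YOU":
                return str(id(self))   # report own object identity / 'address'
            return "..."

        def contacted_is_self(self, other):
            # If the reported identity matches our own, the 'other' is just a
            # reference back to us (Stages 3/4); otherwise it is a distinct
            # entity, even if it mimics us perfectly (Stage 2).
            return other.reply("WHO_ARE_YOU") == str(id(self))

    a = AddressAwareChatbot()
    print(a.contacted_is_self(a))                      # True
    print(a.contacted_is_self(AddressAwareChatbot()))  # False, even for a perfect mimicker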

References

  • [1] B. Alderson-Day, K. Mitrenga, S. Wilkinson, S. McCarthy-Jones, and C. Fernyhough (2018) The varieties of inner speech questionnaire–revised (visq-r): replicating and refining links between inner speech and psychopathology. Consciousness and cognition 65, pp. 48–58. Cited by: §III-A.
  • [2] J. R. Anderson and G. G. Gallup (2015) Mirror self-recognition: a review and critique of attempts to promote and engineer self-recognition in primates. Primates 56 (4), pp. 317–326. Cited by: §II.
  • [3] N. Block (1995) The mind as the software of the brain. In An Invitation to Cognitive Science, Second Edition, Volume 3, D. N. Osherson, L. Gleitman, S. M. Kosslyn, S. Smith, and S. Sternberg (Eds.), pp. 377–425. Cited by: §I-A3.
  • [4] N. C. A. da Costa, L. J. Henschen, J. J. Lu, and V. S. Subrahmanian (1990) Automatic theorem proving in paraconsistent logics: theory and implementation. In 10th International Conference on Automated Deduction, M. E. Stickel (Ed.), Berlin, Heidelberg, pp. 72–86. External Links: ISBN 978-3-540-47171-4 Cited by: §I-A1.
  • [5] D. L. Dowe and A. R. Hajek (1997) A computational extension to the turing test. In Proceedings of the 4th conference of the Australasian cognitive science society, University of Newcastle, NSW, Australia, Vol. 1. Cited by: §I-B3.
  • [6] R. M. French (1990) Subcognition and the limits of the turing test. Mind 99 (393), pp. 53–65. Cited by: §I-B1.
  • [7] R. M. French (2000) The turing test: the first 50 years. Trends in cognitive sciences 4 (3), pp. 115–122. Cited by: §I-B2.
  • [8] G. G. Gallup (1970) Chimpanzees: self-recognition. Science 167 (3914), pp. 86–87. Cited by: §II-A.
  • [9] G. G. Gallup Jr (1998) Self-awareness and the evolution of social intelligence. Behavioural Processes 42 (2-3), pp. 239–247. Cited by: §II-A.
  • [10] K. Gödel (1931) Über formal unentscheidbare sätze der principia mathematica und verwandter systeme i. Monatshefte für mathematik und physik 38 (1), pp. 173–198. Cited by: §I-A1.
  • [11] S. Harnad (1989) Minds, machines and searle. Journal of Experimental & Theoretical Artificial Intelligence 1 (1), pp. 5–25. Cited by: §I-B2.
  • [12] J. Hernández-Orallo and D. L. Dowe (2010) Measuring universal intelligence: towards an anytime intelligence test. Artificial Intelligence 174 (18), pp. 1508–1539. Cited by: §I-B3.
  • [13] J. Hernandez-Orallo (2000) Beyond the turing test. Journal of Logic, Language and Information 9 (4), pp. 447–466. Cited by: §I-B3.
  • [14] E. Kohler, C. Keysers, M. A. Umilta, L. Fogassi, V. Gallese, and G. Rizzolatti (2002) Hearing sounds, understanding actions: action representation in mirror neurons. Science 297 (5582), pp. 846–848. Cited by: §II-B.
  • [15] P. Kugel (1990) Is it time to replace turing's test?. In 1990 Workshop Artificial Intelligence: Emerging Science or Dying Art Form. Sponsored by SUNY Binghamton's Program in Philosophy and Computer and Systems Sciences and AAAI, Cited by: §I-B1.
  • [16] J. R. Lucas (1996) Minds, machines and gödel: a retrospect. Cited by: §I-A1.
  • [17] P. Michel, K. Gold, and B. Scassellati (2004) Motion-based robotic self-recognition. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)(IEEE Cat. No. 04CH37566), Vol. 3, pp. 2763–2768. Cited by: §II-B.
  • [18] P. H. Millar (1973) On the point of the imitation game. Mind 82 (328), pp. 595–597. Cited by: §I-A4.
  • [19] A. Morin (2011) Self-recognition, theory-of-mind, and self-awareness: what side are you on?. Laterality 16 (3), pp. 367–383. Cited by: §III-A, §III-A.
  • [20] S. T. Parker, R. W. Mitchell, and M. L. Boccia (2006) Self-awareness in animals and humans: developmental perspectives. Cambridge University Press. Cited by: §II-A.
  • [21] A. Pinar Saygin, I. Cicekli, and V. Akman (2000-11-01) Turing test: 50 years later. Minds and Machines 10 (4), pp. 463–518. External Links: ISSN 1572-8641, Document, Link Cited by: §I.
  • [22] J. M. Plotnik, F. B. De Waal, and D. Reiss (2006) Self-recognition in an asian elephant. Proceedings of the National Academy of Sciences 103 (45), pp. 17053–17057. Cited by: §II.
  • [23] G. Priest (2002) Paraconsistent logic. In Handbook of philosophical logic, pp. 287–393. Cited by: §I-A1.
  • [24] D. Reiss and L. Marino (2001) Mirror self-recognition in the bottlenose dolphin: a case of cognitive convergence. Proceedings of the National Academy of Sciences 98 (10), pp. 5937–5942. Cited by: §II.
  • [25] A. Roberts, C. Hawthorne, and I. Simon (2018) Magenta.js: a javascript api for augmenting creativity with deep learning. Cited by: §I-A3.
  • [26] S. Russell and P. Norvig (2009) Artificial intelligence: a modern approach. 3rd edition, Prentice Hall Press, Upper Saddle River, NJ, USA. External Links: ISBN 0136042597, 9780136042594 Cited by: §I-A.
  • [27] J. R. Searle (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3), pp. 417–424. External Links: Document Cited by: §I-A2.
  • [28] H. Shah and K. Warwick (2010) Hidden interlocutor misidentification in practical turing tests. Minds and Machines 20 (3), pp. 441–454. Cited by: §I-A5.
  • [29] S. M. Shieber (1994-06) Lessons from a restricted turing test. Commun. ACM 37 (6), pp. 70–78. External Links: ISSN 0001-0782, Link, Document Cited by: §I-A5.
  • [30] H. Toivonen and O. Gross (2015) Data mining and machine learning in computational creativity. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 5 (6), pp. 265–275. Cited by: §I-A3.
  • [31] A. M. Turing (1950) Computing machinery and intelligence. Mind LIX (236), pp. 433–460. External Links: ISSN 0026-4423, Document, Link Cited by: §I-A5, §I, §I.
  • [32] K. Warwick and H. Shah (2015) Human misidentification in turing tests. Journal of Experimental & Theoretical Artificial Intelligence 27 (2), pp. 123–135. Cited by: §I-A5.
  • [33] Y. Zeng, Y. Zhao, J. Bai, and B. Xu (2018) Toward robot self-consciousness (ii): brain-inspired robot bodily self model for self-recognition. Cognitive Computation 10 (2), pp. 307–320. Cited by: §II-B.