From Cognitive Binary Logic to Cognitive Intelligent Agents

06/18/2011 ∙ by Nicolaie Popescu-Bodorin, et al. ∙ IEEE

The relation between self-awareness and intelligence is an open problem these days. Despite the fact that self-awareness is usually related to Emotional Intelligence, this is not the case here. The problem described in this paper is how to model an agent which knows (Cognitive) Binary Logic and which is also able to pass (without any mistake) a certain family of Turing Tests designed to verify its knowledge and its discourse about the modal states of truth corresponding to well-formed formulae within the language of Propositional Binary Logic.


I Introduction

The relation between self-awareness and intelligence is an open problem these days. Despite the fact that self-awareness is usually related to Emotional Intelligence, this is not the case here. The problem described in this paper is how to model an agent which knows (Cognitive) Binary Logic [1] and which is also able to pass (without any mistake) a certain family of Turing Tests [2] designed to verify its knowledge and its discourse about the modal states of truth [3] (necessary truth, contextual truth, and impossible truth / necessary falsehood / contradiction [1]) corresponding to well-formed formulae within the language of Propositional Binary Logic (PBL).

The context of this paper is given by [4] and [1]. More precisely, in order to improve a complex software platform for iris recognition [4], an inference engine is needed. The computational model of this engine will be derived from the Computational formalization of Cognitive Binary Logic introduced in [1]. The first step in this direction is to extend it up to an intelligent agent able to pass some Turing tests, and this is the subject of the present paper.

II Preparing for the Turing Test

In order to pass the Turing test, the agent must have conversational capacities. Let us assume that the agent gets an input which looks like a well-formed formula of PBL. An example of this kind is the Liar Paradox discussed in [1]. The problem is that a deductive discourse [1] depends on the given input string, but it also depends on the given goal, which is obvious for a human agent but not for a software agent.

In other words, for a software agent, the input string can be translated into one of the following sentences about the given formula: ‘it is (always) false’, ‘it is (always) true’, ‘it is a contextual truth’, ‘it is false and well-formed’, ‘it is true and well-formed’, or into one of the following queries: ‘is it a theorem?’, ‘is it a contradiction?’, ‘is it a contextual truth?’.

II-A The cognitive dialect

The introduction of two semantic markers, denoted ‘(!):’ and ‘(?):’, is mandatory in order to differentiate between assertions (affirmations) and queries (questions), respectively. With these notations, the deductive discourses of the cognitive dialect are derived from the ordinary deductive discourses by adding the ‘(!):’ or ‘(?):’ prefix to each vertex.

The second reason for introducing these markers is that, in order to prove a certain degree of self-awareness, an agent must be able to understand the difference between assertions like ‘I ask’ and ‘I say’, and also between ‘I ask myself’, ‘I say to myself’ (‘I found’, ‘I proved’, ‘I know’), ‘I ask you/someone’, and ‘I say to you/someone’.
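As a minimal illustration of how the two markers separate an utterance from the formula it carries, the Python sketch below tags each input line as an assertion or a query. The function name and the error handling are assumptions made for this sketch, not notations from the paper.

```python
def parse_utterance(line):
    """Split a cognitive-dialect line into (kind, body).

    '(!):' marks an assertion and '(?):' marks a query -- the two
    semantic markers introduced above.  Everything after the marker
    is the untouched formula text.
    """
    line = line.strip()
    if line.startswith("(!):"):
        return "assertion", line[len("(!):"):].strip()
    if line.startswith("(?):"):
        return "query", line[len("(?):"):].strip()
    raise ValueError("missing semantic marker in: %r" % line)

# The same formula text, tagged two different ways:
print(parse_utterance("(!): p -> q"))   # ('assertion', 'p -> q')
print(parse_utterance("(?): p -> q"))   # ('query', 'p -> q')
```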

When it comes to imagining a logical human-machine dialog, the most important point is that what the human tells the agent can be true or false, but anything said by the agent must be true (otherwise the agent is inevitably inconsistent and, sooner or later, it will fail to pass a certain Turing test).

Also, to keep the design of our agent as simple as possible, we will consider that all assertions are positive, i.e. all of them declare that something is true:
‘it is true that the formula holds’:

(1)

or ‘it is true that the formula is false’,

(2)

or ‘it is true that the formula is a contextual truth’:

(3)

By analogy, any query will ask for something true:
‘is it true?’:

(4)

or ‘is it true that the formula is false?’:

(5)

or ‘is it true that the formula is a contextual truth?’:

(6)

The third convention allows the agent to manipulate all three states of modal truth using a purely binary vocabulary. We achieve this by introducing the dialog function (N. Popescu-Bodorin):

(7)

where the first component of the output doublet is one of two reserved labels (meaning that the input assertion/query is logical or nonsense, respectively), and the second component classifies the well-formed input formula of PBL with respect to the class of all tautologies and the class of all contradictions.

The output of the dialog function is computed using the following rules:

  1. The formula is a tautology if and only if the full deductive discourse [1] of the input assertion

    is a deductive proof [1]. In this case, the dialog function outputs the doublet:

    I.e. the agent proves that the formula is always true and the input assertion is (logically) well-formed.

  2. The formula is a tautology if and only if the full deductive discourse of the input assertion

    is a deconstruction [1]. In this case, the dialog function outputs the doublet:

    I.e. the agent proves that the formula is always true and finds that asserting falsity for a tautology is a logical nonsense.

  3. The formula is a contradiction if and only if the full deductive discourse of the input assertion

    is a deconstruction. In this case, the dialog function outputs the doublet:

    I.e. the agent proves that the formula is always false and finds that asserting truth for a contradiction is a logical nonsense.

  4. The formula is a contradiction if and only if the full deductive discourse of the input assertion

    is a deductive proof. In this case, the dialog function outputs the doublet:

    I.e. the agent proves that the formula is always false and the input assertion is well-formed.

  5. The formula is a tautology if and only if the full deductive discourse of the input query

    is a deductive proof. In this case, the dialog function outputs the doublet:

    I.e. the input question is well-formed, the agent proves that the formula is always true and gives a positive answer to the input query.

  6. The formula is a tautology if and only if the full deductive discourse of the input query

    is a deconstruction. In this case, the dialog function outputs the doublet:

    I.e. the input question is well-formed, the agent proves that the formula is always true and gives a negative answer to the input query.

  7. The formula is a contradiction if and only if the full deductive discourse of the input query

    is a deconstruction. In this case, the dialog function outputs the doublet:

    I.e. the input question is well-formed, the agent proves that the formula is always false and gives a negative answer to the input query.

  8. The formula is a contradiction if and only if the full deductive discourse of the input query

    is a deductive proof. In this case, the dialog function outputs the doublet:

    I.e. the input question is well-formed, the agent proves that the formula is always false and gives a positive answer to the input query.

  9. The formula is a contextual truth if and only if none of the full deductive discourses of the input assertions

    or of the input queries

    is a deductive proof. In this case, the dialog function outputs the doublet:

    I.e. the agent finds a context which satisfies the formula and also a context which satisfies its negation.

By analyzing the outputs of the dialog function it can be seen that, in the cognitive dialect, it is legal to ask anything, but it is illegal to assert falsity for a tautology or to assert truth for a contradiction.
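The nine rules above can be sketched in Python. This is an illustration only: formulas are encoded as predicates over truth assignments, the classification is done by brute-force truth tables rather than by the deductive discourses of [1], and the labels 'logical'/'nonsense' stand in for the paper's reserved symbols.

```python
from itertools import product

def classify(formula, variables):
    """Brute-force truth-table classification of a propositional
    formula, given as a predicate over a {name: bool} assignment.
    (This stands in for the deductive discourses of [1].)"""
    results = [formula(dict(zip(variables, vals)))
               for vals in product([False, True], repeat=len(variables))]
    if all(results):
        return "tautology"
    if not any(results):
        return "contradiction"
    return "contextual"

def dialog(kind, claim, formula, variables):
    """A sketch of the dialog function: maps an assertion or query
    about a formula to a (label, verdict) doublet.
    `claim` is 'true', 'false', or 'contextual'."""
    status = classify(formula, variables)
    if kind == "assertion":
        # Rules 2 and 3: asserting falsity for a tautology, or truth
        # for a contradiction, is a logical nonsense.
        if (status, claim) in {("tautology", "false"),
                               ("contradiction", "true")}:
            return "nonsense", status
        return "logical", status
    # Rules 5-9: it is legal to ask anything; answer the modal question.
    expected = {"true": "tautology", "false": "contradiction",
                "contextual": "contextual"}[claim]
    return "logical", "yes" if status == expected else "no"

# Modus Ponens, (p and (p -> q)) -> q, is a tautology:
mp = lambda v: (not (v["p"] and ((not v["p"]) or v["q"]))) or v["q"]
print(dialog("query", "true", mp, ["p", "q"]))       # ('logical', 'yes')
print(dialog("assertion", "false", mp, ["p", "q"]))  # ('nonsense', 'tautology')
```

Note that for a contextual truth the paper's rule 9 also reports the two witnessing contexts; this sketch only answers the modal question.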

II-B Simple examples

Let us consider that a formula is given to be studied. If the input query asks whether the formula is true, a full deductive discourse written in the cognitive dialect would be:

The context which makes the formula satisfiable is:

If the input query asks whether the negated formula is true, a full deductive discourse written in the cognitive dialect would be:

The context which makes the negated formula satisfiable is:

The output of the dialog function is the following doublet:

Hence, the formula is a contextual truth. Also, the two contexts describe the solutions of the Boolean satisfiability problems for the formula and for its negation, respectively.
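The two contexts above are exactly the satisfying assignments of the formula and of its negation. A minimal sketch, using p → q as an assumed stand-in for the formula of this example:

```python
from itertools import product

def contexts(formula, variables):
    """All truth assignments satisfying `formula` -- each assignment
    is a 'context' in the sense used above."""
    return [dict(zip(variables, vals))
            for vals in product([False, True], repeat=len(variables))
            if formula(dict(zip(variables, vals)))]

# p -> q is a contextual truth: both it and its negation
# have satisfying contexts.
impl = lambda v: (not v["p"]) or v["q"]
print(contexts(impl, ["p", "q"]))                   # three contexts
print(contexts(lambda v: not impl(v), ["p", "q"]))  # [{'p': True, 'q': False}]
```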

Fig. 1: A deductive discourse for Modus Ponens written in the cognitive dialect.
Fig. 2: A deductive discourse written in the cognitive dialect for a tautology.

In the second example, Modus Ponens is analyzed. For the input query:

a full deductive discourse written in the cognitive dialect would be the deductive proof presented in Fig. 1, and the output of the dialog function is the corresponding doublet.

In the third example, a contradiction is analyzed. For the input query:

a full deductive discourse written in the cognitive dialect would be the following deductive proof:

The output of the dialog function is the corresponding doublet. Hence, the corresponding input assertion will be recognized as a logical nonsense by the dialog function.

The fourth example analyzes the Modus Tollens argument. For the input query:

a full deductive discourse written in the cognitive dialect would be the following deductive proof:

Hence, the output of the dialog function is the corresponding doublet.

In the fifth example, we consider the tautology:

A deductive discourse written in the cognitive dialect is presented in Fig. 2, and the output of the dialog function is the corresponding doublet.

III The Agent

The basic functionality of the proposed Cognitive Intelligent Agent is described in Fig. 3. Let us consider Turing tests containing the following type of problem: for an arbitrary formula, the agent is required to find whether the input assertion/query is or is not a logical nonsense, and also whether the formula is a tautology, a contextual truth, or a contradiction.

Since the theory is sound and complete [1], the agent will give the correct answer for any input query written in the cognitive dialect. Also, if the input assertion is a logical nonsense, the agent will correctly recognize it. Therefore, the agent will pass with a success rate of 100% all sessions of Turing tests designed to verify its knowledge and its discourse about the modal states of truth corresponding to formulae within the language of PBL. Hence, there is no doubt that, as a software agent, it demonstrates the highest possible degree of intelligence on this family of tests. Still, the agent is not enabled to be aware of itself and of its environment, or to simulate self-awareness.

Fig. 3: The Cognitive Intelligent Agent

IV Conclusion and Future Work

The basic design of an intelligent agent was proposed in this paper. It is an example of a fully intelligent agent which is not at all aware of itself. Still, it is able to engage in simple conversations about the modal states of truth of well-formed formulae of PBL, without making any mistakes.

Future developments will include self-awareness, which is needed in order to enable the agent to supervise complex computations and to engage in direct communication with humans on specific subjects other than the modal truth state of the formulae of PBL. For example, we plan to gather humans’ opinions about some particular iris recognition results in a manner similar to that described in [5], where customers’ emotive responses to a product are collected using a questionnaire. The goal is to correlate iris recognition results obtained automatically with human feedback and to explore the limitations that could appear in iris recognition when using eye images captured in insufficiently constrained acquisition conditions.

On the other hand, we know that some sub-problems of iris recognition are in NP (the class of problems solvable in nondeterministic polynomial time), and consequently heuristic algorithms must be used in order to achieve some speed. The problem is that quantifying the quality of their results is a matter of degree [6] (in fact, the results of these heuristic algorithms are near-solutions, not exact solutions). Therefore, future developments in the direction of fuzzy logic are not excluded at all.

References

  • [1] N. Popescu-Bodorin, L. State, Cognitive Binary Logic - The Natural Unified Formal Theory of Propositional Binary Logic, accepted in The European Computing Conference (ECC 2010, Bucharest).
  • [2] A. M. Turing, Computing machinery and intelligence, Mind, 59, 433-460, 1950.
  • [3] P. Blackburn, M. de Rijke, Y. Venema, Modal Logic, Cambridge University Press, 2000.
  • [4] N. Popescu-Bodorin, Exploring New Directions in Iris Recognition, Proc. International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC’09, Conference Publishing Services, IEEE Computer Society, pp. 384-391, 2010.
  • [5] A. Mohais, A. Nikov, A. Sahai, S. Nesil, A tunable swarm-optimization-based approach for affective product design, Proc. 9th WSEAS Int. Conf. on Mathematical and Computational Methods in Science and Engineering, MACMESE’07, pp. 254-258, 2007.
  • [6] L. A. Zadeh, Test-Score Semantics for Natural Languages, Proc. of the Conference on Computational Linguistics, Vol. 1, pp. 425-430, 1982.