Learning Ex Nihilo

03/04/2019 ∙ by Selmer Bringsjord, et al.

This paper introduces, philosophically and to a degree formally, the novel concept of learning ex nihilo, intended (obviously) to be analogous to the concept of creation ex nihilo. Learning ex nihilo is an agent's learning "from nothing," by the suitable employment of schemata for deductive and inductive reasoning. This reasoning must be in machine-verifiable accord with a formal proof/argument theory in a cognitive calculus (i.e., roughly, an intensional higher-order multi-operator quantified logic), and this reasoning is applied to percepts received by the agent, in the context of both some prior knowledge, and some prior and current interests. Learning ex nihilo is a challenge to contemporary forms of ML, indeed a severe one, but the challenge is offered in the spirit of seeking to stimulate attempts, on the part of non-logicist ML researchers and engineers, to collaborate with those in possession of learning ex nihilo frameworks, and eventually attempts to integrate directly with such frameworks at the implementation level. Such integration will require, among other things, the symbiotic interoperation of state-of-the-art automated reasoners and high-expressivity planners with statistical/connectionist ML technology.


1 Introduction

This paper introduces, philosophically and to a degree logico-mathematically, the novel concept of learning ex nihilo, intended (obviously) to be analogous to the concept of creation ex nihilo. [Footnote 1: No such assumption as that creation ex nihilo is real or even formally respectable is made or needed in the present paper.] Learning ex nihilo is an agent's learning "from nothing," by the suitable employment of schemata for deductive and inductive reasoning. This reasoning must be in machine-verifiable accord with a formal proof/argument theory in a cognitive calculus, and this reasoning is applied to percepts received by the agent, in the context of both some prior knowledge, and some prior and current interests. Roughly, cognitive calculi include inferential components of intensional higher-order multi-operator quantified logics, in which expressivity far outstrips off-the-shelf modal logics and possible-worlds semantics; a number of such calculi have been introduced as bases for AI that is unrelated to learning, e.g. see Govindarajulu & Bringsjord (2017a). The very first cognitive calculus, replete with a corresponding implementation in ML, was introduced in Arkoudas & Bringsjord (2009).

Learning ex nihilo is a challenge to contemporary forms of ML, indeed a severe one, but the challenge is offered in the spirit of seeking to stimulate attempts, on the part of non-logicist ML researchers and engineers, to collaborate with those in possession of learning ex nihilo frameworks, and eventually attempts to integrate directly with such frameworks at the implementation level. Such integration will require, among other things, the symbiotic use of state-of-the-art automated reasoners (such as ShadowReasoner, the particular reasoner that for us powers learning ex nihilo) with statistical/connectionist ML technology.

2 A Starting Parable

Consider, for instance, Robert, a person of the human variety [Footnote 2: The concept of personhood is a mental one that rides well above such merely biological categories as Homo sapiens sapiens. In a dash of insight and eloquence, Charniak & McDermott (1985) declare that "the ultimate goal of AI is to build a person" (p. 7) — from which we can deduce that personhood is in no way inseparably bound up with the particular carbon-based human case. The logico-computational modeling of reasoning at the level of persons, crucial for learning ex nihilo, along with a synoptic account of personhood itself, is provided in Bringsjord (2008).] who has just arrived for a black-tie dinner party at a massive and manicured stone mansion to which he has never been, hosted by a couple (who have told him they own the home) he has never met, and is soon seated at an elegant table, every seat of which is occupied by a diner Robert is now meeting for the very first time. [Footnote 3: Robert does know himself (and in fact self-knowledge is essential for learning ex nihilo), but, again, he doesn't know any of the other diners.] A thin, tall, crystal glass of his (arrayed among three others, each of a different shape, that are his as well) is gradually filled with liquid poured from a bottle that he notices carries the words 'Louis Roederer,' which have no particular meaning for him; as the pour unfolds, Robert notices tiny bubbles in the liquid in his glass, and the white-tuxedoed server says, "Your aperitif, sir." At this point, Robert is in position to learn an infinite number of propositions ex nihilo. He has received no large dataset, and the only direct communication with him has been composed solely of rather empty pleasantries and the one perfunctory declaration that he has been served an aperitif. Yet as Robert takes his first (stunning) sip of what he has already learned ex nihilo is expensive Champagne, [Footnote 4: Robert perceives that his beverage is sparkling wine, that it's likely quite dear, and knows enough about both the main countries that produce such a thing (viz. USA, Spain, France, and Italy), and linguistics, to reason to the belief that his beverage's origin is French, and hence that it's Champagne.] and as he glances at the other five guests seated at the table, he is poised to learn ex nihilo without bound. How much new knowledge he acquires is simply a function of how much time and energy he is willing and able to devote to the form of learning in question. As his water glass is filled, first with a wafer-thin slice of lemon dropped in deftly with silver tongs, and then with the water itself, he gets started:

For example, Robert now knows that his hosts find acceptable his belief that they are quite wealthy. [They may not in fact be wealthy (for any number of reasons), but they know that Robert’s perceiving what they have enabled him to perceive will lead to a belief on his part that they are wealthy, and Robert knows that they know this.] Robert now also knows that the menu, on the wine side, includes at least two additional options, since otherwise his array of empty glasses wouldn’t number three, one of which he knows is for water.

3 Learning Ex Nihilo is Ubiquitous

Of course, where there is one parable, countless others can be found: Herman isn't the black-tie kind of person. Given a choice between the dinner Robert had versus one under the stars in the wilderness, prepared on an open fire, Herman will take the latter, every time. Having just finished such a meal, Herman is now supine beside the fire, alone, many miles from civilization in the Adirondack Park, on a very chilly but crystal-clear summer evening. Looking up at the heavens, he gets to thinking — and specifically gets to learning (ex nihilo, of course). Herman knows next to nothing about astronomy. As a matter of fact, in general, Herman doesn't go in much for academics, period. He sees a light traveling smoothly, steadily, and quickly across his field of vision. Herman asks himself: What is this thing? He hears no associated sound. He isn't inclined to take seriously that this is an alien spacecraft — unless what he is seeing is a total anomaly. Is it? he asks himself. He waits and looks. There is another. This seems routine, but if so, and if this is a UFO, the papers would routinely be filled with UFO "sightings," and so on; but they aren't. So, no, not a UFO. The light, he next notes, is traveling too quickly to be a jet at high altitude, and in the dark like this, no light pollution whatsoever, jets at high altitude are hard to see. Herman notes that the object, as it moves, blocks out his view of stars behind it. Ah! Herman now knows that he has just seen a satellite in orbit, and with that done once, before the night is out he will see two more. Herman never knew that you could just lie down under these conditions and see satellites; he also never knew that there are a lot of satellites up there, going around Earth, but he reasons that since his experience is from one particular spot on the surface of Earth, it is likely to be representative of any number of other locations, and hence there must be many of these satellites in orbit. Herman has now come to learn many things, and the night is still young.

Robert and Herman are at the tip of an infinite iceberg of cases in which agents learn ex nihilo, both in rather mundane fashion at fancy dinners and campfire dinners, and in the more exotic cases seen in fiction (witness e.g. the eerie ability of Sherlock Holmes to quickly learn ex nihilo myriad things when meeting people for the first time, a recurring and prominent phenomenon in PBS' Sherlock). Moreover, it turns out that learning ex nihilo is not only ubiquitous, but is also — upon empirical investigation — a very powerful way to learn in the academic sphere, where forcing oneself to be interested enough to ask oneself questions, and then attempt to reason to their answers, can pay demonstrable academic dividends Chi et al. (1994); VanLehn et al. (1992).

4 Learning Ex Nihilo Produces Knowledge

Please note that while it may seem to the reader that learning ex nihilo is rather relaxed, free-wheeling, and epistemically risky, the fact is that we have very high standards for declaring some process, whether implemented in a person or a machine, to be learning. Put with brutal simplicity here, genuine learning of a proposition p by an agent a, for us, must result in the acquisition of knowledge by the agent, and knowledge in turn consists in the holding of three conditions, to wit: (1) the agent a must believe that p holds; (2) a must have cogent, expressible, surveyable justification for this belief; and (3) p must in fact hold. This trio constitutes the doctrine that knowledge consists of justified true belief; we shall abbreviate this doctrine as 'k=jtb.' By k=jtb, which reaches back at least to Plato, most of what is called "learning" in AI today (e.g. so-called "deep learning") is nothing of the sort.

[Footnote 5: For an argument, with which we are somewhat sympathetic, that contemporary "machine learning" fails to produce knowledge for the agent that machine-"learns," see Bringsjord et al. (2018).]
But in the case of our Robert and Herman, conditions (1)–(3) obtain with respect to all the new knowledge we have ascribed to them, and this would clearly continue to be true even if we added ad infinitum propositions that they can come to learn ex nihilo, stationary physically, but moving mentally.

5 Learning Ex Nihilo Includes a Novel Solution to the Vexing Gettier Problem

Since Plato it was firmly held by nearly all those who thought about the nature of human knowledge that k=jtb — until the sudden, seismic publication of Gettier (1963), which appeared to feature clear examples in which jtb holds, but not k. It would be quite fair to say that since the advent of Gettier’s piece, to this very day, defenders of k=jtb have been rather stymied; indeed, it wouldn’t be unfair to say that not only such defenders, but in fact all formally inclined epistemologists, have since the advent of Gettier-style counter-examples been scurrying and scrambling about, trying to pick up the pieces and somehow build up again a sturdy edifice. Our account of learning ex nihilo includes a formal-and-computational solution to the Gettier problem, which in turn allows AIs built with our automated-reasoning technology (described below) to acquire knowledge in accord with k=jtb. But first, what is the Gettier problem?

Gettier (1963) presents a scenario in which Smith has “strong evidence” for the proposition

  • (f) Jones owns a Ford.

The evidence in question, Gettier informs us, includes that “Jones has at all times in the past within Smith’s memory owned a car, and always a Ford, and that Jones has just offered Smith a ride while driving a Ford.” In addition, Smith has another friend, Brown, whose whereabouts are utterly unknown to Smith. Smith randomly picks three toponyms and “constructs the following three propositions.”

  • (g) Either Jones owns a Ford, or Brown is in Boston.

  • (h) Either Jones owns a Ford, or Brown is in Barcelona.

  • (i) Either Jones owns a Ford, or Brown is in Brest-Litovsk.

Of course, f ⊢ g, f ⊢ h, and f ⊢ i. [Footnote 6: We are here by the single turnstile ⊢ of course denoting some standard provability relation in a standard, elementary extensional collection of inference schemata, such as that seen in first-order logic, a logical system discussed below. The disjunction is of course inclusive.] "Imagine," Gettier tells us, "that Smith realized the entailment of each of these propositions he has constructed by" (f), and on that basis is "completely justified in believing each of these three propositions." Two further facts in the scenario yield the apparent counter-example, to wit: Jones doesn't own a Ford, and is currently driving a rented car; and, in a complete coincidence, Brown is in fact in Barcelona. Gettier claims, and certainly appears to be entirely correct in doing so, that Smith doesn't know (h), yet (h) is true, Smith believes (h), and Smith is justified in believing (h) — which is to say that jtb appears to be clearly instantiated!

Learning ex nihilo includes an escape from Gettier: Encapsulated to a brutal degree here, we gladly allow that characters like Smith in Gettier's 1963 cases do have knowledge on the basis of a k=jtb-style account, but the knowledge in question can be cast at any one of five levels, 1 (more likely than not) the weakest and 5 (certain) the strongest. Specifically, we hold that Smith knows (h) at a level of 1, because belief in these cases is itself at a strength level of 1, and that's because the argument serving as justification for belief in these cases only supports belief at that level. To our knowledge, this proposed solution to the counter-examples in question is new, though there are echoes of it in Chisholm (1977). [Footnote 7: Echoes only. Chisholm's main moves are flatly inconsistent with ours. E.g., his definition of the longstanding jtb-based concept of knowledge includes not merely that the agent is justified in believing p, but the stipulation that p is evident for the agent (Chisholm, 1977, p. 102). And his modifications of the j condition in the jtb triad are internalist, whereas ours are externalist, inhering as they do in formal structures and methods.] Somewhat amazingly, the learning-ex-nihilo diagnosis and resolution of Gettier cases is assumed in the literature to be non-existent. E.g., here is what's said about Gettier cases in what is supposed to be the non-controversial and comprehensive SEP: "Epistemologists who think that the JTB approach is basically on the right track must choose between two different strategies for solving the Gettier problem. The first is to strengthen the justification condition to rule out Gettier cases as cases of justified belief. This was attempted by Roderick Chisholm; the other is to amend the JTB analysis with a suitable fourth condition, a condition that succeeds in preventing justified true belief from being 'gettiered.' Thus amended, the JTB analysis becomes a JTB+X account of knowledge, where the 'X' stands for the needed fourth condition." (Ichikawa & Steup, 2012, §3, "The Gettier Problem") Yet the learning-ex-nihilo solution, while retaining the jtb kernel, is based on neither of these two different strategies. An AI-ready inductive logic that allows Gettier to be surmounted in this fashion is presented in Govindarajulu & Bringsjord (2017b).
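To make the proposal a bit more concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of graded knowledge under a k=jtb-style account: the level at which an agent knows a proposition is capped by the strength level of the argument that justifies the corresponding belief. The strength scale follows the paper's 1..5 levels; the names Claim and knowledge_level are assumptions introduced purely for illustration.

# Illustrative sketch only: graded k=jtb, with knowledge capped by the
# strength of the justifying argument (levels follow the paper's 1..5 scale).
from dataclasses import dataclass

LEVELS = {1: "more likely than not", 2: "likely", 3: "beyond reasonable doubt",
          4: "overwhelmingly likely", 5: "certain"}

@dataclass
class Claim:
    proposition: str
    is_true: bool             # condition (3): the proposition in fact holds
    believed: bool            # condition (1): the agent believes it
    justification_level: int  # condition (2): strength of the supporting argument, 1..5

def knowledge_level(c: Claim) -> int:
    """Return 0 if any jtb condition fails; otherwise knowledge inherits the
    (possibly weak) level of the justifying argument."""
    if not (c.is_true and c.believed and c.justification_level >= 1):
        return 0
    return c.justification_level

# Gettier's Smith: (h) is true and believed, but justified only at level 1,
# so Smith knows (h) -- merely at level 1.
smith_h = Claim("Either Jones owns a Ford, or Brown is in Barcelona",
                is_true=True, believed=True, justification_level=1)
print(knowledge_level(smith_h), LEVELS[knowledge_level(smith_h)])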

6 On Logico-Mathematics of Learning ex Nihilo

Is there a logico-mathematics of learning ex nihilo? If so, what is it, at least in broad strokes? The answer to the first of these questions is a resounding affirmative — but in the present paper, intended to serve as an introduction to a new form of human learning driven by reasoning, and concomitantly as a challenge to learning-focused AI researchers (incl. and perhaps esp. those in AI who pursue machine learning in the absence of reasoning carried out in confirmable conformity with inference schemata), the reader is accordingly asked to be willing to rest content (at least provisionally) with but an encapsulation of the logico-mathematics in question, and references (beyond those in the previous §) to prior work upon which the formal theory of learning ex nihilo is based. This should be appropriate, given that via the present paper we seek to place before the community a chiefly philosophical introduction to learning ex nihilo. We present the core of the relevant logico-mathematics, starting with the next paragraph. Our presentation presumes at least some familiarity with formal computational logic (both extensional and intensional [Footnote 8: Roughly, extensional logic invariably assigns a semantic value to formulae in a purely compositional way, and is ideal for formalizing mathematics itself; this is why the logical systems traditionally used in mathematical logic are such things as first-order and second-order logic. Such logical systems, in their elementary forms, are of course covered in the main AI textbooks of today, e.g. Russell & Norvig (2009); Luger (2008). In stark contrast, the meaning of a formula in an intensional logic can't be computed or otherwise obtained from it and what it's composed of. For a simple example, if φ is p → q, and we're in (extensional) zeroth-order logic (in which → is the material conditional), and we know that p is false, then we know immediately that φ is true. But if φ is instead B(a, p) → q, where B is an agent-indexed belief operator in epistemic logic, and p is what agent a believes, the falsity of p doesn't at all guarantee any truth-value for φ. Here, the belief operator is an intensional operator, and is likely to specifically be a modal operator in some modal logic.]) and late 20th- and 21st-century automated deduction/theorem proving. (Learning ex nihilo is, as we shall soon see, explicitly based upon automated reasoning that is non-deductive as well, but automated non-deductive reasoning is something we can't expect readers to be familiar with.) For readers in the field of AI who are strong in statistical/connectionist ML, and/or reinforcement learning and Bayesian approaches/reasoning, but weak in formal computational logic, in either or both of its deductive and inductive forms, we recommend Benzmüller & Woltzenlogel-Paleo (2016); Govindarajulu & Bringsjord (2017a), and then working backwards through readily available textbooks and papers cited in this earlier IJCAI-venue work, and in the next two sub-sections.

6.1 Logical Systems and Learning Ex Nihilo

The concept of a logical system, prominent in the major result known as Lindström's Theorem, [Footnote 9: Elegantly covered in Ebbinghaus et al. (1994).] provides a detailed and rigorous way to treat logics abstractly and efficiently, so that e.g. we can examine and (at least sometimes) determine the key attributes that these logics have, relative to their expressive power. A logical system is a triple ⟨L, I, S⟩ whose elements are, in turn, a formally specified language L (which would customarily be organized in ways that are familiar in programming languages; e.g. types would be specified); an inference theory I (which would be a proof theory in the deductive case, an argument theory in the inductive case, and best called an inference theory when inference schemata from both categories are mixed) that allows for precise and machine-checkable proofs/arguments, composed of inference schemata; and some sort of semantics S by which the meaning of formulae in L are to be interpreted.

Each of the elements of the abstract triple that individuates a given logical system can be vast and highly nuanced, and perhaps even a substantive part of a branch of formal logic in its own right. For example, where the logical system is standard first-order logic, S will include all of established model theory for first-order logic. Lindström's Theorem tells us (roughly) that any movement to an extensional logical system whose expressive power is beyond first-order logic will lack certain meta-attributes that many consider desirable. For instance, second-order logic isn't complete, whereas first-order logic is. This in no way stops AI researchers from working on and with higher-order extensional logics. [Footnote 10: Gödel's famous ontological argument for God's existence, an argument that employs higher-order logic, has been formally verified by AI researchers; see e.g. Benzmüller & Woltzenlogel-Paleo (2014).]

For present learning ex nihilo, the most important element in the triple that makes for a logical system is the inference theory I, which can be viewed as a set of inference schemata. [Footnote 11: For simple logical systems, the phrase 'inference rules' is often used instead of the more accurate 'inference schemata,' and in fact there is a tradition in places of using the former. Because even an individual inference schema can be quite complex, and can involve meta-logical constructs and computation, talk of schemata is more accurate. For instance, a common inference schema in so-called natural deduction is existential introduction, by which one infers ∃x φ(x) from φ(c), where c is a constant in formula φ; but all sorts of further restrictions can be (and sometimes are) placed on φ. As such things grow more elaborate, it quickly makes precious little sense to call them "rules," and besides which many think of them as programs.] The reason is that learning ex nihilo is based on reasoning in which each inference is sanctioned by some schema in I, and on the automation of this reasoning, including when the inference schemata are non-deductive. In computer science and AI, a considerable number of people are familiar with automated deductive reasoners; after all, Prolog is based on automated deductive reasoning, using only one inference schema (resolution), involving formulae in a fragment of first-order logic. Learning ex nihilo, in contrast, is based on automated reasoning over any inference schemata — not only deductive ones, but inductive ones, e.g. ones that regiment analogical reasoning, abductive reasoning, enumerative inductive reasoning, and so on. All the reasoning patterns seen in inductive logic, in their formal expressions, are possible as inference schemata employed in learning ex nihilo. [Footnote 12: For a particular example of formal, automated reasoning that blends deduction with analogical reasoning, see Licato et al. (2013). For a readable overview of inference patterns in inductive logic that we formalize and automate, see Johnson (2016).]

6.2 From Logical Systems to Cognitive Calculi

Because learning ex nihilo frequently involves the mental states of other agents (as seen e.g. in the parable regarding Robert), we employ a novel class of logical systems called cognitive calculi, and they form part of the singular basis of this new kind of learning. A cognitive calculus, put simply, is a logical system in which the language L includes intensional operators (e.g. for such things as belief, desire, intention, emotional states, communication, perception, and attention); the inference theory I includes at least one inference schema that involves such operators; and the meaning of formulae, determined by some particular semantics S, is generally proof-theoretic in nature, because such formulae can in their expressive power far outstrip any standard, off-the-shelf semantics (such as possible-worlds semantics).
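As a rough, hypothetical illustration of the shape such a system takes (and emphatically not the authors' implementation), formulae with intensional operators can be represented as nested terms, and an inference schema over such operators as a function from formulae to formulae. All names below (Atom, Knows, Believes, knowledge_to_belief) are assumptions introduced for the sketch; the schema shown, that knowledge leads to belief, is one of those listed for the calculus in the appendix.

# Minimal sketch (assumed names): cognitive-calculus formulae as nested terms,
# plus one inference schema that mentions an intensional operator.
from dataclasses import dataclass
from typing import Optional, Union

@dataclass(frozen=True)
class Atom:
    name: str                      # e.g. "HostIsWealthy"

@dataclass(frozen=True)
class Knows:
    agent: str
    phi: "Formula"

@dataclass(frozen=True)
class Believes:
    agent: str
    phi: "Formula"

Formula = Union[Atom, Knows, Believes]

def knowledge_to_belief(f: "Formula") -> Optional["Formula"]:
    """Schema: from K(a, phi) infer B(a, phi) (knowledge leads to belief)."""
    if isinstance(f, Knows):
        return Believes(f.agent, f.phi)
    return None

premise = Knows("robert", Atom("HostIsWealthy"))
print(knowledge_to_belief(premise))   # Believes(agent='robert', phi=Atom(...))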

6.3 The Learning Ex Nihilo Loop

Learning ex nihilo happens when an agent loops through time, as follows in broad strokes: Identify Interest/Goal → Query → Discover Argument/Proof to Answer Query → Learn → Identify Interest/Goal, etc. This cycle is at work in the parables with which we began. We do not have space to detail this persistent process. In particular, the management of the agent's interests (or goals) requires planning — but the emphasis in the present paper, for economy, is on reasoning. Below we discuss not only the AI technology that brings this loop to life, but also some simulation of the process in our earlier parables.
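For readers who want a procedural picture, here is a minimal, hypothetical rendering of the loop just described. It is a sketch under assumed names (discover_argument, learning_ex_nihilo_loop), with a stub standing in for an automated reasoner such as ShadowReasoner; it is not the authors' implementation.

# Sketch of the learning ex nihilo loop: interest -> query -> argument/proof -> learn.
# The "reasoner" here is a stub that merely checks whether the queried proposition
# already follows trivially from what is known or perceived.

def discover_argument(premises: set[str], goal: str):
    """Placeholder for an automated reasoner; returns a (fake) argument or None."""
    return ("from-premises", goal) if goal in premises else None

def learning_ex_nihilo_loop(knowledge: set[str], percepts: set[str],
                            interests: list[str]) -> set[str]:
    for interest in interests:                    # a planner would prioritize these
        query = interest                          # the question the agent poses to itself
        argument = discover_argument(knowledge | percepts, query)
        if argument is not None:                  # argument found and (stub-)checked
            knowledge.add(query)                  # learn: justified true belief acquired
    return knowledge

# Toy run: Robert perceives his drink being served and is interested in
# whether it is an aperitif.
print(learning_ex_nihilo_loop(
    knowledge={"sparkling-wine-is-aperitif"},
    percepts={"drink-is-aperitif"},
    interests=["drink-is-aperitif"]))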

7 The Automation of Learning Ex Nihilo

But how do we pass from the abstract logico-mathematics of learning ex nihilo to an artificial agent that can bring such a thing to life? The answer should be quite obvious, in general: We build an AI that can find the arguments that undergird the knowledge obtained by learning ex nihilo. In turn, this means that we need an automated reasoner of sufficient power and reach that can pursue epistemic interests, and a planner that can at least manage (e.g. prioritize) interests. This brings us to the following progression, in which we now briefly describe one such reasoner (ShadowReasoner), and then give an illustrative simulation made possible by this AI technology.

7.1 Automated Reasoner: ShadowReasoner

A large amount of research and engineering has gone into building first-order theorem provers over the last few decades. ShadowReasoner leverages this progress by splitting any given logic into a first-order core and the "remainder," and then calling a first-order theorem prover when needed. Briefly, ShadowReasoner splits the inference schemata for a given logic into two parts. The first part consists of inference schemata that can be carried out by a first-order theorem prover once the input expressions are shadowed down into first-order logic. The second part consists of inference schemata that cannot be reduced to first-order reasoning. Given any problem in the logic, ShadowReasoner alternates between trying out schemata from the second part and calling a first-order theorem prover to handle the first.

The core algorithm for ShadowReasoner has a theoretical justification based on the following theorem (which arises from the fact that first-order logic can be used to simulate Turing machines; see Boolos et al. (2003)):

Theorem 1. Given a collection of Turing-decidable inference schemata I, for every inference sanctioned by I, there is a corresponding first-order inference.

We illustrate how ShadowReasoner works in the context of a first-order modal logic employed in Govindarajulu & Bringsjord (2017a). Please note that though there are some extant first-order modal-logic theorem provers, built upon reductions to first-order theorem provers, they have some deficiencies. Such theorem provers achieve their reduction to first-order logic via two methods. In the first method, modal operators are represented by first-order predicates. This approach is computationally fast but can quickly lead to well-known inconsistencies, as demonstrated in Bringsjord & Govindarajulu (2012). In the second method, the entire proof theory is implemented in first-order logic, and the reasoning is carried out within first-order logic. Here, the first-order theorem prover simply functions as a programming system. The second method, while accurate, can be excruciatingly slow.

ShadowReasoner uses the different approach alluded to above — one in which it alternates between calling a first-order theorem prover and applying non-first-order inference schemata. When the first-order prover is called, all non-first-order expressions are converted into propositional atoms (i.e., shadowed), to prevent substitution into non-first-order contexts, as such substitutions lead to inconsistencies (Bringsjord & Govindarajulu 2012). This approach achieves speed without sacrificing consistency. The algorithm is briefly described below.

First we define the syntactic operation of atomizing a formula, denoted by ⌊φ⌋: given any arbitrary formula φ, ⌊φ⌋ is a unique atomic (propositional) symbol. Next, we define the level of a formula, a measure of how far the formula departs from pure first-order logic; purely first-order (extensional) formulae are of level 0.

Given the above definition, we can define the operation of shadowing a formula to a level.

Shadowing. To shadow a formula φ to a level l, replace all sub-formulae ψ in φ whose level exceeds l with ⌊ψ⌋, simultaneously. We denote the result by S[φ, l].

For a set of formulae Γ, the operation of shadowing all members of the set is simply denoted by S[Γ, l].

Assume we have access to a first-order prover F. For a set of pure first-order formulae Γ and a first-order goal φ, F(Γ, φ) gives us a proof of φ from Γ if such a first-order proof exists; otherwise F returns NO.
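The shadowing operation itself is simple enough to sketch in a few lines. The following toy version is illustrative only (the tuple-based formula representation and the function name shadow are assumptions, not the authors' code): any sub-formula headed by a modal operator is replaced by a fresh propositional atom, so that a first-order prover never sees, and hence never substitutes into, a modal context.

# Toy sketch of level-0 shadowing: any sub-formula headed by a modal operator
# is replaced by a fresh propositional atom (its "shadow").
MODAL_OPS = {"Believes", "Knows", "Perceives", "Ought"}

def shadow(formula, table=None):
    """formula is a nested tuple, e.g. ("Implies", ("Believes", "a", "P"), "Q")."""
    table = {} if table is None else table
    if not isinstance(formula, tuple):
        return formula, table                      # already atomic
    if formula[0] in MODAL_OPS:                    # level > 0: atomize it
        atom = table.setdefault(formula, f"shadow_{len(table)}")
        return atom, table
    head, *args = formula
    shadowed_args = []
    for arg in args:
        s, table = shadow(arg, table)
        shadowed_args.append(s)
    return (head, *shadowed_args), table

phi = ("Implies", ("Believes", "robert", "HostIsWealthy"), "ThankHost")
print(shadow(phi))
# (('Implies', 'shadow_0', 'ThankHost'),
#  {('Believes', 'robert', 'HostIsWealthy'): 'shadow_0'})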

Input: Input Formulae Γ, Goal Formula φ
Output: A proof of φ if such a proof exists, otherwise fail
initialization: Γ₀ ← Γ; i ← 0;
while goal not reached do
        P ← F(S[Γᵢ, 0], S[φ, 0]);   /* call the first-order prover on the shadowed problem */
        if P ≠ NO then
               return P;
        else
               Γ' ← result of expanding Γᵢ by any applicable non-first-order inference schema;
               if Γ' = Γᵢ then
                      /* The input cannot be expanded further */
                      return fail
               else
                      set Γᵢ₊₁ ← Γ'; i ← i + 1
               end if
        end if
end while
Algorithm 1: ShadowReasoner Core Algorithm
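Putting the pieces together, a minimal, hypothetical rendering of Algorithm 1 might look as follows. The fo_prover and expand_once parameters stand in for a real first-order prover and for the non-first-order inference schemata; they, and the function shadow_reasoner itself, are assumptions made purely for illustration (the shadow function is the one sketched above).

# Illustrative sketch of the ShadowReasoner core loop (not the real system):
# alternate between (a) calling a first-order prover on the shadowed problem
# and (b) expanding the premise set with non-first-order inference schemata.

def shadow_reasoner(premises: set, goal, fo_prover, expand_once):
    gamma = set(premises)
    while True:
        table = {}                               # one shadow table per round
        shadowed = set()
        for p in gamma:
            s, table = shadow(p, table)          # shared table keeps shadows consistent
            shadowed.add(s)
        shadowed_goal, table = shadow(goal, table)
        proof = fo_prover(shadowed, shadowed_goal)   # try pure first-order reasoning
        if proof is not None:
            return proof
        expanded = gamma | expand_once(gamma)        # apply applicable modal schemata
        if expanded == gamma:                        # nothing new can be inferred
            return None                              # fail
        gamma = expanded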

7.2 Illustrative Simulation

Figures 1 and 2 illustrate the dinner-party parable simulated in the deontic cognitive event calculus (DCEC) using ShadowReasoner within a graphical argument-construction system; see Bringsjord et al. (2008) for a similar, but less intelligent, system. See the appendix for a description of the syntax and inference schemata of DCEC. Figure 1 simulates in pure first-order logic Robert's learning that his drink is an aperitif. Figure 2 is a proof in a cognitive calculus, viz. the one described in Govindarajulu & Bringsjord (2017a), of Robert learning the following propositions: "Robert believes that his host is wealthy", "The host believes Robert believes that his host is wealthy", and "Robert believes that his host believes Robert believes that his host is wealthy." [Footnote 13: Together with background information supplied to the reasoners.] The figures illustrate first-order and cognitive-calculus reasoners being employed to derive these statements. Automated discovery of the proofs in Figure 1 and in Figure 2 was carried out by the reasoners themselves (exact timings omitted here). We briefly explain the figures now. The two figures show assumptions that are fed to the reasoner and output formulae that are proved by the reasoner, as indicated by labeled boxes and the directions of the arrows. Each formula also has a human-readable label displayed with a purple background. The boxes with shaded backgrounds are the outputs; the shading is not necessary, but it makes the outputs visually easier to pick out. The text below each formula shows which assumptions the formula is derived from or dependent upon.

Figure 1: Dinner Party Example Part 1
Figure 2: Dinner Party Example Part 2

8 Objections, Rebuttals

We now anticipate and rebut a number of objections likely to come from skeptics, including specifically those immersed in forms of learning far removed from any notion of machine-verifiable proof or argument enabled by inference schemata.

8.1 Objection 1: This isn’t learning from nothing!

The first objection is that 'learning ex nihilo' is a misnomer. The rationale for this complaint is simply the reporting of an observation that should be clear to all: viz. that the learning in question undeniably makes use of pre-existing stuff; hence we're not dealing with learning from nothing. In the parable of the dinner party, e.g., Robert has brought his pre-existing command of elementary arithmetic to the table; ditto for much other pre-known propositional content. So how then is it fair to speak of learning ex nihilo? It's fair because obviously learning ex nihilo trades on the pre-existing concept of creation ex nihilo, and that millennia-old conception allows that the Almighty (by definition!) was around before the creation in question occurred. And of course this is just one part of pre-creation stuff in creation ex nihilo: God presumably needed to have a mental concept of a planet to create a planet. We generally suspect that learning ex nihilo begins to kick into "high gear" in the human sphere when children are sophisticated enough to exploit their prior knowledge of content that requires, for its underlying representation, quantification and basic modal logic. From the standpoint of logicist cognitive science, rather than AI, this means that learning ex nihilo would be aligned with the views of Piaget and colleagues [e.g. Inhelder & Piaget (1958)], Stenning and colleagues [e.g. Stenning & van Lambalgen (2008)], Bringsjord and colleagues [e.g. Bringsjord et al. (1998); Bringsjord (2008)], and Rips and colleagues [e.g. Rips (1994, 1989)]. The goal of full formalization and implementation of learning ex nihilo would likely be regarded by these cognitive scientists as a welcome one.

8.2 Objection 2: Isn’t this just reasoning?

The second objection we anticipate is that learning ex nihilo isn't learning at all; it's just a form of reasoning. In reply, any process, however much it relies on reasoning, that does enable an agent running that process to acquire genuine knowledge (and our k=jtb definition of knowledge, note, is a very demanding one) would seem to be quite sensibly classified as a learning process. In fact, it probably strikes many as odd to say that one has a form of machine learning that does not result in the acquisition of any knowledge.

8.3 Objection 3: What about inductive logic programming?

The objection here can be encapsulated thus: “What about inductive logic programming (ILP)? Surely this established family of techniques both uses formal logic, and results in new knowledge.”

ILP is along the general lines of learning ex nihilo, agreed; but ILP is acutely humble and highly problematic when measured against LEN, for reasons we give momentarily. Before giving these reasons, we fix, without loss of generality, a general framework due to Mooney (2000) that casts abduction and induction in supposedly representative logicist fashion:

Given:

Background knowledge, B, and observations (data), D, both represented as sets of formulae in first-order predicate calculus, where D is restricted to ground formulae.

Find:

An hypothesis H (also a set of logical formulae) such that B ∪ H ⊢ D.

From this framework one can derive both induction and abduction, according to Mooney: for the latter, H is restricted to atomic formulae or simple formulae of a certain general form, and B is — as Mooney says — "significantly larger." For induction, Mooney says that H is to consist of universally quantified Horn clauses, and B is "relatively small" and may even be the null set.
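To fix ideas, here is a toy, hypothetical instance of the framework just described (the propositional encoding and the brute-force entailment check are illustrative assumptions, not part of Mooney's or the authors' machinery): given B and D, a candidate H is acceptable just in case B ∪ H entails D.

# Toy instance of the ILP-style framework: accept hypothesis H iff B ∪ H ⊢ D.
# Everything is propositional and entailment is checked by brute force over
# truth assignments, purely for illustration.
from itertools import product

ATOMS = ["fedex_uniform", "delivery_attempt"]

def entails(premises, goal):
    """premises/goal are Python boolean expressions over the ATOMS."""
    for values in product([False, True], repeat=len(ATOMS)):
        env = dict(zip(ATOMS, values))
        if all(eval(p, {}, env) for p in premises) and not eval(goal, {}, env):
            return False
    return True

B = ["fedex_uniform"]                            # background plus the observed percept
H = ["(not fedex_uniform) or delivery_attempt"]  # candidate hypothesis: uniform implies delivery
D = "delivery_attempt"                           # the datum to be covered

print(entails(B + H, D))   # True: B ∪ H deductively covers D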

From an explain-the-extant-literature point of view, this framework is at least somewhat attractive. For as Mooney (2000) points out, it captures what many logic-oriented learning researchers in AI have taken induction and abduction to be; this includes, specifically, ILP researchers (e.g. Muggleton 1992; Lavrač & Džeroski 1994). Unfortunately, despite its explanatory virtue, the framework, when measured against human cognition of the sort involved in learning ex nihilo, is embarrassingly inadequate. As we have said, this can be shown effortlessly, and for many reasons. We quickly mention just four here.

8.3.1 Reason 1: Runs Afoul of Non-Deductive Reasoning

Why should the combination of background knowledge and a candidate hypothesis need to deductively entail some observation, as the framework says (via its use of ⊢)? Suppose that as Smith sits in his home office looking through a window he perceives that a uniformed man with a FedEx cap is approaching Smith's house, a small white box in hand. Smith has no doubt learned that a delivery is about to be attempted (call this proposition d), but does it really need to be the case that, where B is background knowledge, B together with what Smith perceives can be used to prove d? No, it doesn't. Maybe it's Halloween, Smith forgot that it is, and the person approaching is in costume and playacting. Maybe the approaching man is a criminal in disguise, merely casing Smith's domicile. And so on. All that is needed is a fairly strong argument in support of d. And that argument can make use of inferences that are deductively invalid, but valid as reasoning that is analogical, abductive, inductive, etc. [Footnote 14: Indeed, these inferences can even be formally valid in the inductive logics that will undergird ALML; see below.]

8.3.2 Reason 2: Leaves Aside Other Non-Deductive Reasoning

This reason was revealed in the previous sentence: that sentence refers not just to abduction and induction, but also to reasoning that is analogical in nature, and such reasoning isn't included in ILP. In fact, learning via analogical reasoning is often left aside in coverage of logicist learning of any textbook variety. [Footnote 15: E.g. learning by analogy isn't included in AI's definitive, otherwise encyclopedic textbook: Russell & Norvig (2009).] And as we have seen above, even Pollock mentions only abduction and induction; he leaves analogical reasoning aside. But if in an earlier case Smith had encountered not a FedEx man, but rather a USPS mailman making a delivery to his house, he may well have believed that the man with the FedEx cap was analogous, and hence was making a delivery.

8.3.3 Reason 3: Runs Afoul of Robust Expressivity

A quick but decisive third reason Mooney's framework explodes in the face of real human cognition is that any expressivity restriction on B and/or H is illogical, and certainly any specific restriction that D be restricted to ground formulae and/or that H be confined to Horn-clause logic (or even for that matter full FOL) is patently illogical. This can be seen in middle-school classes that cover arithmetic, where even very young students cook up and affirm hypotheses that are expressed using infinitary constructions beyond even full FOL. For instance: Student Johnny is reminded of the definition of a prime number, and then shown that 4 is equal to 3 plus 1, that 6 is equal to 3 plus 3, that 8 is equal to 3 plus 5, etc. Johnny is asked to consider whether the next two even numbers continue the pattern. He observes that 10 = 3 + 7 and that 12 = 5 + 7, and is now inclined to hypothesize (h) that every even integer greater than 2 is the sum of two primes. A natural form for h, where the pattern begins at 4 and the even numbers from there continue, is simply an infinite list like: (4 = 3 + 1) ∧ (6 = 3 + 3) ∧ (8 = 3 + 5) ∧ (10 = 3 + 7) ∧ …

Yet h cannot be expressed in finitary formulae, [Footnote 16: It is naturally represented by an infinite conjunction, which an infinitary logic allows.] and even if one squeezed h into finitary FOL, that would be done by employing the same trick as is used for instance in axiomatic set theory, where FOL is augmented with schematic formulae that denote an infinite number of instantiations thereof. [Footnote 17: E.g., witness the Axiom Schema of Separation as a replacement for Frege's doomed Axiom V, shot down violently and decisively by the famous Russell's Paradox; see Suppes (1972) for wonderfully elegant coverage.] Regardless, even if by some miracle h could be expressed in some finitary extensional logic at or beneath FOL, the classroom in question wouldn't exactly operate well without the teacher's believing that Johnny believes that h, and certainly nothing like this fact is expressible in extensional logic (let alone Horn-clause logic!).
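Johnny's finite evidence-gathering (though of course not the infinitary hypothesis h itself) is easy to mechanize; the following small check is merely illustrative.

# Check Johnny's pattern -- every even integer greater than 2 is a sum of two
# primes -- for finitely many cases. The infinite hypothesis h itself, of
# course, outruns any such finite check.
def is_prime(n: int) -> bool:
    return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

def two_prime_sum(n: int):
    for a in range(2, n // 2 + 1):
        if is_prime(a) and is_prime(n - a):
            return (a, n - a)
    return None

for even in range(4, 21, 2):
    print(even, "=", "%d + %d" % two_prime_sum(even))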

8.3.4 Reason 4: Ignores Thinking

The dictum that truth is stranger than fiction, alas, is frequently confirmed by the oddities of contemporary AI research. The fourth reason Mooney's framework is inadequate is a confirming example. For while the framework projects an air of being about thinking, in actuality it leaves thinking aside. Indeed there's a rather common fallacy at work in the framework and its promotion: the fallacy of composition. For while the framework does include some forms of reasoning, these forms (and even, for that matter, all forms of reasoning put together) don't comprise thinking; thinking includes the reasoning called out in the framework as merely a proper part. To be more specific, in real and powerful thinking, an hypothesis can be wisely discarded when there is evidence against it. (The rejection of the aether drag hypothesis is a firm case in point.)

The fourth flaw infecting the framework can be easily unpacked: Real learning is intertwined with the full gamut of human-level thinking: with planning, decision-making, and communication. If there are no observations to learn from, an agent can plan to get more observations. An agent can decide when to learn and what to learn. Precious little substantive learning takes place without communication between teacher and learner, including written communication. And finally, at the highest end of the spectrum of powerful learning, learners formalize learning itself, and learn more by doing so.

8.4 Objection 4: Does ShadowReasoner really conform to the k=jtb thesis?

“It is not clear for me how and why the approach produces true beliefs. Indeed, it seems to me that the reasoning that is performed by ShadowReasoner is internal to it and that propositions that are ’produced’ are not necessarily true. In other words, why does ShadowReasoner conform to the K=jtb thesis?”

There is a tradition in agent-based modeling in which a distinction is drawn between a system S (e.g. ShadowReasoner) and the agents that S models. Truth, on this view, is what the system S knows; accordingly, knowledge that P on the part of a modeled agent is inferred from that agent's believing P, from that agent's knowing that some proof/argument for P exists, and from the system's knowing that P.

8.5 Objection 5: Isn’t this just deduction

Cognitive calculi have been used to capture and model non-deductive inference systems. See Bringsjord & Licato (2015) for one such example.

8.6 Objection 6: What about Lewis’ “Effusive Knowledge”?

"You claim to have a solution to the Gettier problem, one seemingly based on introducing different levels of knowledge of a proposition p (why five?), based on the different levels of justification one may have for p. This idea echoes Lewis' 1996 epistemic contextualism, but does it really solve the Gettier problem? And I don't see any justification for why, say, level-5 jtb is to be equated with full knowledge."

Actually, Lewis' "Elusive Knowledge" paper espouses a conception that is the opposite of what underlies learning ex nihilo. Lewis holds that there is no knowledge in the Gettier scenarios, because his definition of knowledge (which marks a rejection of k=jtb) isn't satisfied in these cases. Learning ex nihilo entails that there is knowledge in these scenarios, but reduced knowledge. If the degree of belief is d, then knowledge partaking of this belief is of degree d as well. In Gettier's original scenarios, knowledge is at the level of 2 (for reasons too far afield to articulate under current space constraints). It seems never to have occurred to Lewis that if belief comes in degrees, then knowledge (which surely includes belief [Footnote 18: Actually, Lewis asserts that there can be knowledge without belief, because a timid student can know the answer, but has "no confidence that he has it right, and so does not believe what he knows" (p. 556). In our formal framework, the situation is that the student believes, at a low level (1, say), that he knows (at some level or other) the answer, and as a matter of fact he does know the answer at level 1. This scenario is provably consistent in our relevant cognitive calculi. Not only that, but as far as we can tell, since in point of fact timidity of this type often prevents successful performance, our formal diagnosis has "real-world" plausibility.]) must itself come in degrees. In the context of formal intensional logic (e.g. formal epistemic logic), Lewis' position/paper is from our formalist point of view weak, because it hinges on the repeatedly asserted-without-argument claim that in the case of a single-ticket lottery of size n, even when the number of tickets n is arbitrarily large (e.g. n = 1 quadrillion), one cannot know that a given ticket will not win. From the standpoint of learning ex nihilo, this is bizarre, because surely one knows that in the next moment one's computer will not spontaneously combust, because such an event is preposterously unlikely. (And looking back in time, surely one knows that the computer sitting here now was there 10 seconds ago.) Moreover, Lewis rejects the very concept of justification as part and parcel of knowledge; learning ex nihilo by contrast is an AI-driven conception based on automated reasoning (automated deduction and automated analogical, abductive, enumerative inductive, etc. reasoning).

As to levels of belief, in the case of the 11-valued inductive logic we use to undergird learning ex nihilo, 5 = certain, 4 = overwhelmingly likely, 3 = beyond reasonable doubt, 2 = likely, and 1 = more likely than not. 0 is counterbalanced, and then we have the symmetrical negative values. A belief at level 5 corresponds to knowing things that Lewis (and everyone else) agrees that we know: e.g. knowing that 2+2=4.

8.7 Objection 7: What about QMLTP and isn’t shadowing similar to existing schemes?

Note: we deal with indexed modal operators (agent, time, and formulae for dyadic obligations). We can have arbitrary expressions for these indices. Our goals and systems are very different from those of QMLTP Raths & Otten (2012). Schemes such as the Standard Translation for modal logics are semantic in nature and apply only to propositional modal logics.

8.8 Objection 8: Examples are too simple and the discussion is too philosophical

Our desire was to introduce, mostly philosophically, and to a degree formally, LEN. We also point out connections to famous fictional detection, and to cognitive psychology (learning by posing questions to oneself, which is seminal work by Chi et al.). A purely formal treatment can, we believe, follow quite naturally after this introduction, and fit the formal elements we already give. Such a treatment includes further specification of cognitive calculi where the inference schemata run the gamut of inductive logic; mechanisms for question-generation; mechanisms for argument-discovery over these inference schemata by automated-reasoning techniques that cover all major forms of reasoning; and an argument-checker to validate discovered arguments. ShadowReasoner is the current implemented system used for argument- and proof-discovery, and for checking. The full logico-mathematics of LEN, and the corresponding implementation, is something we now feel we should give more of (thank you), but we are concerned that jumping further in that direction, without seeking to communicate to those in ML who don't do formal logic, might retard cross-paradigm collaboration on learning. In our experience, many statistical-ML researchers don't know e.g. what a logical system is even in the standard extensional sense we start with (Lindström), and don't know how it might connect to everyday human activity. Since in the general case argument- or proof-discovery in a cognitive calculus is far above what's Turing-computable, our strategy was to try to connect to "everyday" parables first. As far as we can tell, the functions learned by systems doing ML are Turing-computable; we feel a special need to begin with such parables in order to show that the learning by reasoning in question isn't rare, and we apologize if we have miscalculated here.

LEN comes with a novel solution to the Gettier problem that has plagued (if not eviscerated) large parts of epistemology in philosophy for decades — and at the same time it insists upon a form of ML that produces propositional knowledge. This is a form of knowledge that Gettier cases imperil.

Perhaps more importantly, we are concerned that ML researchers of the statistical/connectionist variety will not engage with more straightforwardly technical content. In our experience, while you know and deal deeply with higher-order logics, statistical/connectionist ML researchers don't even know why zero-order logic might be relevant to ML, or to AI in general. We may be miscalculating, but we are seeking to provide informal, philosophical toeholds to bring some of these people into collaboration.

8.9 Objection n: Frivolity?

Finally, some will doubtless declare that learning ex nihilo is frivolous. What good is it, really, to sit at a dinner table and learn useless tid-bits? This attitude is most illogical. The reason is that, unlike what is currently called "learning," only persons at least at the level of humans can learn ex nihilo, and this gives such creatures power, for the greatest of what human persons have learned (and, we wager, will learn) comes via learning ex nihilo. In support of this we could cite countless confirmatory cases in the past, but rest content to but point out that armchair learning ex nihilo regarding simultaneity (Einstein) and infinitesimals (Leibniz) was rather productive. [Footnote 19: And for those readers with a literary bent, it should also be pointed out that the great minds of detection, not only the aforementioned Sherlock Holmes but e.g. Poe's seminal Le Chevalier C. Auguste Dupin, achieve success primarily because of their ability to learn ex nihilo.]

9 Conclusion and Next Steps

We have provided an introduction, philosophical in nature, to the new concept of learning ex nihilo, and have included enough information regarding its formal foundation to allow those conversant with logicist AI to better understand this type of learning. In addition, we have explained that learning ex nihilo can be automated via sufficiently powerful automated-reasoning technology. Of course, this is a very brief paper. Accordingly, next steps include dissemination of further details, obviously. But more importantly, what is the relationship between learning ex nihilo and types of machine learning that are based on artificial neural networks, Bayesian reasoning, reinforcement learning, and so on? These other currently popular types of learning are certainly not logicist, and hence nothing like a logical system, let alone a cognitive calculus, is present in them. In fact, it's far from clear that it's even possible to construct the needed machinery for learning ex nihilo out of the ingredients that go into making these non-logicist forms of learning.

Acknowledgements

We are deeply grateful for current support from the U.S. Office of Naval Research to invent, formalize, and implement new forms of learning based on automated reasoning. Prior support from DARPA of logicist learning has also proved to be helpful, and for this too the first author expresses thanks. Four anonymous reviewers provided insightful feedback, for which we are deeply grateful.

References

  • Arkoudas & Bringsjord (2009) Arkoudas, K. & Bringsjord, S. (2009), ‘Propositional Attitudes and Causation’, International Journal of Software and Informatics 3(1), 47–65. http://kryten.mm.rpi.edu/PRICAI_w_sequentcalc_041709.pdf
  • Benzmüller & Woltzenlogel-Paleo (2014) Benzmüller, C. & Woltzenlogel-Paleo, B. (2014), Automating Gödel’s Ontological Proof of God’s Existence with Higher-order Automated Theorem Provers, in T. Schaub, G. Friedrich & B. O’Sullivan, eds, ‘Proceedings of the European Conference on Artificial Intelligence 2014 (ECAI 2014)’, IOS Press, Amsterdam, The Netherlands, pp. 93–98. http://page.mi.fu-berlin.de/cbenzmueller/papers/C40.pdf
  • Benzmüller & Woltzenlogel-Paleo (2016) Benzmüller, C. & Woltzenlogel-Paleo, B. (2016), The Inconsistency in Gödel’s Ontological Argument: A Success Story for AI in Metaphysics, in S. Kambhampati, ed., ‘Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16)’, AAAI Press, pp. 936–942.
    https://www.ijcai.org/Proceedings/16/Papers/137.pdf
  • Boolos et al. (2003) Boolos, G. S., Burgess, J. P. & Jeffrey, R. C. (2003), Computability and Logic (Fifth Edition), Cambridge University Press, Cambridge, UK.
  • Bringsjord (2008) Bringsjord, S. (2008), Declarative/Logic-Based Cognitive Modeling, in R. Sun, ed., ‘The Handbook of Computational Psychology’, Cambridge University Press, Cambridge, UK, pp. 127–169.
  • Bringsjord et al. (1998) Bringsjord, S., Bringsjord, E. & Noel, R. (1998), In Defense of Logical Minds, in ‘Proceedings of the 20 Annual Conference of the Cognitive Science Society’, Lawrence Erlbaum, Mahwah, NJ, pp. 173–178.
  • Bringsjord et al. (2018) Bringsjord, S., Govindarajulu, N., Banerjee, S. & Hummel, J. (2018), Do Machine-Learning Machines Learn?, in V. Müller, ed., ‘Philosophy and Theory of Artificial Intelligence 2017’, Springer SAPERE, Berlin, Germany, pp. 136–157. This book is Vol. 44 in the book series. The paper answers the question that is its title with a resounding No. A preprint of the paper can be found via the URL given here.
  • Bringsjord & Govindarajulu (2012) Bringsjord, S. & Govindarajulu, N. S. (2012), ‘Given the Web, What is Intelligence, Really?’, Metaphilosophy 43(4), 361–532. This URL is to a preprint of the paper.
    http://kryten.mm.rpi.edu/SB NSG Real Intelligence 040912.pdf
  • Bringsjord & Licato (2015) Bringsjord, S. & Licato, J. (2015), ‘By Disanalogy, Cyberwarfare is Utterly New’, Philosophy and Technology 28(3), 339–358.
    http://kryten.mm.rpi.edu/SB_JL_cyberwarfare_disanalogy_DRIVER_final.pdf
  • Bringsjord et al. (2008) Bringsjord, S., Taylor, J., Shilliday, A., Clark, M. & Arkoudas, K. (2008), Slate: An Argument-Centered Intelligent Assistant to Human Reasoners, in F. Grasso, N. Green, R. Kibble & C. Reed, eds, ‘Proceedings of the 8th International Workshop on Computational Models of Natural Argument (CMNA 8)’, University of Patras, Patras, Greece, pp. 1–10.
  • Charniak & McDermott (1985) Charniak, E. & McDermott, D. (1985), Introduction to Artificial Intelligence, Addison-Wesley, Reading, MA.
  • Chi et al. (1994) Chi, M., Leeuw, N., Chiu, M. & Lavancher, C. (1994), ‘Eliciting self-explanations improves understanding’, Cognitive Science 18, 439–477.
  • Chisholm (1977) Chisholm, R. (1977), Theory of Knowledge 2nd ed, Prentice-Hall, Englewood Cliffs, NJ.
  • Ebbinghaus et al. (1994) Ebbinghaus, H. D., Flum, J. & Thomas, W. (1994), Mathematical Logic (second edition), Springer-Verlag, New York, NY.
  • Gettier (1963) Gettier, E. (1963), ‘Is Justified True Belief Knowledge?’, Analysis 23, 121–123.
    http://www.ditext.com/gettier/gettier.html
  • Govindarajulu & Bringsjord (2017a) Govindarajulu, N. & Bringsjord, S. (2017a), On Automating the Doctrine of Double Effect, in C. Sierra, ed., ‘Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17)’, International Joint Conferences on Artificial Intelligence, pp. 4722–4730.
    https://doi.org/10.24963/ijcai.2017/658
  • Govindarajulu & Bringsjord (2017b) Govindarajulu, N. S. & Bringsjord, S. (2017b), Strength Factors: An Uncertainty System for Quantified Modal Logic, in V. Belle, J. Cussens, M. Finger, L. Godo, H. Prade & G. Qi, eds, ‘Proceedings of the IJCAI Workshop on “Logical Foundations for Uncertainty and Machine Learning” (LFU-2017)’, Melbourne, Australia, pp. 34–40.
  • Ichikawa & Steup (2012) Ichikawa, J. & Steup, M. (2012), The Analysis of Knowledge, in E. Zalta, ed., ‘The Stanford Encyclopedia of Philosophy’.
  • Inhelder & Piaget (1958) Inhelder, B. & Piaget, J. (1958), The Growth of Logical Thinking from Childhood to Adolescence, Basic Books, New York, NY.
  • Johnson (2016) Johnson, G. (2016), Argument & Inference: An Introduction to Inductive Logic, MIT Press, Cambridge, MA.
  • Lavrač & Džeroski (1994) Lavrač, N. & Džeroski, S. (1994), Inductive Logic Programming: Techniques and Applications, Ellis Horwood, Hemel, UK.
  • Lewis (1996) Lewis, D. (1996), ‘Elusive Knowledge’, Australasian Journal of Philosophy 70(4), 549–567.
  • Licato et al. (2013) Licato, J., Govindarajulu, N. S., Bringsjord, S., Pomeranz, M. & Gittelson, L. (2013), Analogico-Deductive Generation of Gödel’s First Incompleteness Theorem from the Liar Paradox, in F. Rossi, ed., ‘Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI–13)’, Morgan Kaufmann, Beijing, China, pp. 1004–1009. Proceedings are available online at http://ijcai.org/papers13/contents.php. The direct URL provided below is to a preprint. The published version is available at http://ijcai.org/papers13/Papers/IJCAI13-153.pdf.
  • Luger (2008) Luger, G. (2008), Artificial Intelligence: Structures and Strategies for Complex Problem Solving (6th Edition), Pearson, London, UK.
  • McNamara (2010) McNamara, P. (2010), Deontic Logic, in E. Zalta, ed., ‘The Stanford Encyclopedia of Philosophy’. McNamara’s (brief) note on a paradox arising from Kant’s Law is given in an offshoot of the main entry.
    https://plato.stanford.edu/entries/logic-deontic
  • Mooney (2000) Mooney, R. (2000), Integrating Abduction and Induction in Machine Learning, in P. Flach & A. Kakas, eds, ‘Abduction and Induction’, Kluwer Academic Publishers, Dordrecht, The Netherlands, pp. 181–191.
  • Mueller (2006) Mueller, E. (2006), Commonsense Reasoning: An Event Calculus Based Approach, Morgan Kaufmann, San Francisco, CA. This is the first edition of the book. The second edition was published in 2014.
  • Muggleton (1992) Muggleton, S., ed. (1992), Inductive Logic Programming, Academic Press, London, UK.
  • Raths & Otten (2012) Raths, T. & Otten, J. (2012), The QMLTP Problem Library for First-Order Modal Logics, in ‘Proceedings of the 6th International Joint Conference on Automated Reasoning (IJCAR 12)’, Springer-Verlag, Berlin, Heidelberg, pp. 454–461.
  • Rips (1989) Rips, L. (1989), ‘The Psychology of Knights and Knaves’, Cognition 31(2), 85–116.
  • Rips (1994) Rips, L. (1994), The Psychology of Proof, MIT Press, Cambridge, MA.
  • Russell & Norvig (2009) Russell, S. & Norvig, P. (2009), Artificial Intelligence: A Modern Approach, Prentice Hall, Upper Saddle River, NJ. Third edition.
  • Stenning & van Lambalgen (2008) Stenning, K. & van Lambalgen, M. (2008), Human Reasoning and Cognitive Science, MIT Press, Cambridge, MA.
  • Suppes (1972) Suppes, P. (1972), Axiomatic Set Theory, Dover Publications, New York, NY.
  • VanLehn et al. (1992) VanLehn, K., Jones, R. & Chi, M. (1992), ‘A Model of the Self-Explanation Effect’, Journal of the Learning Sciences 2(1), 1–60.

Appendix A Deontic Cognitive Event Calculus

DCEC is a quantified multi-modal sorted calculus, and a cognitive calculus. A sorted system can be regarded as analogous to a typed single-inheritance programming language. We show below some of the important sorts used in DCEC. Among these, the Agent, Action, and ActionType sorts are not native to the event calculus.

Sort: Description
Agent: Human and non-human actors.
Time: The Time sort stands for time in the domain; times can be simple (e.g. t1) or complex (e.g. birthday(son(jack))).
Event: Used for events in the domain.
ActionType: Action types are abstract actions. They are instantiated at particular times by actors. Example: eating.
Action: A subtype of Event for events that occur as actions by agents.
Fluent: Used for representing states of the world in the event calculus.

Note: actions are events that are carried out by an agent. For any action type α and agent a, the event corresponding to a carrying out α is given by action(a, α). For instance, if α is "running" and a is "Jack", then action(jack, running) denotes "Jack is running".
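As an informal aid, the sorted signature can be mirrored in code; the sketch below is only illustrative (the class names mirror the sorts above, and the function action is the one just described), and is in no way part of the calculus itself.

# Illustrative mirror of part of the sorted signature: action() maps an Agent
# and an ActionType to an Action (a subtype of Event).
from dataclasses import dataclass

@dataclass(frozen=True)
class Agent:
    name: str

@dataclass(frozen=True)
class ActionType:
    name: str

@dataclass(frozen=True)
class Event:
    pass

@dataclass(frozen=True)
class Action(Event):
    agent: Agent
    action_type: ActionType

def action(a: Agent, alpha: ActionType) -> Action:
    return Action(a, alpha)

print(action(Agent("jack"), ActionType("running")))
# Action(agent=Agent(name='jack'), action_type=ActionType(name='running'))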

A.1 Syntax

The syntax has two components: a first-order core and a modal system that builds upon this first-order core. The figures below show the syntax and inference schemata of DCEC. The syntax is that of quantified modal logic. The first-order core of DCEC is the event calculus Mueller (2006). Commonly used function and relation symbols of the event calculus are included. Other calculi (e.g. the situation calculus) for modeling commonsense and physical reasoning can easily be switched out in place of the event calculus.

The modal operators present in the calculus include the standard operators for knowledge K, belief B, desire D, intention I, etc. The general format of an intensional operator is K(a, t, φ), which says that agent a knows at time t the proposition φ. Here φ can in turn be any arbitrary formula. Also, note the following modal operators: P for perceiving a state, C for common knowledge, S for agent-to-agent communication and public announcements, B for belief, D for desire, I for intention, and, finally and crucially, a dyadic deontic operator O that states when an action is obligatory or forbidden for agents. It should be noted that DCEC is one specimen in a family of easily extensible cognitive calculi.

The calculus also includes a dyadic (arity = 2) deontic operator O. It is well known that the unary ought of standard deontic logic leads to contradictions. Our dyadic version of the operator blocks the standard list of such contradictions, and beyond. [Footnote 21: An overview of this list is given lucidly in McNamara (2010).]

[Syntax box: formal grammar not reproduced in this version.]

A.2 Inference Schemata

The figure below shows the inference schemata for DCEC. The first two are inference schemata that let us model idealized agents whose knowledge and belief are closed under the DCEC proof theory. While normal humans are not deductively closed, this lets us model more closely how deliberate agents such as organizations and more strategic actors reason. (Some dialects of cognitive calculi restrict the number of iterations on intensional operators.) The next two state, respectively, that it is common knowledge that perception leads to knowledge, and that it is common knowledge that knowledge leads to belief. Another schema lets us expand out common knowledge as unbounded iterated knowledge. Another states that knowledge of a proposition implies that the proposition holds. Others provide for a more restricted form of reasoning for propositions that are common knowledge, as opposed to propositions that are merely known or believed. Another states that if an agent s communicates a proposition φ to an agent a, then a believes that s believes φ. The final schema dictates how obligations get translated into intentions.

[Inference-schemata box: formal schemata not reproduced in this version.]