The ILASP system for Inductive Learning of Answer Set Programs

05/02/2020 ∙ by Mark Law, et al.

The goal of Inductive Logic Programming (ILP) is to learn a program that explains a set of examples in the context of some pre-existing background knowledge. Until recently, most research on ILP targeted learning Prolog programs. Our own ILASP system instead learns Answer Set Programs, including normal rules, choice rules and hard and weak constraints. Learning such expressive programs widens the applicability of ILP considerably; for example, enabling preference learning, learning common-sense knowledge, including defaults and exceptions, and learning non-deterministic theories. In this paper, we first give a general overview of ILASP's learning framework and its capabilities. This is followed by a comprehensive summary of the evolution of the ILASP system, presenting the strengths and weaknesses of each version, with a particular emphasis on scalability.


1 Introduction

The ability to declaratively specify real-world problems and efficiently generate solutions from such specifications is of particular interest in both academia and industry [10, 9]. A typical paradigm is Answer Set Programming (ASP) [15, 3], which allows a problem to be described in terms of its specification, rather than requiring a user to define an algorithm to solve the problem. Its solvers are capable of constructing solutions from the specifications alone and, where needed, ranking solutions according to optimisation criteria. The interpretable nature of the ASP language also enables the generation of explanations, which is particularly relevant in AI-driven applications. Due to its rich language and efficient solving, ASP has been applied to a wide range of classical areas in AI – including planning, scheduling and diagnosis – and is increasingly being applied in industry [9]; for example, in decision support systems, automated product configuration and configuration of safety systems [10]. On the other hand, developing ASP specifications of real-world problems can be a difficult task for non-experts. Furthermore, the dynamic nature of the contexts in which real-world AI applications tend to operate can require the ASP specification of a problem to be regularly updated or revised. To widen the dissemination of ASP in practice, it is therefore crucial to develop methods that can automatically learn ASP specifications from examples of (partial) solutions to real-world problems. Such learning mechanisms could also provide ways for automatically evolving and revising ASP specifications in dynamic environments.

Within the last few years, we have addressed this problem and developed a novel system, called Inductive Learning of Answer Set Programs (ILASP) [21]. The theoretical framework underpinning the ILASP system differs from conventional approaches for Inductive Logic Programming (ILP), which are mainly focused on learning Prolog programs. Firstly, due to the declarative nature of ASP, the learning process in ILASP primarily targets learning the logical specification of a problem, rather than the procedure for solving that problem. Secondly, programs learned by ILASP can include extra types of rules that are not available in Prolog, such as choice rules and hard and weak constraints. Enabling the learning of these extra rules has opened up new applications, which were previously out of scope for ILP systems; for instance, learning weak constraints allows ILASP to learn a user's preferences from examples of which solutions the user prefers [20].

ILASP’s learning framework has been proved to generalise existing frameworks and systems for learning ASP programs [24], such as the brave learning framework [30], adopted by almost all previous systems (e.g. XHAIL [29], ASPAL [6], ILED [16], RASPAL [2]), and the less common cautious learning framework [30]. Brave systems require the examples to be covered in at least one answer set of the learned program, whereas cautious systems find a program which covers the examples in every answer set. We showed in [24] that some ASP programs cannot be learned with either a brave or a cautious approach, and that to learn ASP programs in general, a combination of both brave and cautious reasoning is required. ILASP’s learning framework enables this combination, and is capable of learning the full class of ASP programs [24]. ILASP’s generality has allowed it to be applied to a wide range of applications, including event detection [23], preference learning [20], natural language understanding [4], learning game rules [7], grammar induction [17] and automata induction [11].

In this paper we give an introduction to ILASP’s learning framework and its capabilities, demonstrating the various types of examples that ILASP can learn from, and describe the evolution of the ILASP system. This evolution has been driven by a need for efficiency with respect to various dimensions, including handling noisy data, large numbers of examples and large search spaces. We discuss the strengths and weaknesses of each variation of the ILASP system, explaining how each system improves on the efficiency of its predecessor. We conclude with a discussion of recent developments and future research directions.

2 Components of a Learning Task

ILASP is used to solve a learning task, which consists of three main components: the background knowledge, the mode bias and the examples. The background knowledge B is an ASP program, which describes a set of concepts that are already known before learning. ILASP accepts a subset of ASP (for a formal definition of this subset, please see the ILASP manual at http://www.ilasp.com/manual), consisting of normal rules, choice rules and hard and weak constraints. We use the term rule to refer to any of these four components.
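As an illustration (not drawn from any task in this paper), the following snippet shows one rule of each of these four forms in standard ASP (Clingo) syntax, over a hypothetical graph-colouring domain:

% Normal rule: a node is uncoloured if it is not red.
uncoloured(N) :- node(N), not red(N).

% Choice rule: each node may or may not be coloured red.
0 { red(N) } 1 :- node(N).

% Hard constraint: two adjacent nodes cannot both be red.
:- edge(N1, N2), red(N1), red(N2).

% Weak constraint: prefer answer sets in which as many nodes as possible are red.
:~ node(N), not red(N). [1@1, N]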

The mode bias (often called a language bias) is used to express the ASP programs that can be learned; for example, it specifies which predicates may be used in the head/body of learned rules, and how they may be used together. From the mode bias, it is possible to construct a (finite) set of rules called the rule space (in other literature, the rule space is often called the hypothesis space), which contains every rule that is compatible with the mode bias. The power set of the rule space is called the program space, and contains the set of all ASP programs that can be learned.

The examples describe a set of semantic properties that the learned program should satisfy. When the semantic property of an individual example is satisfied, we say that the example is covered. The goal of an ILP system, such as ILASP, is to find a program H (often called a hypothesis) such that B ∪ H (the combination of this program with the background knowledge) covers every example. Many ILP systems (including ILASP) follow the principle of Occam's Razor, that the simplest solution should be preferred, and therefore search for an optimal program, which is the shortest in terms of the number of literals.

Many ILP systems learn from (positive and negative) examples of atoms which should be true or false. This is because many ILP systems are targeted at learning Prolog programs, where the main "output" of a program is a query of a single atom. In ASP, the main "output" of a program is a set of answer sets. For this reason, ILASP learns from positive and negative examples of (partial) interpretations, which should or should not (respectively) be an answer set of the learned program. These examples are sufficient to learn any ASP program consisting of normal rules, choice rules and hard constraints (up to strong equivalence) [24]; however, it is not possible to learn weak constraints using only positive and negative examples, because they can only specify what should (or should not) be an answer set. Weak constraints do not have any effect on what is or is not an answer set – they only create a preference ordering over the answer sets. For this reason, ILASP allows a second type of example called an ordering example, the semantic property of which is a preference ordering over a pair of answer sets of B ∪ H. Learning weak constraints corresponds to a form of preference learning.

2.1 Positive and Negative Examples

Consider a very simple setting, where we want to learn a program that describes the behaviour of three (specific) coins, by flipping them and observing which sides they land on. We can use a very simple mode bias to describe the rules we are allowed to learn. The predicates heads and tails are both allowed to appear in the head with a single argument, which is either a variable or a constant of type coin (where there are three constants of type coin in the domain: c1, c2 and c3). In the body, we can use three predicates (both positively and negatively): heads, tails and coin. The mode bias expressing this language is shown below.

#modeh(heads(var(coin))).
#modeh(tails(var(coin))).

#modeb(heads(var(coin))).
#modeb(tails(var(coin))).
#modeb(coin(var(coin))).

#modeh(heads(const(coin))).
#modeh(tails(const(coin))).

#constant(coin, c1).
#constant(coin, c2).
#constant(coin, c3).

We flip the coins twice, and see the following combinations of observations: heads(c1), tails(c2), heads(c3); and heads(c1), heads(c2), tails(c3). We can encode these "observations" in ILASP using positive examples. Positive examples specify properties which should hold in at least one answer set of the learned program (B ∪ H). The two observations are represented by the following two examples:

#pos({heads(c1), tails(c2), heads(c3)},
     {tails(c1), heads(c2), tails(c3)}).

#pos({heads(c1), heads(c2), tails(c3)},
     {tails(c1), tails(c2), heads(c3)}).

Each positive example contains two sets of ground atoms, called the inclusions and the exclusions (respectively). For a positive example to be covered, there must be at least one answer set of B ∪ H that contains all of the inclusions and none of the exclusions. In this case, these examples mean that there must be (at least) two answer sets: one which contains heads(c1), tails(c2) and heads(c3), and does not contain tails(c1), heads(c2) or tails(c3); and another which contains heads(c1), heads(c2) and tails(c3), and does not contain tails(c1), tails(c2) or heads(c3). Although these particular examples completely describe the values of all coins, this does not need to be the case in general. Partial examples allow us to represent uncertainty; for example, we could have a fourth coin for which we do not know the value.
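For instance, if the background knowledge also contained a hypothetical fact coin(c4), the first observation could still be recorded without committing to a value for c4; atoms that appear in neither the inclusions nor the exclusions are left unconstrained:

#pos({heads(c1), tails(c2), heads(c3)},
     {tails(c1), heads(c2), tails(c3)}).
% c4 is not mentioned, so a covering answer set may contain heads(c4),
% tails(c4), or neither.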

Together with the above examples, we can also give the following, very simple, background knowledge, which defines the set of coins we have:

coin(c1).    coin(c2).    coin(c3).

If we run this task in ILASP, then ILASP returns the following solution:

heads(V1) :- coin(V1), not tails(V1).
tails(V1) :- coin(V1), not heads(V1).

This program states that every coin must land on either heads or tails, but not both. Although the first coin has never landed on tails in the scenarios we have observed, ILASP has generalised to learn first-order rules that apply to all coins, rather than specific ground rules that only explain the specific instances we have seen. The ability to generalise in this way is a huge advantage of ILP systems over other forms of machine learning, because it usually means that ILP techniques require very few examples to learn general concepts. Note that both positive examples are required for ILASP to learn this general program. Neither positive example on its own is sufficient because in both cases there is a shorter program that explains the example – the set of facts heads(c1). tails(c2). heads(c3). covers the first example, and similarly the set of facts heads(c1). heads(c2). tails(c3). covers the second.

It may be that after many more observations, we still have not witnessed c1 landing on tails, and we could be convinced that it never will. In this case, we can use ILASP's negative examples to specify that there should be no answer set that contains tails(c1). This example is expressed in ILASP as follows:

#neg({tails(c1)}, {}).

Given this extra example, ILASP learns the slightly larger program:

heads(V1) :- coin(V1), not tails(V1).
tails(V1) :- coin(V1), not heads(V1).
heads(c1).

This program states that all coins must land on either heads or tails, but not both, except for c1, which can only land on heads. Note that negative examples often cause ILASP to learn programs with rules that eliminate answer sets. In this case, the fact heads(c1) eliminates all answer sets that contain tails(c1). Negative examples are often used to learn constraints. The constraint ":- tails(c1)." would have had the same effect; however, it is not permitted because the mode bias does not allow constants to be used in the body of a rule.
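As a hypothetical variant of the task above, if the mode bias were extended so that the coin constants could also appear in rule bodies, the constraint form would become learnable, eliminating the same answer sets as the fact heads(c1):

% Extra (hypothetical) mode declaration allowing constants in rule bodies:
#modeb(tails(const(coin))).

% The constraint that could then be learned in place of the fact heads(c1):
:- tails(c1).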

2.1.1 Context-dependent Examples.

Positive and negative examples of partial answer sets are targeted at learning a fixed program, B ∪ H. When ASP is used in practice, however, a program representing a general problem definition is often combined with another set of rules (usually just facts) describing a particular instance of the problem to be solved. For instance, a general program defining what it means for a graph to be Hamiltonian (i.e. the general problem definition) can be combined with a set of facts describing a particular graph (i.e. a problem instance). The combined program is satisfiable if and only if the graph represented by the set of facts is Hamiltonian. The context-dependent behaviour of the general Hamiltonian program cannot be captured by positive and negative examples of partial answer sets. Instead, we need an extension, called a context-dependent example. This allows each example to come with its own extra bit of background knowledge, called a context C, which applies only to that example. It is now B ∪ H ∪ C that has to satisfy the semantic properties of the example, rather than B ∪ H. In ILASP, the context of an example is expressed by adding an extra set to the example, containing the context.

#pos({}, {}, {
  node(1..4).
  edge(1, 2).
  edge(2, 3).
  edge(3, 4).
  edge(4, 1).
}).

#neg({}, {}, {
  node(1..4).
  edge(1, 2).
  edge(2, 1).
  edge(2, 3).
  edge(3, 4).
  edge(4, 2).
}).

Figure 1: One positive and one negative example of Hamiltonian graphs, given as ILASP context-dependent examples. (In the original figure, each example is shown alongside a drawing of the corresponding graph over nodes 1 to 4.)

Consider the two context-dependent examples in Figure 1. Both examples have empty inclusions and exclusions. In the case of a positive example, this simply means that there must exist at least one answer set of B ∪ H ∪ C – any answer set is consistent with the empty partial interpretation – and in the case of a negative example, it means that there should be no answer set of B ∪ H ∪ C. Given a sufficient number of examples of this form, ILASP can be used to learn a program that corresponds to the definition of a Hamiltonian graph; i.e. the program is satisfiable if and only if the set of facts represents a Hamiltonian graph. The full program learned by ILASP is:

0 { in(V0, V1) } 1 :- edge(V0, V1).
reach(V0) :- in(1, V0).
reach(V1) :- reach(V0), in(V0, V1).
:- not reach(V0), node(V0).
:- V1 != V2, in(V0, V2), in(V0, V1).

This example shows the high expressive power of ILASP, compared to many other ILP systems, which are only able to learn definite logic programs. In this case, ILASP has learned a choice rule, constraints and a recursive definition of reachability. The full learning task, hamilton.las, used to learn this program is available online (for instructions on how to install ILASP, see http://www.ilasp.com; all learning tasks discussed in this section are available at http://www.ilasp.com/research).

2.2 Ordering Examples

Positive and negative examples can be used to learn any ASP program consisting of normal rules, choice rules and hard constraints (this result holds up to strong equivalence, meaning that given any such ASP program P, it is possible to learn a program that is strongly equivalent to P [24]). As positive and negative examples can only express what should or should not be an answer set of the learned program, they cannot be used to learn weak constraints, which do not affect what is or is not an answer set. Weak constraints create a preference ordering over the answer sets of a program, so in order to learn them we need to give examples of this preference ordering – i.e. examples of which answer sets should be preferred to which other answer sets. These ordering examples come in two forms: brave orderings, which express that at least one pair of answer sets that satisfy the semantic properties of a pair of positive examples are ordered in a particular way; and cautious orderings, which express that every such pair of answer sets should be ordered in that way.

Consider a scenario in which a user is planning journeys from one location to another. All journeys consist of several legs, in which the user may take various modes of transport. Other known attributes of the journey legs are the distance of the leg, and the crime rating of the area (which ranges from 0 – no crime – to 5 – extremely high). By offering the user various journey options, and observing their choices, we can use ILASP to learn the preferences the user is using to make such choices. The options a user could take can be represented using context-dependent examples. Four such examples are shown below. Note that the first argument of the example is a unique identifier for the example. This identifier is optional, but is needed when expressing ordering examples.

#pos(eg_a, {}, {}, {
  leg_mode(1, walk).
  leg_crime_rating(1, 2).
  leg_distance(1, 500).
  leg_mode(2, bus).
  leg_crime_rating(2, 4).
  leg_distance(2, 3000).
}).

#pos(eg_b, {}, {}, {
  leg_mode(1, bus).
  leg_crime_rating(1, 2).
  leg_distance(1, 4000).
  leg_mode(2, walk).
  leg_crime_rating(2, 5).
  leg_distance(2, 1000).
}).

#pos(eg_c, {}, {}, {
  leg_mode(1, bus).
  leg_crime_rating(1, 2).
  leg_distance(1, 400).
  leg_mode(2, bus).
  leg_crime_rating(2, 4).
  leg_distance(2, 3000).
}).

#pos(eg_d, {}, {}, {
  leg_mode(1, bus).
  leg_crime_rating(1, 5).
  leg_distance(1, 2000).
  leg_mode(2, bus).
  leg_crime_rating(2, 1).
  leg_distance(2, 2000).
}).

By observing a user's choices, we might see that the user prefers the journey represented by eg_a to the one represented by eg_b. This can be expressed in ILASP using an ordering example:

#brave_ordering(eg_a, eg_b, <).

This states that at least one answer set of B ∪ H ∪ C_a must be preferred to at least one answer set of B ∪ H ∪ C_b (where C_a and C_b are the contexts of the examples eg_a and eg_b, respectively, and B, in this simple case, is empty). The final argument of the brave ordering is an operator, which says how the answer sets should be ordered. The operator < means "strictly preferred". It is also possible to use any of the other binary comparison operators: >, <=, >=, = or !=. For instance, the following example states that the journeys represented by eg_c and eg_d should be equally preferred.

#brave_ordering(eg_c, eg_d, =).

By using several such ordering examples, it is possible to learn weak constraints corresponding to a user’s journey preferences. For example, the learning task journey.las (available online) causes ILASP to learn the following set of weak constraints:

:~ leg_mode(L, walk), leg_crime_rating(L, C), C > 3.[1@3, L, C]
:~ leg_mode(L, bus).[1@2, L]
:~ leg_mode(L, walk), leg_distance(L, D).[D@1, L, D]

These weak constraints represent that the user’s top priority is to minimise the number of legs of the journey in which the user must walk through an area with a high crime rating; their next priority is to minimise the number of buses the user must take; and finally, their lowest priority is to minimise the total walking distance of their journey.

Note that in the given scenario there is always a single answer set of B ∪ H ∪ C for each of the contexts C, meaning that brave and cautious orderings coincide. When B ∪ H ∪ C may have multiple answer sets, the distinction is important, and cautious orderings are much stronger than brave orderings, expressing that the preference ordering holds universally over all pairs of answer sets that meet the semantic properties of the positive examples.
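For comparison, a cautious version of the earlier ordering example can be written as follows; in this scenario, where each context yields exactly one answer set, it imposes the same requirement as the brave ordering (shown purely for illustration):

#cautious_ordering(eg_a, eg_b, <).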

2.3 Noisy Examples

Everything presented so far in this paper assumes that all examples are correctly labelled, and therefore that all examples should be covered by the learned program. In real applications, of course, this is often not the case; examples may be noisy (i.e. mislabelled), and so finding a program that covers all examples may not be possible, or even desirable (as this might mean overfitting the examples). In ILASP, each example can be given a penalty, which is a cost for not covering that example. The search for an optimal learned program now looks for a program that minimises the sum of the program's length and the penalties of all examples that it does not cover. We have used this approach to noise to apply ILASP to a wide range of real world problems, including event detection [23], sentence chunking [23], natural language understanding [4] and user preference learning [23].
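In ILASP's input language, a penalty is attached to the example's identifier. For instance, a hypothetical noisy variant of the first coin example could be written as follows; leaving it uncovered would add 5 to the score of the learned program, whereas an example declared without a penalty must always be covered:

#pos(noisy_eg@5, {heads(c1), tails(c2), heads(c3)},
                 {tails(c1), heads(c2), tails(c3)}).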

The function described above (program length plus the total penalty of the uncovered examples) is one example of a scoring function. Most systems, including ILASP, come with a built-in scoring function that cannot be modified, but in recent work [18], we have developed a new system that allows the user to define their own scoring function, allowing a custom (domain-specific) interpretation of optimality.

3 Evolution of the ILASP system

Although we refer to ILASP as a single system, in reality it is a collection of algorithms, each developed to address a scalability weakness of its predecessor (the learning framework has also been expanded with each new algorithm; however, older algorithms have been updated so that every ILASP algorithm supports the most general version of the learning framework). Table 1 considers various "dimensions" of learning tasks and shows which ILASP algorithms scale with respect to each of these dimensions.

             Scales with respect to:
Algorithm    Any Negative Examples?   # of Examples   Level of Noise   Size of the Rule Space
ILASP1       No                       No              No               No
ILASP2       Yes                      No              No               No
ILASP2i      Yes                      Yes             No               No
ILASP3       Yes                      Yes             Yes              No

Table 1: A summary of the scalability of each ILASP system, with respect to various dimensions of learning tasks.

Note that although newer versions of ILASP scale with respect to various dimensions of learning tasks, none of the current ILASP systems scales with respect to large rule spaces; however, this is being addressed in current work (for more details, see the next section).

[Figure omitted: flowcharts of ILASP1 and ILASP2. In both, the learning task is pre-processed into an ASP program, solved via multi-shot calls to Clingo, and post-processed into the learned program. ILASP1 iterates over hypothesis lengths n (starting at n = 0 and incrementing n each iteration), computing positive and violating hypotheses; ILASP2 alternates between computing an optimal positive hypothesis and finding a violation.]

Figure 2: The ILASP1 and ILASP2 procedures, showing the positive and violating hypotheses of length n and the violating reasons computed so far.

3.1 ILASP1 and ILASP2

As depicted in Figure 2, the first two ILASP systems both have three main phases: (1) pre-processing; (2) solving; and (3) post-processing. In the first phase, ILASP1 and ILASP2 map the input learning task into an ASP program. Next, this ASP program is solved by the Clingo ASP solver [14, 12]. Finally, the answer set returned by Clingo is post-processed to extract the learned program.

The procedures of the ILASP1 [19] and ILASP2 [20] algorithms are encoded using Clingo's built-in scripting feature, which allows a technique called multi-shot solving [13] to be used. Multi-shot solving enables a program to be solved iteratively, each time adding new parts to the program (or removing existing ones). The difference between the first two ILASP algorithms is in the multi-shot procedure. Both systems rely on the concepts of positive hypotheses – programs that cover all of the positive examples – and violating hypotheses – which also cover all of the positive examples, but do not cover at least one negative example. Starting at length n = 0, in each iteration, ILASP1 computes all the violating hypotheses of length n, converts each violating hypothesis to a constraint, which is added to the program, and then searches for a positive hypothesis of length n that does not violate any of the computed constraints (i.e. a program of length n which covers all of the examples). If there is such a program, it is returned; otherwise, n is incremented and the next iteration begins. As there can be a large number of violating hypotheses, and because there is one constraint per violating hypothesis, this process can be very inefficient if there is at least one negative example. ILASP2, on the other hand, computes a single (optimal) positive solution H in each iteration. If H is a violating hypothesis, then it extracts a "violating reason", which explains why H is violating. It then encodes this violating reason into a set of ASP rules, which are added to the program in order to rule out not only H, but also any other program which is violating for the same reason. Compared with ILASP1, ILASP2 adds far less to the program in each iteration, and often requires fewer iterations (as the same violating reason will often apply to many violating hypotheses), leading to orders of magnitude improvement in performance on tasks with negative examples.

3.2 ILASP2i

The number of rules in the grounding of the ASP encoding used by ILASP1 and ILASP2 is proportional to the number of examples in the learning task. As the size of the grounding of an ASP program is one of the major factors in how long it takes for Clingo to solve that program, this means that ILASP1 and ILASP2 do not scale with respect to the number of examples.

In real datasets, there is often considerable overlap between the concepts required to cover several different examples. In other words, there are classes of examples such that each example in a class is covered by exactly the same programs as every other example in the class. In a non-noisy setting (where all examples must be covered), only one example per class is actually required, and all other examples are "irrelevant". The idea behind ILASP2i is to construct a subset of the examples called the relevant examples, which is often significantly smaller than the full set of examples, but nonetheless still forces ILASP to learn the correct program. The construction of the relevant examples is achieved by interleaving the search for an optimal program H that covers the (partially constructed) set of relevant examples with a second search for a new relevant example – an example that is not covered by the current program H. This interleaving is illustrated in Figure 3. At the start of the process, the program H is set to be empty, because at this point this is the shortest program that covers the (empty) set of relevant examples. In each iteration, ILASP2i searches for a new relevant example, and if it finds one, it searches for a new program (updating H) that covers all relevant examples found so far, using ILASP2 to perform the search. If no relevant example exists, then H is an optimal solution of the task, and is returned. An example of the ILASP2i procedure, based on the coin learning task from the previous section, is shown in Figure 4.

[Figure omitted: flowchart of ILASP2i. A relevant example search (a pre-processor followed by a single-shot call to Clingo) extracts a new relevant example from the task; ILASP2 (a pre-processor, a multi-shot call to Clingo and a post-processor) is then run on the relevant examples found so far, updating the current program H. When no new relevant example exists, H is returned as the learned program.]

Figure 3: The ILASP2i procedure, which maintains the set of relevant examples computed so far.
% Background Knowledge

coin(c1).
coin(c2).
coin(c3).

% Examples

#pos(eg1, {heads(c1), tails(c2), heads(c3)},
          {tails(c1), heads(c2), tails(c3)}).

#pos(eg2, {heads(c1), heads(c2), tails(c3)},
          {tails(c1), tails(c2), heads(c3)}).

#pos(eg3, {tails(c1), heads(c2), tails(c3)},
          {heads(c1), tails(c2), heads(c3)}).

#pos(eg4, {tails(c1), tails(c2), tails(c3)},
          {heads(c1), heads(c2), heads(c3)}).


% Mode bias

#modeh(heads(var(coin))).
#modeh(tails(var(coin))).
#modeh(heads(const(coin))).
#modeh(tails(const(coin))).

#modeb(heads(var(coin))).
#modeb(tails(var(coin))).
#modeb(coin(var(coin))).

#constant(coin, c1).
#constant(coin, c2).
#constant(coin, c3).
    
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Iteration 1
%%
%% Hypothesis:
%%
%% Searching for an uncovered example...
%% The hypothesis does not cover the example: eg1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Iteration 2
%%
%% Searching for a hypothesis that covers the
%% examples in { eg1 }.
%%
%% Hypothesis:
%%
%% tails(c2).  heads(c1).  heads(c3).
%%
%% Searching for an uncovered example...
%% The hypothesis does not cover the example: eg2
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Iteration 3
%%
%% Searching for a hypothesis that covers the
%% examples in { eg1, eg2 }.
%%
%% Hypothesis:
%%
%% heads(V1) :- coin(V1), not tails(V1).
%% tails(V1) :- coin(V1), not heads(V1).
%%
%% Searching for an uncovered example...
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Solution found:
heads(V1) :- coin(V1), not tails(V1).
tails(V1) :- coin(V1), not heads(V1).
    
Figure 4: On the left, an extension of the coin learning task from Section 2 (with more examples), and on the right, the output from ILASP2i. In the first iteration, ILASP2i searches for an example that is not covered by the empty program, and finds eg1. In the second iteration, it finds a very specific program that covers eg1, and then finds the second relevant example eg2. Next, it searches for a program that covers both relevant examples, and finds a more general program. As this program covers all examples, no further relevant examples are computed, and the process terminates.

The experiments in [22] demonstrate that ILASP2i can be over two orders of magnitude faster than ILASP2 on tasks with hundreds of examples. The reason is that the call to ILASP2 calls Clingo with an ASP program whose grounding is only proportional to the number of relevant examples, rather than to the full set of examples. Note that although the call to Clingo in the relevant example search considers all examples, its grounding is not proportional to the number of examples and it only requires single-shot solving in Clingo, meaning that each call to Clingo is relatively cheap compared to the Clingo execution in ILASP2.

3.2.1 Relation to other approaches.

ILASP1 and ILASP2 are examples of batch learners, which consider all examples simultaneously. Some older ILP systems, such as Aleph [31], Progol [27] and HAIL [28], incrementally consider each positive example in turn, employing a cover loop. The idea behind a cover loop is that the algorithm starts with an empty program H and, in each iteration, adds new rules to H such that a single positive example is covered, and none of the negative examples are covered. This approach does not work in a non-monotonic setting, as new rules could "undo" the coverage of previously covered examples. For this reason, most ASP learners are batch learners (e.g. [29, 6]). ILASP2i's method of using relevant examples can essentially be thought of as a non-monotonic version of the cover loop. There are three main differences:

  1. In cover loop approaches, in each iteration a previous program H is extended with extra rules, giving a new program that contains H. In ILASP2i, a completely new program is learned in each iteration. This not only resolves the issue of non-monotonicity, but is also necessary to guarantee that optimal programs are computed. Many cover loop approaches make no guarantee about the optimality of the final learned program.

  2. In ILASP2i, the set of relevant examples is maintained and used in every iteration, whereas in cover loop approaches, only one example is considered per iteration.

  3. In cover loop approaches, once an example has been processed, even if it did not cause any changes to the current program H, it is guaranteed to be covered by any future program and so it is not checked again. In ILASP2i, this is not the case. ILASP2i performs the search for relevant examples on the full set of examples, even if some were previously known to be covered.

ILASP2i is also somewhat similar to active learning algorithms, such as Angluin's L* [1]. Active learners are able to query an oracle as to whether what they have learned is correct. The oracle is then able to provide counterexamples to aid further learning. ILASP2i's set of relevant examples is very similar to the counterexamples provided by the oracle. The main difference between the two approaches is that in active learning, the oracle is assumed to know the correct definition of the concept being learned, whereas in ILASP2i, this is not the case, and the search for relevant examples is only over the provided training examples.

3.3 ILASP3

[Figure omitted: flowchart of ILASP3. A relevant example search (single-shot call to Clingo) extracts a new relevant example, which the Example Translator converts into coverage constraints; an implication check then determines for which other examples those constraints are necessary. The program search (another single-shot call to Clingo) computes an optimal program with respect to the constraints, and a post-processor returns the learned program H once no further relevant example exists.]

Figure 5: The ILASP3 algorithm.

Although the concept of relevant examples allows ILASP2i to scale far better than ILASP2 with respect to the number of examples, this only holds in a non-noisy setting, where all examples must be covered. When examples can be noisy, finding a single relevant example only means that a penalty must be paid if the example is not covered. Many relevant examples (of the same class) may need to be found before their total penalty makes it “worth” covering the examples. For this reason, the final set of relevant examples is usually much larger in noisy settings, which significantly reduces the technique’s impact on scalability; in fact, ILASP2i is often even slower than ILASP2 on tasks with large numbers of potentially noisy examples (i.e. large numbers of examples with finite penalties) [25].

ILASP3 uses a novel method of translating an example into a set of coverage constraints over the solution space. The intuition of these coverage constraints is that they give a list of conditions which are satisfied by exactly those programs that cover the example (e.g. the learned program must include at least one of a certain set of rules, or none of another set of rules). The benefit of having these constraints is twofold. Firstly, finding the optimal program that conforms to the constraints can be performed using a single-shot ASP call, rather than a multi-shot call as in ILASP2i, meaning that in ILASP3 the search for the optimal program is computationally much cheaper. This is because most of the work is done in the translation procedure (which does, itself, use a multi-shot call to Clingo). Secondly, once the constraints for one example have been computed, it is possible to check whether these constraints are necessary for other examples to be covered. This second benefit is the main reason why ILASP3 performs so much better than ILASP2i on tasks with noisy examples. After a relevant example is found (and translated), it is known that the computed constraints must be satisfied by the learned program; otherwise a whole set of examples will not be covered (and the penalties of each example in this set must be paid), rather than just the single relevant example as in ILASP2i. This can significantly reduce the number of iterations required by ILASP3, compared with ILASP2i.

Figure 5 depicts the procedure of ILASP3. It is similar in structure to ILASP2i, and similarly interleaves the program search with a search for relevant examples. The main difference is the addition of the Example Translator, which has two steps: firstly, it translates the relevant example into a set of constraints on the solution space; and secondly, in the implication check step, it checks which other examples would be guaranteed not to be covered if the coverage constraints were not satisfied – i.e. for which other examples the coverage constraints are necessary conditions. The coverage constraints give an approximation of the coverage and score of every program in the program space. This approximation of a program's score is always guaranteed to be less than or equal to the program's real score, as it can only overestimate the program's coverage (if the program violates a coverage constraint, then it is known not to cover the corresponding examples). The program search computes the optimal program H with respect to the approximation of the score. When the approximation of the score of H is correct (i.e. the approximation of the coverage of H is equal to the true coverage of H), H is guaranteed to be an optimal solution of the learning task, and is returned. Checking whether the approximation is correct is performed by the relevant example search. As the approximation never underestimates the coverage of H, it suffices to only search for a relevant example within the set of examples that the approximation says H should cover. To facilitate this, in addition to returning H, the program search also returns the set of examples which are known not to be covered by H (according to the coverage constraints). The relevant example search is then restricted to the remaining examples, rather than the full example set.

In addition to the procedure described in this section, ILASP3 has several other optional features, designed to boost performance on certain types of task. For more information, please see [25].

4 Current and Future Work

4.1 Conflict-driven ILP and ILASP4

Meta-level ILP systems, such as TAL [5], ASPAL [6] and Metagol [26, 8], encode an ILP task as a fixed meta-level logic program, which is solved by an off-the-shelf Prolog or ASP solver, after which the meta-level solution is translated back to an (object-level) inductive solution of the ILP task.

At first glance, the earliest ILASP systems (ILASP1 and ILASP2) may seem to be meta-level systems, and they do indeed involve encoding a learning task as a meta-level ASP program; however, they are actually in a more complicated category. Unlike “pure” meta-level systems, the ASP solver is not invoked on a fixed program, and is instead (through the use of multi-shot solving) incrementally invoked on a program that is growing throughout the execution.

With each new version, ILASP has shifted further away from pure meta-level approaches, towards a new category of ILP system, which we call conflict-driven. Conflict-driven ILP systems, inspired by conflict-driven SAT and ASP solvers, iteratively construct a set of constraints on the solution space – where the term constraint is used very loosely to mean anything that partitions the solution space into one partition that satisfies the constraint and another that does not – which must be satisfied by any inductive solution. In each iteration, the solver finds a program H that satisfies the current constraints, then searches for a conflict, which corresponds to a reason why H is not an (optimal) inductive solution. If none exists, then H is returned; otherwise, the conflict is converted to a new constraint which subsequent programs must satisfy.

In some sense ILASP2i is already a conflict-driven ILP system, where the relevant examples in each iteration are the conflicts, although it is not really in the spirit of a true conflict-driven system: the constraint generated in each iteration is simply that one of the examples must be covered, which was already obvious from the original task. ILASP3 is arguably the first truly conflict-driven ILP system, as it translates the relevant example (the conflict) into a set of constraints on the solution space; however, unlike conflict-driven SAT and ASP approaches, the constraints can be extremely large and expensive to compute, especially when the program space is large. The issue stems from the fact that the constraints are both sufficient and necessary for the example to be covered (i.e. the example is covered if and only if the constraints are satisfied). ILASP4, which is currently in development, relaxes this and computes constraints which are only guaranteed to be necessary (but may not be sufficient) for the example to be covered. This may mean that the same relevant example is found twice, leading to more iterations, but each iteration will be considerably less expensive, and the constraints constructed in each iteration will be significantly smaller.

4.2 FastLAS

Although each ILASP system has improved scalability with respect to several dimensions, one bottleneck that remains is the size of the rule space. This is because every version of ILASP begins by computing the rule space in full. FastLAS [18] is a new algorithm that solves a restricted version of ILASP’s learning task (currently with no recursion, only observational predicate learning and no predicate invention). Rather than generating the rule space in full, FastLAS computes a much smaller subset of the rule space that is guaranteed to contain at least one optimal solution of the task (called an OPT-sufficient subset). As this OPT-sufficient subset is often many orders of magnitude smaller than the full rule space, FastLAS is far more scalable than ILASP. Due to FastLAS’s restrictions, once it has computed the OPT-sufficient subset, it is able to solve the task in one (single-shot) call to Clingo. FastLAS2, which is currently in development, will lift the restrictions and replace the call to Clingo with a call to ILASP, thus enabling ILASP to take advantage of FastLAS’s increased scalability.

References

  • [1] D. Angluin (1987) Learning regular sets from queries and counterexamples. Information and Computation 75 (2), pp. 87–106. Cited by: §3.2.1.
  • [2] D. Athakravi, D. Corapi, K. Broda, and A. Russo (2013) Learning through hypothesis refinement using answer set programming. In International Conference on Inductive Logic Programming, pp. 31–46. Cited by: §1.
  • [3] G. Brewka, T. Eiter, and M. Truszczyński (2011) Answer set programming at a glance. Communications of the ACM 54 (12), pp. 92–103. Cited by: §1.
  • [4] P. Chabierski, A. Russo, M. Law, and K. Broda (2017) Machine comprehension of text using combinatory categorial grammar and answer set programs. Cited by: §1, §2.3.
  • [5] D. Corapi, A. Russo, and E. Lupu (2010) Inductive logic programming as abductive search.. In ICLP (Technical Communications), pp. 54–63. Cited by: §4.1.
  • [6] D. Corapi, A. Russo, and E. Lupu (2012) Inductive logic programming in answer set programming. In Inductive Logic Programming, pp. 91–97. Cited by: §1, §3.2.1, §4.1.
  • [7] A. Cropper, R. Evans, and M. Law (2019) Inductive general game playing. Machine Learning, pp. 1–42. Cited by: §1.
  • [8] A. Cropper and S. H. Muggleton (2016) Metagol system. Cited by: §4.1.
  • [9] E. Erdem, M. Gelfond, and N. Leone (2016) Applications of answer set programming. AI Magazine 37 (3), pp. 53–68. Cited by: §1.
  • [10] A. Falkner, G. Friedrich, K. Schekotihin, R. Taupe, and E. C. Teppan (2018) Industrial applications of answer set programming. KI-Künstliche Intelligenz 32 (2-3), pp. 165–176. Cited by: §1.
  • [11] D. Furelos-Blanco, M. Law, A. Russo, K. Broda, and A. Jonsson (2020) Induction of subgoal automata for reinforcement learning. In AAAI, Cited by: §1.
  • [12] M. Gebser, R. Kaminski, B. Kaufmann, M. Ostrowski, T. Schaub, and P. Wanko (2016) Theory solving made easy with clingo 5. In Technical Communications of the 32nd International Conference on Logic Programming (ICLP 2016), Cited by: §3.1.
  • [13] M. Gebser, R. Kaminski, B. Kaufmann, and T. Schaub (2019) Multi-shot ASP solving with clingo. Theory and Practice of Logic Programming 19 (1), pp. 27–82. Cited by: §3.1.
  • [14] M. Gebser, B. Kaufmann, R. Kaminski, M. Ostrowski, T. Schaub, and M. Schneider (2011) Potassco: the Potsdam answer set solving collection. AI Communications 24 (2), pp. 107–124. Cited by: §3.1.
  • [15] M. Gelfond and V. Lifschitz (1988) The stable model semantics for logic programming.. In ICLP/SLP, Vol. 88, pp. 1070–1080. Cited by: §1.
  • [16] N. Katzouris, A. Artikis, and G. Paliouras (2015) Incremental learning of event definitions with inductive logic programming. Machine Learning 100 (2-3), pp. 555–585. Cited by: §1.
  • [17] M. Law, A. Russo, E. Bertino, K. Broda, and J. Lobo (2019) Representing and learning grammars in answer set programming. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 2919–2928. Cited by: §1.
  • [18] M. Law, A. Russo, E. Bertino, K. Broda, and J. Lobo (2020) FastLAS: scalable inductive logic programming incorporating domain-specific optimisation criteria. In AAAI, Cited by: §2.3, §4.2.
  • [19] M. Law, A. Russo, and K. Broda (2014) Inductive learning of answer set programs. In Logics in Artificial Intelligence - 14th European Conference, JELIA 2014, Funchal, Madeira, Portugal, September 24-26, 2014. Proceedings, pp. 311–325. Cited by: §3.1.
  • [20] M. Law, A. Russo, and K. Broda (2015) Learning weak constraints in answer set programming. Theory and Practice of Logic Programming 15 (4-5), pp. 511–525. Cited by: §1, §1, §3.1.
  • [21] M. Law, A. Russo, and K. Broda (2015) The ILASP system for learning answer set programs. Note: http://www.ilasp.com/ Cited by: §1.
  • [22] M. Law, A. Russo, and K. Broda (2016) Iterative learning of answer set programs from context dependent examples. Theory and Practice of Logic Programming 16 (5-6), pp. 834–848. Cited by: §3.2.
  • [23] M. Law, A. Russo, and K. Broda (2018) Inductive learning of answer set programs from noisy examples. Advances in Cognitive Systems. Cited by: §1, §2.3.
  • [24] M. Law, A. Russo, and K. Broda (2018) The complexity and generality of learning answer set programs. Artificial Intelligence 259, pp. 110–146. Cited by: §1, §2.
  • [25] M. Law (2018) Inductive learning of answer set programs. Ph.D. Thesis, Imperial College London. Cited by: §3.3, §3.3.
  • [26] S. Muggleton and D. Lin (2013) Meta-interpretive learning of higher-order dyadic Datalog: predicate invention revisited. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, pp. 1551–1557. Cited by: §4.1.
  • [27] S. Muggleton (1995) Inverse entailment and Progol. New Generation Computing 13 (3-4), pp. 245–286. Cited by: §3.2.1.
  • [28] O. Ray, K. Broda, and A. Russo (2003) Hybrid abductive inductive learning: a generalisation of Progol. In Inductive Logic Programming, pp. 311–328. Cited by: §3.2.1.
  • [29] O. Ray (2009) Nonmonotonic abductive inductive learning. Journal of Applied Logic 7 (3), pp. 329–340. Cited by: §1, §3.2.1.
  • [30] C. Sakama and K. Inoue (2009) Brave induction: a logical framework for learning from incomplete information. Machine Learning 76 (1), pp. 3–35. Cited by: §1.
  • [31] A. Srinivasan (2001) The Aleph manual. Machine Learning at the Computing Laboratory, Oxford University. Cited by: §3.2.1.