A Scheme-Driven Approach to Learning Programs from Input/Output Equations

02/04/2018, by Jochen Burghardt et al.

We describe an approach to learn, in a term-rewriting setting, function definitions from input/output equations. By confining ourselves to structurally recursive definitions we obtain a fairly fast learning algorithm that often yields definitions close to intuitive expectations. We provide a Prolog prototype implementation of our approach, and indicate open issues of further investigation.


1 Introduction

This paper describes an approach to learn function definitions from input/output equations (henceforth "i/o equations", for brevity; we avoid calling them "examples", as this could cause confusion when we explain our approach along example sort definitions, example signatures, and example functions). In trivial cases, a definition is obtained by syntactical anti-unification of the given i/o equations. In non-trivial cases, we assume a structurally recursive function definition, and transform the given i/o equations into equations for the employed auxiliary functions. The latter are learned from their i/o equations in turn, until a trivial case is reached.

We came up with this approach in 1994 but didn't publish it until today. In this paper, we explain it mainly along some learning examples, leaving a theoretical elaboration for future work. We also indicate several possible improvements that should be investigated further. However, we provide at least a Prolog prototype implementation of our approach.

In the rest of this section, we introduce the term-rewriting setting our approach works in.   In Sect. 2, we define the task of function learning.   In Sect. 3 and 4, we explain the base case and the inductive case of our approach, that is, how to learn trivial functions, and how to reduce learning sophisticated functions to learning easier functions, respectively.   Section 5 sketches some ideas for possible extensions to our approach; it also shows its limitations.   Some runs of our Prolog prototype are shown in Appendix A.

Figure 1: Employed sort definitions

We use a term-rewriting setting that is well-known from functional programming: a sort can be defined recursively by giving its constructors. For example, sort definition 1, shown in Fig. 1, defines the sort nat of all natural numbers in 0-s notation. In this example, we use 0 as a nullary and s as a unary constructor.

Figure 2: Employed function signatures

A sort is understood as representing a possibly infinite set of ground constructor terms, i.e. terms without variables, built only from constructor symbols; e.g. the sort nat represents the set {0, s(0), s(s(0)), ...}. A function has a fixed signature; Fig. 2 gives some examples. The signature of a constructor can be inferred from the sort definition it occurs in, e.g. 0 signature [] --> nat and s signature [nat] --> nat. We don't allow non-trivial equations between constructor terms; therefore, two ground constructor terms are equal iff they are syntactically identical.

Figure 3: Example function definitions

A non-constructor function can be defined by giving a terminating ([DJ90, Sect.5.1, p.270]) term rewriting system for it whose left-hand sides are sufficiently complete ([Gut77], [Com86], [DJ90, Sect.3.2, p.264]). Examples of function definitions are shown in Fig. 3. Given some functions defined by such a term rewriting system, applying any of them to ground constructor argument terms rewrites to a unique ground constructor term. We then say that the application evaluates to that term.
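To make the setting concrete, evaluation by a terminating rewrite system can be sketched in a few lines of Python (a minimal model of our own, not the authors' Prolog code; the nested-tuple term encoding and the rule list PLUS are assumptions for illustration):

```python
def is_var(t):
    return isinstance(t, str)        # variables are plain strings

def match(pat, term, subst):
    """Try to extend `subst` so that applying it to `pat` yields `term`."""
    if is_var(pat):
        if pat in subst:
            return subst if subst[pat] == term else None
        return {**subst, pat: term}
    if is_var(term) or pat[0] != term[0] or len(pat) != len(term):
        return None
    for p, t in zip(pat[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def apply_subst(subst, t):
    if is_var(t):
        return subst[t]              # assumes vars(rhs) occur in the lhs
    return (t[0],) + tuple(apply_subst(subst, a) for a in t[1:])

def evaluate(term, rules):
    """Innermost rewriting; terminates for terminating rule systems."""
    if is_var(term):
        return term
    term = (term[0],) + tuple(evaluate(a, rules) for a in term[1:])
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            return evaluate(apply_subst(s, rhs), rules)
    return term                      # no rule applies: a constructor term

# the usual definition of + over 0-s numbers, as (lhs, rhs) pairs
PLUS = [(("+", ("0",), "vy"), "vy"),
        (("+", ("s", "vx"), "vy"), ("s", ("+", "vx", "vy")))]
```

For example, evaluate(("+", ("s", ("0",)), ("s", ("0",))), PLUS) yields the term for s(s(0)).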

Given a term t, we write vars(t) for the set of variables occurring in t.

2 The task of learning functions

The problem our approach shall solve is the following. Given a set of sort definitions, a non-constructor function symbol, its signature, and a set of input/output equations

for the symbol, construct a term rewriting system defining it such that it behaves as prescribed by the i/o equations. We say that we want to learn a definition for the function, or sloppily, that we want to learn the function, from the given i/o equations.

For example, given sort definition 1, signature 2, and the following input/output ground equations

we are looking for a definition such that all four equations hold. One such definition is

We say that this definition covers the given i/o equations. In contrast, a definition

would cover some of the i/o equations, but not all of them. We wouldn't accept this definition, since we are interested only in function definitions that cover all given i/o equations.

It is well-known that there isn't a unique solution to our problem. In fact, given the i/o equations and an arbitrary function of appropriate domain and range, the definition that looks up each listed input and falls back to the arbitrary function on all other inputs (written in common imperative notation for the sake of readability)

trivially covers all i/o equations. Usually, the "simplest" function definitions are preferred, with "simplicity" being some user-defined measure loosely corresponding to term size and/or case-distinction count, as e.g. in [Bur05, p.8] and [Kit10, p.77]. However, the notion of simplicity depends on the language of available basic operations. (The "invariance theorem" of Kolmogorov complexity theory (e.g. [LV08, p.105, Thm.2.1.1]) bounds, by a constant, the difference in length of the shortest covering definitions written in any two Turing-complete algorithm description languages. It is sometimes misunderstood to enable a language-independent notion of simplicity; however, it does not, at least for small i/o example sets.) In the end, the notion of a "good" definition can hardly be defined more precisely than as one that meets common human prejudice. From our prototype runs we got the feeling that our approach often yields "good" definitions in that sense.

3 Learning functions by anti-unification

One of the simplest ways to obtain a function definition is to syntactically anti-unify the given i/o equations.

Let the given i/o equations be syntactically anti-unified, yielding their least general generalization (lgg for short, see [Plo70, Plo71, Rey70]). If the variable condition holds, i.e. every variable occurring on the lgg's right-hand side also occurs on its left-hand side, then the lgg will cover all given i/o equations.

For example, assume we are to generate a definition for a unary function called

As another example, we can generate a definition for a binary function called

which satisfies the variable condition. (Whenever applied to terms that don't all start with the same function symbol, Plotkin's lgg algorithm returns a variable that uniquely depends on that tuple of terms; we indicate the originating terms by an index sequence.) Hence, when the function is defined by this lgg equation, it covers all three given i/o equations.
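The lgg computation, including the convention of variables indexed by their originating term pair, can be sketched as follows (our own Python model; terms are nested tuples, variables are plain strings, both an assumption for illustration):

```python
def is_var(t):
    return isinstance(t, str)        # variables are plain strings

def lgg(t1, t2, table):
    """Least general generalization of two terms (Plotkin).
    Each disagreeing subterm pair maps to one variable, so repeated
    pairs reuse the same variable; this is what yields equations
    whose left- and right-hand sides share variables."""
    if (not is_var(t1) and not is_var(t2)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, table) for a, b in zip(t1[1:], t2[1:]))
    return table.setdefault((t1, t2), "v%d" % len(table))

def variable_condition(lhs, rhs):
    """vars(rhs) must be a subset of vars(lhs) for the lgg to be usable."""
    def vars_of(t):
        return {t} if is_var(t) else {v for a in t[1:] for v in vars_of(a)}
    return vars_of(rhs) <= vars_of(lhs)
```

Anti-unifying the equations f(0) = 0 and f(s(0)) = s(0), encoded as '='-rooted terms, yields f(v0) = v0: the disagreeing pair (0, s(0)) occurs on both sides and thus maps to the same variable, so the variable condition holds.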

As a counter-example, the lgg of the above

which violates the variable condition, and thus cannot be used to reduce an application to a ground constructor term, i.e. to evaluate the function.

The above anti-unification approach can be extended in several ways; they are sketched in Sect. 5.1. However, in all but trivial cases, an lgg will violate the variable condition, and we need another approach to learn a function definition.

4 Learning functions by structural recursion

For a function that can't be learned by the method of Sect. 3, we assume a defining term rewriting system that follows a structural recursion scheme obtained from the function's signature and a guessed argument position.

For example, for the function with the signature given in 2 and the only possible argument position, 1, we obtain the schematic equations

where the two new symbols are fresh names of non-constructor functions.

If we could learn appropriate definitions for both fresh functions, we could obtain a definition for the original function just by adding the two schematic equations. The choice for the base-case function is obvious:

In order to learn a definition for the remaining fresh function, we need to obtain appropriate i/o equations for it from those for the original function. Joining the schematic recursive-case equation with the original function's relevant i/o equations yields three i/o equations for the fresh function:

A definition covering these i/o examples has already been derived by anti-unification in Sect. 3 as

Altogether, we obtain the rewriting system

as a definition that covers the original function's given i/o equations. Subsequently, this system may be simplified, by inlining, to

which is the usual definition of the function.

Returning to the computation of i/o equations for the fresh function from those of the original one, note that the derived i/o equations were necessary in the sense that they must be satisfied by each possible definition of the fresh function that leads to the original function covering its i/o equations. Conversely, the derived i/o equations were also sufficient in the sense that each possible definition of the fresh function covering them ensures that the original function covers its recursive-case i/o equations, provided it covers the base-case one:

Observe that the above proofs are based just on permutations of the equation chains. Moreover, note that each coverage proof relies on the coverage for the structurally smaller argument already being proven. That is, the coverage proofs follow the employed structural recursion scheme. As for the base case, coverage of the base-case schematic equation is of course necessary and sufficient for the original function's coverage of its base-case i/o equation.
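The reduction step of this section, deriving i/o equations for the fresh function from those of the original one, can be sketched for a unary function over 0-s numbers (our own Python model; the scheme f(s(n)) = f1(n, f(n)) and the tuple encoding are assumptions for illustration):

```python
def derive_aux_io(io):
    """Given ground i/o equations for a unary f as {argument: result},
    instantiate the recursion scheme  f(s(n)) = f1(n, f(n))  and collect
    the induced i/o equations for the fresh auxiliary function f1.
    An equation for f(s(t)) contributes only if f(t) is also given,
    i.e. if the recursive call can be evaluated from the examples."""
    aux = {}
    for arg, res in io.items():
        if arg[0] == "s" and arg[1] in io:      # arg = s(t), f(t) known
            aux[(arg[1], io[arg[1]])] = res     # f1(t, f(t)) = res
    return aux
```

For instance, for a doubling function (a hypothetical stand-in) given by f(0) = 0, f(s(0)) = s(s(0)), and f(s(s(0))) = s(s(s(s(0)))), this yields f1(0, 0) = s(s(0)) and f1(s(0), s(s(0))) = s(s(s(s(0)))).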

4.1 Non-ground i/o equations

As an example that uses i/o equations containing variables, consider the function with the signature given in 2. Usually, i/o equations for this function are given in a way that indicates that the particular values of the list elements don't matter; an i/o equation with variables as list elements is seen much more often than one enumerating particular numbers. Our approach allows for variables in i/o equations, and treats them as universally quantified. That is, a non-ground i/o equation is covered by a function definition iff all its ground instances are.

Assume for example we are given the i/o equations

Given the function's signature (see 2) and argument position 1, we obtain a structural recursion scheme

Similar to the previous example, we get

and we can obtain i/o equations for the fresh function from those for the original one. (In the rightmost equation of each line, we employ a renaming substitution. For this reason, our approach wouldn't work if the variables were considered non-constructor constants rather than universally quantified variables.)

Again, a function definition covering these i/o equations happens to have been derived by anti-unification in Sect. 3:

Altogether, the derived equations build a rewriting system for the function that covers all its given i/o equations. By subsequently inlining the fresh functions' definitions, we obtain a simplified definition:

which agrees with the usual one found in textbooks.

Similar to the ground case, the fresh function's derived i/o equations were necessary for the original function's coverage of its i/o equations. And as in the ground case, they are also sufficient:

Again, renaming substitutions were used in the applications of the derived equations.

4.2 Functions of higher arity

For functions with more than one argument, we have several choices of the argument on which to do the recursion. In these cases, we currently try all argument positions systematically, in succession. (In particular, the recursive argument's sort and the function's result sort needn't be related in any way, as the example above demonstrates.) This is feasible since

  • our approach is quite simple, and hence fast to compute, and

  • we have a sharp and easy-to-compute criterion, viz. coverage of all i/o examples, to decide whether recursion on a given argument was successful. (Checking whether an i/o equation is covered by a definition requires executing the latter on the lhs arguments of the former. Our structural recursion approach ensures the termination of such computations, and establishes an upper bound on the number of rewrite steps: the auxiliary functions derived above need one such step, while their callers need a linear number of steps. An upper-bound expression for learned functions' time complexity remains to be defined and proven.)
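The coverage criterion can be made concrete by transcribing the system learned in Appendix A.1 into Python by hand (the direct functional encoding below is our own; nested tuples model 0-s numbers):

```python
ZERO = ("0",)
def s(t):
    return ("s", t)

# learned system from Appendix A.1:
#   0 + v = v ;  s(x) + y = f10(y, x + y) ;
#   f10(0, v) = s(v) ;  f10(s(a), s(b)) = s(s(b))
def plus(x, y):
    if x == ZERO:
        return y
    return f10(y, plus(x[1], y))

def f10(u, w):
    if u == ZERO:
        return s(w)
    return s(s(w[1]))    # matches f10(s(a), s(b)); other shapes never arise here

def covers(io):
    """A definition covers its i/o equations iff evaluating each
    left-hand side reproduces the corresponding right-hand side."""
    return all(plus(a, b) == r for (a, b), r in io.items())
```

Applied to the eight i/o equations of Appendix A.1, covers returns True.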

For the function with the signature given in 2, and argument position 2, we obtain the structural recursion scheme

Appendix A.1 shows a run of our Prolog prototype implementation that obtains a definition for .   In Sect. 5.2, we discuss possible extensions of the structural recursion scheme, like simultaneous recursion.

4.3 Constructors with more than one recursion argument

When computing a structural recursion scheme, we may encounter a sort with a constructor that takes more than one argument of that same sort. A common example is the sort tree of all binary trees (of natural numbers), as given in 1. The function size, with the signature given in 2, computes the size of such a tree, i.e. its total number of nodes. A recursion scheme for size and argument position 1 looks like:

In App. A.2, we show a prototype run to obtain a definition for .

4.4 General approach

In the previous sections, we have introduced our approach using particular examples.   In this section, we sketch a more abstract and algorithmic description.

Given a function and its signature, and given one of its argument positions, we can easily obtain a term rewriting system defining the function by structural recursion on that argument. Assume that the definition of the corresponding domain sort contains an alternative

whose constructor takes a number of non-recursive arguments and a number of recursive arguments, i.e. arguments of the sort being defined. With a new function symbol, we build an equation

In a somewhat simplified presentation, we build the equation

From the i/o equations for the original function, we can often construct i/o equations for the new one. (Our construction isn't successful in all cases; we give a counter-example in Sect. 5.3.) If we have an i/o equation that matches the above equation's left-hand side, and we have all i/o equations needed to evaluate the recursive calls on its right-hand side, we can build an i/o equation for the new function.

This way, we can reduce the problem of synthesizing a definition that reproduces the given i/o equations to the problem of synthesizing a definition for the new function from its i/o equations. As a base case for this process, we may synthesize non-recursive function definitions by anti-unification of the i/o equations.

It should be possible to prove that, under some appropriate conditions, the original function covers all its i/o equations iff the new function covers its own. We expect that a sufficient condition is that all recursive calls could be evaluated. At least, we could demonstrate this in the examples above.

4.5 Termination

In order to establish the termination of our approach, it is necessary to define a criterion by which the new function is easier to learn from its i/o equations than the original function is from its own. Term size or height cannot be used in a termination ordering; when proceeding from the original function to the new one, they may remain equal, or may even increase, as shown in Fig. 4 for the example above.

However, the number of i/o equations decreases in this example, and in all other ones we dealt with. A sufficient criterion for this is that the original function's i/o equations don't all have the same left-hand side top-most constructor. However, the same criterion would then have to be ensured for the new function in turn, and it is not obvious how to achieve this.

In any case, by construction of the new function's i/o examples from the original ones, no new terms can arise (except for the fresh left-hand side top function symbols). Even more, each term appearing in an i/o example for the new function originates from a right-hand side of an i/o example for the original one. Therefore, our approach can't continue generating new auxiliary functions forever without eventually repeating a set of i/o equations. Our prototype implementation doesn't check for such repetitions, however.

Figure 4: Left- and right-hand term sizes of i/o equations for and

5 Possible extensions

In this section, we briefly sketch some possible extensions of our approach.   Their investigation in detail still remains to be done.

5.1 Extension of anti-unification

In Sect. 3 we used syntactical anti-unification to obtain a function definition, as the base case of our approach. Several ways to extend this technique can be thought of.

Set anti-unification

One can try to split the set of i/o equations into disjoint subsets such that each of them yields an lgg satisfying the variable condition. This results in several defining equations. An additional constraint might be that each subset corresponds to a different constructor symbol, observed at some given fixed position in the left-hand side terms.

Anti-unification modulo equational theory

Another extension consists in considering an equational background theory E in anti-unification; this had not yet been investigated in 1994. See [Hei94b, Hei94a, Hei95] for the earliest publications, and [Bur05, Bur17] for the latest.

As of today, the main application of E-anti-unification has turned out to be the synthesis of non-recursive function definitions from input/output equations [Bur17, p.3]. To sketch an example, let E consist just of four of the equations introduced in Sect. 3.

Assume the signature

and the i/o equations of the squaring function. Applying syntactical anti-unification to the left-hand sides yields a variable and four corresponding substitutions. Applying constrained E-generalization [Bur05, p.5, Def.2] to the right-hand sides yields a term set that contains the desired squaring term as a minimal-size member, see Fig. 5.

Figure 5: Application of E-anti-unification to learn squaring

Depth-bounded anti-unification

In many cases, defining equations obtained by syntactical anti-unification appear to be too particular. For example, two given numbers may be generalized to a term of the form "successor of successor of something", although exceeding some number by a fixed amount wouldn't be most humans' first choice of a common property of both numbers. As a possible remedy, a maximal depth may be introduced for the anti-unification algorithm. Beyond this depth, terms are generalized to a variable even if all their root function symbols agree. For the appropriately modified algorithm, it should be easy to prove that its result can be instantiated to each of the input terms, and that it is the most special term with that property among all terms of depth up to the bound. If the bound is chosen as infinity, the depth-bounded and the ordinary lgg coincide.
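The described modification amounts to threading a depth counter through Plotkin's algorithm; a sketch (our own Python model, with terms as nested tuples and variables as plain strings):

```python
def is_var(t):
    return isinstance(t, str)        # variables are plain strings

def lgg_d(t1, t2, d, table):
    """Anti-unification with depth bound d: below the bound, proceed as
    Plotkin's lgg; once the bound is reached, generalize to a variable
    even if the root function symbols of both terms agree."""
    if (d > 0 and not is_var(t1) and not is_var(t2)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg_d(a, b, d - 1, table)
                                for a, b in zip(t1[1:], t2[1:]))
    return table.setdefault((t1, t2), "v%d" % len(table))
```

With bound 1, e.g., s(s(0)) and s(s(s(0))) are generalized to s(v0) rather than to s(s(v0)); with a sufficiently large bound, lgg_d coincides with the ordinary lgg.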

Figure 6: Learned tree size definition for anti-unification depth 2, 3, and 4

In our prototype implementation, we have meanwhile built in such a depth bound. Figure 6 compares the function definitions learned for size with depth bounds 2, 3, and 4 (top to bottom). For the smallest bound, a nonsensical equation is learned where the larger bounds yield a sensible one; not surprisingly, only one of the given i/o equations is then covered, and the attempt to learn defining equations for an auxiliary function fails.

For depth bound 4, the learned equations agree with those for unbounded depth, and hence also with those for all intermediate bounds. The corresponding prototype run is shown in App. A.2. Note that the prototype simplifies equations by removing irrelevant function arguments. For this reason, f12 there has only two arguments, while the corresponding function in Fig. 6 has three.

5.2 Extension of structural recursion

Some functions are best defined by simultaneous recursion on several arguments. As an example, consider the sort definition 1 whose three constructors denote the empty list, a zero digit, and a one digit, respectively. For technical reasons, such a digit list is interpreted in reversed order, i.e. with the least significant digit first. The sum function on such lists, with its signature shown in 2, may then be defined by the following rewrite system:

where

is a function to increment a binary digit list. This corresponds to the usual hardware implementation, with the increment function being used for the carry.
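The intended system can be sketched in Python (the constructor names nil, o, and i for the empty list and the two digits are our own assumption, the original symbols having been lost in transcription; lists carry the least significant digit first):

```python
NIL = ("nil",)

def inc(x):
    """Increment a reversed binary digit list by one."""
    if x == NIL:
        return ("i", NIL)
    if x[0] == "o":                  # ...0 + 1 = ...1
        return ("i", x[1])
    return ("o", inc(x[1]))          # ...1 + 1 = ...0, carry into the tail

def bsum(x, y):
    """Sum of two reversed binary digit lists, by simultaneous recursion."""
    if x == NIL:
        return y
    if y == NIL:
        return x
    if x[0] == "o":
        return (y[0], bsum(x[1], y[1]))
    if y[0] == "o":
        return ("i", bsum(x[1], y[1]))
    return ("o", inc(bsum(x[1], y[1])))   # 1 + 1: digit 0, carry via inc

def value(x):
    """Interpret a reversed digit list as a number (for checking only)."""
    return 0 if x == NIL else (x[0] == "i") + 2 * value(x[1])
```

E.g. bsum applied to the encodings of 3 and 1 yields the encoding of 4, with inc propagating the carry just as in the hardware adder.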

It is obvious that this definition cannot be obtained from our simple structural recursion scheme of Sect. 4, neither by recurring over argument position 1 nor over position 2. Instead, we would need recursion over both positions simultaneously, i.e. a scheme like

An extension of our approach could provide such a scheme in addition to the simple structural recursion scheme.

If we could prove that each function definition obtainable by the simple recursion scheme can also be obtained by a simultaneous recursion scheme, we would only need to employ the latter. This way, we would no longer need to guess an appropriate argument position to recur over; instead, we could always recur simultaneously over all arguments of a given sort. Unfortunately, simultaneous recursion is not stronger than simple structural recursion. For example, the function to concatenate two given lists can be obtained by simple recursion over the first argument (see Fig. 3), but not by simultaneous recursion: recurring over both list arguments at once doesn't lead to a sensible definition, for any choice of the auxiliary functions.

One possible remedy is to try simple structural recursion first, on any appropriate argument position, and simultaneous recursion next, on any appropriate set of argument positions.   Alternatively, user commands may be required about which recursion to try on which argument position(s).

Another possibility might be to employ a fully general structural recursion scheme, like

In this scheme, calls for simple recursion over each position are provided, as well as for simultaneous recursion over each position set. A new symbol intended to denote an undefined term could be added to the term language. When, e.g., i/o equations are missing to compute a recursive call for some particular instance, the corresponding argument would be set to the new symbol in the respective i/o equation. In syntactical anti-unification and in the coverage test, the symbol would need to be handled appropriately. This way, only one recursion scheme would be needed, and no choice of appropriate argument position(s) would be necessary. However, the arities of auxiliary functions might grow exponentially.

5.3 Limitations of our approach

In this section, we demonstrate an example where our approach fails. Consider again the squaring function, with its signature shown in 5.1, and consider again its four i/o equations.

Since syntactical anti-unification as in Sect. 3 (i.e. without considering an equational background theory) doesn't lead to a valid function definition, we build a structural recursion scheme as in Sect. 4:

We obtain the obvious base case, and the following i/o equations for the fresh function:

Observe that we are able to obtain i/o equations for the fresh function only on square numbers. For example, there is no obvious way to determine its value on a non-square argument.

Syntactically anti-unifying the fresh function's i/o equations still doesn't yield a valid function definition. So we set up a recursion scheme for it, in turn:

Again, the base case is obvious. Trying to obtain i/o equations for the next fresh function, we get stuck, since we don't know how its predecessor should behave on non-square numbers:

As an alternative, by applying the scheme's recursive equation sufficiently often rather than just once, we can obtain:

However, no approach is known to learn a function from an extended i/o equation like this, which determines an iterated application of the function rather than the function itself. In such cases, we resort to the excuse that the original function isn't definable by structural recursion.

A precise criterion for the class of functions that our approach can handle is still to be found. It is not even clear that such a criterion can be computable. If not, it should still be possible to give computable necessary and computable sufficient approximations.

Appendix A Example runs of our prototype implementation

A.1 Addition of 0-s numbers

?- SgI = [ + signature [nat,nat] --> nat],
|    SD  = [ nat sortdef 0 ! s(nat)],
|    ExI = [ 0       + 0             = 0,
|            s(0)    + 0             = s(0),
|            0       + s(0)          = s(0),
|            0       + s(s(0))       = s(s(0)),
|            s(0)    + s(0)          = s(s(0)),
|            s(0)    + s(s(0))       = s(s(s(0))),
|            s(s(0)) + s(0)          = s(s(s(0))),
|            s(s(0)) + 0             = s(s(0))],
|    run(+,SgI,SD,ExI).
+++++ Examples input check:
+++++ Example 1:
+++++ Example 2:
+++++ Example 3:
+++++ Example 4:
+++++ Example 5:
+++++ Example 6:
+++++ Example 7:
+++++ Example 8:
+++++ Examples input check done
induce(+)
. trying argument position:     1
. inducePos(+,1,0)
. . matching examples:  [0+0=0,0+s(0)=s(0),0+s(s(0))=s(s(0))]
. . anti-unifier:       0+v3 = v3
. inducePos(+,1,0)
. inducePos(+,1,s(nat))
. . matching examples:  [s(0)+0=s(0),s(0)+s(0)=s(s(0)),s(0)+s(s(0))=s(s(s(0))),s(s(0))+s(0)=s(s(s(0))),s(s(0))+0=s(s(0))]
. . new recursion scheme:       s(v9)+v8 = f10(v8,v9+v8)
. . derive new equation:        s(0) = s(0)+0 = f10(0,0)
. . derive new equation:        s(s(0)) = s(0)+s(0) = f10(s(0),s(0))
. . derive new equation:        s(s(s(0))) = s(0)+s(s(0)) = f10(s(s(0)),s(s(0)))
. . derive new equation:        s(s(s(0))) = s(s(0))+s(0) = f10(s(0),s(s(0)))
. . derive new equation:        s(s(0)) = s(s(0))+0 = f10(0,s(0))
. . induce(f10)
. . . trying argument position: 1
. . . inducePos(f10,1,0)
. . . . matching examples:      [f10(0,0)=s(0),f10(0,s(0))=s(s(0))]
. . . . anti-unifier:   f10(0,v13) = s(v13)
. . . inducePos(f10,1,0)
. . . inducePos(f10,1,s(nat))
. . . . matching examples:      [f10(s(0),s(0))=s(s(0)),f10(s(0),s(s(0)))=s(s(s(0))),f10(s(s(0)),s(s(0)))=s(s(s(0)))]
. . . . anti-unifier:   f10(s(v15),s(v16)) = s(s(v16))
. . . inducePos(f10,1,s(nat))
. . . all examples covered
. . induce(f10)
. inducePos(+,1,s(nat))
. all examples covered
induce(+)
+++++ Examples output check:
+++++ Examples output check done
FUNCTION SIGNATURES:
f10 signature [nat,nat]-->nat
(+)signature[nat,nat]-->nat

FUNCTION EXAMPLES:
0+0=0
s(0)+0=s(0)
0+s(0)=s(0)
0+s(s(0))=s(s(0))
s(0)+s(0)=s(s(0))
s(0)+s(s(0))=s(s(s(0)))
s(s(0))+s(0)=s(s(s(0)))
s(s(0))+0=s(s(0))

FUNCTION DEFINITIONS:
0+v17=v17
s(v18)+v19=f10(v19,v18+v19)
f10(0,v20)=s(v20)
f10(s(v21),s(v22))=s(s(v22))

?-

A.2 Size of a tree


?- SgI = [ size signature [tree] --> nat],
|    SD  = [ tree sortdef nl ! nd(tree,nat,tree),
|            nat  sortdef 0 ! s(nat)],
|    ExI = [ size(nl)                                = 0,
|            size(nd(nl,va,nl))                      = s(0),
|            size(nd(nd(nl,va,nl),vb,nl))    = s(s(0)),
|            size(nd(nl,va,nd(nl,vb,nl)))    = s(s(0)),
|            size(nd(nd(nl,va,nl),vb,nd(nl,vc,nl)))  = s(s(s(0))),
|            size(nd(nl,va,nd(nd(nl,vb,nl),vc,nl)))  = s(s(s(0))),
|            size(nd(nl,va,nd(nl,vb,nd(nl,vc,nl))))  = s(s(s(0))),
|            size(nd(nd(nl,va,nl),vb,nd(nd(nl,vc,nl),vd,nl)))        = s(s(s(s(0)))),
|            size(nd(nd(nd(nl,va,nl),vb,nl),vc,nd(nl,vd,nl)))        = s(s(s(s(0))))
|            ],
|    run(size,SgI,SD,ExI).
+++++ Examples input check:
+++++ Example 1:
+++++ Example 2:
+++++ Example 3:
+++++ Example 4:
+++++ Example 5:
+++++ Example 6:
+++++ Example 7:
+++++ Example 8:
+++++ Example 9:
Variable sorts:
[vd:nat,vc:nat,vb:nat,va:nat]
+++++ Examples input check done
induce(size)
. trying argument position:     1
. inducePos(size,1,nl)
. . matching examples:  [size(nl)=0]
. . anti-unifier:       size(nl) = 0
. inducePos(size,1,nl)
. inducePos(size,1,nd(tree,nat,tree))
. . matching examples:  [size(nd(nl,va,nl))=s(0),size(nd(nd(nl,va,nl),vb,nl))=s(s(0)),size(nd(nl,va,nd(nl,vb,nl)))=s(s(0)),size(nd(nd(nl,va,nl),vb,n...
. . new recursion scheme:       size(nd(v10,v9,v11)) = f12(v9,size(v10),size(v11))
. . derive new equation:        s(0) = size(nd(nl,va,nl)) = f12(va,0,0)
. . derive new equation:        s(s(0)) = size(nd(nd(nl,va,nl),vb,nl)) = f12(vb,s(0),0)
. . derive new equation:        s(s(0)) = size(nd(nl,va,nd(nl,vb,nl))) = f12(va,0,s(0))
. . derive new equation:        s(s(s(0))) = size(nd(nd(nl,va,nl),vb,nd(nl,vc,nl))) = f12(vb,s(0),s(0))
. . derive new equation:        s(s(s(0))) = size(nd(nl,va,nd(nd(nl,vb,nl),vc,nl))) = f12(va,0,s(s(0)))
. . derive new equation:        s(s(s(0))) = size(nd(nl,va,nd(nl,vb,nd(nl,vc,nl)))) = f12(va,0,s(s(0)))
. . derive new equation:        s(s(s(s(0)))) = size(nd(nd(nl,va,nl),vb,nd(nd(nl,vc,nl),vd,nl))) = f12(vb,s(0),s(s(0)))
. . derive new equation:        s(s(s(s(0)))) = size(nd(nd(nd(nl,va,nl),vb,nl),vc,nd(nl,vd,nl))) = f12(vc,s(s(0)),s(0))
. . induce(f12)
. . . trying argument position: 1
. . . inducePos(f12,1,0)
. . . . matching examples:      []
. . . . no examples
. . . inducePos(f12,1,0)
. . . inducePos(f12,1,s(nat))
. . . . matching examples:      []
. . . . no examples
. . . inducePos(f12,1,s(nat))
. . . uncovered examples:       [f12(va,0,0)=s(0),f12(va,0,s(0))=s(s(0)),f12(va,0,s(s(0)))=s(s(s(0))),f12(vb,s(0),0)=s(s(0)),f12(vb,s(0),s(0))=s(s(s...
. . . trying argument position: 2
. . . inducePos(f12,2,0)
. . . . matching examples:      [f12(va,0,0)=s(0),f12(va,0,s(0))=s(s(0)),f12(va,0,s(s(0)))=s(s(s(0)))]
. . . . anti-unifier:   f12(va,0,v37) = s(v37)
. . . inducePos(f12,2,0)
. . . inducePos(f12,2,s(nat))
. . . . matching examples:      [f12(vb,s(0),0)=s(s(0)),f12(vb,s(0),s(0))=s(s(s(0))),f12(vb,s(0),s(s(0)))=s(s(s(s(0)))),f12(vc,s(s(0)),s(0))=s(s(s(s...
. . . . new recursion scheme:   f12(v43,s(v45),v44) = f46(v43,v44,f12(v43,v45,v44))
. . . . derive new equation:    s(s(0)) = f12(vb,s(0),0) = f46(vb,0,s(0))
. . . . derive new equation:    s(s(s(0))) = f12(vb,s(0),s(0)) = f46(vb,s(0),s(s(0)))
. . . . derive new equation:    s(s(s(s(0)))) = f12(vb,s(0),s(s(0))) = f46(vb,s(s(0)),s(s(s(0))))
. . . . derive new equation:    s(s(s(s(0)))) = f12(vc,s(s(0)),s(0)) = f46(vc,s(0),s(s(s(0))))
. . . . induce(f46)
. . . . . trying argument position:     1
. . . . . inducePos(f46,1,0)
. . . . . . matching examples:  []
. . . . . . no examples
. . . . . inducePos(f46,1,0)
. . . . . inducePos(f46,1,s(nat))
. . . . . . matching examples:  []
. . . . . . no examples
. . . . . inducePos(f46,1,s(nat))
. . . . . uncovered examples:   [f46(vb,0,s(0))=s(s(0)),f46(vb,s(0),s(s(0)))=s(s(s(0))),f46(vb,s(s(0)),s(s(s(0))))=s(s(s(s(0)))),f46(vc,s(0),s(s(s(0...
. . . . . trying argument position:     2
. . . . . inducePos(f46,2,0)
. . . . . . matching examples:  [f46(vb,0,s(0))=s(s(0))]
. . . . . . anti-unifier:       f46(vb,0,s(0)) = s(s(0))
. . . . . inducePos(f46,2,0)
. . . . . inducePos(f46,2,s(nat))
. . . . . . matching examples:  [f46(vb,s(0),s(s(0)))=s(s(s(0))),f46(vb,s(s(0)),s(s(s(0))))=s(s(s(s(0)))),f46(vc,s(0),s(s(s(0))))=s(s(s(s(0))))]
. . . . . . anti-unifier:       f46(v63,s(v64),s(s(v65))) = s(s(s(v65)))
. . . . . inducePos(f46,2,s(nat))
. . . . . all examples covered
. . . . induce(f46)
. . . inducePos(f12,2,s(nat))
. . . all examples covered
. . induce(f12)
. inducePos(size,1,nd(tree,nat,tree))
. all examples covered
induce(size)
+++++ Examples output check:
+++++ Examples output check done
FUNCTION SIGNATURES:
f46 signature [nat,nat,nat]-->nat
f12 signature [nat,nat,nat]-->nat
size signature [tree]-->nat

FUNCTION EXAMPLES:
size(nl)=0
size(nd(nl,va,nl))=s(0)
size(nd(nd(nl,va,nl),vb,nl))=s(s(0))
size(nd(nl,va,nd(nl,vb,nl)))=s(s(0))
size(nd(nd(nl,va,nl),vb,nd(nl,vc,nl)))=s(s(s(0)))
size(nd(nl,va,nd(nd(nl,vb,nl),vc,nl)))=s(s(s(0)))
size(nd(nl,va,nd(nl,vb,nd(nl,vc,nl))))=s(s(s(0)))
size(nd(nd(nl,va,nl),vb,nd(nd(nl,vc,nl),vd,nl)))=s(s(s(s(0))))
size(nd(nd(nd(nl,va,nl),vb,nl),vc,nd(nl,vd,nl)))=s(s(s(s(0))))

FUNCTION DEFINITIONS:
size(nl)=0
size(nd(v66,v67,v68))=f12(size(v66),size(v68))
f12(0,v69)=s(v69)
f12(s(v70),v71)=f46(v71,f12(v70,v71))
f46(0,s(0))=s(s(0))
f46(s(v72),s(s(v73)))=s(s(s(v73)))

?-

A.3 Reversing a list

?- SgI = [rev signature [list] --> list],
|    SD  = [ list sortdef [] ! [nat|list],
|            nat  sortdef  0 ! s(nat)],
|    ExI = [ rev([])         = [],
|            rev([va])       = [va],
|            rev([vb,va])    = [va,vb],
|            rev([vc,vb,va]) = [va,vb,vc]],
|    run(rev,SgI,SD,ExI).
+++++ Examples input check:
+++++ Example 1:
+++++ Example 2:
+++++ Example 3:
+++++ Example 4:
Variable sorts:
[vc:nat,vb:nat,va:nat]
+++++ Examples input check done
induce(rev)
. trying argument position:     1
. inducePos(rev,1,[])
. . matching examples:  [rev([])=[]]
. . anti-unifier:       rev([]) = []
. inducePos(rev,1,[])
. inducePos(rev,1,[nat|list])
. . matching examples:  [rev([va])=[va],rev([vb,va])=[va,vb],rev([vc,vb,va])=[va,vb,vc]]
. . new recursion scheme:       rev([v7|v8]) = f9(v7,rev(v8))
. . derive new equation:        [va] = rev([va]) = f9(va,[])
. . derive new equation:        [va,vb] = rev([vb,va]) = f9(vb,[va])
. . derive new equation:        [va,vb,vc] = rev([vc,vb,va]) = f9(vc,[va,vb])
. . induce(f9)
. . . trying argument position: 1
. . . inducePos(f9,1,0)
. . . . matching examples:      []
. . . . no examples
. . . inducePos(f9,1,0)
. . . inducePos(f9,1,s(nat))
. . . . matching examples:      []
. . . . no examples
. . . inducePos(f9,1,s(nat))
. . . uncovered examples:       [f9(va,[])=[va],f9(vb,[va])=[va,vb],f9(vc,[va,vb])=[va,vb,vc]]
. . . trying argument position: 2
. . . inducePos(f9,2,[])
. . . . matching examples:      [f9(va,[])=[va]]
. . . . anti-unifier:   f9(va,[]) = [va]
. . . inducePos(f9,2,[])
. . . inducePos(f9,2,[nat|list])
. . . . matching examples:      [f9(vb,[va])=[va,vb],f9(vc,[va,vb])=[va,vb,vc]]
. . . . new recursion scheme:   f9(v22,[v23|v24]) = f25(v22,v23,f9(v22,v24))
. . . . derive new equation:    [va,vb] = f9(vb,[va]) = f25(vb,va,[vb])
. . . . derive new equation:    [va,vb,vc] = f9(vc,[va,vb]) = f25(vc,va,[vb,vc])
. . . . induce(f25)
. . . . . trying argument position:     1
. . . . . inducePos(f25,1,0)
. . . . . . matching examples:  []
. . . . . . no examples
. . . . . inducePos(f25,1,0)
. . . . . inducePos(f25,1,s(nat))
. . . . . . matching examples:  []
. . . . . . no examples
. . . . . inducePos(f25,1,s(nat))
. . . . . uncovered examples:   [f25(vb,va,[vb])=[va,vb],f25(vc,va,[vb,vc])=[va,vb,vc]]
. . . . . trying argument position:     2
. . . . . inducePos(f25,2,0)
. . . . . . matching examples:  []
. . . . . . no examples
. . . . . inducePos(f25,2,0)
. . . . . inducePos(f25,2,s(nat))
. . . . . . matching examples:  []
. . . . . . no examples
. . . . . inducePos(f25,2,s(nat))
. . . . . uncovered examples:   [f25(vb,va,[vb])=[va,vb],f25(vc,va,[vb,vc])=[va,vb,vc]]
. . . . . trying argument position:     3
. . . . . inducePos(f25,3,[])
. . . . . . matching examples:  []
. . . . . . no examples
. . . . . inducePos(f25,3,[])
. . . . . inducePos(f25,3,[nat|list])
. . . . . . matching examples:  [f25(vb,va,[vb])=[va,vb],f25(vc,va,[vb,vc])=[va,vb,vc]]
. . . . . . anti-unifier:       f25(v37,va,[vb|v38]) = [va,vb|v38]
. . . . . inducePos(f25,3,[nat|list])
. . . . . all examples covered
. . . . induce(f25)
. . . inducePos(f9,2,[nat|list])
. . . all examples covered
. . induce(f9)
. inducePos(rev,1,[nat|list])
. all examples covered
induce(rev)
+++++ Examples output check:
+++++ Examples output check done
FUNCTION SIGNATURES:
f25 signature [nat,nat,list]-->list
f9 signature [nat,list]-->list
rev signature [list]-->list

FUNCTION EXAMPLES:
rev([])=[]
rev([va])=[va]
rev([vb,va])=[va,vb]
rev([vc,vb,va])=[va,vb,vc]

FUNCTION DEFINITIONS:
rev([])=[]
rev([v39|v40])=f9(v39,rev(v40))
f9(v41,[])=[v41]
f9(v42,[v43|v44])=f25(v43,f9(v42,v44))
f25(v41,[v45|v46])=[v41,v45|v46]

?-
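The learned definitions for rev can likewise be replayed directly. The Python sketch below is a hypothetical transcription of the printed rewrite rules (lists become Python lists, the variables va, vb, vc plain strings); it checks them against the four i/o equations the definition was learned from.

```python
# Learned rewrite rules for rev/f9/f25, transcribed from the run above.

def f25(y, l):
    # f25(v41, [v45|v46]) = [v41, v45|v46]   (l must be non-empty)
    return [y, l[0], *l[1:]]

def f9(x, l):
    # f9(v41, []) = [v41];  f9(v42, [v43|v44]) = f25(v43, f9(v42, v44))
    return [x] if not l else f25(l[0], f9(x, l[1:]))

def rev(l):
    # rev([]) = [];  rev([v39|v40]) = f9(v39, rev(v40))
    return [] if not l else f9(l[0], rev(l[1:]))

# Replay the four i/o equations given as input:
assert rev([]) == []
assert rev(['va']) == ['va']
assert rev(['vb', 'va']) == ['va', 'vb']
assert rev(['vc', 'vb', 'va']) == ['va', 'vb', 'vc']
```

Here the auxiliary f9 happens to append its first argument at the end of its second, so this learned definition coincides with list reversal on all inputs, not just the four given equations.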
