On First-order Cons-free Term Rewriting and PTIME

11/09/2017 ∙ by Cynthia Kop, et al. ∙ Københavns Universitet

In this paper, we prove that (first-order) cons-free term rewriting with a call-by-value reduction strategy exactly characterises the class of PTIME-computable functions. We use this to give an alternative proof of the result by de Carvalho and Simonsen which states that cons-free term rewriting with linearity constraints characterises this class.





1. Introduction

In [4], Jones introduces the notion of cons-free programming: working with a small functional programming language, cons-free programs are defined to be read-only: recursive data cannot be created or altered, only read from the input. By imposing further restrictions on data order and recursion style, classes of cons-free programs turn out to characterise various deterministic classes in the time and space hierarchies of computational complexity.

Rather than using an artificial language, it would make sense to consider term rewriting. The authors of [3] explore a first definition of cons-free first-order term rewriting, and prove that this exactly characterises PTIME, provided a partial linearity restriction is imposed. This restriction is necessary since, without it, we can implement exponential algorithms in a cons-free system [5]. However, the restriction is not common, and the proof is intricate.

In this paper, we provide an alternative, simpler proof of this result. We do so by giving some simple syntactical transformations which allow a call-by-value reduction strategy to be imposed, and show that call-by-value cons-free first-order term rewriting characterises PTIME. This incidentally gives a new result with respect to call-by-value cons-free term rewriting, as well as a simplification of the linearity restriction in [3].

2. Cons-free Term Rewriting

We assume the basic notions of first-order term rewriting to be understood. In particular, we assume that the set R of rules is finite, and split the signature F into a set D of defined symbols and a set C of constructors. T(F, V) denotes the set of terms built from symbols in F and variables in V, and T(F) the set of ground terms over F. Elements of T(C) (ground constructor terms) are called data terms. The call-by-value reduction relation →_v is the restriction of the rewrite relation → where a term s may only be reduced at a position p if s|_p has the form f(d_1, …, d_n) with f ∈ D and all d_i data terms. The subterm relation is denoted ⊴, or ◁ for strict subterms.
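To make these notions concrete, the following sketch (purely illustrative; the nested-tuple representation and the small example signature with constructors 0, 1 and a binary cons, plus a defined symbol decide, are assumptions) implements data terms and the subterm relation in Python:

```python
# A term is a pair (symbol, args); nullary symbols have args = ().
CONSTRUCTORS = {"0", "1", "cons"}   # C in a small example signature
DEFINED = {"decide"}                # D

def is_data(t):
    # Data terms are exactly the ground constructor terms.
    sym, args = t
    return sym in CONSTRUCTORS and all(is_data(a) for a in args)

def subterms(t):
    # The (reflexive) subterm relation, as a generator.
    yield t
    for a in t[1]:
        yield from subterms(a)

bits = ("cons", (("1", ()), ("0", ())))   # a data term: a list of two bits
assert is_data(bits)
assert not is_data(("decide", (bits,)))
```

Representing terms as nested tuples keeps them hashable, which is convenient for the set-based constructions used later.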

Like Jones [4], we will limit interest to cons-free rules. To start, we must define what this means in the setting of term rewriting.

Definition 1 (Cons-free Rules).

([3]) A set R of rules is cons-free if for all ℓ → r ∈ R:

  • ℓ is linear (so no variable occurs more than once in ℓ);

  • ℓ has the form f(ℓ_1, …, ℓ_n) with f ∈ D and all ℓ_i constructor terms (including variables);

  • if r ⊵ s where s = c(s_1, …, s_n) with c ∈ C, then either s ∈ T(C) or ℓ ⊵ s.

So R is a left-linear constructor system whose rules introduce no new constructors (besides fixed data). Cons-free term rewriting enjoys many convenient properties. Most importantly, the set of data terms that a given term may be reduced to is limited by the data terms occurring in the start term and in the right-hand sides of rules, as described by the following definition.

Definition 2.

For a given ground term s, the set B_s contains all data terms t which occur as (a) a subterm of s, or (b) a subterm of the right-hand side of some rule in R.
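Definition 2 amounts to a single pass over the start term and the right-hand sides; a Python sketch (reusing the hypothetical nested-tuple term representation, with illustrative constructor names):

```python
CONSTRUCTORS = {"0", "1", "cons", "true", "false"}

def is_data(t):
    sym, args = t
    return sym in CONSTRUCTORS and all(is_data(a) for a in args)

def subterms(t):
    yield t
    for a in t[1]:
        yield from subterms(a)

def safe_set(start, rhss):
    # Definition 2: all data terms occurring in the start term or in some
    # right-hand side.  The result is closed under subterms automatically,
    # because every subterm of a data term is again a data term.
    B = set()
    for t in [start] + rhss:
        B |= {u for u in subterms(t) if is_data(u)}
    return B

start = ("decide", (("cons", (("1", ()), ("0", ()))),))
B = safe_set(start, [("true", ()), ("false", ())])
assert ("1", ()) in B and ("true", ()) in B
assert start not in B      # the start term itself is not a data term
```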

B_s is closed under subterms and, since R is fixed, has linear size in the size of s. We will see that cons-free reduction, when starting with a term of the right shape, preserves B-safety, which limits the constructors that may occur at any position in a term:

Definition 3 (B-safety).

Given a set B of data terms which is closed under subterms, and which contains all data terms occurring in a right-hand side of R:

  1. any term in B is B-safe;

  2. if f ∈ D has arity n and s_1, …, s_n are B-safe, then f(s_1, …, s_n) is B-safe.

For cons-free R, it is not hard to obtain the following property:

Lemma 4.

Let R be cons-free. For all s, t: if s is B-safe and s → t, then t is B-safe.

Thus, for a reduction f(d_1, …, d_n) →^* t or f(d_1, …, d_n) →_v^* t (where f ∈ D and all d_i are data terms), all terms occurring in the reduction are B-safe, taking B = B_s for the start term s = f(d_1, …, d_n). This insight allows us to limit interest to B-safe terms in most cases, and is instrumental in the following.

3. Call-by-value Cons-free Rewriting Characterises PTIME

For our first result – which will serve as a basis for our simplification of the proof in [3] – we will see that any decision problem in PTIME can be accepted by a cons-free TRS with call-by-value reduction, and vice versa. First, we define what accepting means for a TRS.

Definition 5.

A decision problem is a set S ⊆ {0, 1}^+.

A TRS with nullary constructors 0 and 1, a binary constructor : (denoted infix, associating to the right) and a unary defined symbol decide accepts S if for all x = b_1 … b_n ∈ {0, 1}^+: x ∈ S if and only if decide(b_1 : … : b_n) →^* 1. Similarly, such a TRS accepts S by call-by-value reduction if: x ∈ S if and only if decide(b_1 : … : b_n) →_v^* 1.
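Concretely, the input encoding can be sketched as follows (the function name and the rendering of the infix constructor as a cons symbol in the tuple representation are assumptions for illustration):

```python
def encode(bits):
    # Encode a non-empty bit string b1...bn as the data term
    # b1 : (b2 : (... : bn)), right-nested.
    assert bits and set(bits) <= {"0", "1"}
    if len(bits) == 1:
        return (bits, ())
    return ("cons", ((bits[0], ()), encode(bits[1:])))

# The TRS accepts x iff decide(encode(x)) reduces to the data term 1.
assert encode("10") == ("cons", (("1", ()), ("0", ())))
```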

It is not required that all evaluations end in 1, just that there is such an evaluation – and that there is not if x ∉ S. This is important as TRSs are not required to be deterministic. We say that a TRS decides S if it accepts S and is moreover deterministic. This also corresponds to the notion of acceptance for (non-deterministic) Turing Machines. We claim:

Lemma 6.

If a decision problem S is in PTIME, then there exists a cons-free TRS which decides S by call-by-value reduction.


It is not hard to adapt the method of [4] which, given a fixed deterministic Turing Machine operating in polynomial time, specifies a cons-free TRS simulating the machine. ∎

To see that cons-free call-by-value term rewriting characterises PTIME, it merely remains to be seen that every decision problem that is accepted by a call-by-value cons-free TRS can be solved by a deterministic Turing Machine – or, equivalently, an algorithm in pseudocode – running in polynomial time. We consider the following algorithm.

Algorithm 7.

For a given starting term s, let B = B_s. For all f ∈ D of arity n and for all d_1, …, d_n ∈ B, let Conf_0[f(d_1, …, d_n)] = ∅.

Now, for i ∈ ℕ, f of arity n in D and d_1, …, d_n, b ∈ B:

  • if b ∈ Conf_i[f(d_1, …, d_n)], then b ∈ Conf_{i+1}[f(d_1, …, d_n)];

  • if there is some rule ℓ → r matching f(d_1, …, d_n) – that is, a substitution γ such that ℓγ = f(d_1, …, d_n) – and b ∈ NF_i(rγ), then b ∈ Conf_{i+1}[f(d_1, …, d_n)];

  • if neither of the above hold, then b ∉ Conf_{i+1}[f(d_1, …, d_n)].

Here, NF_i(t) ⊆ B is defined recursively for B-safe terms t by:

  • if t is a data term, then NF_i(t) = {t};

  • if t = f(t_1, …, t_n) with f ∈ D, then let NF_i(t) = ⋃ {Conf_i[f(d_1, …, d_n)] | d_1 ∈ NF_i(t_1), …, d_n ∈ NF_i(t_n)}.

We stop the algorithm at the first index i where Conf_{i+1}[f(d_1, …, d_n)] = Conf_i[f(d_1, …, d_n)] for all f and all d_1, …, d_n ∈ B.

As B and D are both finite, and the total number of pairs (f(d_1, …, d_n), b) with b ∈ Conf_i[f(d_1, …, d_n)] increases in every step before the last, this process always ends. What is more, it ends (relatively) fast:
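Algorithm 7 is, in essence, a fixpoint computation over a finite table of entries Conf[f(d_1, …, d_n)]. The sketch below implements it in Python for an assumed toy cons-free TRS (testing whether a non-empty, :-joined bit list contains a 1); the nested-tuple term representation and all names are illustrative only:

```python
import itertools

CONSTRUCTORS = {"0", "1", "cons", "true", "false"}

def is_var(t):
    return isinstance(t, str)          # variables are plain strings

def is_data(t):
    if is_var(t):
        return False
    sym, args = t
    return sym in CONSTRUCTORS and all(is_data(a) for a in args)

def subterms(t):
    yield t
    if not is_var(t):
        for a in t[1]:
            yield from subterms(a)

def match(pat, term, sub):
    # Extend sub so that pat instantiated by sub equals term, or return None.
    if is_var(pat):
        if pat in sub:
            return sub if sub[pat] == term else None
        return {**sub, pat: term}
    if is_var(term) or pat[0] != term[0] or len(pat[1]) != len(term[1]):
        return None
    for p, t in zip(pat[1], term[1]):
        sub = match(p, t, sub)
        if sub is None:
            return None
    return sub

def subst(t, sub):
    if is_var(t):
        return sub[t]
    return (t[0], tuple(subst(a, sub) for a in t[1]))

def NF(t, conf):
    # NF_i from the algorithm: possible data results of t under Conf_i.
    if is_data(t):
        return {t}
    f, args = t
    choices = [NF(a, conf) for a in args]
    out = set()
    for ds in itertools.product(*choices):
        out |= conf.get((f, ds), set())
    return out

def algorithm7(start, rules):
    # B: data subterms of the start term and of all right-hand sides.
    B = {u for u in subterms(start) if is_data(u)}
    for _, rhs in rules:
        B |= {u for u in subterms(rhs) if is_data(u)}
    arity = {lhs[0]: len(lhs[1]) for lhs, _ in rules}
    conf = {(f, ds): set() for f, n in arity.items()
            for ds in itertools.product(B, repeat=n)}
    while True:                        # compute Conf_{i+1} from Conf_i
        new = {k: set(v) for k, v in conf.items()}
        for (f, ds) in conf:
            for lhs, rhs in rules:
                sub = match(lhs, (f, ds), {})
                if sub is not None:
                    new[(f, ds)] |= NF(subst(rhs, sub), conf)
        if new == conf:                # fixpoint reached: stop
            return conf
        conf = new

# Toy cons-free rules: does a bit list contain a 1?
bit = lambda b: (b, ())
cons = lambda h, t: ("cons", (h, t))
RULES = [
    (("member1", (bit("1"),)), bit("true")),
    (("member1", (bit("0"),)), bit("false")),
    (("member1", (cons(bit("1"), "y"),)), bit("true")),
    (("member1", (cons(bit("0"), "y"),)), ("member1", ("y",))),
]
lst = cons(bit("0"), cons(bit("0"), bit("1")))   # the list 0 : 0 : 1
conf = algorithm7(("member1", (lst,)), RULES)
assert bit("true") in conf[("member1", (lst,))]
```

The outer loop realises the three clauses of the algorithm (monotone copying, rule application via NF, and no other additions); termination follows because each iteration only adds entries drawn from the finite set B.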

Lemma 8.

Algorithm 7 operates in a number of steps polynomial in n, where n is the size of the input term; the degree of the polynomial depends on the greatest arity in D (assuming the size and contents of F and R constant).

Moreover, it provides a decision procedure, calculating all normal forms at once:

Lemma 9.

For f ∈ D of arity n and d_1, …, d_n, b ∈ B: f(d_1, …, d_n) →_v^* b if and only if b ∈ Conf_i[f(d_1, …, d_n)] for the final index i of the algorithm.

Combining these results, we obtain:

Corollary 10.

Cons-free call-by-value term rewriting characterises PTIME.

Comment: although new, this result is admittedly unsurprising, given its similarity to Jones’ work in [4]. Although Jones uses a deterministic language, Bonfante [1] shows (following an early result in [2]) that adding a non-deterministic choice operator to cons-free first-order programs makes no difference in expressivity.

4. “Constrained” Systems

Towards the main topic in this work, we consider the syntactic restriction imposed in [3].

Definition 11.

For any non-variable term s = f(s_1, …, s_n), let AV(s) consist of those s_i which are variables. We say a rule ℓ → r is semi-linear if each x ∈ AV(ℓ) occurs at most once in r. A set R of rules is constrained if there exists a set X ⊆ D such that for all ℓ → r ∈ R:

  • if the root symbol of ℓ is an element of X, then ℓ → r is semi-linear;

  • for all x ∈ AV(ℓ) occurring more than once in r and all terms t: if r ⊵ t ◁ x, then the root symbol of t is in X.

We easily obtain a counterpart of Lemma 6, so to obtain a characterisation result, it suffices to show that a “constrained” cons-free TRS cannot handle problems outside PTIME. This we show by translating any such system into a cons-free call-by-value TRS, in two steps:

  • First, the “constrained” definition is hard to fully grasp. We will consider a simple syntactic transformation to an equivalent system where all rules are semi-linear.

  • Second, we add rules to the system to let every ground term reduce to a data term. Having done this, we can safely impose a call-by-value evaluation strategy.

4.1. Semi-linearity

It is worth noting that, of the two restrictions, the key one is for rules to be semi-linear. While it is allowed for some rules not to be semi-linear, their variable duplication cannot occur in a recursive way. In practice, this means that the ability to have symbols and non-semi-linear rules is little more than syntactic sugar.

To demonstrate this, let us start with a few syntactic changes which transform a “constrained” cons-free TRS into a semi-linear one (that is, one where all rules are semi-linear).

Definition 12.

For all f ∈ D and all indexes i with 1 ≤ i ≤ arity(f), we let count(f, i) = max {cost(ρ, i) | ρ ∈ R}, where cost(ℓ → r, i) is:

  • 1 if ℓ does not have the form f(ℓ_1, …, ℓ_n) or ℓ_i is not a variable;

  • the number of occurrences of ℓ_i in r if ℓ = f(ℓ_1, …, ℓ_n) and ℓ_i is a variable.

Note that, by definition of semi-linearity, count(f, i) = 1 for all i if f ∈ X. Let F′ := C ∪ {f : count(f, 1) + … + count(f, arity(f)) | f ∈ D} be the new signature (where f : k indicates that f has arity k).

In order to transform terms over F to terms over F′, we define a mapping φ:

Definition 13.

For any term s in T(F, V), let φ(s) in T(F′, V) be inductively defined:

  • if s is a variable, then φ(s) = s;

  • if s = c(s_1, …, s_n) with c ∈ C, then φ(s) = c(φ(s_1), …, φ(s_n));

  • if s = f(s_1, …, s_n) with f ∈ D, then each φ(s_i) is copied count(f, i) times; that is: φ(s) = f(φ(s_1), …, φ(s_1), …, φ(s_n), …, φ(s_n)).

We easily obtain that φ(s) respects the arities in F′, provided that s ⊵ f(s_1, …, s_n) with f ∈ D implies that f has arity n – which is the case for B-safe terms and for the right-hand sides of rules in R. Moreover, B-safe terms over F are mapped to B-safe terms over F′ (note that φ is the identity on data terms).
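Definitions 12 and 13 translate directly into code; the following Python sketch (same illustrative tuple representation; the rule f(x) → g(x, x) is an assumed example of a non-semi-linear cons-free rule, and h an assumed extra defined symbol) computes count and applies φ:

```python
def is_var(t):
    return isinstance(t, str)

def occurrences(x, t):
    # Number of occurrences of variable x in term t.
    if is_var(t):
        return 1 if t == x else 0
    return sum(occurrences(x, a) for a in t[1])

def count(f, i, rules):
    # Definition 12-style: how often must argument i of f be copied?
    best = 0
    for lhs, rhs in rules:
        g, args = lhs
        if g != f or not is_var(args[i]):
            best = max(best, 1)                 # non-variable pattern: cost 1
        else:
            best = max(best, occurrences(args[i], rhs))
    return best

def phi(t, rules, defined):
    # Definition 13-style translation: copy argument i of a defined
    # symbol count(f, i) times; leave constructors and variables alone.
    if is_var(t):
        return t
    f, args = t
    new_args = []
    for i, a in enumerate(args):
        copies = count(f, i, rules) if f in defined else 1
        new_args.extend([phi(a, rules, defined)] * copies)
    return (f, tuple(new_args))

# A non-semi-linear (but cons-free) rule f(x) -> g(x, x):
RULES = [(("f", ("x",)), ("g", ("x", "x")))]
t = ("f", (("h", (("1", ()),)),))               # the term f(h(1))
assert phi(t, RULES, {"f", "g", "h"}) == \
       ("f", (("h", (("1", ()),)), ("h", (("1", ()),))))
```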

Definition 14.

We create a new set R′ containing, for all elements ℓ → r of R with ℓ = f(ℓ_1, …, ℓ_n), a rule ℓ′ → r′ where ℓ′ = f(ℓ_{1,1}, …, ℓ_{1,count(f,1)}, …, ℓ_{n,1}, …, ℓ_{n,count(f,n)}) and:

  • for all i such that ℓ_i is not a variable: ℓ_{i,1} = φ(ℓ_i), and all other ℓ_{i,j} are distinct fresh variables;

  • r′ is obtained from φ(r) by replacing the occurrences of each variable ℓ_i by distinct variables among ℓ_{i,1}, …, ℓ_{i,count(f,i)}.

Using the restrictions and the property that each count(f, i) = 1 if f ∈ X, we obtain:

Lemma 15.

The rules in R′ are well-defined, cons-free and semi-linear.

Moreover, these altered rules give roughly the same rewrite relation:

Theorem 16.

Let s, t be B-safe terms and d a data term. Then:

  • if s →_R t, then φ(s) →_{R′}^* φ(t) (an easy induction on the size of s);

  • if φ(s) →_{R′}^* d, then s →_R^* d (by induction on the length of the reduction);

  • s →_R^* d if and only if φ(s) →_{R′}^* d (by combining the first two statements, since φ(d) = d).

To avoid a need to alter the input, we may add further (semi-linear!) rules such as start(x : y) → decide(x : y, …, x : y), with count(decide, 1) copies of the argument; since x : y is not a variable, such a rule is indeed semi-linear. We obtain the corollary that constrained cons-free rewriting characterises PTIME iff semi-linear cons-free rewriting does.

4.2. Call-by-value Reduction

Now, to draw the connection with Corollary 10, we cannot simply impose a call-by-value strategy and expect to obtain the same normal forms; an immediate counterexample is the TRS with rules f(x) → 0 and a → a (with a a defined symbol): we have f(a) → 0, but this normal form is never reached using call-by-value rewriting.

Thus, we will use another simple syntactic adaptation:

Definition 17.

We let F″ := F′ ∪ {⊥} for a fresh nullary constructor ⊥, and let R″ := R′ ∪ {f(x_1, …, x_n) → ⊥ | f ∈ D of arity n in F′}. We also include ⊥ in B.

After this modification, every ground term reduces to a data term, which allows a call-by-value strategy to work optimally. Otherwise, the extra rules have little effect:

Lemma 18.

Let s be a B-safe term over F″ and b a data term with b ≠ ⊥. Then s →_{R′}^* b iff s →_{R″}^* b.

On this TRS, we may safely impose a call-by-value strategy.

Lemma 19.

Let s be a B-safe term and b a data term such that s →_{R″}^* b. Then s →_v^* b with respect to R″.


The core idea is to trace descendants: if s = C[u] →_{R″}^* t by reductions in C and u is not a data term, then because of semi-linearity, t has at most one copy of u: say t = C′[u]. Any subsequent reduction of u might as well be done immediately in s. ∎

Binding Lemmas 18 and 19 together, we obtain:

Corollary 20.

For every B-safe term s and data term b ≠ ⊥: s →_{R′}^* b iff s →_v^* b with respect to R″.

5. Conclusion

Putting the transformations and Algorithm 7 together, we thus obtain an alternative proof for the result in [3]. But we have done a bit more than that: we have also seen that both call-by-value and semi-linear cons-free term rewriting characterise PTIME. Moreover, through these transformations we have demonstrated that, at least in the first-order setting, there is little advantage to be gained by considering constrained or semi-linear rewriting over the (arguably simpler) approach of imposing an evaluation strategy.

Although we have used a call-by-value strategy here for simplicity, it would not be hard to adapt the results to use the more common (in rewriting) innermost strategy instead. An interesting future work would be to test whether the parallel with Jones’ work extends to higher orders, i.e. whether innermost second-order rewriting characterises EXPTIME – and whether instead using semi-linearity restrictions does add expressivity in this setting.


  • [1] G. Bonfante. Some programming languages for logspace and ptime. In M. Johnson, editor, AMAST ’06, volume 4019 of LNCS, pages 66–80, 2006.
  • [2] S.A. Cook. Characterizations of pushdown machines in terms of time-bounded computers. ACM, 18(1):4–18, 1971.
  • [3] D. de Carvalho and J. Simonsen. An implicit characterization of the polynomial-time decidable sets by cons-free rewriting. In G. Dowek, editor, RTA-TLCA ’14, volume 8560 of LNCS, pages 179–193, 2014.
  • [4] N. Jones. Life without cons. JFP, 11(1):5–94, 2001.
  • [5] C. Kop and J. Simonsen. Complexity hierarchies and higher-order cons-free rewriting. In D. Kesner and B. Pientka, editors, FSCD ’16, volume 52 of LIPIcs, pages 23:1–23:18, 2016.