Intensional Constructed Numbers: Towards Formalizing the Notion of Algorithm

09/25/2017
by Fritz Müller

This work is meant to be a step towards the formal definition of the notion of algorithm, in the sense of an equivalence class of programs working "in a similar way". But instead of defining equivalence transformations directly on programs, we look at the computation for each particular argument and give it a structure. This leads to the notion of constructed number: the result of the computation is a constructed number whose constructors (0, successor) carry a history condition (or trace) of their computation. There are equivalence relations on these conditions and on constructed numbers. Two programs are equivalent if they produce equivalent constructed numbers for each argument.


1. Introduction: is there a definition of “Algorithm”?

Computation still remains a fundamental mystery of the human mind. What fundamental things do we already know about Computation, roughly, from the standpoint of a “semantical” computer scientist?

We know how to program and how to compute; this is Turing's analysis [16], leading to the notion of computable function and to recursion theory. We know how to give mathematical meaning to programs; this is Scott's denotational semantics [15]. We know how to formally prove qualitative properties of programs; this is Hoare's program logic [11]. There is complexity theory, exploring two quantitative properties of programs [10]. Complexity theory has some unsolved fundamental problems for which we do not even know the theoretical means to attack them [3, 4]. There is a deep division between the qualitative and the quantitative theories of Computation. But there are already bridges between them, mainly within the topic of implicit complexity, where complexity classes of higher-type functions are explored, logical characterizations of complexity classes are given, or complexity classes are characterized by syntactic restrictions of the computation mechanism by various means. All this remains mainly on the syntactic level; there is still no use of denotational semantics or program logic in complexity theory.

Is something else missing, perhaps something more obvious and simpler, whose solution could lead to new theoretical means and insights?

Computer scientists always talk about algorithms.
So what is an Algorithm?
There is still no general formal definition of this notion.

  • Please note that an algorithm is not a program, and not a Turing machine. And a program does not become an algorithm when it is written in natural language.

  • Please note that an algorithm is not a computable function.

  • Intuitively, an algorithm is some equivalence class of “similar” programs for the same computable function. The problem is to define the corresponding equivalence relation on programs.

If you think that “insertion sort” and “bubble sort” are different algorithms, then you should be able to prove this. (I have chosen these two algorithms with the same time complexity, so that their difference cannot be justified by different complexity.)
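For concreteness, here are standard textbook versions of the two programs in Haskell (they are not taken from the paper); both compute the same sorting function, both in quadratic time, and yet one feels that they proceed in a different way:

-- Two standard programs for the same extensional function: sorting a list.
-- Intuitively they are different algorithms, although both are O(n^2) and
-- agree on every input.

insertionSort :: Ord a => [a] -> [a]
insertionSort = foldr insert []
  where
    insert x []     = [x]
    insert x (y:ys)
      | x <= y      = x : y : ys
      | otherwise   = y : insert x ys

bubbleSort :: Ord a => [a] -> [a]
bubbleSort xs = if swapped then bubbleSort ys else ys
  where
    (ys, swapped) = pass xs
    -- one bubbling pass; the flag records whether any swap happened
    pass (a:b:rest)
      | a > b     = let (r, _) = pass (a:rest) in (b : r, True)
      | otherwise = let (r, s) = pass (b:rest) in (a : r, s)
    pass short    = (short, False)

main :: IO ()
main = print (insertionSort [3,1,2] == bubbleSort [3,1,2])   -- True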

But generally, there is almost no awareness of the problem. There are only a few people, notably Yiannis Moschovakis [14, 13], Noson Yanofsky [18], and Bergstra/Middelburg [1], who have insisted on posing the problem and giving partial answers. Besides these works, the talk about algorithms has just produced some kind of "bureaucracy" where certain important algorithms (programs) are sorted out and named. Through this talk we all have subconscious intuitions, a kind of "feeling", for when two programs should be regarded as equivalent.

What should/could/might be expected from a formalization of the notion of algorithm and its theory?

  • First, the definition should be more scientific than bureaucratic.

  • There may be different notions of algorithm, for different aspects or different purposes, or even a whole spectrum of notions. The topic seems to have a great arbitrariness, at least in the beginning.

  • The notion of algorithm should be independent of the language in which the programs are written, i.e. the notion should be unique for the data domain at hand. But for the beginning it would be enough to give a definition for a particular programming language.

  • There may be counter-intuitive results: It might come out that "insertion sort" and "bubble sort" are the same algorithm. Or there might be two programs with different time complexity that are equivalent as algorithms. (Counter-intuitive results are no exception in science; you will easily find examples.)

  • And even worse: It might turn out that the notion of algorithm does not directly refer to the “normal” (Turing machine, imperative, functional) programs, but to programs of a different kind, think of reversible or quantum programs. (In fact, our tentative solution is about a new kind of programs that have much of the flavour of reversible ones.)

I do not worry much about whether the definition that may come out matches the intuitions for algorithms. Who cares about a definition that nobody seems to need? For me the importance of the question is as an indicator that something fundamental is missing, has been overlooked. So I think the question is just a good starting point to dig deeper; it is guidance for a new analysis of Computation per se.

This introduction is continued in the next section with the basic idea of our tentative solution to the algorithm question, the constructed numbers. Please note that the discovery of the constructed numbers (and their generalizations) is the main achievement of this paper. We are still far from a full solution of the formalization of "algorithm". This paper is a preliminary version. At the end of the next section there is an outline of the paper.

2. The basic idea of constructed numbers

We begin our analysis with another question, different from the algorithm question; it could be something like:

Do we fully know what is happening when we compute?

(Please note that here humans are supposed to compute, sorry for being so old-fashioned.)
I admit that this is a silly question, as it has been answered already in 1936. But let us try a brief fresh analysis of it:

In a Computation, there are data elements of some ontology, here numbers, typically described by Peano arithmetic. We think mainly of functional Computation. There are different qualities of Computation steps: there are constructions, there are destructions, and there are conditions/decisions/observations. Recursion and the execution of conditions account for the dynamics of the Computation process.

The reality that we see at some point (of time and of place) in the execution of a program is given by the values of the variables at this point; these values are taken from the data elements. This reality corresponds to the unconscious moves of the mechanical computer. I call it the objective reality. It is the natural, primary reality. Both systems of making sense of programs, Scott's denotational semantics and Hoare's program logic, take this reality as their basis.

But in the world there are always different realities. First, the values of the program are often interpreted in some material reality external to the program. But even when we work just inside a formal or programming language, we may impose any interpretation we like, as long as it is in accordance with the structure of the language, in some sense. Computer scientists know this already: they may interpret their programs in different ways, with values of some other kind, e.g. with more abstract values in abstract interpretation [2].

So let us be insolent and criticize the objective reality of Computation, and ask if there is a fuller reality that encompasses the primary one; and the critique goes like this:

The value of a variable at a program point should not just be a plain number, but a constructed number, i.e. a number that carries the history of its construction or deduction. The value of the variable should become dynamic, as Computation is.

Let us develop this idea in a first approximation. For example, there may be a number that has been constructed because a certain condition on a program variable was checked, and let us symbolize the number together with that condition. And there may be a similar number, constructed after the check of another condition on a different program variable.

Now the forgetful computer, already unconscious of its former construction of these numbers by conditions, may compute their difference. But is this "correct"? It is correct in the extensional sense, the extension of a constructed number being the corresponding plain number; the two numbers have the same extension, so the difference of the extensions is 0.

It is not correct in the intensional sense, the intension of a constructed number being … the constructed number expression itself, modulo some equivalences. Intensionally the difference is not 0, because the two numbers are (intensionally) different, and the difference of two different numbers cannot be 0.

So the (human!) computer has to go back to school again (sorry for that) and learn how to compute intensionally. What should she do with the difference expression? Simply let it stand as it is for later use; it cannot be reduced intensionally by itself. It is an expression that has to be carried through the Computation in suspended, unreduced form. Later one of the constructed numbers may come up again and we may have to add it to the suspended difference. Then the subtraction and the addition cancel and we recover the other constructed number, and this is intensionally correct. The general arithmetic laws stay valid for intensions.

So the subtraction of identical numbers, i.e. numbers with the same history, annihilates, while the subtraction of non-identical numbers, with different histories, does not. This is one of the main points of our definition of algorithm: annihilation in all cases would lead to the final result, so to the computed function of the program. Annihilation only in cases of identity leads to a finer distinction of programs, to algorithms.
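As a crude illustration of this one point only (this is not the system CN; representing a history as a plain string is just an assumption of the sketch), here is a Haskell toy model of a number tagged with its construction history, with a subtraction that annihilates only when the two histories are identical and otherwise stays suspended:

-- A deliberately crude first approximation (not CN itself): a value tagged
-- with the history of its construction.  Subtraction annihilates only when
-- the histories are identical; otherwise the difference stays suspended.

newtype Hist = Hist String deriving (Eq, Show)   -- placeholder for a real trace

data Val
  = Constructed Hist Integer    -- a number together with its history
  | Suspended Val Val           -- an unreduced difference, carried along
  deriving Show

minus :: Val -> Val -> Val
minus a@(Constructed h m) b@(Constructed h' n)
  | h == h'   = Constructed h (m - n)   -- same history forces m == n, so this is 0
  | otherwise = Suspended a b           -- extensionally m - n, intensionally kept
minus a b     = Suspended a b

main :: IO ()
main = do
  let x = Constructed (Hist "0 constructed after a check on x") 0
      y = Constructed (Hist "0 constructed after a check on y") 0
  print (x `minus` x)   -- annihilates
  print (x `minus` y)   -- stays suspended, to be used only later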

In our first approximation we have seen constructs consisting of a condition attached to a number, where the condition is a propositional formula. We keep this variant of our new constructions in mind for the next section. Our constructed numbers will in fact be another variant, where the condition is a history (or trace or certificate) of Computation. We want to get rid of the program variables in the conditions. And we want to write our numbers in constructor form, with the constructors 0 and suc (and another constructor), and associate the conditions to the constructors. The history-conditions are, roughly, formal products of atomic conditions. The value 0 of a variable will itself be conditioned with its history. So this value is written as a condition, the history of this occurrence of 0, placed before the constructor 0. The result of the Computation, the conditioned 1, will now be written with two copies of that condition, placed before the constructors suc and 0. The program variable no longer appears in this expression.

So the conditions of our constructed numbers are not propositional formulas, but histories (or traces or certificates) of Computation. They are described not by logic, but by a new kind of algebra.

This is in a very small nutshell the idea of the arithmetic CN of constructed numbers, which will be given in Sections 4 and 5 with more features than described here. The expressions of CN arose as traces of Computation. The main insight was to see that these are numbers in their own right.

What have constructed numbers to do with algorithms?
The system CN of constructed numbers is the main achievement of this paper and it was motivated by the algorithm question. The common approach to the algorithm question is to define equivalence transformations directly on programs, e.g. [18]. We take another approach: we look at the Computation for each particular argument and give it a structure. The result of this Computation is a constructed number, and the conditions of its constructors carry the history of their computations. There are equivalence relations on conditions and on constructed numbers. Two programs are equivalent if they produce equivalent constructed numbers for each argument, see Definition 6. But there is a difficulty: with the introduction of constructed numbers the programming has changed its character in CN; it is programming with numbers and with conditions. It looks more like reversible computing in that it also keeps traces. There should be a notion of correct correspondence between CN and normal programs, but we must leave this question open for the moment.

What has been achieved, in general terms?
We have made a whole Computation an object (in the form of the conditioned final result), and given it some specific structure. The most primitive form of doing this is to take just the sequence of computation or reduction steps and to count these steps, to get just a number. We can read out more information from our solution, e.g. we can see which input observations were needed for the generation of a particular constructor of the result, and much more. I only know of two other approaches to make a Computation a structured object: the study of the equivalence of reductions of term rewriting systems [17], and Jean-Yves Girard’s research programme of Geometry of Interaction [6][5][9], which is primarily on the normalization of logic proofs. There is a certain arbitrariness and freedom to choose what we want to see in the structure of a Computation.

What will be the use of our system CN of constructed numbers, besides being a step towards formalizing “algorithm”? Can we program better? First, to keep things simple, our tentative solution is only about numbers in tally form. This is not enough to do normal complexity theory. Lists have to be coded. But our principles can surely be extended to a system with list or tree data types. Even then, programming in CN is a different kind of programming and more complicated, so that it is not meant to compete with conventional programming on its own ground. But the applicability might change with the advent of reversible and quantum computing. CN has the explicit keeping of computation traces in common with reversible programs, but a CN program need not be reversible. Another application could be in analyses of programs connected to complexity.

Outline of the paper:

  1. Subjective objects: ideas and philosophy:
    Here we take up the idea of our "first approximation" (with propositional formulas as conditions) of Section 2. Our "subjective objects" are general constructs of a condition attached to an object, where the condition is a history (as for our constructed numbers) or a propositional formula. We explain the meaning of them and speculate about a "decomposition" of set theory. This section is a speculative digression from the algorithm topic and is more on logic. It can be skipped by those interested only in algorithms.

  2. System CN (constructed numbers): condition algebra and explanation with examples:
    We explain the programming in CN with the examples of addition and subtraction. In Subsection 4.1 “Finite size limitation of Computation, the condition algebra and the bracket mechanism” we also explain a copy mechanism that was inspired by Jean-Yves Girard’s locativity [7, 8].

  3. System CN (constructed numbers): the arithmetic:
    Here the system CN is completed by giving the rules on numbers.

  4. Basic notions of the theory of system CN:
    We give the notions of CN-algorithm and direct computation.

  5. Outlook

3. Subjective objects: ideas and philosophy

In the last section we saw constructed numbers formed from a condition A and a number a (read "A condition a"), where A is a fact about the (history of the) Computation, and a is a natural or constructed number. The meaning is that A is a condition for the existence of the constructed individual a. Already before giving the formal system CN, let us speculate here about the deeper meaning and possible generalizations of these constructs. Please note that we have only realized the system CN in this paper. This section can be skipped by those interested only in algorithms. It is more on logic.

These constructs come in two guises, with history conditions or with propositional conditions. I like to call all these objects generally "subjective objects".

Why “subjective objects”?
The polarity between subject and object is primordial for the human mind. We always think about objects, and we do this always by subjective means. There is an interplay between the two realms; nothing can be seen in one-sided isolation. So pure objects are only an idealization. In mathematics, the logical formulas and the processes of proof, deduction and computation at first sight belong to the realm of the subjective. (It should be clear that the adjectives objective resp. subjective have just the meaning "belonging to the realm of the object resp. the subject", not the popular meaning "being generally valid or formal" resp. "being valid only for one person or informal".)

Hence the name "subjective object": we form a new object from an object by adding a (subjective) logical formula resp. a (subjective) history of a computation.

So there is no fixed border between the objective and the subjective, the border can be shifted. Subjective processes can be construed as objects, they can be reified (verdinglicht), as is done e.g. in proof theory. Think also of Gödel’s coding of propositions as numbers.

At last we come to the decisive question:
What does the subjective object  “mean”?
We must distinguish the two cases:

(1) The condition is a history of a computation:
The subjective object does not stand for a set-theoretic object, just as a program does not stand for a set-theoretic object.
For numbers the meaning will be given by the rules of system CN, which describe how we can compute with constructed numbers.

(2) The condition is a propositional formula (for the rest of this section):
Also here the subjective object does not stand for a set-theoretic object. At best a set-theoretic interpretation of the system can be given, esp. when we are in a simple system where the object is a number. This interpretation assigns, roughly, to each variable environment (a function from the variables to normal numbers) a set of normal numbers.

The general informal meaning of these objects is hard to describe. The problem is that we build a new kind of object that falls outside the basic universe of objects on which everything is grounded (normal numbers, sets). So it does not suffice to use the language of these basic objects. We must circumscribe the meaning in a kind of contradictory way; contradictory not in the formal logical sense, but in the sense of "contradictory in the notion", just as when we say "light is a wave and a particle". I have to offer the following two "meanings":
(a) The object which comes into existence when the condition is fulfilled, or
(b) The concept of (an object under the fulfillment of the condition), but this concept construed as an object.
So the subjective object is in some sense both an object and a concept. Let us call these objects of variant (2) "concept objects". I admit that this is very strange, but I have some (preliminary) rules that should express precisely their meaning; the most important ones are:




from this follows:
from this follows:
If we are in a system with numbers, there are also rules to compute, e.g. :

Of course, subjective objects are not new. But the old forms always had a plain meaning in the basic universe of objects; they did not transcend it. There are the usual description operators. Please note that our construction is not of this kind. Our object is not the object that fulfills the condition; that latter object does not exist when the condition is not fulfilled. Our subjective object always exists, regardless of whether its condition is fulfilled or not. E.g. a number under an unfulfilled condition is still a very honourable number; it is not the plain number itself, but it exists as a concept object.

The expressions of set comprehension in set theory are also examples of subjective objects: such an expression should denote the set of all objects for which the condition is fulfilled, if this set exists. Georg Cantor's informal definition of set was this:

A set is a multitude of things that can be thought as a unity.

Guided by this definition we can decompose the comprehension expression into three steps:

  1. We start with our subjective object. This is just the object that denotes the concept of an object fulfilling the condition.

  2. We make of this object the multitude of all such objects, by an expression with a "data choice operator" binding the variable. (This is not yet a unity.)

  3. We have convinced ourselves that this multitude can be thought as a unity. This does not mean that it is already a unity. We must make it a unity, if we want to.

Based on these constructions, there (hopefully) will be a general theory expressing concept objects, multitudes, classes and sets. The property of being a set is definable in this system. What about Russell's paradox? We can form the corresponding class, which is not a set.

4. System CN (constructed numbers): condition algebra and explanation with examples

We have explained the basic idea of constructed numbers in Section 2. Here we try a gentle introduction to CN, guided by two example programs of addition and subtraction, like an introduction to a new programming language. It should be clear that we explain new features and rules when they are needed on the way; there will always be points left open. The algebra of conditions is given here in full detail in Subsection 4.1; the rest of CN appears in full detail in Section 5.

First, the normal numbers are built by the constructors 0 and suc, and the programs are non-deterministic first-order recursive reduction rules for each defined function. Here is a normal program for addition, a first-order function on numbers, written with number variables:
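In Haskell notation, a sketch of the standard rules for Peano addition, which is presumably the form of such a normal program over the constructors 0 and suc:

-- Normal (unconditioned) numbers and the usual recursive addition program.

data Nat = Zero | Suc Nat deriving Show

add :: Nat -> Nat -> Nat
add Zero    y = y               -- add(0, y)       ->  y
add (Suc x) y = Suc (add x y)   -- add(suc(x), y)  ->  suc(add(x, y))

main :: IO ()
main = print (add (Suc Zero) (Suc (Suc Zero)))   -- Suc (Suc (Suc Zero))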

This was too simple; let us adapt this definition to constructed numbers. There are condition expressions and number expressions. Conditions are built up from atomic conditions by an algebra with a formal product, which is associative and commutative, and other operators. Imagine the atomic conditions as free objects of the algebra; we will not see them in this section. The basic numbers are built up from the constructors 0 and suc, which now carry condition arguments.

We have a third constructor "ann", which takes two condition arguments and a number argument. In the programs, an ann is created from the (suspended) mutual annihilation of one condition, taken positively, and another condition, taken negatively. Such a creation takes place e.g. in a subtraction, which we will see below. ann behaves extensionally as the identity function. It can be reduced away when its two conditions are "inverses" of each other, in a certain sense that is not the sense of groups. In the other cases it has to be carried through the Computation, but it can react and be observed and processed in a reduction. This means that the programs must have reduction rules also for the case of ann, if they are not under-specified. (But functions are allowed to be under-specified.)
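The following Haskell sketch fixes one possible reading of this syntax; the constructor and operator names are mine, not prescribed by the paper. Conditions form a free algebra with a product, a neutral element, an inverse and a bracket; numbers are built from a conditioned 0, a conditioned suc, and ann with two conditions; the extension map forgets all conditions, so that ann indeed acts as the identity:

-- A toy rendering of the syntax of constructed numbers (names are assumptions).

data Cond
  = Atom String          -- an atomic condition
  | Prod Cond Cond       -- the formal product (associative, commutative)
  | One                  -- neutral element of the product
  | Inv Cond             -- a kind of inverse (not in the group sense)
  | Bracket Cond         -- the bracket operator of Subsection 4.1
  deriving (Eq, Show)

data CNum
  = Zero Cond            -- a conditioned 0
  | Suc  Cond CNum       -- a conditioned successor
  | Ann  Cond Cond CNum  -- suspended mutual annihilation of two conditions
  deriving (Eq, Show)

-- The extension of a constructed number is the corresponding plain number.
extension :: CNum -> Integer
extension (Zero _)    = 0
extension (Suc _ n)   = 1 + extension n
extension (Ann _ _ n) = extension n   -- ann is extensionally the identity

main :: IO ()
main = print (extension (Suc (Atom "h1") (Ann (Atom "p") (Atom "q") (Zero (Atom "h0")))))  -- 1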

There are six relations on conditions and numbers:

  • The equality on conditions and the "smooth equality" on numbers, given by basic equations (congruent equivalences). These are independent of the program.

  • The reduction relation on numbers caused by the program rules (a congruence).

  • The "equality reduction" on numbers, which encompasses the smooth equality and the reduction relation (reflexive, transitive and a congruence).

  • The "direct equality reduction" on numbers, a restriction of the equality reduction that accounts for "direct" Computation.

  • In case of consistency (of reversing the reduction rules, see below) there is the equality on numbers, defined by equality reduction in both directions.

Examples of the basic equations are the exchange laws on numbers, for any pairwise combination of suc- and ann-constructors.
The conditions give the individual constructors an identity, and our aim is, roughly, to give each of them a unique identity by a unique condition. With the exchange laws the sequence of the constructors of a number can be permuted into any order. So we can push the "right" constructor to the top in order to apply a reduction rule for a function. Programming in CN means programming with numbers and with conditions.
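A sketch of one such exchange step on the toy syntax above, under the assumption that an exchange law simply permutes two adjacent constructors together with their conditions:

-- Swap the two topmost constructors of a number term (suc/ann in any
-- pairwise combination); repeated at the right places, such steps permute
-- the constructor sequence into any order.

data Cond = Atom String deriving (Eq, Show)   -- conditions kept abstract here

data CNum
  = Zero Cond
  | Suc  Cond CNum
  | Ann  Cond Cond CNum
  deriving (Eq, Show)

exchange :: CNum -> Maybe CNum
exchange (Suc a (Suc b n))     = Just (Suc b (Suc a n))
exchange (Suc a (Ann b c n))   = Just (Ann b c (Suc a n))
exchange (Ann a b (Suc c n))   = Just (Suc c (Ann a b n))
exchange (Ann a b (Ann c d n)) = Just (Ann c d (Ann a b n))
exchange _                     = Nothing      -- nothing left to exchange on top

main :: IO ()
main = print (exchange (Suc (Atom "a") (Ann (Atom "b") (Atom "c") (Zero (Atom "z")))))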

Here is a possible addition program for constructed numbers, derived from the normal program above (written with condition variables):

(1)
(2)
(3)
(4)
(5)

Here the constructor ann is treated like suc: it walks up out of the sum unchanged. There is the same case analysis on the first argument of the sum as in the normal program. But for the 0 case there is a second recursion, over the second argument, to bring the conditioned 0 down; in the end the two coalesce. If we intensionally change a rule, e.g. the last rule, then we still have an addition program in the extensional sense, but with different (intensional) properties. In this example the commutativity of addition would get lost.

Here is a new operator of the condition algebra: the bracket operator. It is always used to enclose the composed condition of a constructor on the right side of a rule, so that it has limited capabilities to react with the outside. Why is that?

4.1. Finite size limitation of Computation, the condition algebra and the bracket mechanism


Since 1936 we know that Computation has a finite size limitation. In his analysis Alan Turing explained that a computer has only finitely many states of mind, so that her consciousness of the Computation is limited. Accordingly, the Turing machine has finitely many states, and the program has finite size. The state changes when data are observed: a state together with an observed atomic datum yields a new state. There are states that are fully "conscious" of a finite sequence of preceding data that have been observed, and states that lack such knowledge, because they have already "digested" the data. In a Turing machine, the different quality of these states does not appear in the syntax; it is unstructured.

In a program, esp. a functional one, the different quality is distinguished in the syntax: the "conscious" states appear after observations in an if-then-else construct. In our programs they appear after a whole left rule side has been matched, and the gathered knowledge comprises all the constructors that have been matched. Pieces of this knowledge are distributed onto the constructors of the right side by our condition mechanism. We give these pieces of information/condition, which were fully known at that moment of creation, a special status by enclosing them in brackets, so that the laws on conditions, like commutativity and associativity, are only applicable inside the brackets and cannot "cross the border".

So we have a trace of the recursive structure of the Computation in the conditions (of the final result). What would happen if we had no bracket operator? Then all the atomic conditions in the condition of an output constructor would be mixed together in a big pot by associativity and commutativity, and could annihilate and merge, so that in the end the condition would just say which input constructors were used for the output constructor. The recursive structure of the Computation would get lost and we would merely have quantitative information.
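The following sketch illustrates the effect of the border, under the assumption (mine, for illustration) that a bracket is treated as a single opaque factor: flattening and sorting the factors models associativity and commutativity of the product, and this reordering stops at a bracket:

-- Factors of a condition may be reordered freely at one bracket level, but
-- they cannot migrate across a bracket border.

import Data.List (sort)

data Cond
  = Atom String
  | Prod Cond Cond
  | Bracket Cond
  deriving (Eq, Ord, Show)

-- Collect the factors at the current bracket level, in a canonical order;
-- a bracket is kept as one factor and only normalized inside.
factors :: Cond -> [Cond]
factors (Prod a b)  = sort (factors a ++ factors b)
factors (Bracket c) = [Bracket (canon c)]
factors c           = [c]

canon :: Cond -> Cond
canon = foldr1 Prod . factors

main :: IO ()
main = do
  print (canon (Prod (Atom "q") (Atom "p")) == canon (Prod (Atom "p") (Atom "q")))  -- True
  print (canon (Bracket (Prod (Atom "p") (Atom "q")))
           == canon (Prod (Atom "p") (Bracket (Atom "q"))))                         -- False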

But we do not want the borders of the brackets to be strict; we want to be allowed to shift them. The reason is that we want to identify some programs (for the same function) as the same algorithm, but of course not all of them. So we introduce the rule

(I have also tried other rules, but dismissed them.)
But this alone is not enough. The condition inside a bracket can acquire unlimitedly many factors, and we have seen above that in a Computation this size is always limited. Hence our system CN comes with a size parameter that bounds the conditions. The parameter is set once for each proof that is performed in CN. (But in many cases it need not be set to a fixed number, as a generic bound is sufficient.) The bracket-shifting rule is not applicable when the bound would be exceeded. What would happen if we had no size limitation? Things like those described above: programs would become equivalent whose equivalence cannot be justified by local transformations.

We now give the complete algebra of conditions. Conditions are built up from atomic conditions (taken from a countably infinite set) and variables by some algebraic operations. The atomic conditions obey no other laws than those given here.

We already said that we want to distinguish each individual constructor in a number. This is a variant of the idea of "locativity" of Jean-Yves Girard [7, 8]. For this, system CN has a built-in copy mechanism which makes out of a number term two copies, and out of a condition term two copies, marked by copy exponents. We explain how a term has to be formed. [copy exponent] We define the notion of position in a term (a condition or number term) in the usual way, as a word over a small alphabet; a position selects a subterm of the term. We define the (copy) exponent of a position in a term as the sequence of exponents that we see on the way from the position walking up to the root of the term. [unique copy exponents] A condition or number term has unique (copy) exponents if the following is fulfilled: for every atomic condition, condition variable or number variable, and every two distinct positions of occurrences of it in the term, the exponents of the two positions are not comparable, i.e. neither is a prefix of the other. (Here, for words, one is below another iff the other arises from it by appending some word.) This means that two different occurrences are distinguished by their incomparable copy exponents. If a term has unique exponents, then so does every subterm of it.
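A sketch of this uniqueness check, under two assumptions of my own: copies are marked by an explicit exponent node labelled 1 or 2, and the exponent of a position is read off from the exponent labels on the path between the root and the position:

-- Check that two occurrences of the same atom/variable are always separated
-- by prefix-incomparable copy exponents.

import Data.List (isPrefixOf, tails)

data Term
  = Leaf String          -- an atomic condition or a variable
  | Node [Term]          -- any other operator, with its arguments in order
  | Exp Int Term         -- a copy exponent (assumed to be 1 or 2) on a subterm
  deriving Show

-- All (name, exponent word) pairs of the leaves, exponents read root-first.
leafExponents :: Term -> [(String, [Int])]
leafExponents = go []
  where
    go w (Leaf x)  = [(x, w)]
    go w (Node ts) = concatMap (go w) ts
    go w (Exp i t) = go (w ++ [i]) t

comparable :: [Int] -> [Int] -> Bool
comparable u v = u `isPrefixOf` v || v `isPrefixOf` u

uniqueExponents :: Term -> Bool
uniqueExponents t =
  and [ not (comparable u v)
      | ((x, u) : rest) <- tails (leafExponents t)
      , (y, v)          <- rest
      , x == y ]

main :: IO ()
main = do
  print (uniqueExponents (Node [Exp 1 (Leaf "a"), Exp 2 (Leaf "a")]))  -- True
  print (uniqueExponents (Node [Leaf "a", Exp 2 (Leaf "a")]))          -- False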

The conditions are built from:
condition variables, atomic conditions,
a product, a neutral element, a kind of inverse (but not in the group sense!),
the bracket operator, and the copy exponents.

There is a size function, defined on conditions in their purely syntactic form.

Please note that the "condition placeholder" and the condition variables of the language have a different character. The placeholder is used in the laws of the algebra, whereas the variables are used in the reduction rules for the functions. The placeholder can be replaced by any condition, whereas a variable can be replaced only by conditions of limited size, so that the size restriction stays valid even after replacement.

Every condition term must be limited, i.e. the size of each of its subterms must respect the bound.
Every condition term must have unique copy exponents. For an equation to be valid, both sides must have them. If we make a replacement in a (condition) term according to a valid equation, then the term keeps unique exponents. (Please note that because of these restrictions the product is a partial operation.)

The equality on conditions:

The following equations will later have a special status because of their asymmetric character:

There are rules that close the equality to be a congruent equivalence.
Perhaps we should also add some further equations.

For technical reasons, there are the following rules for numbers:
the condition of a constructor must always respect the size bound.

[due to Reinhold Heckmann]
In the condition algebra we have:
(1)
(2)

Proof.

(1) also is a neutral element: .
.
.
analogous.
(2) ∎

(1) If , then the condition contains a variable or an atomic condition.
(2) iff .

Proof.

(1) by induction on the term .
(2) Let . Then it must contain a variable or an atomic condition; let us name this. Take the position of some occurrence of it. Then the exponents of the two corresponding occurrences (in the left and in the right subterm) are comparable. So the term does not have unique exponents. ∎

(due to Reinhold Heckmann)
If we do not impose the restriction of unique copy exponents, then we get the contradiction (to copies) for every .
For every :
Then

If we keep the restriction of unique exponents, but add the equations and , then we get other contradictions like and .

To prove consistency of the condition algebra, i.e. absence of such contradictions, we can make a model of normal form representations. To keep things simple, we leave out the bracket operator for the moment. We give a sketch of the proof.
We define elementary conditions as conditions or , where is atomic, and exponent is a finite word over . (These are not the copy exponents!)
A set condition is a finite set of such elementary conditions, which obeys the (analogous) property of having unique copy exponents.
There are four reduction rules on set conditions:
(1) replace in an exponent by the empty word,
(2) replace a subset by ,
(3) replace a subset by ,
(4) replace a subset by .
We get the normal form of by:
(a) reducing by rule (1) until it can no more be applied, then
(b) reducing by rules (2-4) to normal form.
The process (b) is confluent, because there is no "overlap" between the rules (2)-(4), thanks to the unique copy exponents. As it is also terminating, the normal form is unique.
We give an interpretation of the condition algebra:
, for atomic, , , , , . (In the last cases the exponent works on all elements of .)
This interpretation fulfills the equations.
For every condition it is for words of as exponents. End of subsection 4.1


A problem for equality arises with the reduction rules, which may be non-deterministic, in both the extensional and the intensional sense, and so cannot simply be taken in the reverse direction as part of equality. (The extensional non-determinism can be forbidden, if not wanted, but the intensional non-determinism seems to be useful in many cases.) We must ensure that the reverse reductions do not cause contradictions, i.e. that there are no terms without function symbols for which we can deduce an equality that does not already hold as a smooth equality. Only then can we establish the equality encompassing the smooth equality and the reduction. This can often be proved by confluence.

In the case of our addition program, we use the confluence Theorem 3.3 of [12], with the complete set of laws of Section 5. Essentially, we must check termination of the composition of the reduction with one step of the smooth equality, and the convergence of all critical pairs that are caused by an overlap of two reduction rules (there are none), or by an overlap of a reduction rule with an equality law (there are some).

Having established the equality for our addition program, we can prove two expected equalities of addition for all constructed numbers, by induction on constructed numbers in their constructor form. The constructor form is the form built from the constructors 0, suc and ann. These are the basic objects that exist.

For the proof of the first equality:
As the constructors suc and ann behave in the same way, these cases are analogous. In the outer induction there are two inner inductions. All the exchange laws for constructors are used, as well as the commutativity of the condition product.

For the proof of the second equality:
In the outer induction there is an inner induction, and inside it another inner induction. Associativity of the condition product is used.


Now to natural number subtraction. A normal program is this:
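In Haskell notation, a sketch of the standard truncated subtraction on Peano numbers, which is presumably what such a normal program expresses:

-- Truncated ("natural number") subtraction on normal Peano numbers.

data Nat = Zero | Suc Nat deriving Show

sub :: Nat -> Nat -> Nat
sub x       Zero    = x         -- x - 0            ->  x
sub Zero    (Suc _) = Zero      -- 0 - suc(y)       ->  0
sub (Suc x) (Suc y) = sub x y   -- suc(x) - suc(y)  ->  x - y

main :: IO ()
main = print (sub (Suc (Suc (Suc Zero))) (Suc Zero))   -- Suc (Suc Zero)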

Here is a possible subtraction program in CN:

(6)
(7)
(8)
(9)
(10)
(11)

Rule (6) is the rule where a constructor ann is created from the subtraction of two sucs. Rule (7) forms the "inverse" by reversing the order of the two conditions. In rule (9) this inverse is employed.

We have set the rule (11) in brackets, as it destroys confluence of the program. The program without this rule should be confluent, but I do not yet know how to prove it. The Theorem 3.3 of [12] that we employed above does not work here. We should check the convergence of the critical pairs
(a) of the overlap between two reduction rules: there is just one, between (7) and (10), and it converges, and
(b) between a reduction rule and an equality law: there are some, with an exchange law.
The critical pairs of (b) do not converge. But if we enlarge the overlap term and take the two reducts of the enlarged overlap term, then the two converge. I do not know any theorem that would provide a simple proof of confluence from this. (Perhaps a new challenge for term rewriters?) As we cannot prove confluence, we cannot establish equality for this program. But we will prove an inequation below.

Subtraction and addition obey some laws, as expected. For all constructed numbers there should be something like the usual laws relating subtraction and addition, formulated with extensional values where needed. But these laws of subtraction/addition all contain two copies of the same number. By "locativity" (see 4.1) we have to distinguish the two copies by naming them differently. (And we can only prove the law stated below.)

The following law holds for all constructed numbers.

Proof.

By induction. There are five cases for the sum.
We use another kind of exchange law:

(1) :

by copy
by (1) and commut. of
by (6)
as
by induction hyp.

(2) :

by copy
by (2) and commut. of
by (7)
by (10)
by the new exchange law
as and
by induction hyp.

(3) , :

by copy and (3)
by (8)
by induction hyp.

(4) , : analogous to (3), use the rules (4) and (10).

(5) , :

by copy
by (5)
as
by (9)
regard that
regard that

5. System CN (constructed numbers): the arithmetic

We have already given the algebra of conditions with the copy mechanism and an explanation of the bracket mechanism in Subsection 4.1. Here we give the remaining rules on numbers, we have seen the most important ones in applications in Section 4. We also give a detailed explanation of the atomic conditions.

We have already said that there seems to be great arbitrariness and freedom in choosing a way to give structure to Computation. And following from this: arbitrariness in choosing a definition of algorithm. There is no a priori justification for the "correctness" of the choice. The justification will come with the outcome of the approach (or not). But it also seems that once the basic idea is fixed, there is a prescribed way of working it out. This way can only be seen with experience. As I do not yet have enough experience with my own system, it is presented here in a preliminary state; it might still be "incomplete".

We still have to explain: What are the atomic conditions and where do they come from? They have two possible sources:
(1) There may be a main program function on whose arguments the whole Computation is "grounded". Let these arguments correspond to the parameters of the function. The atomic conditions are the conditions of the constructors of the arguments corresponding to these parameters: the bottommost constructor of an argument is its 0, and its condition gets a name; every further constructor is a suc or an ann, and its condition (resp. its two conditions, for ann) gets a name indexed by the parameter and the position of the constructor. Here is an example of an argument conditioned by its atomic conditions in this way:

(2) It may be necessary to give atomic conditions to some of the constructors in the right sides of the reduction rules of some function. We give each of them a unique number and name them accordingly.

The program for a function should be universally applicable, so we use condition variables in its arguments, and we see no atomic conditions of source (1). They appear only if we want to “ground” the Computation, which we will do in Section 6.

The complete algebra of conditions is in Subsection 4.1.

The arithmetic of CN


The types are: the type of numbers, its finite cartesian products, and function spaces over these.

There are number variables for numbers (and tuples), and function variables.
There are typing environments and typing judgements, meaning "under the typing environment, the term has the given type". We mostly leave out the environment in these judgements, as it is the same on both sides of a rule.

Raw number terms:
the condition arguments occurring in them are well-formed condition terms (i.e. limited and with unique exponents).





, for a natural number outside of the arithmetic,



Because of non-deterministic reduction, there should be a sharing mechanism for number terms, i.e. constructs in which a special kind of variable marks shared subterms. We leave this out.

We recall the definitions of copy exponents and unique copy exponents in Subsection 4.1. A number term is well-formed if it has unique (copy) exponents and the condition of every constructor in it respects the required restrictions. [In the following, every number term is supposed to be well-formed.] For a position in a term, the exponentiated subterm at this position is the subterm together with the exponent of the position. The following strong conjecture describes how the conditions of constructors in a number term are distinguished: for two different positions of conditions of constructors in a (well-formed) number term, the exponentiated subterms are not equal.

Reduction rules for functions:
Every used function variable has a finite set of associated reduction rules of the form:

The form of the left sides:
They are built from number variables and constructors whose condition arguments have a restricted form. All variables appearing in a left side are different (left-linearity). (From this it already follows that the left sides are well-formed. Note also that a left side does not contain atomic conditions.)
(The restricted form is useful for some programming tasks, e.g. for doubling a number. It is a question whether this form should be even more liberal.)
The form of the right side:
It is a (well-formed) constructed number term. Every (number or condition) variable in it comes from the left side. Every atomic condition in it carries the name of the function defined by the rule and a number that is unique over all occurrences of atomic conditions in right sides of rules of this function.
For every ann in the right side, its two condition arguments are condition variables and appear in the left side. (It is not yet clear if the last restriction is "needed".)


We define the smooth equality on numbers.
There are rules that close it to be a congruent equivalence.

Here the four equations on numbers before Proposition 4.1 should be inserted.

The following equations will later have a special status because of their asymmetric character:
Tuple-selection:
, for
Copy-expansion:

Inversion-simplification:
, for


We define the equality reduction on numbers.
There are rules that close it to be reflexive, transitive and a congruence.
Let a reduction rule for a function be given.
Let a substitution of the variables of the rule by number resp. condition terms be given, such that every condition variable is replaced by a condition of limited size.
Then the instantiated left side equality-reduces to the instantiated right side, where the substitution extends to term arguments.
The instantiated right side must be well-formed.
The equality reduction contains the smooth equality.


We define the direct equality reduction on numbers.
It is built up like the equality reduction; we describe this roughly. Take all the defining equations and reductions of the equality reduction, but with the following exemptions:
Some of the equations on conditions are not allowed.
One equation on conditions is only allowed from left to right.
The equations of tuple-selection and copy-expansion are only allowed from left to right.
The equation of inversion-simplification is not allowed.
Then close the relation to be reflexive, transitive and a congruence.

Equality reduction accounts for the full possibilities of Computation. But direct equality reduction restricts the Computation to steps that have symmetric character or that go “straight forward”, so that Computation makes no “detours”.

Let a well-formed number term equality-reduce to another term. Then the other term is also well-formed.

Proof.

Check the defining equations of the equality reduction. Also the reduction rules preserve well-formedness (unique copy exponents), as the instantiated right side is well-formed. ∎

6. Basic notions of the theory of system CN

For lack of time, I cannot develop the theory of CN properly. We give here only some basic definitions, namely of CN-algorithm and of direct algorithm.

A constructor number is a number term built entirely of constructors, without any condition variables in the conditions. (So the conditions are built only of atomic conditions and condition operators.)
There is the set of all constructor numbers,
and the set of all its equivalence classes w.r.t. the equality.
There are the n-fold cartesian products; the product of the sets of classes is isomorphic to the set of classes of tuples.
A ground number for a number variable is a constructor number that is formed like the example described in the explanation of atomic conditions at the beginning of Section 5, point (1).
For a tuple of number variables, there is the set of all tuples of ground numbers for these (fixed) variables.

Let a function (symbol) be defined by a program.
The algorithm of this function is the map defined by

We call such an algorithm a CN-algorithm.
For CN-algorithms we define the partial order
if for all