The em-convex rewrite system

07/05/2018, by Marius Buliga, et al.

We introduce and study em (or "emergent"), a lambda calculus style rewrite system inspired by dilation structures in metric geometry. Then we add a new axiom (convex) and explore its consequences. Although (convex) forces commutativity of the infinitesimal operations, Theorems 6.2, 8.9 and Proposition 8.7 appear as a lambda calculus style version of the Gleason and Montgomery-Zippin solution to Hilbert's 5th problem.


Introduction

There is evidence coming from analysis in metric spaces that the correct algebraic structure of the infinitesimal tangent space is not the commutative one of a vector space, but the more general one of a conical group. Particular examples of conical groups appear in many places: as contractible groups [16]; in the Lie group category as Carnot groups, which are models of metric tangent spaces in sub-Riemannian geometry, Gromov [9], Bellaïche [1], Pansu [15], or as limits of Cayley graphs of groups of polynomial growth, Gromov [10]. They are used as models of approximate groups, Breuillard, Green, Tao [2]. Related work in model theory is due to Hrushovski [12].

By trying to understand how to construct a larger theory which might cover this more general calculus, we arrive at the conclusion that even in the particular case of classical calculus there is too much algebraic structure. In fact, we can show that algebraic structures (like that of a vector space or conical group), linearity and differentiability come from, or emerge from, a much simpler and more general structure, called a dilation structure in metric geometry [5] or an emergent algebra (uniform idempotent right quasigroup) in more general situations [4].

In this article we give a lambda calculus treatment of emergent algebras, as part of a two-step program which we propose to the interested reader: (a) how to generalize the categorical treatment of various subjects from logic so that it applies to categories of conical groups? (b) what is to be learned from the even more general point of view of emergent algebras, starting from very little algebraic structure?

We add an intuitively very natural new axiom (convex) which allows us to construct a field of numbers. The price of (convex) is high, though: it forces commutativity (of the infinitesimal operations). Nevertheless, the whole construction is interesting because it is based on a very small set of primitives. Theorems 6.2, 8.9 and Proposition 8.7 appear as a lambda calculus style version of the Gleason [11] and Montgomery-Zippin [14] solution to Hilbert's 5th problem.

In a future article we shall give an alternative to (convex) which does not force commutativity.

1 Dilation terms

Definition 1.1

We introduce a lambda calculus for dilation terms, described by the following: variables, atomic types, constants, terms, typing rules, reductions.

Variables. Atomic types.

We start with two atomic types:

  1. are variables of type (for edge)

  2. are variables of type (for node)

Lambda calculus notation conventions.

As is customary in lambda calculus, for a chain of applications we use left associative notation. Also, for types we use right associative notation. For an abstraction we indicate the type of the variable, and for a chain of abstractions we use right associative notation.
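
For illustration (using generic placeholder symbols, not the constants of this calculus): under these standard conventions an application chain M N P is read as (M N) P, a type A → B → C is read as A → (B → C), and a chain of abstractions λx:A . λy:B . M is read as λx:A . (λy:B . M).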

Constants.

There are constant terms:

  1. the multiplication

  2. the inverse

  3. the dilation

  4. the inverse dilation

Terms.

Typing rules.

We shall consider only well-typed terms according to the rules:

  1. if a term then the type of is

  2. if and then

Notation.

  1. for any term and we denote ,

  2. for any term and we denote ,

A graphical notation for dilation terms.

We represent terms by their syntactic trees. A syntactic tree is a particular case of an oriented ribbon graph: we assume that the edges of the syntactic tree are oriented from the leaves to the root, that any node of the syntactic tree has only one output edge, and that the order of the other edges comes from the clockwise orientation, starting from the output edge.

Whenever we draw syntactic trees, the root will appear at the left of the figure. In this way the orientations of the edges can be deduced from the clockwise order on the page, the position of the root and the colors or names of the nodes.

We shall use the color red for those decorations (of the half-edges or nodes) which appear as variables in a lambda abstraction.

Reductions.

In the following, equality of terms will mean the reflexive, symmetric, transitive closure of the relation where is any of the reductions from the list.

First are the lambda calculus reductions:

  1. if , where denotes one of the types , , then

  2. for any , if then , if then .

  3. for any

A direct consequence of () is: for any we have therefore

Then we have the algebraic reductions.

  1.   ,  

    Here the graphical notation needs both the introduction of a "termination" decoration and the acceptance of forests of syntactic trees instead of trees, but (for the moment) we choose to just delete any syntactic tree whose root is decorated with the termination symbol "T".

  2. for any and

  3. for any ,  

  4. for any and any term   ,  

  5. for any and any   ,  

  6. for any   ,  

The graphical representations of the algebraic rewrites (R1), (R2) and (C) are given in combinator form, by using the reduction. We also represent these reductions for terms and we used (in). In this way the two constants , have a symmetric role.

Let’s introduce the terms

(1)
(2)

From (id) we have

2 Reidemeister moves and idempotent right quasigroups

The reductions (R1), (R2) are related to the Reidemeister moves from knot theory. We see knot diagrams as oriented ribbon graphs made of two kinds of 4-valent nodes. The usual knot diagrams are also planar graphs, but this is a condition which is irrelevant for this exposition, so we ignore it. The Reidemeister moves are indeed graph rewrites which apply to this class of ribbon graphs.

The edges of knot diagrams can be decorated by elements of an algebraic structure called a "quandle", in such a way that the Reidemeister rewrites (from knot theory) preserve the decoration. A quandle is a self-distributive idempotent right quasigroup, and the correspondence between the Reidemeister rewrites and the axioms of a quandle is the following: "self-distributive" = R3, "idempotent" = R1, "right quasigroup" = R2. For the moment we concentrate on the Reidemeister moves R1 and R2. The R3 move will appear later as an "emergent" rewrite.

Definition 2.1

An idempotent right quasigroup (irq) is a set with two binary operations which satisfy the axioms:

  1. (R1) for any

  2. (R2) for any

A simple example of an irq is given by the operations , on a real vector space , where is a fixed parameter (see the sketch below). (This example is actually a quandle, meaning that it also satisfies a third axiom, R3, of self-distributivity.) There are many other examples of irqs, some of which generalize this simple example in a non-commutative setting.
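
A minimal Python sketch of this example, under the assumption (ours, made for illustration) that the two operations are the usual convex-combination dilations x ∘ y = x + ε(y − x) and x • y = x + ε⁻¹(y − x) on a real vector space:

    import numpy as np

    eps = 0.25   # the fixed parameter of the irq, assumed nonzero

    def op(x, y):        # x o y = x + eps*(y - x), a convex combination for 0 < eps < 1
        return x + eps * (y - x)

    def inv_op(x, y):    # x * y = x + (1/eps)*(y - x), the companion operation
        return x + (1.0 / eps) * (y - x)

    x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])

    # (R1) idempotency: x o x = x and x * x = x
    assert np.allclose(op(x, x), x) and np.allclose(inv_op(x, x), x)

    # (R2) right quasigroup: x o (x * y) = y and x * (x o y) = y
    assert np.allclose(op(x, inv_op(x, y)), y)
    assert np.allclose(inv_op(x, op(x, y)), y)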

We arrive at the notion of a -irq if we consider instead a family of irqs indexed by a parameter , where is a commutative group. See Definition 4.2 [7], or Definition 5.1 [3]. In Definition 3.3 [4] we started from one irq and defined a -irq.

Definition 2.2

Let be a commutative group, with the operation denoted multiplicatively and the neutral element denoted by . A -irq is a family of irqs , for any , with the properties:

  1. (a) for any

  2. (b) for any

  3. (c) for any .
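
Continuing the vector-space sketch above, indexing the operations by all parameters ε in the multiplicative group ((0, ∞), ·) gives such a family. Below is a numerical check of the three properties as we read them (again an assumption made for illustration, since the formulas are not reproduced above):

    import numpy as np

    def op(eps, x, y):       # x o_eps y = x + eps*(y - x)
        return x + eps * (y - x)

    def inv_op(eps, x, y):   # x *_eps y = x + (1/eps)*(y - x)
        return x + (1.0 / eps) * (y - x)

    x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
    eps, mu = 0.3, 1.7

    # (a) at the neutral element 1 of the group the operation is trivial: x o_1 y = y
    assert np.allclose(op(1.0, x, y), y)
    # (b) the second operation is the first at the inverse parameter: x *_eps y = x o_{1/eps} y
    assert np.allclose(inv_op(eps, x, y), op(1.0 / eps, x, y))
    # (c) the parameters act by composition: x o_eps (x o_mu y) = x o_{eps*mu} y
    assert np.allclose(op(eps, x, op(mu, x, y)), op(eps * mu, x, y))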

Concerning dilation terms, we have the following group structure on terms of type .

Proposition 2.3

The terms of type form a commutative group with the multiplication , inverse and neutral element .

Proof.

The inverse is involutive. Indeed, from (in) . From (ext) we get

For the inverse of , we remark that by (in) and (id). From (ext) we get

From (R2) and (id) we obtain:

From (in) and (act) we continue the string of equalities with

From (ext), then (C) we obtain

In order to prove the associativity of multiplication we compute, from (act), then (), then two () reductions

In the same way we compute:

Therefore which leads to the associativity of multiplication by using (ext).

We use this to give an interpretation of the (R1), (R2) reductions as Reidemeister rewrites.

Proposition 2.4

The terms of type form a -irq with the operations: for any and , define and .

Proof.

We know from Proposition 2.3 that is a commutative group. The reductions (R1), (R2) imply the points (R1), (R2) from Definition 2.1, applied to the operations , , for . We have to verify Definition 2.2. The point (a) is the reduction (id), the point (b) is the reduction (in) and the point (c) is the reduction (act).

Definition 2.5

For any we define their multiplication by:

Proposition 2.6
  1. For any we have

    (3)
    (4)
  2. The reduction (R1) is equivalent to: for any

    (5)
  3. The reduction (act) is equivalent to: for any

    (6)

Proof.

(a) For any we have

(b) For any and any we have:

Also, . This proves the equivalence of (5) with (R1).

(c) Indeed, for

The equality (6) is a reformulation of (act).

3 Differences

Definition 3.1

For , the difference is the combinator:

(7)

The difference combinator is:

(8)

For any the term is another difference combinator defined by:

(9)

so that for we have . We shall also use the notation for any and .

Notice the graphical notation for the difference combinators and . The correct graphical notation is the one from the middle of the last two figures. The ones from the left are only partially correct. For example, in the figure for the difference combinator the color red indicates that appear in lambda abstractions, but it does not indicate precisely the order . On the other hand the notation is more human-friendly, therefore we are going to use it, or analogous ones, several times in this paper.

The difference has a different type from . But it has the same type as . We might try to rename as , which would give a term by (ext), but this is not feasible, because we cannot expect that exists. We arrive at a solution for this with a convex dilation term calculus in Section 8. A more general solution will be presented in a future article. Until then, the difference has some interesting properties.

Proposition 3.2

For any we have . If there exists such that then . If the collection of edge variables has more than one element, this is impossible.

Proof.

Via (R2)

Suppose that there is such that . Then by (R2)

By the previous reduction

which leads us to

If this is true then for any we have: .

By using the difference combinators (9) we can chain several differences: if and then is equal to .

Theorem 3.3

is functionally equivalent to the dilation constant:

(10)

is the approximate inverse combinator from (17), Definition 4.1

(11)

The combinator , can be obtained from:

(12)

The reduction (R2) is equivalent to: for any and

(13)

Proof.

The proofs of (10), (11), (12) are given in the associated figures. For the last part, in the following figure we prove that (13) follows from (R2).

In the opposite direction, notice that we can still use the first two equalities of the previous figure, which use only (9) and (7) from Definition 3.1. We obtain:

(14)

Let’s use (13) and (14) for . We obtain: for any

This is the first statement from Proposition 3.2: for any . We use this, (14) and (13) for . We obtain: for any

Take now instead of A in the previous equality. Apply to , use (in) and obtain (R2):

which ends the proof of the last statement.

4 Approximate operations terms

We introduce some new combinators: approximate sum, approximate difference, approximate inverse. The names come from dilation structures, Definition 11 [5], where they play an important role.

Definition 4.1

The asum (approximate sum), adif (approximate difference) and ainv (approximate inverse) combinators are:

  1. asum or approximate sum:

    (15)

  2. adif or approximate difference:

    (16)

  3. ainv or approximate inverse:

    (17)
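
Although the lambda terms (15)-(17) are not reproduced above, their meaning can be sketched in the Euclidean model of a dilation structure, reading the combinators as composites of dilations as in [5]; the concrete formulas below are our reconstruction, given for illustration:

    import numpy as np

    def dil(e, x, y):
        """Dilation of coefficient e, based at x, applied to y."""
        return x + e * (y - x)

    def asum(e, x, y, z):
        """Approximate sum of y and z, relative to the base point x, at scale e."""
        return dil(1.0 / e, x, dil(e, dil(e, x, y), z))

    def adif(e, x, y, z):
        """Approximate difference of z and y, relative to x, at scale e."""
        return dil(1.0 / e, dil(e, x, y), dil(e, x, z))

    def ainv(e, x, y):
        """Approximate inverse of y relative to x, at scale e."""
        return dil(1.0 / e, dil(e, x, y), x)

    x, y, z = (np.array(v) for v in ([0.5, 1.0], [2.0, -1.0], [-1.5, 3.0]))
    print(asum(0.2, x, y, z), adif(0.2, x, y, z), ainv(0.2, x, y))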

They satisfy a useful list of properties. In the following proposition we collect them and indicate the places where they first appear in the formalism of dilation structures. The proofs are given in the associated figures.

Proposition 4.2
  1. and are, in a sense, one inverse of the other (Section 4.2, Proposition 3 [5]):

  2. The approximate difference can be computed from the approximate sum, approximate inverse and the dilation constant (Section 4.2, Proposition 4 [5]):

  3. The approximate sum is approximately associative (Section 4.2, Proposition 5 [5]):

  4. The approximate inverse is approximately its own inverse (Section 4.2, Proposition 5 [5]):

  5. The approximate sum has neutral elements (from the proof of Theorem 10, Section 6 [5]):

  6. The approximate sum is approximately distributive with respect to dilations:

  7. The approximate inverse approximately commutes with dilations:
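
In the Euclidean sketch given after Definition 4.1 some of these properties can be checked numerically; the identities below are our reading of points (a), (c) and (e), written for illustration only:

    import numpy as np

    dil  = lambda e, x, y: x + e * (y - x)
    asum = lambda e, x, y, z: dil(1/e, x, dil(e, dil(e, x, y), z))
    adif = lambda e, x, y, z: dil(1/e, dil(e, x, y), dil(e, x, z))

    x, y, z, w = (np.array(v) for v in ([0.5, 1.0], [2.0, -1.0], [-1.5, 3.0], [4.0, 0.0]))
    e = 0.2

    # (a) asum and adif are one the inverse of the other:
    assert np.allclose(asum(e, x, y, adif(e, x, y, z)), z)
    assert np.allclose(adif(e, x, y, asum(e, x, y, z)), z)

    # (e) the base point x is a (left) neutral element for the approximate sum:
    assert np.allclose(asum(e, x, x, z), z)

    # (c) "approximate" associativity: an exact identity, but with the inner sum
    #     taken at the shifted base point dil(e, x, y):
    assert np.allclose(asum(e, x, y, asum(e, dil(e, x, y), z, w)),
                       asum(e, x, asum(e, x, y, z), w))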

5 Finite terms. Emergent terms

Definition 5.1

Finite terms are those dilation terms which are generated from:

We shall extend the class of finite terms to emergent terms and their reductions, by enlarging the class of terms with a constant .

Definition 5.2

We introduce the extended node type by: if or . We introduce new terms and constants:

  1. ,

  2. extended from by:

  3. extended from by if or

  4. we define for as if , where is the combinator (15), else

  5. we define for as if , where is the combinator (16), else

  6. we define for as if , where is the combinator (17), else

The emergent terms are defined as those terms

for which the extension function is well defined.

The extension function is defined recursively from finite terms to emergent terms, as:

  1. for any , ,

  2. for any , ,

  3. ,

  4. for any , , , ,

  5. for any ,

  6. ,

  7. .

We saw in Proposition 3.2 that if the collection of edge variables contains more than one element then there is no such that , therefore is truly an extension of the type .
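
The intuition, visible in the Euclidean sketch used above, is that the ε-indexed approximate operations have limits as the scale goes to 0, and the extension function evaluates emergent terms at this limit scale. A hedged numerical illustration (the exact formal content is the syntactic definition above):

    import numpy as np

    dil  = lambda e, x, y: x + e * (y - x)
    asum = lambda e, x, y, z: dil(1/e, x, dil(e, dil(e, x, y), z))

    x, y, z = (np.array(v) for v in ([0.5, 1.0], [2.0, -1.0], [-1.5, 3.0]))

    # In the Euclidean model the limit of the approximate sum as the scale
    # goes to 0 is the translated vector addition with base point x:
    emergent_sum = y + z - x
    for e in (0.1, 0.01, 0.001):
        print(e, np.linalg.norm(asum(e, x, y, z) - emergent_sum))   # tends to 0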

Definition 5.3

The emergent reductions extend the equality of finite terms to an equality of emergent terms, via the axiom:

  1. for any finite terms , if as dilation terms then

6 Infinitesimal operations

Proposition 4.2 gives many emergent reductions.

Definition 6.1

On the collection of emergent terms we define the operations:

  1. , the addition of relative to

  2. , the inverse of relative to

  3. for any and any ,
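
In the Euclidean model used in the sketches above, these infinitesimal operations take a familiar form (our illustration, under the same assumptions as before); the checks also preview the group and morphism statements of Theorem 6.2:

    import numpy as np

    e = np.array([0.5, 1.0])     # the base point, playing the role of the neutral element

    def add(x, y):               # addition of x and y relative to e
        return x + y - e

    def neg(x):                  # inverse of x relative to e
        return 2 * e - x

    def act(eps, x):             # action of the scale eps on x, relative to e
        return e + eps * (x - e)

    x, y = np.array([2.0, -1.0]), np.array([-1.5, 3.0])

    assert np.allclose(add(x, e), x) and np.allclose(add(e, x), x)          # e is neutral
    assert np.allclose(add(x, neg(x)), e)                                   # inverses
    assert np.allclose(act(0.3, add(x, y)), add(act(0.3, x), act(0.3, y)))  # morphism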

Theorem 6.2

For any the class of emergent terms of type is a group with the operation , the inverse function and neutral element .

For any element the function which maps to is a group morphism and moreover an action of the group , of terms of type from Proposition 2.3, on the group of emergent terms of type .

Proof.

We use Proposition 4.2. Indeed, both terms from the equality (c) are finite terms, therefore by (em) their extensions are equal.

Let’s apply to the term from the left. We obtain:

We apply further the emergent terms and we use Definition 6.1

The same procedure, for the term from the right, gives:

We apply now the emergent terms

We have therefore obtained the associativity of the operation :

For the fact that is the neutral element we use the equalities (e) from Proposition 4.2. Again, only finite terms appear there. We use (em) to obtain equalities of the extensions

We apply to the first equality

therefore

We apply and we obtain, after we use Definition 6.1

Same treatment for the second equality:

We apply and we obtain

From Proposition 4.2 (d) we use (em) and we apply to obtain:

which leads us in the same way to: for any

Proposition 4.2 (a) gives, by using (em), then by application of , then , the following:

(18)

We look now at Proposition 4.2 (b). The left-hand side term is finite, but the right-hand side term, i.e. , is not finite. It is nevertheless equal, via reductions of dilation terms, to the finite term . So we can use (em), then apply , and we obtain:

We apply , then

(19)

From the right-hand equality of (18), along with (19) for , and the fact that is a neutral element, we get:

therefore is a left inverse of . We use the equality from the left of (18), (19) for , and the fact that is a neutral element:

which shows that is a right inverse of . All in all, we have proved that is a group operation, with inverse and neutral element .

For the morphism property we use Proposition 4.2 (f). We first apply , then we can "pass to the limit" by using (em), then apply :