# The em-convex rewrite system

We introduce and study em (or "emergent"), a lambda calculus style rewrite system inspired by dilation structures in metric geometry. Then we add a new axiom (convex) and explore its consequences. Although (convex) forces commutativity of the infinitesimal operations, Theorems 6.2, 8.9 and Proposition 8.7 appear as a lambda calculus style version of the Gleason and Montgomery-Zippin solution to Hilbert's 5th problem.


## Introduction

There is evidence coming from analysis in metric spaces that the correct algebraic structure of the infinitesimal tangent space is not the commutative one of a vector space, but the more general one of a conical group. Particular examples of conical groups appear in many places: as contractible groups [16]; in the Lie groups category as Carnot groups, which are models of metric tangent spaces in sub-riemannian geometry, Gromov [9], Bellaïche [1], Pansu [15], or as limits of Cayley graphs of groups of polynomial growth, Gromov [10]. They are used as models of approximate groups, Breuillard, Green, Tao [2]. A related appearance is in model theory, Hrushovski [12].

By trying to understand how to construct a larger theory which might cover this more general calculus, we arrive at the conclusion that even in the particular case of classical calculus there is too much algebraic structure. In fact, we can show that algebraic structures (like the one of a vector space or of a conical group), linearity and differentiability come from, or emerge from, a much simpler and more general structure, called a dilation structure in metric geometry [5] or an emergent algebra (uniform idempotent right quasigroup) in more general situations [4].

In this article we give a lambda calculus treatment of emergent algebras, as part of a two step program which we propose to the interested reader: (a) how to generalize the categorical treatment of various subjects from logic so that it applies to categories of conical groups? (b) what is to be learned from the even more general point of view of emergent algebras, starting from very little algebraic structure?

We add an intuitively very natural new axiom (convex) which allows us to construct a field of numbers. The price of (convex) is high though, because it forces commutativity (of the infinitesimal operations); nevertheless the whole construction is interesting because it is based on a very small set of primitives. Theorems 6.2, 8.9 and Proposition 8.7 appear as a lambda calculus style version of the Gleason [11] and Montgomery-Zippin [14] solution to Hilbert's 5th problem.

In a future article we shall give an alternative to (convex) which does not force commutativity.

## 1 Dilation terms

###### Definition 1.1

We introduce a lambda calculus for dilation terms, described by the following: variables, atomic types, constants, terms, typing rules, reductions.

### Variables. Atomic types.

1. x, y, e, u, v, ... are variables of type E (for edge)

2. a, b, c, ... are variables of type N (for node)

### Lambda calculus notation conventions.

As it is customary in lambda calculus, for a chain of applications we use a left associative notation: ABC stands for (AB)C. Also, for types we use a right associative notation, i.e. T→T′→T′′ stands for T→(T′→T′′). For abstraction we indicate the type of the variable, for example λx:E.A, and for a chain of abstractions we use a right associative notation.

### Constants.

There are constant terms:

1. the multiplication ⋅

2. the inverse ∗

3. the dilation ∘

4. the inverse dilation ∙

### Terms.

 var. x:E∣ var. a:N∣1∣
 ∘A,∙A for A:N∣⋅AB for A,B:N∣∗A for A:N∣
 AB for A:T→T′ and B:T∣λx:E.A∣λa:N.A

### Typing rules.

We shall consider only well typed terms, according to the rules:

1. if A: T→T′ is a term and B: T then the type of AB is T′

2. if u: T is a variable and A: T′ then λu:T.A has the type T→T′

### Notation.

1. for any term A: N and any terms e, x: E we denote the term ∘Aex simply by Aex,

2. in particular, for any term A: N and any e, x: E, (∗A)ex denotes the term ∘(∗A)ex.

### A graphical notation for dilation terms.

We represent terms by their syntactic trees. A syntactic tree is a particular case of an oriented ribbon graph, where we assume that the edges of the syntactic tree are oriented from the leaves to the root, that any node of the syntactic tree has only one output edge, and that the order of the other edges is given by the clockwise orientation, starting from the output edge.

Whenever we draw syntactic trees, the root will appear at the left of the figure. In this way the orientations of the edges can be deduced from the clockwise order on the page, the position of the root and the colors or names of the nodes.

We shall use the color red for those decorations (of the half-edges or nodes) which appear as variables in a lambda abstraction.

### Reductions.

The equality A=B in the following will mean the reflexive, symmetric, transitive closure of the relation A→B, where A→B is any of the reductions from the list.

First are the lambda calculus reductions:

1. (β) if u: T, where T denotes one of the types E, N, then

 (λu:T.A)B=A[u=B]
2. (η) for any term A: if A: E→T then λx:E.(Ax)=A, and if A: N→T then λa:N.(Aa)=A, where the abstracted variable does not appear free in A.

3. (ext) for any A, B: N, if ∘A=∘B then A=B.

A direct consequence of (η) is: for any A: N we have ∘A: E→E→E, therefore

 ∘A=λe:E.λx:E.(∘Aex)=λe:E.λx:E.(Aex)

Then we have the algebraic reductions.

1. (id)   ∘1=λe:E.λx:E.x , ∙1=λe:E.λx:E.x

Here the graphical notation needs both the introduction of a "termination" decoration and to accept forests of syntactic trees instead of trees, but (for the moment) we choose to just delete any syntactic tree with the root decorated with the termination symbol "T".

2. (in) for any A: N   ∙A=∘(∗A) , ∘A=∙(∗A)

3. (act) for any A, B: N   λe:E.λx:E.(∘Ae(∘Bex))=∘(⋅AB)

4. (R1) for any A: N and any term e: E   ∘Aee=e

5. (R2) for any A: N and any e, x: E   ∘Ae(∙Aex)=x , ∙Ae(∘Aex)=x

6. (C) for any A, B: N   ⋅AB=⋅BA
The graphical representations of the algebraic rewrites (R1), (R2) and (C) are in combinatory terms form, by using the (η) reduction. We also represented these reductions for ∙ terms, where we used (in). In this way the two constants ∘, ∙ have a symmetric role.

Let’s introduce the terms

 ¯0=λe:E.λx:E.e (1)
 ¯1=λe:E.λx:E.x (2)

From (id) we have

 ¯1=∘1=∙1

## 2 Reidemeister moves and idempotent right quasigroups

The reductions (R1), (R2) are related to the Reidemeister moves from knot theory. We see knot diagrams as oriented ribbon graphs made of two kinds of 4-valent nodes. The usual knot diagrams are also planar graphs, but this is a condition which is irrelevant for this exposition, so we ignore it. The Reidemeister moves are indeed graph rewrites which apply on this class of ribbon graphs.

The edges of knot diagrams can be decorated by elements of an algebraic structure called a "quandle", in such a way that the Reidemeister rewrites (from knot theory) preserve the decoration. A quandle is a self-distributive idempotent right quasigroup, and the correspondence between the Reidemeister rewrites and the axioms of a quandle is the following: "self-distributive" = R3, "idempotent" = R1, "right quasigroup" = R2. For the moment we concentrate on the Reidemeister moves R1 and R2. The R3 move will appear later as an "emergent" rewrite.

###### Definition 2.1

An idempotent right quasigroup (irq) is a set X with two binary operations ∘, ∙, which satisfy the axioms:

1. (R1) for any x ∈ X   x∘x=x , x∙x=x

2. (R2) for any x, y ∈ X   x∘(x∙y)=y , x∙(x∘y)=y

A simple example of an irq is given by X=V,

 x∘y=(1−a)x+ay , x∙y=(1−a⁻¹)x+a⁻¹y

where x, y ∈ V, V a real vector space, and a≠0 is a fixed parameter. (This example is actually a quandle, meaning that it satisfies also a third axiom R3, of self-distributivity.) There are many other examples of irqs, some of which generalize this simple example in a non-commutative setting.
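The vector-space example can be sketched in code. A minimal sketch in Python, assuming the one-dimensional case V = ℝ and an arbitrary sample value a = 0.5 (our choice, for illustration only); the assertions check the irq axioms numerically:

```python
# Sketch: the vector-space irq example, in the one-dimensional case V = R.
# The parameter value a = 0.5 is an arbitrary choice for illustration.
a = 0.5

def op(x, y):       # x ∘ y = (1 - a) x + a y
    return (1 - a) * x + a * y

def inv_op(x, y):   # x ∙ y = (1 - a⁻¹) x + a⁻¹ y
    return (1 - 1 / a) * x + (1 / a) * y

# (R1), idempotency: x ∘ x = x and x ∙ x = x
assert op(3.0, 3.0) == 3.0 and inv_op(3.0, 3.0) == 3.0
# (R2), right quasigroup: x ∘ (x ∙ y) = y and x ∙ (x ∘ y) = y
assert abs(op(3.0, inv_op(3.0, 7.0)) - 7.0) < 1e-12
assert abs(inv_op(3.0, op(3.0, 7.0)) - 7.0) < 1e-12
```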

We arrive at the notion of a Γ-irq if we consider instead a family of irqs indexed by a parameter a ∈ Γ, where Γ is a commutative group. See Definition 4.2 [7], or Definition 5.1 [3]. In Definition 3.3 [4] we started from one irq and defined from it a Γ-irq.

###### Definition 2.2

Let Γ be a commutative group, with the operation denoted multiplicatively and the neutral element denoted by 1. A Γ-irq is a family of irqs (X, ∘_a, ∙_a), for a ∈ Γ, with the properties:

1. (a) x ∘_1 y = y for any x, y ∈ X

2. (b) x ∙_a y = x ∘_{a⁻¹} y for any a ∈ Γ and any x, y ∈ X

3. (c) x ∘_a (x ∘_b y) = x ∘_{ab} y for any a, b ∈ Γ and any x, y ∈ X.
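The one-dimensional vector-space example can also be checked against this definition. A sketch, assuming Γ = (ℝ∖{0}, ·) and arbitrary sample values (our choices):

```python
# Sketch: the vector-space example as a Γ-irq, with Γ = (R \ {0}, ·),
# checked numerically on arbitrary sample values.

def op(a, x, y):        # x ∘_a y = (1 - a) x + a y
    return (1 - a) * x + a * y

def inv_op(a, x, y):    # x ∙_a y = (1 - a⁻¹) x + a⁻¹ y
    return (1 - 1 / a) * x + y / a

x, y, a, b = 2.0, 5.0, 0.3, 0.7
assert op(1, x, y) == y                                        # (a)
assert abs(inv_op(a, x, y) - op(1 / a, x, y)) < 1e-12          # (b)
assert abs(op(a, x, op(b, x, y)) - op(a * b, x, y)) < 1e-12    # (c)
```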

As concerns dilation terms, we have the following group structure on the terms of type N.

###### Proposition 2.3

The terms of type N form a commutative group with the multiplication ⋅, the inverse ∗ and the neutral element 1.

### Proof.

The inverse is involutive. Indeed, from (in) we have ∘A=∙(∗A)=∘(∗(∗A)). From (ext) we get

 A=∗(∗A)

For the inverse of 1, we remark that ∙1=∘(∗1) by (in) and ∙1=∘1 by (id). From (ext) we get

 ∗1=1

From (R2) and (id) we obtain:

 ∘1=λe:E.λx:E.x=λe:E.λx:E.(∘Ae(∙Aex))

From (in) and (act) we continue the string of equalities with

 λe:E.λx:E.(∘Ae(∙Aex))=λe:E.λx:E.(∘Ae(∘(∗A)ex))=∘(⋅A(∗A))

From (ext), then (C) we obtain

 1=⋅A(∗A)=⋅(∗A)A

In order to prove the associativity of multiplication we compute, from (act), then (η), then two (β) reductions

 ∘(⋅A(⋅BC))=λe:E.λx:E.(Ae((⋅BC)ex))=
 =λe:E.λx:E.(Ae((λu:E.λv:E.(Bu(Cuv)))ex))=
 =λe:E.λx:E.(Ae(Be(Cex)))

In the same way we compute:

 ∘(⋅(⋅AB)C)=λe:E.λx:E.((⋅AB)e(Cex))=
 =λe:E.λx:E.((λu:E.λv:E.(Au(Buv)))e(Cex))=
 =λe:E.λx:E.(Ae(Be(Cex)))

Therefore ∘(⋅A(⋅BC))=∘(⋅(⋅AB)C), which leads to the associativity of multiplication by using (ext).

We use this to give an interpretation of the (R1), (R2) reductions as Reidemeister rewrites.

###### Proposition 2.4

The terms of type E form a Γ-irq, where Γ is the group of terms of type N, with the operations: for any A: N and any F, G: E define F ∘_A G = ∘AFG and F ∙_A G = ∙AFG.

### Proof.

We know from Proposition 2.3 that the collection of terms of type N is a commutative group. The reductions (R1), (R2) imply the points (R1), (R2) from Definition 2.1, applied to the operations ∘_A, ∙_A, for A: N. We have to verify Definition 2.2: the point (a) is the reduction (id), the point (b) is the reduction (in) and the point (c) is the reduction (act).

###### Definition 2.5

For any A, B: E→E→E we define their multiplication A⋅B by:

 A⋅B=λe:E.λx:E.(Ae(Bex))
###### Proposition 2.6
1. For any A: E→E→E we have

 ¯0⋅A=¯0 (3)
 (∘1)⋅A=A⋅(∘1)=A (4)
2. The reduction (R1) is equivalent to: for any A: N

 (∘A)⋅¯0=¯0 (5)
3. The reduction (act) is equivalent to: for any A, B: N

 (∘A)⋅(∘B)=∘(⋅AB) (6)

### Proof.

(a) For any A: E→E→E we have

 ¯0⋅A=λe:E.λx:E.(¯0e(Aex))=λe:E.λx:E.e=¯0
 (∘1)⋅A=λe:E.λx:E.((∘1)e(Aex))=λe:E.λx:E.(Aex)=A
 A⋅(∘1)=λe:E.λx:E.(Ae(∘1ex))=λe:E.λx:E.(Aex)=A

(b) For any A: N and any B: E we have:

 ((∘A)⋅¯0)BB=(λe:E.λx:E.(∘Ae(¯0ex)))BB=(λe:E.λx:E.(∘Aee))BB=∘ABB

Also, ¯0BB=B. This proves the equivalence of (5) with (R1).

(c) Indeed, for any A, B: N

 (∘A)⋅(∘B)=λe:E.λx:E.(∘Ae(∘Bex))

The equality (6) is a reformulation of (act).
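Proposition 2.6 can be illustrated in the one-dimensional vector-space model, representing terms of type E→E→E as Python closures. This is only a model check of (5) and (6), not the formal rewrite system, and the names `dil`, `mul`, `zero_bar` are ours:

```python
# Sketch (a model check, not the formal calculus): terms of type E -> E -> E
# are represented as Python functions of (e, x); names are ours.

def dil(a):                 # the term ∘a, acting as e, x ↦ e + a (x - e)
    return lambda e, x: e + a * (x - e)

zero_bar = lambda e, x: e   # the term ¯0 = λe:E.λx:E.e

def mul(A, B):              # Definition 2.5: A ⋅ B = λe.λx.(A e (B e x))
    return lambda e, x: A(e, B(e, x))

e, x, a, b = 1.0, 4.0, 0.25, 0.5
# (5): (∘A) ⋅ ¯0 = ¯0, which is the reduction (R1) in disguise
assert mul(dil(a), zero_bar)(e, x) == zero_bar(e, x)
# (6): (∘A) ⋅ (∘B) = ∘(⋅AB), which is the reduction (act)
assert abs(mul(dil(a), dil(b))(e, x) - dil(a * b)(e, x)) < 1e-12
```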

## 3 Differences

###### Definition 3.1

For a, b: N, the difference a−b is the combinator:

 a−b=λe:E.λx:E.(b(aex)((∗a)(aex)e)) (7)

The difference combinator is:

 (−)=λa:N.λb:N.(a−b)=λa:N.λb:N.λe:E.λx:E.(b(aex)((∗a)(aex)e)) (8)

For any B: E→E→E the term (−B) is another difference combinator, defined by:

 (−B)=λa:N.λe:E.λx:E.(B(∘aex)((∗a)(aex)e)) (9)

so that for B=∘b we have (−B)a=a−b. We shall also use the notation A−B=(−B)A for any A: N and B: E→E→E.

Notice the graphical notation for the difference combinators (−) and (−B). The correct graphical notation is the one from the middle of the last two figures. The ones from the left are only partially correct. For example, in the figure for the difference combinator the color red indicates which variables appear in lambda abstractions, however it does not indicate precisely the order of the abstractions. On the other side this notation is more human-friendly, therefore we are going to use it, or analogous ones, several times in this paper.

The difference A−B has a different type than A and B, which are of type N. But it has the same type as ∘C for C: N. We might try to rename A−B as such a ∘C, which would give a unique C by (ext), but this is not feasible, because we can't expect that such a C exists. We arrive at a solution for this with a convex dilation terms calculus in Section 8. A more general solution will be presented in a future article. Until then, the difference has some interesting properties.
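In the one-dimensional vector-space model the difference of Definition 3.1 can be computed explicitly. The following sketch (our names and sample values) shows that there it behaves exactly as the dilation of coefficient a−b; in the general non-commutative setting this is precisely what cannot be expected:

```python
# Sketch (one-dimensional model, our names): the difference a - b of
# Definition 3.1, written with the dilation e, x ↦ e + a (x - e).

def dil(a, e, x):
    return e + a * (x - e)

def diff(a, b, e, x):   # (a - b) e x = ∘b (aex) (∘(a⁻¹) (aex) e)
    y = dil(a, e, x)
    return dil(b, y, dil(1 / a, y, e))

# In this commutative model the difference acts exactly as the dilation
# of coefficient a - b:
e, x, a, b = 1.0, 3.0, 0.7, 0.2
assert abs(diff(a, b, e, x) - dil(a - b, e, x)) < 1e-12
```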

###### Proposition 3.2

For any A: N we have A−A=¯0. If there exists B: N such that ∘B=¯0 then ¯0=¯1. If the collection of edge variables has more than one element, this is impossible.

### Proof.

Via (R2)

 A−A=λe:E.λx:E.(∘A(Aex)(∙A(Aex)e))=λe:E.λx:E.e=¯0

Suppose that there is B: N such that ∘B=¯0. Then by (R2)

 λe:E.λx:E.(Be((∗B)ex))=∘1=λe:E.λx:E.x

By the previous reduction

 λe:E.λx:E.(Be((∗B)ex))=λe:E.λx:E.e

therefore

 ¯0=λe:E.λx:E.e=λe:E.λx:E.x=¯1

If this is true then for any e, x: E we have: e=¯0ex=¯1ex=x.

By using the difference combinators (9) we can chain several differences: if and then is equal to .

###### Theorem 3.3

(−¯0) is functionally equivalent with the dilation constant:

 (−¯0)=λa:N.(∘a) (10)

(−∘1) is the approximate inverse combinator ι from (17), Definition 4.1:

 (−∘1)=ι=λa:N.λe:E.λx:E.((∗a)(aex)e) (11)

The combinator C, which switches the two edge arguments of a dilation, can be obtained from:

 λa:N.λe:E.λx:E.((−∘a)1ex)=C=λa:N.λe:E.λx:E.(∘axe) (12)

The reduction (R2) is equivalent with: for any A, B: N

 (−(A−B))A=A−(A−B)=B (13)

### Proof.

The proofs of (10), (11), (12) are given in the associated figures. For the last part, in the following figure we prove that (13) is true from (R2).

In the opposite direction, notice that we can still use the first two equalities of the previous figure, which use only (9) and (7) from Definition 3.1. We obtain:

 (−(A−B))A=λe:E.λx:E.(B((A−A)ex)((∗A)((A−A)ex)(Aex))) (14)

Let's use (13) and (14) for B=¯0. We obtain: for any A: N

 ¯0=(−(A−¯0))A=λe:E.λx:E.(¯0((A−A)ex)((∗A)((A−A)ex)(Aex)))=A−A

This is the first statement from Proposition 3.2: A−A=¯0 for any A: N. We use this, (14) and (13) for B=∘1. We obtain: for any A: N

 ∘1=(−(A−∘1))A=λe:E.λx:E.((∘1)((A−A)ex)((∗A)((A−A)ex)(Aex)))=
 =λe:E.λx:E.((∗A)e(Aex))

Take now ∗A instead of A in the previous equality. Apply to e, x, use (in) and obtain (R2):

 ∘Ae(∙Aex)=x

which ends the proof of the last statement.
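The statements of Theorem 3.3 can be checked numerically in the one-dimensional vector-space model. A sketch with our own helper names, where `minus(B)` transcribes the difference combinator (9):

```python
# Sketch: the statements (10)-(13) of Theorem 3.3 in the one-dimensional
# model; minus(B) transcribes the difference combinator (9), and the names
# and sample values are ours.

def dil(a, e, x):
    return e + a * (x - e)

def minus(B):           # (-B) a = λe.λx.(B (aex) (∘(a⁻¹) (aex) e))
    def at(a):
        def term(e, x):
            y = dil(a, e, x)
            return B(y, dil(1 / a, y, e))
        return term
    return at

zero_bar = lambda e, x: e
e, x, a, b = 1.0, 5.0, 0.6, 0.25
# (10): (-¯0) a behaves as the dilation ∘a
assert abs(minus(zero_bar)(a)(e, x) - dil(a, e, x)) < 1e-9
# (11): (-∘1) a behaves as ι a, here the dilation of coefficient a - 1
assert abs(minus(lambda u, v: v)(a)(e, x) - dil(a - 1, e, x)) < 1e-9
# (13): A - (A - B) = B, with A - B built via the difference combinator
a_minus_b = minus(lambda u, v: dil(b, u, v))(a)   # behaves as ∘(a - b)
assert abs(minus(a_minus_b)(a)(e, x) - dil(b, e, x)) < 1e-9
```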

## 4 Approximate operations terms

We introduce some new combinators: approximate sum, approximate difference, approximate inverse. The names come from dilation structures, Definition 11 [5], where they play an important role.

###### Definition 4.1

The asum (approximate sum), adif (approximate difference) and ainv (approximate inverse) combinators are:

1. asum or approximate sum:

 Σ=λa:N.λe:E.λx:E.λy:E.((∗a)e(a(aex)y)) (15)

2. adif or approximate difference:

 Δ=λa:N.λe:E.λx:E.λy:E.((∗a)(aex)(aey)) (16)

3. ainv or approximate inverse:

 ι=λa:N.λe:E.λx:E.((∗a)(aex)e) (17)

They satisfy a useful list of properties. In the following proposition we collect them and also we indicate the places where they appear in the formalism of dilation structures for the first time. The proofs are given in the associated figures.

###### Proposition 4.2
1. Σ and Δ are, in a sense, one the inverse of the other (Section 4.2, Proposition 3 [5]):

 λa:N.λe:E.λx:E.λy:E.(Σaex(Δaexy))=λa:N.λe:E.λx:E.λy:E.y
 λa:N.λe:E.λx:E.λy:E.(Δaex(Σaexy))=λa:N.λe:E.λx:E.λy:E.y

2. The approximate difference can be computed from the approximate sum, approximate inverse and the dilation constant (Section 4.2, Proposition 4 [5]):

 λa:N.λe:E.λx:E.λy:E.(Σa(∘aex)(ιaex)y)=Δ

3. The approximate sum is approximately associative (Section 4.2, Proposition 5 [5]):

 λa:N.λe:E.λx:E.λy:E.λz:E.(Σae(Σaexy)z)=
 =λa:N.λe:E.λx:E.λy:E.λz:E.(Σaex(Σa(∘aex)yz))

4. The approximate inverse is approximately its own inverse (Section 4.2, Proposition 5 [5]):

 λa:N.λe:E.λx:E.(ιa(∘aex)(ιaex))=λa:N.λe:E.λx:E.x

5. The approximate sum has neutral elements (from the proof of Theorem 10, Section 6 [5]):

 λa:N.λe:E.λx:E.(Σaeex)=λa:N.λe:E.λx:E.x
 λa:N.λe:E.λx:E.(Σaex(∘aex))=λa:N.λe:E.λx:E.x

6. The approximate sum is approximately distributive with respect to dilations:

 λb:N.λa:N.λe:E.λx:E.λy:E.(be(Σ(⋅ab)exy))=
 =λb:N.λa:N.λe:E.λx:E.λy:E.(Σae(bex)(b(⋅ab)exy))

7. The approximate inverse approximately commutes with dilations:

 λb:N.λa:N.λe:E.λx:E.(∘b(∘(⋅ab)ex)(ι(⋅ab)ex))=
 =λb:N.λa:N.λe:E.λx:E.(ιae(∘bex))
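In the one-dimensional vector-space model the combinators (15)-(17) become short functions, and several of the properties above can be checked numerically. A sketch, with our names and arbitrary sample values:

```python
# Sketch: the approximate operations of Definition 4.1 in the one-dimensional
# model, with numerical checks of properties (a), (d), (e); names are ours.

def dil(a, e, x):
    return e + a * (x - e)

def asum(a, e, x, y):   # Σ a e x y = ∘(a⁻¹) e (∘a (∘a e x) y)
    return dil(1 / a, e, dil(a, dil(a, e, x), y))

def adif(a, e, x, y):   # Δ a e x y = ∘(a⁻¹) (∘a e x) (∘a e y)
    return dil(1 / a, dil(a, e, x), dil(a, e, y))

def ainv(a, e, x):      # ι a e x = ∘(a⁻¹) (∘a e x) e
    return dil(1 / a, dil(a, e, x), e)

a, e, x, y = 0.3, 1.0, 4.0, 7.0
# (a): Σ and Δ are one the inverse of the other
assert abs(asum(a, e, x, adif(a, e, x, y)) - y) < 1e-9
assert abs(adif(a, e, x, asum(a, e, x, y)) - y) < 1e-9
# (d): ι is approximately its own inverse
assert abs(ainv(a, dil(a, e, x), ainv(a, e, x)) - x) < 1e-9
# (e): neutral elements for the approximate sum
assert abs(asum(a, e, e, x) - x) < 1e-9
assert abs(asum(a, e, x, dil(a, e, x)) - x) < 1e-9
```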

## 5 Finite terms. Emergent terms

###### Definition 5.1

Finite terms are those dilation terms which are generated from:

 var. x:E∣ var. a:N∣1∣
 ∘A,ΣA,ΔA,ιA for A:N∣⋅AB for A,B:N∣
 AB for A:T→T′ and B:T∣λx:E.A∣λa:N.A

We shall extend the class of finite terms to the class of emergent terms, together with their reductions, by enlarging the class of terms with a constant 0.

###### Definition 5.2

We introduce the extended node type ¯N by: A: ¯N if A: N or A=0. We introduce new terms and constants:

1. the constant 0: ¯N,

2. the multiplication ⋅ is extended from N to ¯N by: ⋅0A=⋅A0=0 for any A: ¯N,

3. the dilation ∘ is extended to ¯N by ∘0=¯0, so that ∘A is defined if A: N or A=0,

4. we define ΣA for A: ¯N as the application of the combinator Σ from (15) if A: N, else Σ0 is a new constant ¯Σ,

5. we define ΔA for A: ¯N as the application of the combinator Δ from (16) if A: N, else Δ0 is a new constant ¯Δ,

6. we define ιA for A: ¯N as the application of the combinator ι from (17) if A: N, else ι0 is a new constant ¯ι.

The emergent terms are defined as those terms

 var. x:E∣ var. a:¯N∣0,1∣
 ∘A,ΣA,ΔA,ιA for A:¯N∣⋅AB for A,B:¯N∣
 AB for A:T→T′ and B:T∣λx:E.A∣λa:¯N.A

for which the extension function is well defined.

The extension function is defined recursively from finite terms to emergent terms, as:

1. for any variable x: E, the extension of x is x,

2. for any variable a: N, the extension of a is a, now seen as a variable of type ¯N,

3. the extension of 1 is 1,

4. for any A: N, the extensions of ∘A, ΣA, ΔA, ιA are ∘, Σ, Δ, ι applied to the extension of A,

5. for any A, B: N, the extension of ⋅AB is ⋅ applied to the extensions of A and B,

6. the extension of an application AB is the application of the extensions,

7. the extension of an abstraction λx:E.A or λa:N.A is the abstraction (with a: ¯N) of the extension of A.

We saw in Proposition 3.2 that if the class of edge variables contains more than one element then there is no B: N such that ∘B=¯0, therefore ¯N is truly an extension of the type N.
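In the one-dimensional vector-space model the extension at 0 can be pictured as the a→0 limit of the approximate operations. The sketch below (an illustration of the model only, not of the formal calculus; names are ours) compares the approximate operations at a small parameter with their limits, which here are the affine addition and inversion with base point e:

```python
# Sketch (model only): the approximate operations converge, as a -> 0, to
# "emergent" operations, which in the one-dimensional model are the affine
# addition and inversion with base point e. Names are ours.

def dil(a, e, x):
    return e + a * (x - e)

def asum(a, e, x, y):
    return dil(1 / a, e, dil(a, dil(a, e, x), y))

def ainv(a, e, x):
    return dil(1 / a, dil(a, e, x), e)

def oplus(e, x, y):     # x ⊕_e y: the a -> 0 limit of Σ a e x y in the model
    return x + y - e

def ominus(e, x):       # ⊖_e x: the a -> 0 limit of ι a e x in the model
    return 2 * e - x

e, x, y = 1.0, 4.0, 7.0
assert abs(asum(1e-8, e, x, y) - oplus(e, x, y)) < 1e-6
assert abs(ainv(1e-8, e, x) - ominus(e, x)) < 1e-6
# group laws at base point e (compare Theorem 6.2, in the model):
assert oplus(e, e, x) == x and oplus(e, x, e) == x
assert oplus(e, x, ominus(e, x)) == e
```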

###### Definition 5.3

The emergent reductions extend the equality of finite terms to an equality of emergent terms, via the axiom:

1. (em) for any finite terms A, B: if A=B as dilation terms then their extensions are equal as emergent terms

## 6 Infinitesimal operations

Proposition 4.2 gives lots of emergent reductions.

###### Definition 6.1

On the collection of emergent terms we define the operations:

1. U⊕XV=¯ΣXUV, the addition of U and V relative to X

2. ⊖XU=¯ιXU, the inverse of U relative to X

3. for any A: N and any X, U, the dilation ∘AXU of U, of coefficient A, relative to X.

###### Theorem 6.2

For any X the class of emergent terms of type E is a group with the operation ⊕X, the inverse function ⊖X and the neutral element X.

For any element X the function which maps A: N to the transformation U ↦ ∘AXU is a group morphism, and moreover an action of the group of terms of type N from Proposition 2.3 on the group of emergent terms of type E.

### Proof.

We use Proposition 4.2. Indeed, both terms from the equality (c) are finite terms, therefore by (em) their extensions are equal.

 λa:¯N.λe:E.λx:E.λy:E.λz:E.(Σae(Σaexy)z)=
 =λa:¯N.λe:E.λx:E.λy:E.λz:E.(Σaex(Σa(∘aex)yz))

Let's apply the term from the left to 0. We obtain:

 (λa:¯N.λe:E.λx:E.λy:E.λz:E.(Σae(Σaexy)z))(0)=
 =λe:E.λx:E.λy:E.λz:E.(¯Σe(¯Σexy)z)

We apply further the emergent terms X, U, V, W and we use Definition 6.1

 (λe:E.λx:E.λy:E.λz:E.(¯Σe(¯Σexy)z))XUVW=(U⊕XV)⊕XW

The same procedure for the term from the right gives:

 (λa:¯N.λe:E.λx:E.λy:E.λz:E.(Σaex(Σa(∘aex)yz)))(0)=
 =λe:E.λx:E.λy:E.λz:E.(¯Σex(¯Σ(0ex)yz))=
 =λe:E.λx:E.λy:E.λz:E.(¯Σex(¯Σeyz))

We apply now the emergent terms X, U, V, W

 (λe:E.λx:E.λy:E.λz:E.(¯Σex(¯Σeyz)))XUVW=U⊕X(V⊕XW)

We obtained therefore the associativity of the operation ⊕X:

 (U⊕XV)⊕XW=U⊕X(V⊕XW)

For the fact that X is the neutral element we use the equalities (e) from Proposition 4.2. Again, we see there only finite terms. We use (em) to obtain equalities of the extensions

 λa:¯N.λe:E.λx:E.(Σaeex)=λa:¯N.λe:E.λx:E.x
 λa:¯N.λe:E.λx:E.(Σaex(∘aex))=λa:¯N.λe:E.λx:E.x

We apply 0 to the first equality

 (λa:¯N.λe:E.λx:E.(Σaeex))(0)=λe:E.λx:E.(¯Σeex)
 (λa:¯N.λe:E.λx:E.x)(0)=λe:E.λx:E.x

therefore

 λe:E.λx:E.(¯Σeex)=λe:E.λx:E.x

We apply X, U and we obtain, after we use Definition 6.1

 X⊕XU=U

Same treatment for the second equality:

 λe:E.λx:E.(¯Σex(0ex))=λe:E.λx:E.(¯Σexe)=λe:E.λx:E.x

We apply X, U and we obtain

 U⊕XX=U

From Proposition 4.2 (d) we use (em) and we apply 0 to obtain:

 (λa:¯N.λe:E.λx:E.(ιa(∘aex)(ιaex)))(0)=
 =λe:E.λx:E.(¯ι(0ex)(¯ιex))=λe:E.λx:E.(¯ιe(¯ιex))=
 =λe:E.λx:E.(⊖e(⊖ex))=λe:E.λx:E.x

which leads us in the same way to: for any X, U

 ⊖X(⊖XU)=U

Proposition 4.2 (a) gives, by using (em), then by application of 0, then of X, U, V, the following:

 U⊕X(¯ΔXUV)=V=¯ΔXU(U⊕XV) (18)

We look now at Proposition 4.2 (b). The left hand side term is finite, but the right hand side term is not finite. It is nevertheless equal, via reductions of dilation terms, to a finite term. So we can use (em), then apply 0 and we obtain:

 λe:E.λx:E.λy:E.(¯Σ(0ex)(¯ιex)y)=
 =λe:E.λx:E.λy:E.(¯Σe(¯ιex)y)=λe:E.λx:E.λy:E.(¯Δexy)

We apply X, U, V and we obtain

 ¯ΔXUV=(⊖XU)⊕XV (19)

From the right side equality of (18), along with (19) for V=U⊕XX, and the fact that X is a neutral element, we get:

 X=¯ΔXU(U⊕XX)=(⊖XU)⊕X(U⊕XX)=(⊖XU)⊕XU

therefore ⊖XU is an inverse at left of U. We use the equality from the left of (18), (19) for V=X, and the fact that X is a neutral element:

 X=U⊕X(¯ΔXUX)=U⊕X((⊖XU)⊕XX)=U⊕X(⊖XU)

which shows that ⊖XU is an inverse at right of U. All in all we proved the fact that ⊕X is a group operation, with inverse ⊖X and neutral element X.

For the morphism property we use Proposition 4.2 (f). We first apply a term of type N, then we can "pass to the limit" by using (em), then we apply X, U, V: