There is evidence coming from analysis in metric spaces that the correct algebraic structure of the infinitesimal tangent space is not the commutative one of a vector space, but the more general one of a conical group. Particular examples of conical groups appear in many places: as contractible groups; in the category of Lie groups as Carnot groups, which are models of metric tangent spaces in sub-Riemannian geometry Gromov , Bellaïche , Pansu , or as limits of Cayley graphs of groups of polynomial growth Gromov ; as models of approximate groups Breuillard, Green, Tao ; and, relatedly, in model theory Hrushovski .
By trying to understand how to construct a larger theory which might cover this more general calculus, we arrive at the conclusion that even in the particular case of classical calculus there is too much algebraic structure. In fact, we can show that algebraic structures (like that of a vector space or conical group), linearity and differentiability come from, or emerge from, a much simpler and more general structure, called a dilation structure in metric geometry  or an emergent algebra (a uniform idempotent right quasigroup) in more general situations .
In this article we give a lambda calculus treatment of emergent algebras, as part of a two-step program which we propose to the interested reader: (a) how to generalize the categorical treatment of various subjects from logic so that it applies to categories of conical groups? (b) what is to be learned from the even more general point of view of emergent algebras, which start from very little algebraic structure?
We add an intuitively very natural new axiom (convex) which allows us to construct a field of numbers. The price of (convex) is high, though, because it forces commutativity (of the infinitesimal operations); nevertheless, the whole construction is interesting because it is based on a very small set of primitives. Theorems 6.2, 8.9 and Proposition 8.7 appear as a lambda calculus style version of the Gleason  and Montgomery–Zippin  solution to Hilbert's fifth problem.
In a future article we shall give an alternative to (convex) which does not force commutativity.
1 Dilation terms
We introduce a lambda calculus for dilation terms, described by the following: variables, atomic types, constants, terms, typing rules, reductions.
Variables. Atomic types.
We start with two atomic types:
are variables of type (for edge)
are variables of type (for node)
Lambda calculus notation conventions.
As is customary in lambda calculus, for a chain of applications we use a left associative notation . Also, for types we use a right associative notation, i.e. . For abstraction we indicate the type of the variable, for example , and for a chain of abstractions we use a right associative notation.
There are constant terms:
the inverse dilation
We shall consider only well typed terms according to the rules:
if a term then the type of is
if and then
for any term and we denote ,
for any term and we denote ,
A graphical notation for dilation terms.
We represent terms by their syntactic trees. A syntactic tree is a particular case of an oriented ribbon graph, where we assume that the edges of the syntactic tree are oriented from the leaves to the root and that any node of the syntactic tree has only one output edge and the order of the other edges comes from the clockwise orientation, starting from the output edge.
Whenever we draw syntactic trees, the root will appear at the left of the figure. In this way the orientations of the edges can be deduced from the clockwise order on the page, the position of the root, and the colors or names of the nodes.
We shall use the color red for those decorations (of the half-edges or nodes) which appear as variables in a lambda abstraction.
In the following, will mean the reflexive, symmetric, transitive closure of the relation , where is any of the reductions from the list.
First are the lambda calculus reductions:
if , where denotes one of the types , , then
for any , if then , if then .
A direct consequence of () is: for any we have therefore
Then we have the algebraic reductions.
Here the graphical notation needs both the introduction of a ”termination” decoration and the acceptance of forests of syntactic trees instead of trees, but (for the moment) we choose to simply delete any syntactic tree whose root is decorated with the termination symbol ”T”.
for any and
for any ,
for any and any term ,
for any and any ,
for any ,
The graphical representations of the algebraic rewrites (R1), (R2) and (C) are in combinator form, by using the reduction. We also represented these reductions for terms and we used (in). In this way the two constants , play symmetric roles.
Let’s introduce the terms
From (id) we have
2 Reidemeister moves and idempotent right quasigroups
The reductions (R1), (R2) are related to the Reidemeister moves from knot theory. We see knot diagrams as oriented ribbon graphs made of two kinds of 4-valent nodes. The usual knot diagrams are also planar graphs, but this is a condition which is irrelevant for this exposition, so we ignore it. The Reidemeister moves are indeed graph rewrites which apply to this class of ribbon graphs.
The edges of knot diagrams can be decorated by elements of an algebraic structure called a ”quandle”, in such a way that the Reidemeister rewrites (from knot theory) preserve the decoration. A quandle is a self-distributive idempotent right quasigroup, and the correspondence between the Reidemeister rewrites and the axioms of a quandle is the following: ”self-distributive” = R3, ”idempotent” = R1, ”right quasigroup” = R2. For the moment we concentrate on the Reidemeister moves R1 and R2. The R3 move will appear later as an ”emergent” rewrite.
An idempotent right quasigroup (irq) is a set with two binary operations which satisfy the axioms:
(R1) for any
(R2) for any
A simple example of an irq is given by ,
where is a real vector space and is a fixed parameter. (This example is actually a quandle, meaning that it also satisfies the third axiom R3, of self-distributivity.) There are many other examples of irqs, some of which generalize this simple example to a non-commutative setting.
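Since the formulas above are elided, here is a minimal numeric sketch of the standard Euclidean example, under the assumption that the two operations are x ∘ y = (1 − ε)x + εy and x • y = (1 − ε⁻¹)x + ε⁻¹y (the names `op` and `op_inv` are ours):

```python
from math import isclose

eps = 0.3  # fixed parameter, assumed nonzero

def op(x, y, e=eps):
    # x circ y = (1 - e)*x + e*y  (y dilated towards x)
    return (1 - e) * x + e * y

def op_inv(x, y, e=eps):
    # the inverse operation: x bullet y = (1 - 1/e)*x + (1/e)*y
    return (1 - 1/e) * x + (1/e) * y

x, y, z = 1.0, -3.0, 0.25

# (R1) idempotency: x circ x = x
assert isclose(op(x, x), x)
# (R2) right quasigroup: x circ (x bullet y) = y and x bullet (x circ y) = y
assert isclose(op(x, op_inv(x, y)), y)
assert isclose(op_inv(x, op(x, y)), y)
# R3 (self-distributivity) also holds here, so this irq is in fact a quandle
assert isclose(op(x, op(y, z)), op(op(x, y), op(x, z)))
print("R1, R2, R3 verified")
```

The check runs in one dimension for brevity; the same identities hold coordinatewise in any real vector space.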
We arrive at the notion of a -irq if we consider instead a family of irqs indexed by a parameter , where is a commutative group. See Definition 4.2 , or Definition 5.1 . In Definition 3.3  we started from one irq and defined a -irq.
Let be a commutative group, with the operation denoted multiplicatively and the neutral element denoted by . A -irq is a family of irqs , for any , with the properties:
(a) for any
(b) for any
(c) for any .
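The defining properties (a)-(c) are elided above; a sketch in the Euclidean model, assuming the family is the usual one of dilations δ_ε^x y = x + ε(y − x) indexed by the multiplicative group of nonzero reals, can check the kind of compatibility the definition asks for:

```python
from math import isclose

def dil(e, x, y):
    # delta^x_e y = x + e*(y - x): the operation of the irq indexed by e
    return x + e * (y - x)

x, y = 1.0, 4.0
e, m = 0.2, 0.7

# composing dilations with the same basepoint multiplies the parameters,
# so the family is indexed by a commutative group of scale parameters
assert isclose(dil(e, x, dil(m, x, y)), dil(e * m, x, y))
# the parameter 1 acts as the identity
assert isclose(dil(1.0, x, y), y)
# the inverse operation of the irq at parameter e is the operation at 1/e
assert isclose(dil(1/e, x, dil(e, x, y)), y)
print("Gamma-irq properties verified in the Euclidean model")
```

Which of these three identities corresponds to which of (a), (b), (c) depends on the elided statement, so the matching here is only indicative.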
As concerns dilation terms, we have the following group structure on terms of type .
The terms of type form a commutative group with the multiplication , inverse and neutral element .
The inverse is involutive. Indeed, from (in) . From (ext) we get
For the inverse of , we remark that by (in) and (id). From (ext) we get
From (R2) and (id) we obtain:
From (in) and (act) we continue the string of equalities with
From (ext), then (C) we obtain
In order to prove the associativity of multiplication we compute, from (act), then (), then two () reductions
In the same way we compute:
Therefore which leads to the associativity of multiplication by using (ext).
We use this to give an interpretation of the (R1), (R2) reductions as Reidemeister rewrites.
The terms of type form a -irq with the operations: for any and , define and .
We know from Proposition 2.3 that is a commutative group. The reductions (R1), (R2) imply the points (R1), (R2) from the Definition 2.1 applied for the operations , , for . We have to verify Definition 2.2. The point (a) is the reduction (id), the point (b) is the reduction (in) and the point (c) is the reduction (act).
For any we define their multiplication by:
For any we have
The reduction (R1) is equivalent to: for any
The reduction (act) is equivalent to: for any
(a) For any we have
(b) For any and any we have:
Also, . This proves the equivalence of (5) with (R1).
(c) Indeed, for
The equality (6) is a reformulation of (act).
For , the difference is the combinator:
The difference combinator is:
For any the term is another difference combinator defined by:
so that for we have . We shall also use the notation for any and .
Notice the graphical notation for the difference combinators and . The correct graphical notation is the one from the middle of the last two figures. The ones on the left are only partially correct. For example, in the figure for the difference combinator, the color red indicates that appear in lambda abstractions; however, it does not indicate precisely the order . On the other hand, that notation is more human-friendly, therefore we are going to use it, or analogous ones, several times in this paper.
The difference has a different type than . But it has the same type as . We might try to rename as , which would give a term by (ext), but this is not feasible, because we cannot expect that exists. We arrive at a solution for this with a convex dilation terms calculus in Section 8. A more general solution will be presented in a future article. Until then, the difference has some interesting properties.
For any we have . If there exists such that then . If the collection of edge variables has more than one element, this is impossible.
Suppose that there is such that . Then by (R2)
By the previous reduction
which leads us to
If this is true then for any we have: .
By using the difference combinators (9) we can chain several differences: if and then is equal to .
Take now instead of A in the previous equality. Apply to , use (in) and obtain (R2):
which ends the proof of the last statement.
4 Approximate operations terms
We introduce some new combinators: approximate sum, approximate difference, approximate inverse. The names come from dilation structures, Definition 11 , where they play an important role.
The asum (approximate sum), adif (approximate difference) and ainv (approximate inverse) combinators are:
asum or approximate sum:
adif or approximate difference:
ainv or approximate inverse:
They satisfy a useful list of properties. In the following proposition we collect them, and we also indicate the places where they first appear in the formalism of dilation structures. The proofs are given in the associated figures.
and are, in a sense, one inverse of the other (Section 4.2, Proposition 3 ):
The approximate difference can be computed from the approximate sum, approximate inverse and the dilation constant (Section 4.2, Proposition 4 ):
The approximate sum is approximately associative (Section 4.2, Proposition 5 ):
The approximate inverse is approximately its own inverse (Section 4.2, Proposition 5 ):
The approximate sum has neutral elements (from the proof of Theorem 10, Section 6 ):
The approximate sum is approximately distributive with respect to dilations:
The approximate inverse approximately commutes with dilations:
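In the Euclidean model, where δ_ε^x y = x + ε(y − x), the three combinators can be computed explicitly. The formulas used below (Σ_ε^x(y,z) = δ_{1/ε}^x δ_ε^{δ_ε^x y} z, Δ_ε^x(y,z) = δ_{1/ε}^{δ_ε^x y} δ_ε^x z, inv_ε^x y = δ_{1/ε}^{δ_ε^x y} x) are the ones from dilation structures; that they match the elided definitions above is an assumption of this sketch:

```python
from math import isclose

def dil(e, x, y):
    # delta^x_e y = x + e*(y - x)
    return x + e * (y - x)

def asum(e, x, y, z):
    # approximate sum: Sigma^x_e(y,z) = delta^x_{1/e} delta^{delta^x_e y}_e z
    return dil(1/e, x, dil(e, dil(e, x, y), z))

def adif(e, x, y, z):
    # approximate difference: Delta^x_e(y,z) = delta^{delta^x_e y}_{1/e} delta^x_e z
    return dil(1/e, dil(e, x, y), dil(e, x, z))

def ainv(e, x, y):
    # approximate inverse: inv^x_e y = delta^{delta^x_e y}_{1/e} x
    return dil(1/e, dil(e, x, y), x)

x, y, z = 1.0, 2.0, -1.0
e = 1e-6

# as e -> 0 these converge to the x-translated vector space operations
assert isclose(asum(e, x, y, z), y + z - x, abs_tol=1e-4)
assert isclose(adif(e, x, y, z), z - y + x, abs_tol=1e-4)
assert isclose(ainv(e, x, y), 2*x - y, abs_tol=1e-4)
# neutral element: Sigma^x_e(x, z) = z holds exactly, for every e
assert isclose(asum(e, x, x, z), z)
print("approximate operations verified")
```

The first three checks illustrate why the operations are only "approximate": the identities hold up to terms of order ε, and become exact only in the limit, which is what the emergent terms of the next section capture.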
5 Finite terms. Emergent terms
Finite terms are those dilation terms which are generated from:
We shall extend the class of finite terms to emergent terms and their reductions, by enlarging the class of terms with a constant .
We introduce the extended node type by: if or . We introduce new terms and constants:
extended from by:
extended from by if or
we define for as if , where is the combinator (15), else
we define for as if , where is the combinator (16), else
we define for as if , where is the combinator (17), else
The emergent terms are defined as those terms
for which the extension function is well defined.
The extension function is defined recursively from finite terms to emergent terms, as:
for any , ,
for any , ,
for any , , , ,
for any ,
We saw in Proposition 3.2 that if the class of variables contains more than one element then there is no such that , therefore is truly an extension of the type .
The emergent reductions extend the equality of finite terms to an equality of emergent terms, via the axiom:
for any finite terms , if as dilation terms then
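A numeric illustration of the axiom (em) in the Euclidean model: an equality of finite terms valid at every nonzero scale survives in the extension at scale 0, where the extended dilation takes its limit value δ_0^x y = x (the specific extension formulas below are assumptions of this sketch):

```python
from math import isclose

def dil_ext(e, x, y):
    # extended dilation: at e = 0 we take the limit value, delta^x_0 y = x
    return x if e == 0 else x + e * (y - x)

def asum_ext(e, x, y, z):
    # extension of the approximate sum: at e = 0 it is the emergent (exact)
    # addition relative to x, which in this model is y + z - x
    if e == 0:
        return y + z - x
    return dil_ext(1/e, x, dil_ext(e, dil_ext(e, x, y), z))

x, z = 1.0, -1.0
# the neutral-element equality holds as finite terms for every e != 0 ...
for e in (0.5, 0.1, 0.001):
    assert isclose(asum_ext(e, x, x, z), z)
# ... and, in the spirit of (em), it extends to the emergent term at e = 0
assert isclose(asum_ext(0, x, x, z), z)
print("extension to e = 0 verified")
```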
6 Infinitesimal operations
Proposition 4.2 gives many emergent reductions.
On the collection of emergent terms we define the operations:
, the addition of relative to
, the inverse of relative to
for any and any ,
For any the class of emergent terms of type is a group with the operation , the inverse function and neutral element .
For any element the function which maps to is a group morphism and moreover an action of the group , of terms of type from Proposition 2.3, on the group of emergent terms of type .
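In the Euclidean model, the infinitesimal operations are the ε → 0 limits of the approximate ones, and both claims can be checked numerically; the concrete formulas (addition y + z − x, inverse 2x − y, action δ_m^x y = x + m(y − x)) are the model's instances, not the general terms:

```python
from math import isclose

x = 1.0  # the basepoint

def add(y, z):
    # infinitesimal addition relative to x: the e -> 0 limit of the
    # approximate sum, which in the Euclidean model is y + z - x
    return y + z - x

def inv(y):
    # infinitesimal inverse relative to x: the limit of ainv, i.e. 2x - y
    return 2*x - y

def dil(m, y):
    # action of a scale parameter m: delta^x_m y
    return x + m * (y - x)

y, z, w = 2.0, 0.5, -1.0

assert isclose(add(add(y, z), w), add(y, add(z, w)))    # associativity
assert isclose(add(x, y), y) and isclose(add(y, x), y)  # neutral element x
assert isclose(add(y, inv(y)), x)                       # inverse
# each dil(m, .) is a morphism for add, giving the claimed group action
m = 0.3
assert isclose(dil(m, add(y, z)), add(dil(m, y), dil(m, z)))
print("infinitesimal group and action verified")
```

In this commutative model the group is abelian; in general (e.g. for Carnot groups) the infinitesimal operation is a noncommutative conical group operation.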
We use Proposition 4.2. Indeed, both terms from the equality (c) are finite terms, therefore by (em) their extensions are equal.
Let’s apply to the term from the left. We obtain:
We apply further the emergent terms and we use Definition 6.1
Same procedure, for the term from the right gives:
We apply now the emergent terms
We obtained therefore the associativity of the operation :
For the fact that is the neutral element we use the equalities (e) from Proposition 4.2. Again, we see there only finite terms. We use (em) to obtain equalities of the extensions
We apply to the first equality
We apply and we obtain, after we use Definition 6.1
Same treatment for the second equality:
We apply and we obtain
From Proposition 4.2 (d) we use (em) and we apply to obtain:
which leads us in the same way to: for any
Proposition 4.2 (a) gives, by using (em), then by application of , then , the following:
We look now at Proposition 4.2 (b). The left-hand side term is finite, but the right-hand side term, i.e. , is not finite. It is nevertheless equal, via reductions of dilation terms, to the finite term . So we can use (em), then apply , and we obtain:
We apply , then
which shows that is a right inverse of . Altogether, we have proved that is a group operation, with inverse and neutral element .
For the morphism property we use Proposition 4.2 (f). We first apply , then we can ”pass to the limit” by using (em), then by application of :