1. Introduction
1.1. Context: quantitative semantics and Taylor expansion
Linear logic takes its roots in the denotational semantics of the λ-calculus: it is often presented, by Girard himself [Girard87], as the result of a careful investigation of the model of coherence spaces. Since its early days, linear logic has thus generated a rich ecosystem of denotational models, among which we distinguish the family of quantitative semantics. Indeed, the first ideas behind linear logic were exposed even before coherence spaces, in the model of normal functors [Girard88], in which Girard proposed to consider analyticity, instead of mere continuity, as the key property of the interpretation of terms: in this setting, terms denote power series, representing analytic maps between modules.
This quantitative interpretation reflects precise operational properties of programs: the degree of a monomial in a power series is closely related to the number of times a function uses its argument. Following this framework, various models were considered — among which we shall include the multiset relational model as a degenerate, boolean-valued instance. These models made it possible to represent and characterize quantitative properties such as execution time [deCarvalho09], including best- and worst-case analysis for nondeterministic programs [LairdMMP13], or the probability of reaching a value [DE11]. It is notable that this whole approach gained momentum in the early 2000s, after the introduction by Ehrhard of models [Ehrhard02, Ehrhard05] in which the notion of analytic maps interpreting terms took its usual sense, while Girard's original model involved set-valued formal power series. Indeed, the keystone in the success of this line of work is an analogue of the Taylor expansion formula, which can be established both for λ-terms and for linear logic proofs. Mimicking this denotational structure, Ehrhard and Regnier introduced the differential λ-calculus [ER03] and differential linear logic [ER05], which allow one to formulate a syntactic version of Taylor expansion: to a λ-term (resp. to a linear logic proof), we associate an infinite linear combination of approximants [ER08, Ehrhard16]. In particular, the dynamics (i.e. reduction or cut elimination) of those systems is dictated by the identities of quantitative semantics. In turn, Taylor expansion has become a useful device to design and study new models of linear logic, in which morphisms admit a matrix representation: the Taylor expansion formula allows one to describe the interpretation of promotion — the operation by which a linear resource becomes freely duplicable — in an explicit, systematic manner. It is in fact possible to show that any model of differential linear logic without promotion gives rise to a model of full linear logic in this way [deCarvalho07]: in some sense, one can simulate cut elimination through Taylor expansion.
1.2. Motivation: reduction in Taylor expansion
There is a difficulty, however: Taylor expansion generates infinite sums and, a priori, there is no guarantee that the coefficients in these sums will remain finite under reduction. In previous works [deCarvalho07, LairdMMP13], it was thus required for coefficients to be taken in a complete semiring: all sums should converge. In order to illustrate this requirement, let us first consider the case of the λ-calculus.
The linear fragment of the differential λ-calculus, called resource calculus, is the target of the syntactic Taylor expansion of λ-terms. In this calculus, the application of a term to another is replaced with a multilinear variant: $\langle s\rangle\,[t_1,\dots,t_n]$ denotes the linear symmetric application of the resource term $s$ to the multiset of resource terms $[t_1,\dots,t_n]$. Then, if $x_1,\dots,x_k$ denote the occurrences of $x$ in $s$, the redex $\langle\lambda x.s\rangle\,[t_1,\dots,t_n]$ reduces to the sum $\sum_{\sigma} s[t_{\sigma(1)}/x_1,\dots,t_{\sigma(k)}/x_k]$: here $\sigma$ ranges over all bijections $\{1,\dots,n\}\to\{1,\dots,k\}$ so this sum is zero if $n\neq k$. As sums are generated by reduction, it should be noted that all the syntactic constructs are linear, both in the sense that they commute with sums, and in the sense that, in the elimination of a redex, no subterm is copied nor erased. The key case of Taylor expansion is that of application:
(1) $\mathcal{T}(s\,t) \;=\; \sum_{n\in\mathbb{N}} \frac{1}{n!}\,\langle \mathcal{T}(s)\rangle\,\mathcal{T}(t)^n$

where $\mathcal{T}(t)^n$ is the multiset made of $n$ copies of $\mathcal{T}(t)$ — by linearity, $\mathcal{T}(t)^n$ is itself an infinite linear combination of multisets of resource terms appearing in $\mathcal{T}(t)$. Admitting that $\langle \mathcal{T}(s)\rangle\,\mathcal{T}(t)^n$ represents the $n$th derivative of $s$, computed at $0$, and linearly applied to $t$, …, $t$, one immediately recognizes the usual Taylor expansion formula.
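For a concrete instance (our own worked example, spelled out from (1) and the reduction rule of the resource calculus, not taken from the original text), consider the identity applied to a variable:

```latex
\mathcal{T}((\lambda x.x)\,y)
  \;=\; \sum_{n\in\mathbb{N}} \frac{1}{n!}\,
        \bigl\langle \lambda x.x \bigr\rangle\,[\underbrace{y,\dots,y}_{n}]
```

Since $\lambda x.x$ uses $x$ exactly once, every summand with $n \neq 1$ reduces to the empty sum $0$, while the summand for $n = 1$ reduces to $y$ with coefficient $1$: the expansion normalizes to $y$, which is the Taylor expansion of the normal form of $(\lambda x.x)\,y$.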
From (1), it is immediately clear that, to simulate one reduction step occurring in $s\,t$, it is necessary to reduce in parallel an unbounded number of subterms in each component of the expansion. Unrestricted parallel reduction, however, is ill defined in this setting. Consider the sum $\sum_{n\in\mathbb{N}} s_n$ where each summand $s_n$ consists of $n$ successive linear applications of the identity to the variable $y$: then by simultaneous reduction of all redexes in each component, each summand yields $y$, so the result should be $\sum_{n\in\mathbb{N}} y$, which is not defined unless the semiring of coefficients is complete in some sense.
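To make the divergence concrete, here is a standalone sketch (our own tuple encoding of resource terms, restricted to the one-element bags needed for this family; none of it is part of the paper's formal development) checking that every member of the family reduces to the same variable:

```python
# Resource terms: ('var', x) | ('lam', x, body) | ('app', f, [args])
# We only treat redexes <lam x.s>[t] with a one-element bag whose
# abstraction uses x exactly once: this covers the identity chains above.

def substitute(t, x, u):
    """Replace the unique occurrence of variable x in t by u (linearity)."""
    if t[0] == 'var':
        return u if t[1] == x else t
    if t[0] == 'lam':
        return ('lam', t[1], substitute(t[2], x, u))
    return ('app', substitute(t[1], x, u),
            [substitute(a, x, u) for a in t[2]])

def reduce_all(t):
    """Simultaneously reduce everywhere, arguments first."""
    if t[0] == 'var':
        return t
    if t[0] == 'lam':
        return ('lam', t[1], reduce_all(t[2]))
    f = reduce_all(t[1])
    args = [reduce_all(a) for a in t[2]]
    if f[0] == 'lam' and len(args) == 1:
        return substitute(f[2], f[1], args[0])
    return ('app', f, args)

identity = ('lam', 'x', ('var', 'x'))

def chain(n):
    """n nested linear applications of the identity to the variable y."""
    t = ('var', 'y')
    for _ in range(n):
        t = ('app', identity, [t])
    return t

# Every member of the infinite family collapses onto the same term y,
# so summing all the reducts would require an infinite coefficient on y.
assert all(reduce_all(chain(n)) == ('var', 'y') for n in range(6))
```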
Those considerations apply to linear logic as well as to the λ-calculus. We will use proof nets [Girard87] as the syntax for proofs of multiplicative exponential linear logic (MELL). The target of Taylor expansion is then the promotion-free differential nets [ER05], which we call resource nets in the following, by analogy with the resource calculus: these form the multilinear fragment of differential linear logic.
In linear logic, Taylor expansion consists in replacing duplicable subnets, embodied by promotion boxes, with explicit copies, as in Fig. 1: if we take $n$ copies of the box, the main port of the box is replaced with an $n$-ary link, while the links at the border of the box collect all copies of the corresponding auxiliary ports. Again, to follow a single cut elimination step in the original net, it is necessary to reduce an arbitrary number of copies. And unrestricted parallel cut elimination in an infinite sum of resource nets is broken, as one can easily construct an infinite family of nets, all reducing to the same resource net in a single step of parallel cut elimination: see Fig. 2.
1.3. Our approach: taming the combinatorial explosion of antireduction
The problem of convergence of series of linear approximants under reduction was first tackled by Ehrhard and Regnier, for the normalization of the Taylor expansion of ordinary λ-terms [ER08]. Their argument relies on a uniformity property, specific to the pure λ-calculus: the support of the Taylor expansion of a λ-term forms a clique in some fixed coherence space of resource terms. This method cannot be adapted to proof nets: there is no coherence relation on differential nets such that all supports of Taylor expansions are cliques [Tasson09, section V.4.1].
An alternative method to ensure convergence without any uniformity hypothesis was first developed by Ehrhard for typed terms in a λ-calculus extended with linear combinations of terms [Ehrhard10]: there, the presence of sums also forbade the existence of a suitable coherence relation. This method can be generalized to strongly normalizable [PTV16], or even weakly normalizable [Vaux17] terms. One striking feature of this approach is that it concentrates on the support (i.e. the set of terms having nonzero coefficients) of the Taylor expansion. In each case, one shows that, given a normal resource term $t$ and a λ-term $M$, there are finitely many resource terms $s$ such that:

- the coefficient of $s$ in the Taylor expansion of $M$ is nonzero; and

- the coefficient of $t$ in the normal form of $s$ is nonzero.
This allows one to normalize the Taylor expansion: simply normalize each component, then compute the sum, which is componentwise finite.
The second author then remarked that the same could be done for reduction [Vaux17], even without any uniformity, typing or normalizability requirement. Indeed, writing $s \rightsquigarrow t$ if $s$ and $t$ are resource terms such that $t$ appears in the support of a parallel reduct of $s$, the size of $s$ is bounded by a function of the size of $t$ and the height of $s$. So, given that if $s$ appears in the Taylor expansion of a λ-term $M$ then its height is bounded by that of $M$, it follows that, for a fixed resource term $t$, there are finitely many terms $s$ in the support of the expansion such that $s \rightsquigarrow t$: in short, parallel reduction is always well-defined on the Taylor expansion of a λ-term.
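The counting argument can be illustrated on the family of nested identity applications used above. The following standalone sketch (our own tuple encoding; the functions are illustrative, not part of the paper's development) computes sizes and heights of its members: all of them reduce to the single term $y$, but their heights grow strictly, so any bound on the height excludes all but finitely many antecedents of $y$.

```python
# Resource terms: ('var', x) | ('lam', x, body) | ('app', f, [args])

def size(t):
    if t[0] == 'var':
        return 1
    if t[0] == 'lam':
        return 1 + size(t[2])
    return 1 + size(t[1]) + sum(size(a) for a in t[2])

def height(t):
    if t[0] == 'var':
        return 1
    if t[0] == 'lam':
        return 1 + height(t[2])
    return 1 + max([height(t[1])] + [height(a) for a in t[2]])

identity = ('lam', 'x', ('var', 'x'))

def chain(n):
    """n nested linear applications of the identity to y; all reduce to y."""
    t = ('var', 'y')
    for _ in range(n):
        t = ('app', identity, [t])
    return t

# The common reduct y has size 1, but the antecedents chain(n) grow
# strictly in both size and height: a height bound keeps finitely many.
sizes = [size(chain(n)) for n in range(1, 6)]
heights = [height(chain(n)) for n in range(1, 6)]
assert sizes == sorted(sizes) and len(set(sizes)) == 5
assert heights == sorted(heights) and len(set(heights)) == 5
```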
Our purpose in the present paper is to develop a similar technique for MELL proof nets: we show that one can bound the size of a resource net by a function of the size of any of its parallel reducts, and of an additional quantity on the net, yet to be defined. The main challenge is indeed to circumvent the lack of inductive structure in proof nets: in such a graphical syntax, there is no structural notion of height.
We claim that a side condition on switching paths, i.e. paths in the sense of Danos–Regnier's correctness criterion [DR89], is an appropriate replacement. This claim is first backed by some intuitions:

- the main culprits for the unbounded loss of size in reduction are the chains of consecutive cuts, as in Fig. 2;

- we want the validity of our side condition to be stable under reduction so, rather than chains of cuts, we should consider cuts in switching paths;

- indeed, if a net reduces to another via cut elimination, then the switching paths of the reduct are somehow related with those of the original net;

- and the switching paths of a resource net in the Taylor expansion of a proof net are somehow related with those of that proof net.
In the following, we will establish precise formulations of those last two points: we study the structure of switching paths through cut elimination in Section 4; and we describe the switching paths of the elements of Taylor expansions in Section 7.
In the presence of multiplicative units, or of weakenings and coweakenings (nullary exponential links), we must also take special care of another kind of cuts, which we call evanescent cuts: when a cut between such nullary links is eliminated, it simply vanishes, leaving the rest of the net untouched, as in Fig. 3, which is obviously an obstacle for our purpose. (The treatment of weakenings is indeed the main novelty of the present extended version over our conference paper [CV18].)
In order to deal with nullary links, a well-known trick is to attach each weakening (or nullary link) to another node in the net: switching paths can then follow such jumps, which is useful to characterize exactly those nets that come from proof trees [Girard96, Appendix A.2]. Here we will rely on this structure to control the effect of eliminating evanescent cuts on the size of a net.
Throughout our exposition, we adopt a particular presentation of nets: we consider $n$-ary exponential links rather than separate (co)dereliction and (co)contraction, as this allows us to reduce the dynamics of resource nets to that of multiplicative linear logic (MLL) proof nets. (In other words, we adhere to a version of linear logic proof nets and resource nets which is sometimes called nouvelle syntaxe, although it dates back to Regnier's PhD thesis [Regnier92].) For the linear logic connoisseur, this is already apparent in Fig. 1. See also the discussion in our conclusion (Section 8).
1.4. Outline
In Section 2, we first introduce MLL proof nets formally, in the term-based syntax of Ehrhard [Ehrhard14]. We define the parallel cut elimination relation in this setting, which we decompose into multiplicative reduction, axiom-cut reduction and evanescent reduction. We also present the notion of switching path for this syntax, and introduce the two quantities that will be our main objects of study in the following:

- the maximum number of links that jump to a common target;

- the maximum number of cuts that are visited by any switching path in the net.
Let us mention that typing plays absolutely no role in our approach, so we do not even consider formulas of linear logic in our exposition: we will rely on the geometrical structure of nets only.
We show in Section 3 that, if a net reduces to another by a parallel step of any one of these three reductions, then the size of the former is bounded by a function of its jump degree, the number of cuts visited by its switching paths, and the size of the reduct. In order to be able to iterate this combinatorial argument and obtain a similar result for arbitrary parallel reduction, we must show that, given bounds for those two quantities on a net, we can infer bounds on the corresponding quantities of its reducts: this is the subject of the next two sections.
Section 4 is dedicated to the proof that the number of cuts visited by switching paths in a reduct is bounded by a function of the same quantity in the original net: the main case is multiplicative reduction, as it may create new switching paths in the reduct that we must relate with those in the original net. In this task, we concentrate on the notion of slipknot: a pair of residuals of a cut of the original net occurring in a path of the reduct. Slipknots are essential in understanding how switching paths are structured after cut elimination: this analysis is motivated by a technical requirement of our approach, but it can also be considered as a contribution to the theory of MLL nets per se.
In Section 5, we show that the jump degree of a reduct is bounded by a function of the two quantities on the original net: the critical case here is that of chains of jumps between evanescent cuts.
We leverage all of the above results in Section 6, to generalize them to a parallel reduction eliminating cuts of all three kinds at once, or even to an arbitrary sequence of reductions. In particular, if a net reduces to another by such a sequence, then the size of the former is bounded by a function of the size of the reduct and of the two quantities on the former. Again, as explained above, this result is motivated by the study of quantitative semantics, but it is essentially a theorem about MLL.
We establish the applicability of our approach to the Taylor expansion of MELL proof nets in Section 7: we show that if a resource net occurs in the Taylor expansion of a proof net, then the length of its switching paths is bounded by a function of the size of that proof net, hence so is the number of cuts they visit, and its jump degree is bounded by the size of the proof net as well.
Finally, we discuss the scope of our results in the concluding Section 8.
2. Definitions
We provide here the minimal definitions necessary for us to work with MLL proof nets. As stated before, let us stress the fact that the choice of MLL is not decisive for the development of Sections 2 to 3. The reader can check that we rely on three ingredients only:

- the definition of switching paths;

- the fact that multiplicative reduction amounts to plugging the premises of a connective link bijectively with those of its dual link (in the nullary case, evanescent cuts simply vanish);

- the definition of jumps and how they are affected by cut elimination.
The results of those sections are thus directly applicable to resource nets, thanks to our choice of generalized exponential links: this will be done in Section 7.
2.1. Nets
A proof net is usually presented as a graphical object such as that of Fig. 4. Following Ehrhard [Ehrhard14, Ehrhard16], we will rely on a term syntax for denoting such nets. This is based on a quite standard trichotomy: a proof net can be divided into a top layer of axioms, followed by trees of connectives, down to cuts between the conclusions of some trees.
We will represent the conclusions of axiom rules by variables: the duality between the two conclusions of an axiom rule is given by an involution over the set of variables. Our nets will be finite families of trees and cuts, where trees are inductively generated from variables by the application of the MLL connectives ⊗ and ⅋, of arbitrary arity. A tree thus represents a conclusion of a net, together with the nodes above it, up to axiom conclusions. A cut is then given by the pair of trees whose conclusions it cuts together. In order to distinguish between various occurrences of the nullary connectives ⊗ and ⅋, we will index them with labels taken from sets $A$ and $B$.
Formally, the set of raw trees (denoted by $s$, $t$, etc.) is generated as follows:

$t ::= x \mid \otimes_a \mid ⅋_b \mid \otimes(t_1,\dots,t_n) \mid ⅋(t_1,\dots,t_n)$

where $x$ ranges over the set $V$ of variables, $a$ ranges over $A$, $b$ ranges over $B$, and we require $n \geq 1$. We assume $V$, $A$ and $B$ are pairwise disjoint and all three are denumerably infinite. We will always identify a nullary connective tree $\otimes_a$ or $⅋_b$ with its label $a$ or $b$, so that $V \cup A \cup B$ is just the set of atomic trees. We will generally use letters $x$, $y$, $z$ for variables, $a$ for the elements of $A$, $b$ for the elements of $B$, and $s$, $t$, $u$, $v$ for arbitrary raw trees.
We write $T(t)$ for the set of subtrees of a given raw tree $t$, which is defined inductively in the natural way: if $t \in V \cup A \cup B$, then $T(t) = \{t\}$; if $t = \otimes(t_1,\dots,t_n)$ or $t = ⅋(t_1,\dots,t_n)$ with $n \geq 1$, then $T(t) = \{t\} \cup T(t_1) \cup \dots \cup T(t_n)$. We moreover write $V(t)$ for $T(t) \cap V$, and similarly for $A(t)$ and $B(t)$. A tree is then a raw tree $t$ such that if $\otimes(t_1,\dots,t_n) \in T(t)$ or $⅋(t_1,\dots,t_n) \in T(t)$ then the sets $T(t_i)$ for $1 \leq i \leq n$ are pairwise disjoint: in other words, each atom occurs at most once in $t$. Observe that each subtree then occurs exactly once in a tree $t$.
A cut is an unordered pair $\langle t \mid u\rangle$ of trees such that $T(t) \cap T(u) = \emptyset$, and then we set $T(\langle t \mid u\rangle) = T(t) \cup T(u)$, and similarly for $V$, $A$ and $B$. Note that, in the absence of typing, we do not put any compatibility requirement on cut trees.
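These definitions can be transcribed directly into executable form. The following standalone sketch (in Python; the tuple encoding, the function names, the $n \geq 1$ arity convention and the priming convention for variable duality are our own choices, not part of the paper) implements raw trees, subtrees, the linearity condition defining trees, and cuts:

```python
# Raw trees: ('var', x) | ('one', a) | ('bot', b)
#          | ('tens', (t1, ..., tn)) | ('par', (t1, ..., tn))   with n >= 1

def subtrees(t):
    """The list of subtree occurrences of t."""
    if t[0] in ('var', 'one', 'bot'):
        return [t]
    return [t] + [s for ti in t[1] for s in subtrees(ti)]

def is_tree(t):
    """A raw tree is a tree when each atom occurs at most once in it."""
    atoms = [s for s in subtrees(t) if s[0] in ('var', 'one', 'bot')]
    return len(atoms) == len(set(atoms))

def make_cut(t, u):
    """A cut: an unordered pair of trees with disjoint sets of subtrees."""
    assert is_tree(t) and is_tree(u)
    assert not set(subtrees(t)) & set(subtrees(u))
    return frozenset((t, u))

tensor = ('tens', (('var', 'x'), ('var', 'y')))
par = ('par', (('var', "x'"), ('var', "y'")))   # x', y' dual to x, y
assert is_tree(tensor) and len(make_cut(tensor, par)) == 2
assert not is_tree(('tens', (('var', 'x'), ('var', 'x'))))
```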
Given a set $X$, we denote by $\vec{x}$ any finite family of elements of $X$. In general, we abusively identify $\vec{x}$ with any enumeration of its elements, and write $\vec{x}, \vec{y}$ for the concatenation of families $\vec{x}$ and $\vec{y}$. We may also write, e.g., $x \in \vec{x}$, identifying the family with its support set. Since we only consider families of pairwise distinct elements, such abuse of notation is generally harmless. If $f$ is a function from $X$ to any powerset, we extend it to families in the obvious way, setting $f(\vec{x}) = \bigcup_{x \in \vec{x}} f(x)$. E.g., if $\vec{t}$ is a family of trees or cuts we write $T(\vec{t}\,)$ for the set of all subtrees of its elements.
An MLL bare proof net is a pair $P$ of a finite family of cuts and a finite family of trees, which we identify with the concatenated family, and such that: the subtree sets of the cuts and trees of $P$ are pairwise disjoint; and $V(P)$ is closed under the involution on variables. We write $\mathrm{Cuts}(P)$ for the family of cuts of $P$. For any tree, cut or bare proof net $e$, we define the size $s(e)$ of $e$ as the cardinality of $T(e)$: graphically, $s(P)$ is nothing but the number of wires in $P$.
As announced in our introduction, our nets will be equipped with jumps from nodes to other nodes. An MLL proof net will thus be the data of a bare proof net $P$ and of a jump function $j_P$ on the nullary links of $P$, with values in $T(P)$. We will often identify a proof net with its underlying bare net $P$, and then write $j_P$ for the associated jump function. Fig. 5 presents such a net, whose underlying graphical structure is that of Fig. 4.
We can already introduce the first of our two key quantities: the jump degree $jd(P)$ of a net $P$. We first define the jump degree of any tree $t \in T(P)$, setting $jd_P(t)$ to be the number of links $u$ such that $j_P(u) = t$. We will often write $jd(t)$ instead of $jd_P(t)$ if $P$ is clear from the context. Then we set $jd(P) = \max \{ jd_P(t) \mid t \in T(P) \}$.
2.2. Cut elimination
A reducible cut is a cut $\langle t \mid u\rangle$ such that:

- $t$ or $u$ is a variable (axiom cut);

- or $t \in A$ and $u \in B$ (evanescent cut);

- or we can write $t = \otimes(t_1,\dots,t_n)$ and $u = ⅋(u_1,\dots,u_n)$ (connective cut).
The substitution $t[u/x]$ of a tree $u$ for a variable $x$ in a tree $t$ (or cut, or family of trees and/or cuts) is defined in the usual way, with the additional assumption that $T(t)$ and $T(u)$ are disjoint. By the definition of trees, this substitution is essentially linear: each variable appears at most once in $t$.
There are three basic cut elimination steps defined for bare proof nets, one for each kind of reducible cut:

- the elimination of a connective cut $\langle \otimes(t_1,\dots,t_n) \mid ⅋(u_1,\dots,u_n)\rangle$ yields the family of cuts $\langle t_1 \mid u_1\rangle, \dots, \langle t_n \mid u_n\rangle$; this step is extended to nets by replacing the cut with this new family, the rest of the net being unchanged;

- the elimination of an axiom cut $\langle x \mid u\rangle$ generates a substitution: the cut is removed and $u$ is substituted for $\bar{x}$ in the rest of the net;

- the elimination of an evanescent cut just deletes that cut.

Then we write $P \to P'$ if $P'$ is obtained from $P$ by one of these three steps. Observe that each step strictly decreases the size of the net.
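The three elimination steps can be prototyped on the term representation of bare nets. Below is a minimal standalone sketch (the tuple encoding, the function names, the use of ordered pairs for cuts and the priming convention for variable duality are all our simplifications; jumps are left out), not the paper's formal definition:

```python
# Bare nets as (cuts, trees); trees use a tuple encoding:
#   ('var', x) | ('one', a) | ('bot', b) | ('tens', (..,)) | ('par', (..,))
# A cut is an ordered pair of trees (in the text cuts are unordered).
# Variable duality is the involution x <-> x + "'".

def dual(x):
    return x[:-1] if x.endswith("'") else x + "'"

def kind(c):
    """Classify a reducible cut, or return None."""
    t, u = c
    if {t[0], u[0]} == {'tens', 'par'} and len(t[1]) == len(u[1]):
        return 'connective'
    if {t[0], u[0]} == {'one', 'bot'}:
        return 'evanescent'
    if t[0] == 'var' or u[0] == 'var':
        return 'axiom'
    return None

def subst(t, x, u):
    """Linear substitution of u for the unique occurrence of variable x."""
    if t[0] == 'var':
        return u if t[1] == x else t
    if t[0] in ('one', 'bot'):
        return t
    return (t[0], tuple(subst(ti, x, u) for ti in t[1]))

def eliminate(net, c):
    """Eliminate one reducible cut c of the bare net (cuts, trees)."""
    cuts = [d for d in net[0] if d != c]
    trees = list(net[1])
    t, u = c
    k = kind(c)
    if k == 'connective':
        if t[0] == 'par':
            t, u = u, t                      # let t be the tensor tree
        cuts += list(zip(t[1], u[1]))        # cut the premises pairwise
    elif k == 'axiom':
        if u[0] == 'var':
            t, u = u, t                      # let t be the variable
        x = dual(t[1])                       # substitute u for the dual variable
        cuts = [(subst(a, x, u), subst(b, x, u)) for a, b in cuts]
        trees = [subst(s, x, u) for s in trees]
    # k == 'evanescent': the cut simply vanishes, nothing else to do
    return (cuts, trees)

# A connective cut between unary trees yields one new cut:
c = (('tens', (('var', 'x'),)), ('par', (('var', 'y'),)))
assert eliminate(([c], []), c) == ([(('var', 'x'), ('var', 'y'))], [])
```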
In order to define cut elimination between proof nets (and not bare proof nets only), we need to modify the jump function. Indeed, assume $P \to P'$ is obtained by reducing the cut $c$: most jumps can be kept unchanged, but if $j_P(t) \in T(c)$ for some tree $t$ surviving in $P'$, we need to redefine the jump of $t$, as in general $j_P(t)$ no longer occurs in $P'$. This is done as follows:

- if $c = \langle \otimes(t_1,\dots,t_n) \mid ⅋(u_1,\dots,u_n)\rangle$ then, for all $t$ such that $j_P(t) = \otimes(t_1,\dots,t_n)$ (resp. $⅋(u_1,\dots,u_n)$), we set $j_{P'}(t) = t_1$ (resp. $u_1$); (we arbitrarily redirect the jumps to the first subtree to simplify the presentation, but we could have reset the function to any of the immediate subtrees, non-deterministically; another possibility would be to always redirect to axioms, as is done by Tortora [Tortora00] (Definition 1.3.3), but this would complicate our arguments as the transformation is less local)

- if $c = \langle x \mid u\rangle$ is an axiom cut, then for all $t$ such that $j_P(t) \in \{x, \bar{x}\}$, we set $j_{P'}(t) = u$;

- if $c = \langle a \mid b\rangle$ is an evanescent cut, then for all $t$ such that $j_P(t) \in \{a, b\}$, we set $j_{P'}(t) = j_P(j_P(t))$.
The result of eliminating the connective cut (resp. axiom cut; evanescent cut) of the net of Fig. 5 is depicted in Fig. 6 (resp. Fig. 7; Fig. 8).
We are in fact interested in the simultaneous elimination of any number of reducible cuts, which we describe as follows. We write $P \Rightarrow P'$ if $P'$ is obtained from $P$ by selecting a family of pairwise distinct reducible cuts of $P$ and eliminating all of them, each according to its kind: connective, axiom or evanescent.
It should be clear that $P'$ is then obtained from $P$ by successively eliminating the particular cuts we have selected, performing the corresponding elimination steps in no particular order: indeed, one can check that any two elimination steps of distinct cuts commute on the nose. The resulting jump function can be described directly, by inspecting the possible cases for the jump target $j_P(t)$ of a tree $t$ surviving in $P'$:

- if $j_P(t)$ belongs to an eliminated connective cut and, e.g., $j_P(t) = \otimes(t_1,\dots,t_n)$, then $j_{P'}(t) = t_1$;

- if $j_P(t)$ belongs to an eliminated axiom cut $\langle x \mid u\rangle$, then $j_{P'}(t) = u$;

- if $j_P(t)$ belongs to an eliminated evanescent cut, then $j_{P'}(t) = r(j_P(t))$, where $r$ is the redirection function inductively defined by $r(t') = r(j_P(t'))$ if $t'$ belongs to an eliminated evanescent cut (in which case $j_P(t')$ is itself defined) and $r(t') = t'$ otherwise;

- otherwise $j_{P'}(t) = j_P(t)$.
The result of simultaneously eliminating all the cuts of the net of Fig. 5 is depicted in Fig. 9.
This general description of parallel cut elimination is obviously not very handy. In order not to get lost in notation, we will restrict our attention to the particular case in which only cuts of the same nature are simultaneously eliminated: we thus distinguish parallel multiplicative, axiom-cut and evanescent reduction steps. Then we can decompose any parallel reduction into three such separate steps, first multiplicative, then axiom-cut, then evanescent. (Of course, the converse does not hold: some sequences of such steps cannot be performed as a single parallel reduction, for instance when an eliminated cut was newly created by a previous step.)
2.3. Paths
In order to control the effect of parallel reduction on the size of proof nets, we rely on a side condition involving the number of cuts visited by switching paths, i.e. paths in the sense of Danos–Regnier’s correctness criterion [DR89].
In our setting, a switching of a net $P$ is a partial map $\sigma$ from $T(P)$ to $T(P)$, defined exactly on the ⅋-trees of $P$, such that $\sigma(⅋(t_1,\dots,t_n)) \in \{t_1,\dots,t_n\}$ for each $⅋(t_1,\dots,t_n) \in T(P)$. Given a net $P$ and a switching $\sigma$ of $P$, we define adjacency relations between the elements of $T(P)$, each adjacency being labelled by the wire or jump inducing it, as the least symmetric relations such that:

- for any variable $x \in V(P)$, $x \sim \bar{x}$;

- for any $\otimes(t_1,\dots,t_n) \in T(P)$, $t_i \sim \otimes(t_1,\dots,t_n)$ for each $i$;

- for any ⅋-tree $t \in T(P)$, $\sigma(t) \sim t$;

- for any cut $\langle t \mid u\rangle$ of $P$, $t \sim u$;

- for any $t$ in the domain of $j_P$, $t \sim j_P(t)$.
Whenever necessary, we may write, e.g., $t \sim_{P,\sigma} u$ to make the underlying net and switching explicit. Adjacency labels allow us to distinguish the various edges that may connect the same pair of trees.
Given a switching $\sigma$ in $P$, a $\sigma$-path is a sequence of trees $t_0, \dots, t_n$ of $T(P)$ such that there exists a sequence of pairwise distinct adjacency labels $l_1, \dots, l_n$ with, for each $i$, $t_{i-1} \sim_{l_i} t_i$. (In standard terminology of graph theory, a $\sigma$-path in $P$ is a trail in the unoriented graph with vertices in $T(P)$ and edges given by the adjacency relations defined by $\sigma$. The only purpose of our choice of labels for adjacency relations is indeed to capture this notion of path in the unoriented graph of subtrees induced by a switching in a net.) An example is depicted in Fig. 10.
We call a path in $P$ any $\sigma$-path for a switching $\sigma$ of $P$, and we write $\mathrm{Paths}(P)$ for the set of all paths in $P$. We write $t \sim^* u$ whenever there exists a path from $t$ to $u$ in $P$. Given a path $p$, we call subpaths of $p$ the subsequences of consecutive elements of $p$: a subpath is either the empty sequence or a path of $P$. We moreover write $\bar{p}$ for the reverse path: if $p = t_0, \dots, t_n$ then $\bar{p} = t_n, \dots, t_0$. We say a net $P$ is acyclic if, for every switching $\sigma$ and every $\sigma$-path $p$, each tree occurs at most once in $p$: in other words, there is no cycle. From now on, we consider acyclic nets only. It is a very standard result that acyclicity is preserved by cut elimination:
If $P'$ is obtained from $P$ by cut elimination and $P$ is acyclic then so is $P'$.
Proof.
It suffices to check that if $P \to P'$ then any cycle in $P'$ induces a cycle in $P$. This follows from the fact that, given a reduction $P \to P'$ and a switching $\sigma'$ of $P'$, one can define a lifting function and a switching $\sigma$ of $P$ so that, for each $\sigma'$-path in $P'$ there is a corresponding $\sigma$-path in $P$. (We do not detail the proof as it is quite standard. We will moreover rely on the same technique in the study of the structure of switching paths under parallel cut elimination in the next section.) ∎
If , we may write for either or : by the definition of paths, this notation is unambiguous, unless .
For any path $p$, we write $cc_P(p)$, or simply $cc(p)$, for the number of cuts visited by $p$: a cut $\langle t \mid u\rangle$ is visited by $p$ when $t$ and $u$ occur consecutively in $p$, in either order (recall that cuts are unordered). Observe that, by acyclicity, a path visits each cut at most once. Finally, we write $cc(P)$ for the maximum of $cc(p)$ over all paths $p$ of $P$.
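To make these definitions concrete, here is a naive standalone prototype (our own tuple encoding and function names; jumps and adjacency labels are omitted, cuts are ordered pairs, variable duality is the priming convention x ↔ x', and we exploit the fact that in an acyclic net the switching graph is a forest, so plain BFS finds the unique path between two trees):

```python
from collections import defaultdict, deque
from itertools import product

# Trees: ('var', x) | ('one', a) | ('bot', b) | ('tens', (..,)) | ('par', (..,))
# A bare net is (cuts, trees).

def subtrees(t):
    if t[0] in ('var', 'one', 'bot'):
        return [t]
    return [t] + [s for ti in t[1] for s in subtrees(ti)]

def all_trees(net):
    return [t for c in net[0] for s in c for t in subtrees(s)] + \
           [t for s in net[1] for t in subtrees(s)]

def switchings(net):
    """Enumerate switchings: one premise index per par tree of the net."""
    pars = [t for t in all_trees(net) if t[0] == 'par']
    for choice in product(*[range(len(p[1])) for p in pars]):
        yield dict(zip(pars, choice))

def edges(net, sw):
    """Adjacency edges for a switching; True marks edges crossing a cut."""
    g = defaultdict(list)
    def add(a, b, crosses_cut):
        g[a].append((b, crosses_cut))
        g[b].append((a, crosses_cut))
    ts = all_trees(net)
    for t in ts:
        if t[0] == 'tens':
            for ti in t[1]:
                add(t, ti, False)            # every premise of a tensor
        elif t[0] == 'par':
            add(t, t[1][sw[t]], False)       # only the selected premise
    var_of = {t[1]: t for t in ts if t[0] == 'var'}
    for x, t in var_of.items():
        d = x[:-1] if x.endswith("'") else x + "'"
        if d in var_of and x < d:
            add(t, var_of[d], False)         # axiom wire x ~ x'
    for (a, b) in net[0]:
        add(a, b, True)                      # cut wire
    return g

def max_cuts_on_paths(net):
    """The maximal number of cuts visited by a switching path (cc)."""
    best = 0
    for sw in switchings(net):
        g = edges(net, sw)
        for src in list(g):
            seen = {src: 0}
            queue = deque([src])
            while queue:
                v = queue.popleft()
                for w, crosses_cut in g[v]:
                    if w not in seen:
                        seen[w] = seen[v] + crosses_cut
                        queue.append(w)
            best = max(best, max(seen.values()))
    return best

# A chain of two axiom cuts: x -ax- x' -cut- y -ax- y' -cut- z -ax- z'
net = ([(('var', "x'"), ('var', 'y')), (('var', "y'"), ('var', 'z'))],
       [('var', 'x'), ('var', "z'")])
assert max_cuts_on_paths(net) == 2
```

Such chains are exactly the configurations of Fig. 2 that drive the collapse of size, and the quantity computed here is what bounds them in the next section.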
3. Bounding the size of antireducts: three kinds of cuts
In this section, we show that the loss of size during a parallel reduction step of each of the three kinds is directly controlled by $jd(P)$ and $cc(P)$: more precisely, we show that in each case the ratio $s(P)/s(P')$ is bounded by a function of $jd(P)$ and $cc(P)$.
3.1. Elimination of connective cuts
The elimination of connective cuts cannot decrease the size by more than a half: if $P'$ is obtained from $P$ by the parallel elimination of connective cuts, then $s(P) \leq 2\,s(P')$.
Proof.
It is sufficient to observe that if the elimination of a connective cut $c$ yields the family $\vec{c}$ of cuts, then $s(c) = s(\vec{c}\,) + 2 \leq 2\,s(\vec{c}\,)$, since $s(\vec{c}\,) \geq 2$. (This is due to the fact that we distinguish between strict connectives and their nullary versions, which are subject to evanescent reductions.) ∎
So in this case, the jump degree and the number of cuts visited by switching paths actually play no rôle.
3.2. Elimination of axiom cuts
Observe that:

- the elimination of an axiom cut removes exactly one variable and its dual;

- hence it decreases the size by exactly $2$.

It follows that, in the elimination of a single axiom cut, the loss of size is bounded. But we cannot reproduce the proof of Lemma 3.1 for the parallel elimination of axiom cuts: as stated in our introduction, chains of axiom cuts reducing to a single wire are the source of the collapse of size. We can bound the length of those chains by the number of cuts visited by switching paths, however, and this allows us to bound the loss of size during reduction.
If $P'$ is obtained from $P$ by the parallel elimination of axiom cuts, then $s(P)$ is bounded by a function of $s(P')$ and of the maximal number of cuts visited by a switching path in $P$.
Proof.
Assume and with for . To establish the result in this case, we make the chains of eliminated axiom cuts explicit.
Due to the condition on free variables, there exists a (necessarily unique) permutation of yielding a family of the form such that:

for , we can write ;

each is maximal with this shape, i.e. and, in case is a variable, ;

if , then the cut