# Syntactic Cut-Elimination for Intuitionistic Fuzzy Logic via Linear Nested Sequents

This paper employs the recently introduced linear nested sequent framework to design a new cut-free calculus LNIF for intuitionistic fuzzy logic, the first-order Gödel logic characterized by linear relational frames with constant domains. Linear nested sequents, which are nested sequents restricted to linear structures, prove to be a well-suited proof-theoretic formalism for intuitionistic fuzzy logic. We show that the calculus LNIF possesses highly desirable proof-theoretic properties, such as invertibility of all rules, admissibility of structural rules, and syntactic cut-elimination.


## 1 Introduction

Intuitionistic fuzzy logic (IF) has attracted considerable attention due to its unique nature as a logic blending approximate reasoning and constructive reasoning [1, 2, 21]. The logic, which was initially defined by Takeuti and Titani in [21], has its roots in the work of Kurt Gödel. Gödel introduced extensions of propositional intuitionistic logic (now called "Gödel logics") in order to prove that propositional intuitionistic logic does not possess a finite characteristic matrix [10]. These logics were later studied by Dummett, who extended Gödel's finite-valued semantics to include an infinite number of truth values [6]. Dummett additionally provided an axiomatization for the propositional fragment of IF [6]. The first-order logic IF also admits a finite axiomatization, obtained by extending an axiomatization of first-order intuitionistic logic with the linearity axiom and the quantifier shift axiom (where the quantified variable is not free in the first disjunct) [13].
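For reference, the linearity axiom and the quantifier shift (constant domain) axiom mentioned above are standardly rendered as follows; this is a sketch of the usual formulations rather than the paper's official axiomatization in Fig. 1 (here x must not occur free in A):

```latex
% Linearity and quantifier shift axiom schemes, standard renderings;
% the side condition is that x does not occur free in A.
\[
(\mathrm{Lin})\colon\ (A \supset B) \lor (B \supset A)
\qquad\qquad
(\mathrm{QS})\colon\ \forall x\,(A \lor B) \supset (A \lor \forall x\, B)
\]
```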

Over the last few decades, propositional and first-order Gödel logics (including the prominent logic IF) have been applied in various areas of logic and computer science [1, 2, 4, 12, 15, 16, 24]. For example, Visser [24] applied the propositional fragment of IF while analyzing the provability logic of Heyting arithmetic, Lifschitz et al. [16] employed a Gödel logic to model the strong equivalence of logic programs, and Borgwardt et al. [4] studied standard reasoning problems of first-order Gödel logics in the context of fuzzy description logics. Additionally, and quite significantly, the logic IF has been recognized as one of the fundamental formalizations of fuzzy logic [12].

The question of whether or not a logic possesses an analytic proof calculus, that is, a calculus which stepwise (de)composes the formula to be proved, is of critical importance. Such calculi are effective tools for designing automated reasoning procedures and for proving certain (meta-)logical properties of a logic. For example, analytic calculi have been leveraged to provide decidability procedures for logics [9], to prove that logics interpolate [15], for counter-model extraction [22], and to understand the computational content of proofs [18].

In his seminal work [9], Gentzen proposed the sequent calculus framework for classical and intuitionistic logic, and subsequently, proved his celebrated Hauptsatz (i.e. cut-elimination theorem), which ultimately provided analytic calculi for the two logics. Gentzen’s sequent calculus formalism has become one of the preferred proof-theoretic frameworks for providing analytic calculi, and indeed, many logics of interest have been equipped with such calculi. Nevertheless, one of the alluring features of the formalism—namely, its simplicity—has also proven to be one of the formalism’s drawbacks; there remain many logics for which no cut-free, or analytic, sequent calculus (à la Gentzen) is known [11, 20]. In response to this, the sequent calculus formalism has been extended in various ways over the last 30 years to include additional structure, allowing for numerous logics to be supplied with cut-free, analytic calculi. Some of the most prominent extensions of Gentzen’s formalism include display calculi [3], labelled calculi [19, 23], hypersequent calculi [20], and nested calculi [7, 11].

In this paper, we employ the linear nested sequent formalism, introduced by Lellmann in [14]. Linear nested sequents fall within the nested calculus paradigm, but with sequents restricted to linear, instead of treelike, structures. Linear nested sequents are based on Masini’s 2-sequent framework [17, 18], which was used to provide cut-free calculi for modal and various other constructive logics. The linear nested formalism proves to be highly compatible with the well-known first-order Gödel logic IF (i.e. intuitionistic fuzzy logic), due to the fact that IF can be semantically characterized by linear relational frames (see Section 2). The present work provides the linear nested calculus LNIF for IF, which enjoys a variety of fruitful properties, such as the following (we refer to [25] for a detailed discussion of favorable proof-theoretic properties):

• Separation: Each logical rule exhibits no logical connectives other than the one being introduced.

• Symmetry: Each logical connective has a left and right introduction rule.

• Internality: Each sequent translates into a formula of the logical language.

• Cut-eliminability: There exists an algorithm allowing the permutation of the cut rule (encoding reasoning with lemmata) upwards in a derivation until the rule is completely eliminated from the derivation.

• Subformula property: Every formula occurring in a derivation is a subformula of some formula in the end sequent.

• Admissibility of structural rules: Everything derivable with a structural rule (cf. the structural rules of Sec. 4) is derivable without the structural rule.

• Invertibility of rules: If the conclusion of an inference rule is derivable, then so is the premise.

In [2], a cut-free hypersequent calculus for IF was introduced. In contrast to that calculus, the current approach of exploiting linear nested sequents has two main benefits. First, the admissibility of the structural rules has not been shown for the hypersequent calculus, and as such, that calculus does not offer formula-driven derivability. Therefore, the calculus LNIF serves as a better basis for automated reasoning in IF: bottom-up applications of the rules in LNIF simply decompose or propagate formulae, and so, the question of if/when structural rules need to be applied does not arise. Second, the hypersequent calculus cannot be leveraged to prove interpolation for the logic via the so-called proof-theoretic method [15]. In [15], it was shown that the propositional fragment of LNIF can be harnessed to prove Lyndon interpolation for the propositional fragment of IF. This result suggests that LNIF, in conjunction with the aforementioned proof-theoretic method, may potentially be harnessed to solve the longstanding open problem of whether or not IF interpolates.

The contributions and organization of this paper can be summarized as follows: In Section 2, we introduce the semantics and axiomatization for intuitionistic fuzzy logic (IF). Section 3 introduces linear nested sequents and the calculus LNIF, and proves the calculus sound and complete relative to IF. In Section 4, we provide invertibility, structural rule admissibility, and cut-elimination results. Last, Section 5 concludes and discusses future work.

## 2 Logical Preliminaries

Our language L consists of denumerably many variables x, y, z, …, denumerably many n-ary predicates p, q, … (with n ∈ ℕ), the connectives ∨, ∧, ⊃, ⊥, the quantifiers ∀, ∃, and the parentheses ( and ). We define the language L via the BNF grammar below, and will use A, B, C, etc. to represent formulae from L.

 A ::= p(x1, …, xn) | ⊥ | A ∨ A | A ∧ A | A ⊃ A | ∀xA | ∃xA

In the above grammar, p is any n-ary predicate symbol and x1, …, xn are variables. We refer to formulae of the form p(x1, …, xn) as atomic formulae, and (more specifically) refer to formulae of the form p as propositional variables (i.e. a 0-ary predicate is a propositional variable). The free variables of a formula are defined in the usual way as variables unbound by a quantifier, and the bound variables as those bound by a quantifier.
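As a small illustration of the grammar and the notion of free variables, formulae can be encoded as nested tuples (our own encoding, not the paper's) and the free variables computed recursively:

```python
# Formulae of L encoded as nested tuples (our own encoding, not the paper's):
# ('pred', p, vars), ('bot',), ('and'/'or'/'imp', A, B), ('forall'/'exists', x, A).

def free_vars(f):
    """The free variables of a formula, per the usual definition."""
    tag = f[0]
    if tag == 'pred':
        return set(f[2])                    # every argument variable occurs free
    if tag == 'bot':
        return set()
    if tag in ('and', 'or', 'imp'):
        return free_vars(f[1]) | free_vars(f[2])
    if tag in ('forall', 'exists'):
        return free_vars(f[2]) - {f[1]}     # the quantifier binds its variable
    raise ValueError(tag)

# ∀x (p(x) ∨ q(x, y)): only y occurs free
A = ('forall', 'x', ('or', ('pred', 'p', ['x']), ('pred', 'q', ['x', 'y'])))
```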

We opt for the relational semantics of IF, as opposed to the fuzzy semantics (cf. [2]), since the structure of linear nested sequents is well-suited for interpretation via linear relational frames.

###### Definition 1 (Relational Frames, Models [8])

A relational frame is a triple F = (W, ≤, D) such that: W is a non-empty set of worlds w, u, v, …; ≤ is a reflexive, transitive, antisymmetric, and connected binary relation on W (the four properties of ≤ are defined as follows: (reflexivity) for all w ∈ W, w ≤ w; (transitivity) for all w, u, v ∈ W, if w ≤ u and u ≤ v, then w ≤ v; (antisymmetry) for all w, u ∈ W, if w ≤ u and u ≤ w, then w = u; and (connectedness) for all w, u, v ∈ W, if w ≤ u and w ≤ v, then either u ≤ v or v ≤ u); and D is a function that maps each world w ∈ W to a non-empty set Dw of parameters, called the domain of w, such that the following condition is met:

(CD) If w ≤ u, then Dw ⊆ Du.
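On finite data, the frame conditions of Definition 1 can be checked directly; the following sketch (our own encoding, not from the paper) represents ≤ as a set of pairs and D as a dictionary of domains:

```python
from itertools import product

# A finite check of Definition 1 (our own encoding): W is a set of worlds,
# R is the set of pairs (w, u) with w ≤ u, and D maps each world to its domain.

def is_relational_frame(W, R, D):
    refl = all((w, w) in R for w in W)
    trans = all((w, v) in R
                for (w, u), (u2, v) in product(R, R) if u == u2)
    antisym = all(w == u for (w, u) in R if (u, w) in R)
    conn = all((u, v) in R or (v, u) in R
               for w, u, v in product(W, W, W)
               if (w, u) in R and (w, v) in R)
    nonempty = all(len(D[w]) > 0 for w in W)
    cd = all(D[w] <= D[u] for (w, u) in R)   # (CD): if w ≤ u then Dw ⊆ Du
    return refl and trans and antisym and conn and nonempty and cd

# A two-world linear frame w ≤ u with growing domains.
W = {'w', 'u'}
R = {('w', 'w'), ('u', 'u'), ('w', 'u')}
D = {'w': {'a'}, 'u': {'a', 'b'}}
```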

A model M is a tuple (F, V) where F is a relational frame and V is a valuation function such that for each n-ary predicate p and w ∈ W:

If (a1, …, an) ∈ V(p, w), then (a1, …, an) ∈ (Dw)^n (if p is of arity n);

If w ≤ u and (a1, …, an) ∈ V(p, w), then (a1, …, an) ∈ V(p, u) (if p is of arity n).

We uphold the convention in [8] and assume that (Dw)^0 = {∅} for each world w ∈ W, so V(p, w) = ∅ or V(p, w) = {∅}, for a propositional variable p.

Rather than interpret formulae from L in relational models, we follow [8] and introduce Dw-sentences to be interpreted in relational models. This gives rise to a notion of validity for formulae in L (see Def. 3). The definition of validity also depends on the universal closure of a formula: if a formula A contains only x1, …, xn as free variables, then the universal closure of A is taken to be the formula ∀x1 ⋯ ∀xn A.

###### Definition 2 (Dw-Sentence)

Let M be a relational model with domain Dw for w ∈ W. We define L(Dw) to be the language L expanded with parameters from the set Dw. We define a Dw-formula to be a formula in L(Dw), and we define a Dw-sentence to be a Dw-formula that does not contain any free variables. Last, we use a, b, c, … to denote parameters in a set Dw.

###### Definition 3 (Semantic Clauses [8])

Let M be a relational model with w ∈ W. The satisfaction relation M, w ⊩ A between the world w of M and a Dw-sentence A is inductively defined as follows:

• If p is a propositional variable, then M, w ⊩ p iff V(p, w) = {∅};

• If p is an n-ary predicate symbol with n ≥ 1, then M, w ⊩ p(a1, …, an) iff (a1, …, an) ∈ V(p, w);

• M, w ⊮ ⊥;

• M, w ⊩ A ∨ B iff M, w ⊩ A or M, w ⊩ B;

• M, w ⊩ A ∧ B iff M, w ⊩ A and M, w ⊩ B;

• M, w ⊩ A ⊃ B iff for all u ∈ W, if w ≤ u and M, u ⊩ A, then M, u ⊩ B;

• M, w ⊩ ∀xA iff for all u ∈ W with w ≤ u and all a ∈ Du, M, u ⊩ A[a/x];

• M, w ⊩ ∃xA iff there exists an a ∈ Dw such that M, w ⊩ A[a/x].

We say that a formula A is globally true on M, written M ⊩ A, iff M, w ⊩ A for all worlds w ∈ W. A formula A is valid, written ⊩ A, iff it is globally true on all relational models.
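The semantic clauses above are directly executable on finite models. The sketch below (our own tuple encoding of formulae; `forces` and `subst` are hypothetical helper names, not from the paper) implements the clauses of Definition 3 and lets one observe the persistency property on a small two-world model:

```python
# A minimal executable sketch of Definition 3 (our own encoding, not the paper's).
# A model M = (W, R, D, V): R is the set of pairs (w, u) with w ≤ u, D maps each
# world to its domain, and V maps (predicate, world) to a set of argument tuples.

def subst(f, x, a):
    """Replace free occurrences of variable x in formula f by parameter a."""
    tag = f[0]
    if tag == 'pred':
        return ('pred', f[1], [a if v == x else v for v in f[2]])
    if tag == 'bot':
        return f
    if tag in ('or', 'and', 'imp'):
        return (tag, subst(f[1], x, a), subst(f[2], x, a))
    if tag in ('forall', 'exists'):
        return f if f[1] == x else (tag, f[1], subst(f[2], x, a))
    raise ValueError(tag)

def forces(M, w, f):
    """M, w ⊩ f, following the semantic clauses above."""
    W, R, D, V = M
    tag = f[0]
    if tag == 'bot':
        return False
    if tag == 'pred':
        return tuple(f[2]) in V.get((f[1], w), set())
    if tag == 'or':
        return forces(M, w, f[1]) or forces(M, w, f[2])
    if tag == 'and':
        return forces(M, w, f[1]) and forces(M, w, f[2])
    if tag == 'imp':     # quantifies over all ≤-successors of w
        return all(not forces(M, u, f[1]) or forces(M, u, f[2])
                   for u in W if (w, u) in R)
    if tag == 'forall':  # all successors u and all a ∈ Du
        return all(forces(M, u, subst(f[2], f[1], a))
                   for u in W if (w, u) in R for a in D[u])
    if tag == 'exists':  # some parameter in the current domain
        return any(forces(M, w, subst(f[2], f[1], a)) for a in D[w])
    raise ValueError(tag)

# A two-world linear model w ≤ u with a monotone valuation.
W = {'w', 'u'}
R = {('w', 'w'), ('u', 'u'), ('w', 'u')}
D = {'w': {'a'}, 'u': {'a', 'b'}}
V = {('p', 'w'): {('a',)}, ('p', 'u'): {('a',), ('b',)}, ('q', 'u'): {('a',)}}
M = (W, R, D, V)
pa, qa = ('pred', 'p', ['a']), ('pred', 'q', ['a'])
```

Note that p(a) holds at w and, by monotonicity of V, persists to u, while p(a) ⊃ q(a) fails at w because w itself is a ≤-successor satisfying p(a) but not q(a).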

###### Lemma 1 (Persistency)

Let M be a relational model with domain Dw of w ∈ W. For any Dw-sentence A, if M, w ⊩ A and w ≤ u, then M, u ⊩ A.

###### Proof

See [8, Lem. 3.2.16] for details.

A sound and complete axiomatization for the logic IF is provided in Fig. 1. We define the substitution A[y/x] of the variable y for the free variable x in a formula A in the standard way as the replacement of all free occurrences of x in A with y. The substitution A[a/x] of the parameter a for the free variable x is defined similarly. Last, the side condition "y is free for x" (see Fig. 1) is taken to mean that y does not become bound by a quantifier if substituted for x.

For any A ∈ L, A is derivable in the axiomatization of Fig. 1 iff ⊩ A.

###### Proof

The forward direction follows from [8, Prop. 7.2.9] and [8, Prop. 7.3.6], and the backwards direction follows from [8, Lem. 3.2.31].

## 3 Soundness and Completeness of LNIF

Let us define linear nested sequents (hereafter simply called sequents) to be syntactic objects given by the BNF grammar shown below:

 G ::= Γ ⊢ Γ | G ⫽ G,  where Γ ::= A | Γ, Γ with A ∈ L.

Each sequent is of the form Γ1 ⊢ Δ1 ⫽ ⋯ ⫽ Γn ⊢ Δn with n ≥ 1. We refer to each Γi ⊢ Δi (for 1 ≤ i ≤ n) as a component of the sequent and use |G| to denote the number of components in G.

We often use G, H, etc. to denote sequents, and will use Γ and Δ to denote antecedents and consequents of components. Last, we take the comma operator to be commutative and associative; for example, we identify the sequent Γ, A, B ⊢ Δ with Γ, B, A ⊢ Δ. This interpretation lets us view an antecedent or consequent as a multiset of formulae.
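As a concrete data representation (our own, not the paper's), a linear nested sequent can be modeled as a non-empty list of components, with `collections.Counter` realizing the multiset reading of antecedents and consequents:

```python
from collections import Counter

# A linear nested sequent as a non-empty list of components; each component is a
# pair (antecedent, consequent) of multisets, so the comma is commutative and
# associative by construction. (Our own representation, not the paper's.)

def component(ante, cons):
    return (Counter(ante), Counter(cons))

def components(G):
    """The number of components |G| of the linear nested sequent G."""
    return len(G)

# A, B ⊢ C ⫽ ⊢ D : a sequent with two components
G = [component(['A', 'B'], ['C']), component([], ['D'])]
```

Counters compare equal regardless of insertion order but are sensitive to multiplicity, matching the multiset reading.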

To ease the proof of cut-elimination (Thm. 1), we follow [5, Sect. 2.3.1] and make a syntactic distinction between bound variables and parameters, which take the place of free variables occurring in formulae. Thus, our sequents make use of formulae from L where each free variable has been replaced by a unique parameter. For example, we would use the sequent p(a) ⊢ q(b) instead of the sequent p(x) ⊢ q(y) in a derivation (where the parameter a has been substituted for the free variable x and b has been substituted for y). We also use the notation A(a1, …, an) to denote that the parameters a1, …, an occur in the formula A, and write A(a⃗) as shorthand. This notation extends straightforwardly to sequents as well.

The linear nested calculus LNIF for IF is given in Fig. 2. (NB. The linear nested sequent calculus introduced in [15] is the propositional fragment of LNIF, i.e. LNIF without the quantifier rules and with propositional variables used in place of atomic formulae.) Certain rules in LNIF are particularly noteworthy; as will be seen in the next section, these rules play a vital role in ensuring the invertibility and admissibility of certain rules, ultimately permitting the elimination of cut (see Thm. 1).

To obtain soundness, we interpret each sequent as a formula in L and utilize the notion of validity in Def. 3. The following definition specifies how each sequent is interpreted.

###### Definition 4 (Interpretation ι)

The interpretation of a sequent is defined inductively as follows:

 ι(Γ ⊢ Δ) := ⋀Γ ⊃ ⋁Δ
 ι(Γ ⊢ Δ ⫽ G) := ⋀Γ ⊃ (⋁Δ ∨ ι(G))

We interpret a sequent G as a formula in L by taking the universal closure of ι(G), and we say that G is valid if and only if ⊩ ι(G).

###### Theorem 1 (Soundness of LNIF)

For any linear nested sequent G, if G is provable in LNIF, then G is valid.

###### Proof

We prove the result by induction on the height of the derivation of G, and only present the more interesting quantifier cases in the inductive step. All remaining cases can be found in App. 0.A. Each inference rule considered is of one of the following two forms:

 a single-premise rule (r1) with premise G′ and conclusion G, or a two-premise rule (r2) with premises G1 and G2 and conclusion G, where
 G = Γ1 ⊢ Δ1 ⫽ ⋯ ⫽ Γn ⊢ Δn ⫽ Γn+1 ⊢ Δn+1 ⫽ ⋯ ⫽ Γm ⊢ Δm.

We argue by contraposition and prove that if G is invalid, then at least one premise is invalid. Assuming that G is invalid (i.e. ⊮ ι(G)) implies the existence of a model M falsifying ι(G) under some interpretation of the parameters occurring in G. Hence, there is a sequence of worlds w1, …, wm such that wi ≤ wi+1 (for 1 ≤ i < m), M, wi ⊩ A for each A ∈ Γi, and M, wi ⊮ B for each B ∈ Δi. We assume all parameters in Γi and Δi (for 1 ≤ i ≤ m) are interpreted as elements of the associated domain.

(∀r1)-rule: By our assumption, M, wm ⊩ A for each A ∈ Γm and M, wm ⊮ ∀xC. The latter implies that there exists a world u such that wm ≤ u and M, u ⊮ C[a/x] for some a ∈ Du. If we interpret the eigenvariable of the premise as a, then the premise is shown invalid.

(∀r2)-rule: It follows from our assumption that M, wi ⊩ A for each A ∈ Γi, M, wi ⊮ B for each B ∈ Δi, and M, wi ⊮ ∀xA. The last fact implies that there exists a world u such that wi ≤ u and M, u ⊮ A[a/x] for some a ∈ Du. Since our frames are connected, there are two cases to consider: (i) u ≤ wi+1, or (ii) wi+1 ≤ u. Case (i) falsifies the left premise, and case (ii) falsifies the right premise.

(∀l)-rule: We know that M, wi ⊩ ∀xA and M, wi ⊮ B for each B ∈ Δi. Hence, for any world u, if wi ≤ u, then M, u ⊩ A[a/x] for all a ∈ Du. Since wi ≤ wi, it follows that M, wi ⊩ A[a/x] for any a ∈ Dwi. If the parameter a occurs in the conclusion G, then by the constant domain condition (CD), we know that a ∈ Dwi, so we may falsify the premise of the rule. If a does not occur in G, then it is an eigenvariable, and assigning a to any element of Dwi will falsify the premise.

###### Theorem 2 (Completeness of LNIF)

If ⊩ A, then the sequent ⊢ A is provable in LNIF.

###### Proof

It is not difficult to show that LNIF can derive each axiom of IF and can simulate each inference rule of the axiomatization. We refer the reader to App. 0.A for details.

## 4 Proof-Theoretic Properties of LNIF

In this section, we present the favorable proof-theoretic properties of LNIF, thus extending the results in [15] from the propositional setting to the first-order setting. (NB. We often leverage results from [15] to simplify our proofs.) Most results are proved by induction on the height of a given derivation, i.e. on the length (number of sequents) of the longest branch from the end sequent to an initial sequent in the derivation. Proofs of Lem. 14, Lem. 16, and Thm. 1 will be given by induction on the lexicographic ordering of pairs (|A|, h), where |A| is the complexity of a certain formula A (defined in the usual way as the number of logical operators in A) and h is the height of the derivation. We present the proofs of Lem. 3, Lem. 12, Lem. 15, and Lem. 16, as well as the proof of cut-elimination (Thm. 1). All remaining proofs can be found in App. 0.A.

We say that a rule is admissible in LNIF iff derivability of the premise(s) implies derivability of the conclusion in LNIF. Additionally, a rule is height-preserving (hp-)admissible iff whenever the premise of the rule has a derivation of a certain height, the conclusion of the rule has a derivation of the same height or less. Last, a rule is invertible (hp-invertible) iff derivability of the conclusion implies derivability of the premise(s) (with a derivation of the same height or less). The admissible structural rules of LNIF are given in Fig. 3.

###### Lemma 2

For any , , , , and , .

###### Lemma 3

The rule is hp-admissible in LNIF.

###### Proof

By induction on the height of the given derivation. In the base case, applying the rule to an initial sequent yields an initial sequent, and for each case of the inductive step, we apply IH followed by the corresponding rule.

###### Lemma 4

The rule is hp-admissible in LNIF.

###### Lemma 5

The rule is hp-admissible in LNIF.

###### Lemma 6

The rule is admissible in LNIF.

###### Lemma 7

The rule is hp-admissible in LNIF.

Our version of the contraction rule necessitates a stronger form of invertibility, called m-invertibility, for the (∧l), (∨l), (⊃l), (∀l), and (∃l) rules (cf. [15]). We use A^k to represent k copies of a formula A, with k ∈ ℕ.

###### Lemma 8

If k1, …, kn ∈ ℕ, then:

(i) (1) implies (2); (ii) (3) implies (4) and (5); (iii) (6) implies (7) and (8); (iv) (9) implies (10); (v) (11) implies (12), where:

(1) ⊢LNIF Γ1, (A∧B)^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, (A∧B)^{kn} ⊢ Δn
(2) ⊢LNIF Γ1, A^{k1}, B^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, A^{kn}, B^{kn} ⊢ Δn
(3) ⊢LNIF Γ1, (A∨B)^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, (A∨B)^{kn} ⊢ Δn
(4) ⊢LNIF Γ1, A^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, A^{kn} ⊢ Δn
(5) ⊢LNIF Γ1, B^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, B^{kn} ⊢ Δn
(6) ⊢LNIF Γ1, (A⊃B)^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, (A⊃B)^{kn} ⊢ Δn
(7) ⊢LNIF Γ1, B^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, B^{kn} ⊢ Δn
(8) ⊢LNIF Γ1, (A⊃B)^{k1} ⊢ Δ1, A^{k1} ⫽ ⋯ ⫽ Γn, (A⊃B)^{kn} ⊢ Δn, A^{kn}
(9) ⊢LNIF Γ1, (∀xA)^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, (∀xA)^{kn} ⊢ Δn
(10) ⊢LNIF Γ1, A[a/x]^{k1}, (∀xA)^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, A[a/x]^{kn}, (∀xA)^{kn} ⊢ Δn
(11) ⊢LNIF Γ1, (∃xA)^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, (∃xA)^{kn} ⊢ Δn
(12) ⊢LNIF Γ1, A[a/x]^{k1} ⊢ Δ1 ⫽ ⋯ ⫽ Γn, A[a/x]^{kn} ⊢ Δn

###### Lemma 9

The , , and rules are hp-invertible in LNIF.

###### Lemma 10

The rule is invertible in LNIF.

###### Lemma 11

The rule is invertible in LNIF.

###### Lemma 12

The rule is invertible in LNIF.

###### Proof

We extend the proof of [15, Lem. 5.11] to include the quantifier rules and thus prove the invertibility of the rule in LNIF. The claim is proven by induction on the height of the given derivation. To show the inductive step when the last rule of the derivation is a quantifier rule other than (∀r1), we apply IH to the premise(s) of the inference and then apply the corresponding rule. When the last inference of the derivation is an application of the (∀r1) rule, then the inference is of the following form:

  G ⫽ Γ ⊢ Δ, A⊃B ⫽ ⊢ C[a/x]
 ————————————————————————— (∀r1)
  G ⫽ Γ ⊢ Δ, A⊃B, ∀xC

We derive the desired conclusion as follows:

  G ⫽ Γ ⊢ Δ, A⊃B ⫽ ⊢ C[a/x]
 ⟶ (Lem. 7)  G ⫽ Γ ⊢ Δ ⫽ ⊢ C[a/x], A⊃B
 ⟶ (IH)  G ⫽ Γ ⊢ Δ ⫽ ⊢ C[a/x] ⫽ A ⊢ B

  G ⫽ Γ ⊢ Δ, A⊃B ⫽ ⊢ C[a/x]
 ⟶ (Lem. 10)  G ⫽ Γ ⊢ Δ ⫽ A ⊢ B ⫽ ⊢ C[a/x]
 ⟶ (∀r1)  G ⫽ Γ ⊢ Δ ⫽ A ⊢ B, ∀xC

Applying (∀r2) to the two sequents obtained above yields the desired conclusion G ⫽ Γ ⊢ Δ, ∀xC ⫽ A ⊢ B.

###### Lemma 13

The rule is invertible in LNIF.

###### Lemma 14

The rule is admissible in LNIF.

###### Lemma 15

The rule is admissible in LNIF.

###### Proof

We extend the proof of [15, Lem. 5.13], which establishes admissibility in the propositional setting, and prove the admissibility of the rule in LNIF by induction on the height of the given derivation. We need only consider the quantifier rules due to [15, Lem. 5.13]. The (∀l), (∀r1), (∃l), and (∃r) cases are all resolved by applying IH to the premise of the inference followed by an application of the corresponding rule. If the rule is applied to the principal components of an (∀r2) inference, then the desired conclusion is obtained by applying IH to the top right premise. In all other (∀r2) cases, we apply IH to the premises followed by an application of the rule.

###### Lemma 16

The rule is admissible in LNIF.

###### Proof

We extend the proof of [15, Lem. 5.14] to include the quantifier rules and argue the claim by induction on the lexicographic ordering of pairs (|A|, h), where |A| is the complexity of the contraction formula A and h is the height of the derivation. The simple quantifier cases are handled by applying IH to the premise of the inference followed by an application of the rule. For the cases where the contracted formulae are principal in a left or right logical rule, we invoke Lem. 8 and Lem. 9 respectively, apply IH, and then apply the corresponding rule to resolve the cases. The non-trivial case for the (∀r1) rule and the derivation of the desired conclusion are shown below (where IH is applicable due to the decreased complexity of the contracted formulae).

The case:

  G ⫽ Γ ⊢ Δ, ∀xA ⫽ ⊢ A[a/x]
 ⟶ (∀r1)  G ⫽ Γ ⊢ Δ, ∀xA, ∀xA
 ⟶ (icr)  G ⫽ Γ ⊢ Δ, ∀xA

The resolution:

  G ⫽ Γ ⊢ Δ, ∀xA ⫽ ⊢ A[a/x]
 ⟶ (Lem. 11)  G ⫽ Γ ⊢ Δ ⫽ ⊢ A[a/x] ⫽ ⊢ A[a/x]
 ⟶ (Lem. 15)  G ⫽ Γ ⊢ Δ ⫽ ⊢ A[a/x], A[a/x]
 ⟶ (IH)  G ⫽ Γ ⊢ Δ ⫽ ⊢ A[a/x]
 ⟶ (∀r1)  G ⫽ Γ ⊢ Δ, ∀xA

When the contracted formulae are both non-principal in a single-premise inference, we apply IH to the premise followed by an application of the rule. Similarly, and with the exception of the case below, if the contracted formulae in a two-premise inference are both non-principal, then we apply IH to the premises followed by an application of the rule. The non-trivial case occurs as follows:

  G ⫽ Γ1 ⊢ Δ1, ∀xA ⫽ ⊢ A[a/x] ⫽ Γ2 ⊢ Δ2 ⫽ H    G ⫽ Γ1 ⊢ Δ1, ∀xA ⫽ Γ2 ⊢ Δ2, ∀xA ⫽ H
 ——————————————————————————————————————————————————————————————————— (∀r2)
  G ⫽ Γ1 ⊢ Δ1, ∀xA, ∀xA ⫽ Γ2 ⊢ Δ2 ⫽ H

The desired conclusion is derived as follows:

Left premise:

  G ⫽ Γ1 ⊢ Δ1, ∀xA ⫽ ⊢ A[a/x] ⫽ Γ2 ⊢ Δ2 ⫽ H
 ⟶ (Lem. 11)  G ⫽ Γ1 ⊢ Δ1 ⫽ ⊢ A[a/x] ⫽ ⊢ A[a/x] ⫽ Γ2 ⊢ Δ2 ⫽ H
 ⟶ (Lem. 15)  G ⫽ Γ1 ⊢ Δ1 ⫽ ⊢ A[a/x], A[a/x] ⫽ Γ2 ⊢ Δ2 ⫽ H
 ⟶ (IH)  G ⫽ Γ1 ⊢ Δ1 ⫽ ⊢ A[a/x] ⫽ Γ2 ⊢ Δ2 ⫽ H

Right premise:

  G ⫽ Γ1 ⊢ Δ1, ∀xA ⫽ Γ2 ⊢ Δ2, ∀xA ⫽ H
 ⟶ (Lem. 7)  G ⫽ Γ1 ⊢ Δ1 ⫽ Γ2 ⊢ Δ2, ∀xA, ∀xA ⫽ H
 ⟶ (IH)  G ⫽ Γ1 ⊢ Δ1 ⫽ Γ2 ⊢ Δ2, ∀xA ⫽ H

Applying (∀r2) to the two sequents obtained above yields G ⫽ Γ1 ⊢ Δ1, ∀xA ⫽ Γ2 ⊢ Δ2 ⫽ H.

Note that we may apply IH in the left branch of the derivation since the complexity of the contracted formula is less than , and we may apply IH in the right branch since the height of the derivation is less than the original.

Before moving on to the cut-elimination theorem, we present the definition of the splice operation [15, 17]. The operation is used to formulate the cut rule.

###### Definition 5 (Splice [15])

The splice of two linear nested sequents G and H is defined as follows:

###### Theorem 1 (Cut-Elimination)

The cut rule is eliminable in LNIF.

###### Proof

We extend the proof of [15, Thm. 5.16] and prove the result by induction on the lexicographic ordering of pairs (|A|, h), where |A| is the complexity of the cut formula A and h is the height of the derivation of the right premise of the cut rule. Moreover, we assume w.l.o.g. that cut is used once, as the last inference of the derivation (given a derivation with multiple applications of cut, we may repeatedly apply the elimination algorithm described here to a topmost occurrence of cut, ultimately resulting in a cut-free derivation). By [15, Thm. 5.16], we know that cut is eliminable from any derivation in the propositional calculus, and therefore, we need only consider cases which incorporate quantifier rules.

If the cut formula is atomic (i.e. of complexity 0), then the right premise of cut is an instance of an initial rule. If none of the cut formulae are principal in the right premise, then the conclusion is an instance of an initial rule. If, however, one of the cut formulae is principal in the right premise and is an atomic formula, then the top right premise of