1 Introduction
Intuitionistic fuzzy logic () has attracted considerable attention due to its unique nature as a logic blending approximate reasoning and constructive reasoning [1, 2, 21]. The logic, which was initially defined by Takeuti and Titani in [21], has its roots in the work of Kurt Gödel. Gödel introduced extensions of propositional intuitionistic logic (now called "Gödel logics") in order to prove that propositional intuitionistic logic does not possess a finite characteristic matrix [10]. These logics were later studied by Dummett, who extended Gödel's finite-valued semantics to include an infinite number of truth-values [6]. Dummett additionally provided an axiomatization for the propositional fragment of [6]. The first-order logic also admits a finite axiomatization, obtained by extending an axiomatization of first-order intuitionistic logic with the linearity axiom and the quantifier shift axiom (where is not a free variable in ) [13].
Over the last few decades, propositional and first-order Gödel logics (including the prominent logic ) have been applied in various areas of logic and computer science [1, 2, 4, 12, 15, 16, 24]. For example, Visser [24] applied the propositional fragment of while analyzing the provability logic of Heyting arithmetic, Lifschitz et al. [16] employed a Gödel logic to model the strong equivalence of logic programs, and Borgwardt et al. [4] studied standard reasoning problems of first-order Gödel logics in the context of fuzzy description logics. Additionally—and quite significantly—the logic has been recognized as one of the fundamental formalizations of fuzzy logic [12].

The question of whether or not a logic possesses an analytic
proof calculus—that is, a calculus which stepwise (de)composes the formula to be proved—is of critical importance. Such calculi are effective tools for designing automated reasoning procedures and for proving certain (meta)logical properties of a logic. For example, analytic calculi have been leveraged to provide decision procedures for logics
[9], to prove that logics interpolate
[15], for countermodel extraction [22], and to understand the computational content of proofs [18].

In his seminal work [9], Gentzen proposed the sequent calculus framework for classical and intuitionistic logic, and subsequently proved his celebrated Hauptsatz (i.e. cut-elimination theorem), which ultimately provided analytic calculi for the two logics. Gentzen's sequent calculus formalism has become one of the preferred proof-theoretic frameworks for providing analytic calculi, and indeed, many logics of interest have been equipped with such calculi. Nevertheless, one of the alluring features of the formalism—namely, its simplicity—has also proven to be one of the formalism's drawbacks: there remain many logics for which no cut-free, or analytic, sequent calculus (à la Gentzen) is known [11, 20]. In response to this, the sequent calculus formalism has been extended in various ways over the last 30 years to include additional structure, allowing numerous logics to be supplied with cut-free, analytic calculi. Some of the most prominent extensions of Gentzen's formalism include display calculi [3], labelled calculi [19, 23], hypersequent calculi [20], and nested calculi [7, 11].
In this paper, we employ the linear nested sequent formalism, introduced by Lellmann in [14]. Linear nested sequents fall within the nested calculus paradigm, but with sequents restricted to linear, instead of tree-like, structures. Linear nested sequents are based on Masini's 2-sequent framework [17, 18], which was used to provide cut-free calculi for the modal logic as well as various other constructive logics. The linear nested formalism proves to be highly compatible with the well-known first-order Gödel logic (i.e. intuitionistic fuzzy logic), due to the fact that can be semantically characterized by linear relational frames (see Section 2). The present work provides the linear nested calculus for , which enjoys a variety of fruitful properties (we refer to [25] for a detailed discussion of favorable proof-theoretic properties), such as:

Separation: Each logical rule exhibits no logical connectives other than the one being introduced.

Symmetry: Each logical connective has a left and right introduction rule.

Internality: Each sequent translates into a formula of the logical language.

Cut-eliminability: There exists an algorithm allowing the permutation of a rule (encoding reasoning with lemmata) upwards in a derivation until the rule is completely eliminated from the derivation.

Subformula property: Every formula occurring in a derivation is a subformula of some formula in the end sequent.

Admissibility of structural rules: Everything derivable with a structural rule (cf. and in Sec. 4) is derivable without the structural rule.

Invertibility of rules: If the conclusion of an inference rule is derivable, then so is the premise.
In [2], a cut-free hypersequent calculus for was introduced. In contrast to , the current approach of exploiting linear nested sequents has two main benefits. First, the admissibility of structural rules has not been shown in , and as such, the calculus does not offer formula-driven derivability. Therefore, the calculus serves as a better basis for automated reasoning in —bottom-up applications of the rules in simply decompose or propagate formulae, and so, the question of if/when structural rules need to be applied does not arise. Second, the calculus cannot be leveraged to prove interpolation for the logic via the so-called proof-theoretic method [15]. In [15], it was shown that the propositional fragment of can be harnessed to prove Lyndon interpolation for the propositional fragment of . This result suggests that , in conjunction with the aforementioned proof-theoretic method, may potentially be harnessed to solve the long-standing open problem of whether or not interpolates.
The contributions and organization of this paper can be summarized as follows: In Section 2, we introduce the semantics and axiomatization for intuitionistic fuzzy logic (). Section 3 introduces linear nested sequents and the calculus , and proves the calculus sound and complete relative to . In Section 4, we provide invertibility, structural rule admissibility, and cut-elimination results. Last, Section 5 concludes and discusses future work.
2 Logical Preliminaries
Our language consists of denumerably many variables , denumerably many ary predicates (with ), the connectives , , , , the quantifiers , , and parentheses and . We define the language via the BNF grammar below, and will use etc. to represent formulae from .
In the above grammar, is any n-ary predicate symbol and are variables. We refer to formulae of the form as atomic formulae, and (more specifically) refer to formulae of the form as propositional variables (i.e. a 0-ary predicate is a propositional variable). The free variables of a formula are defined in the usual way as variables not bound by a quantifier, and the bound variables as those bound by a quantifier.
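To make the grammar and the free/bound-variable distinction concrete, the sketch below models the language as a small abstract syntax tree. All class and function names are our own, and the connective set (∨, ∧, →) is an assumption, since the paper's exact list of connectives is elided in our copy.

```python
from dataclasses import dataclass
from typing import Tuple

class Formula:
    """Base class for formulae of the language (illustrative encoding)."""

@dataclass(frozen=True)
class Atom(Formula):
    pred: str
    args: Tuple[str, ...]  # empty args () makes the atom a propositional variable

@dataclass(frozen=True)
class Or(Formula):
    left: Formula
    right: Formula

@dataclass(frozen=True)
class And(Formula):
    left: Formula
    right: Formula

@dataclass(frozen=True)
class Implies(Formula):
    left: Formula
    right: Formula

@dataclass(frozen=True)
class Forall(Formula):
    var: str
    body: Formula

@dataclass(frozen=True)
class Exists(Formula):
    var: str
    body: Formula

def free_vars(phi: Formula) -> set:
    """Free variables: variables not bound by a quantifier above them."""
    if isinstance(phi, Atom):
        return set(phi.args)
    if isinstance(phi, (Or, And, Implies)):
        return free_vars(phi.left) | free_vars(phi.right)
    if isinstance(phi, (Forall, Exists)):
        return free_vars(phi.body) - {phi.var}
    raise TypeError(f"unknown formula: {phi!r}")
```

For example, `free_vars(Forall("x", Atom("P", ("x", "y"))))` returns `{"y"}`, since the quantifier binds only `x`.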
We opt for the relational semantics of —as opposed to the fuzzy semantics (cf. [2])—since the structure of linear nested sequents is well-suited for interpretation via linear relational frames.
Definition 1 (Relational Frames, Models [8])
A relational frame is a triple such that: is a non-empty set of worlds , is a reflexive, transitive, antisymmetric, and connected binary relation on (the four properties of are defined as follows: (reflexivity) for all , ; (transitivity) for all , if and , then ; (antisymmetry) for all , if and , then ; and (connectedness) for all , if and , then either or ), and is a function that maps a world to a non-empty set of parameters called the domain of such that the following condition is met:
If , then .
A model is a tuple where is a relational frame and is a valuation function such that for each ary predicate and
If , then (if is of arity );
If and , then (if is of arity ).
We uphold the convention in [8] and assume that for each world , , so or , for a propositional variable .
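The four order conditions from Definition 1 can be checked mechanically on a finite frame. The sketch below, with the relation given as a set of pairs, assumes the standard readings of reflexivity, transitivity, antisymmetry, and connectedness; the function name and encoding are our own.

```python
from itertools import product

def is_relational_order(worlds, R):
    """Check that R (a set of pairs over `worlds`) is reflexive,
    transitive, antisymmetric, and connected, as in Definition 1."""
    reflexive = all((w, w) in R for w in worlds)
    transitive = all((u, w) in R
                     for u, v, w in product(worlds, repeat=3)
                     if (u, v) in R and (v, w) in R)
    antisymmetric = all(u == v
                        for u, v in product(worlds, repeat=2)
                        if (u, v) in R and (v, u) in R)
    # connectedness: worlds above a common world are comparable
    connected = all((u, w) in R or (w, u) in R
                    for v, u, w in product(worlds, repeat=3)
                    if (v, u) in R and (v, w) in R)
    return reflexive and transitive and antisymmetric and connected
```

A three-world chain satisfies all four conditions, whereas a fork (a root below two incomparable worlds) fails connectedness, which reflects why these frames are linear.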
Rather than interpret formulae from in relational models, we follow [8] and introduce sentences to be interpreted in relational models. This gives rise to a notion of validity for formulae in (see Def. 3). The definition of validity also depends on the universal closure of a formula: if a formula contains only as free variables, then the universal closure is taken to be the formula .
Definition 2 (Sentence)
Let be a relational model with of . We define to be the language expanded with parameters from the set . We define a formula to be a formula in , and we define a sentence to be a formula that does not contain any free variables. Last, we use to denote parameters in a set .
Definition 3 (Semantic Clauses [8])
Let be a relational model with and . The satisfaction relation between and a sentence is inductively defined as follows:


If is a propositional variable, then iff ;

If is an ary predicate symbol with , then iff ;

iff or ;

iff and ;

iff for all , if , then ;

iff for all and all , ;

iff there exists an such that .
We say that a formula is globally true on , written , iff for all worlds . A formula is valid, written , iff it is globally true on all relational models.
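The clauses of Definition 3 admit a direct brute-force implementation over finite models. The tagged-tuple formula encoding and all names below are our own, and we assume the standard Kripke-style readings (implication and universal quantification range over all accessible worlds, existential quantification over the current domain); since parameters substituted during evaluation are fresh, the simple substitution here cannot capture variables.

```python
def subst(phi, x, d):
    """Replace free occurrences of variable x by parameter d."""
    tag = phi[0]
    if tag == "atom":
        return ("atom", phi[1], tuple(d if a == x else a for a in phi[2]))
    if tag in ("or", "and", "implies"):
        return (tag, subst(phi[1], x, d), subst(phi[2], x, d))
    if tag in ("forall", "exists"):
        return phi if phi[1] == x else (tag, phi[1], subst(phi[2], x, d))
    raise ValueError(tag)

def sat(model, w, phi):
    """Satisfaction at world w. model = (worlds, leq, dom, val): leq is
    the order as a predicate, dom maps worlds to domains, and val maps
    (world, predicate) to the set of satisfied argument tuples."""
    worlds, leq, dom, val = model
    tag = phi[0]
    if tag == "atom":
        return phi[2] in val.get((w, phi[1]), set())
    if tag == "or":
        return sat(model, w, phi[1]) or sat(model, w, phi[2])
    if tag == "and":
        return sat(model, w, phi[1]) and sat(model, w, phi[2])
    if tag == "implies":  # evaluated at every accessible world
        return all(not sat(model, v, phi[1]) or sat(model, v, phi[2])
                   for v in worlds if leq(w, v))
    if tag == "forall":   # over every accessible world and its domain
        return all(sat(model, v, subst(phi[2], phi[1], d))
                   for v in worlds if leq(w, v) for d in dom[v])
    if tag == "exists":   # witnessed in the current domain
        return any(sat(model, w, subst(phi[2], phi[1], d)) for d in dom[w])
    raise ValueError(tag)
```

On a two-world chain where P holds of some parameter only at the upper world, an existential sentence is false below and true above, which is consistent with persistency: truth is preserved upward, not downward.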
Lemma 1 (Persistency)
Let be a relational model with of . For any sentence , if and , then .
Proof
See [8, Lem. 3.2.16] for details.
A sound and complete axiomatization for the logic is provided in Fig. 1. We define the substitution of the variable for the free variable in a formula in the standard way as the replacement of all free occurrences of in with . The substitution of the parameter for the free variable is defined similarly. Last, the side condition is free for (see Fig. 1) is taken to mean that does not become bound by a quantifier when substituted for .
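The "free for" side condition can be phrased as a capture test: substituting one variable for another is safe precisely when no free occurrence of the replaced variable sits in the scope of a quantifier binding the incoming variable. A sketch over the tagged-tuple encoding used above (names are our own):

```python
def free_for(y, x, phi, bound=frozenset()):
    """True iff y is free for x in phi: no free occurrence of x lies
    under a quantifier binding y. `bound` tracks binders on the path."""
    tag = phi[0]
    if tag == "atom":
        # a free occurrence of x here captures y only if y is bound above
        return not (x in phi[2] and y in bound)
    if tag in ("or", "and", "implies"):
        return (free_for(y, x, phi[1], bound)
                and free_for(y, x, phi[2], bound))
    if tag in ("forall", "exists"):
        if phi[1] == x:      # x is bound below, so no free occurrences
            return True
        return free_for(y, x, phi[2], bound | {phi[1]})
    raise ValueError(tag)
```

For instance, y is not free for x in ∀y P(x, y), while a fresh variable z is.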
Theorem 1 (Adequacy of )
For any , iff .
3 Soundness and Completeness of
Let us define linear nested sequents (i.e. sequents) to be syntactic objects given by the BNF grammar shown below:
Each sequent is of the form with . We refer to each (for ) as a component of and use to denote the number of components in .
We often use , , , and to denote sequents, and will use and to denote antecedents and consequents of components. Last, we take the comma operator to be commutative and associative; for example, we identify the sequent with . This interpretation lets us view an antecedent or consequent as a multiset of formulae.
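Since the comma is commutative and associative, a linear nested sequent can be represented as a non-empty list of components, each a pair of multisets. The sketch below uses `Counter` for multisets; the helper names are our own.

```python
from collections import Counter

def lns(*components):
    """Build a linear nested sequent: a non-empty list of components,
    each an (antecedent, consequent) pair of multisets of formulae."""
    assert components, "a linear nested sequent has at least one component"
    return [(Counter(gamma), Counter(delta)) for gamma, delta in components]

def num_components(seq):
    """Number of components in the sequent."""
    return len(seq)
```

Two antecedents that list the same formulae in different orders compare equal as multisets, which is exactly the identification of sequents described above.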
To ease the proof of cut-elimination (Thm. 1), we follow [5, Sect. 2.3.1] and make a syntactic distinction between bound variables and parameters, which take the place of free variables occurring in formulae. Thus, our sequents make use of formulae from where each free variable has been replaced by a unique parameter. For example, we would use the sequent instead of the sequent in a derivation (where the parameter has been substituted for the free variable and has been substituted for ). We also use the notation to denote that the parameters occur in the formula , and write as shorthand for . This notation extends straightforwardly to sequents as well.
The linear nested calculus for is given in Fig. 2. (NB. The linear nested calculus introduced in [15] is the propositional fragment of , i.e. is the calculus without the quantifier rules and where propositional variables are used in place of atomic formulae.) The and rules in are particularly noteworthy; as will be seen in the next section, the rules play a vital role in ensuring the invertibility and admissibility of certain rules, ultimately permitting the elimination of (see Thm. 1).
To obtain soundness, we interpret each sequent as a formula in and utilize the notion of validity in Def. 3. The following definition specifies how each sequent is interpreted.
Definition 4 (Interpretation )
The interpretation of a sequent is defined inductively as follows:


We interpret a sequent as a formula in by taking the universal closure of and we say that is valid if and only if .
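The clauses of Definition 4 are elided in our copy; as an assumption, the sketch below uses the interpretation standard in the linear nested sequent literature (cf. Lellmann [14]), where a sequent with components (Γ₁ ⇒ Δ₁), …, (Γₙ ⇒ Δₙ) reads as ⋀Γ₁ → (⋁Δ₁ ∨ (⋀Γ₂ → (⋁Δ₂ ∨ …))). Whether this matches the paper's actual definition cannot be confirmed from our copy; antecedents and consequents are assumed non-empty to avoid empty conjunctions and disjunctions.

```python
def interp(G):
    """Interpret a linear nested sequent (list of (antecedent, consequent)
    pairs of non-empty formula lists) as a tagged-tuple formula,
    following the interpretation assumed above."""
    def fold(tag, fs):
        out = fs[0]
        for f in fs[1:]:
            out = (tag, out, f)
        return out
    (gamma, delta), rest = G[0], G[1:]
    consequent = fold("or", delta)
    tail = consequent if not rest else ("or", consequent, interp(rest))
    return ("implies", fold("and", gamma), tail)
```

For a two-component sequent, the second component ends up nested inside the disjunction of the first component's consequent.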
Theorem 1 (Soundness of )
For any linear nested sequent , if is provable in , then .
Proof
We prove the result by induction on the height of the derivation of and only present the more interesting quantifier cases in the inductive step. All remaining cases can be found in App. 0.A. Each inference rule considered is of one of the following two forms:


where . 
We argue by contraposition and prove that if is invalid, then at least one premise is invalid. Assuming that is invalid (i.e. ) implies the existence of a model with world such that , , and , where represents all parameters in . Hence, there is a sequence of worlds such that (for ), , and , for each . We assume all parameters in and (for ) are interpreted as elements of the associated domain.
rule: By our assumption and . The latter implies that , meaning there exists a world such that and for some . If we interpret the eigenvariable of the premise as , then the premise is shown invalid.
rule: It follows from our assumption that , , , and . The fact that implies that there exists a world such that and for some , . Since our frames are connected, there are two cases to consider: (i) , or (ii) . Case (i) falsifies the left premise, and case (ii) falsifies the right premise.
rule: We know that and . Hence, for any world , if , then for all . Since , it follows that for any . If occurs in the conclusion , then by the constant domain condition (CD), we know that , so we may falsify the premise of the rule. If does not occur in , then it is an eigenvariable, and assigning to any element of will falsify the premise.
Theorem 2 (Completeness of )
If , then is provable in .
Proof
It is not difficult to show that can derive each axiom of and can simulate each inference rule. We refer the reader to App. 0.A for details.
4 Proof-Theoretic Properties of
In this section, we present the favorable proof-theoretic properties of , thus extending the results in [15] from the propositional setting to the first-order setting. (NB. We often leverage results from [15] to simplify our proofs.) Most results are proved by induction on the height of a given derivation , i.e. on the length (number of sequents) of the longest branch from the end sequent to an initial sequent in . Proofs of Lem. 14, Lem. 16, and Thm. 1 will be given by induction on the lexicographic ordering of pairs , where is the complexity of a certain formula (defined in the usual way as the number of logical operators in ) and is the height of the derivation. We present the proofs of Lem. 3, Lem. 12, Lem. 15, and Lem. 16 as well as the proof of cut-elimination (Thm. 1). All remaining proofs can be found in App. 0.A.
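The lexicographic induction measure used here needs no extra machinery in code: a pair gets smaller if the formula complexity drops, or if it stays equal while the derivation height drops, and Python tuples compare exactly this way. The complexity function below counts logical operators over the tagged-tuple encoding introduced earlier (names are our own).

```python
def complexity(phi):
    """Number of logical operators in a tagged-tuple formula."""
    tag = phi[0]
    if tag == "atom":
        return 0
    if tag in ("or", "and", "implies"):
        return 1 + complexity(phi[1]) + complexity(phi[2])
    if tag in ("forall", "exists"):
        return 1 + complexity(phi[2])
    raise ValueError(tag)
```

A drop in the first coordinate dominates any growth in the second, e.g. `(1, 999) < (2, 0)`, which is why cutting on a strictly smaller formula is progress even when the resulting derivation is taller.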
We say that a rule is admissible in iff derivability of the premise(s) implies derivability of the conclusion in . Additionally, a rule is height-preserving (hp-)admissible iff, whenever the premise of the rule has a derivation of a certain height, the conclusion of the rule has a derivation of the same height or less. Last, a rule is invertible (hp-invertible) iff derivability of the conclusion implies derivability of the premise(s) (with a derivation of the same height or less). Admissible structural rules in are given in Fig. 3.
Lemma 2
For any , , , , and , .
Lemma 3
The rule is hpadmissible in .
Proof
By induction on the height of the given derivation. In the base case, applying to , , or gives an initial sequent, and for each case of the inductive step we apply IH followed by the corresponding rule.
Lemma 4
The rule is hpadmissible in .
Lemma 5
The rule is hpadmissible in .
Lemma 6
The rule is admissible in .
Lemma 7
The rule is hpadmissible in .
Our version of the rule necessitates a stronger form of invertibility, called m-invertibility, for the , , , , and rules (cf. [15]). We use to represent copies of a formula , with .
Lemma 8
If , then


(1)  
(2)  
(3)  
(4)  
(5)  
(6)  
(7)  
(8)  
(9)  
(10)  
(11)  
(12) 
Lemma 9
The , , and rules are hpinvertible in .
Lemma 10
The rule is invertible in .
Lemma 11
The rule is invertible in .
Lemma 12
The rule is invertible in .
Proof
We extend the proof of [15, Lem. 5.11] to include the quantifier rules and thus prove the invertibility of in . The claim is proven by induction on the height of the given derivation. To show the inductive step when the last rule of the derivation is , , , or , we apply IH to the premise(s) of the inference and then apply the corresponding rule. When the last inference of the derivation is an application of the rule, then the inference is of the following form:
We derive the desired conclusion as follows:
Lemma 13
The rule is invertible in .
Lemma 14
The rule is admissible in .
Lemma 15
The rule is admissible in .
Proof
We extend the proof of [15, Lem. 5.13], which proves that is admissible in , and prove the admissibility of in by induction on the height of the given derivation. We need only consider the quantifier rules due to [15, Lem. 5.13]. The , , , and cases are all resolved by applying IH to the premise of the rule followed by an application of the rule. If is applied to the principal components of the rule as follows:
then the desired conclusion is obtained by applying IH to the top right premise. In all other cases, we apply IH to the premises of followed by an application of the rule.
Lemma 16
The rule is admissible in .
Proof
We extend the proof of [15, Lem. 5.14] to include the quantifier rules and argue the claim by induction on the lexicographic ordering of pairs , where is the complexity of the contraction formula and is the height of the derivation. The case is easily handled by applying IH to the premise of the inference followed by an application of the rule. For the and cases, we invoke Lem. 8 and Lem. 9 respectively, apply IH, and then apply the corresponding rule to resolve the cases. The nontrivial case for the rule is shown below left, and the desired conclusion is derived as shown below right (where IH is applicable due to the decreased complexity of the contracted formulae).

Lem. 11 Lem. 15 IH 
When the contracted formulae are both nonprincipal in an inference, we apply IH to the premise followed by an application of the rule. Similarly, and with exception of the case below, if the contracted formulae in an inference are both nonprincipal, then we apply IH to the premises followed by an application of the rule. The nontrivial case occurs as follows:
The desired conclusion is derived as follows:
Note that we may apply IH in the left branch of the derivation since the complexity of the contracted formula is less than , and we may apply IH in the right branch since the height of the derivation is less than the original.
Before moving on to the cut-elimination theorem, we present the definition of the splice operation [15, 17]. The operation is used to formulate the rule.
Theorem 1 (CutElimination)
The rule
where , , and , is eliminable in .
Proof
We extend the proof of [15, Thm. 5.16] and prove the result by induction on the lexicographic ordering of pairs , where is the complexity of the cut formula and is the height of the derivation of the right premise of the rule. Moreover, we assume w.l.o.g. that is used once as the last inference of the derivation (given a derivation with multiple applications of , we may repeatedly apply the elimination algorithm described here to the topmost occurrence of , ultimately resulting in a cut-free derivation). By [15, Thm. 5.16], we know that is eliminable from any derivation in , and therefore, we need only consider cases which incorporate quantifier rules.
If , then the right premise of is an instance of , , or . If none of the cut formulae are principal in the right premise, then the conclusion is an instance of , , or . If, however, one of the cut formulae is principal in the right premise and is an atomic formula , then the top right premise of