Commutative linear logic as a multiple context-free grammar

by Sergey Slavnov, et al.

The formalism of multiple context-free grammars (MCFG) is a non-trivial generalization of context-free grammars (CFG), where the basic constituents on which rules operate are discontinuous tuples of words rather than single words. Just like context-free grammars, multiple context-free grammars have polynomial parsing algorithms, but their expressive power is strictly stronger. It is well known that CFG generate the same class of languages as type logical grammars based on the Lambek calculus, which is, basically, a variant of noncommutative linear logic. We construct a system of type logical grammars based on ordinary commutative linear logic and show that these grammars stand in the same relationship to MCFG as Lambek grammars do to CFG. It turns out that the tuples of words on which MCFG operate can be organized into a symmetric monoidal category, very similar to the category of topological cobordisms; we call it the category of word cobordisms. In particular, this category is compact closed and, thus, a model of linear logic. Interpreting linear logic proofs as word cobordisms allows us to define type logical grammars by adding extra axioms (a lexicon) and interpreting them as cobordisms as well. Such grammars turn out to be equivalent to MCFG.






1 Introduction

A prototypical example of a categorial grammar is the Lambek grammar [17]. Lambek grammars are based on the Lambek calculus, which is, speaking in modern terms, a noncommutative variant of (intuitionistic) linear logic [12]. It is well known that Lambek grammars generate exactly the same class of languages as context-free grammars [25].

However, it is generally agreed that context-free grammars are not sufficient for modeling natural language. Therefore linguists consider various more expressive formalisms. The Lambek calculus has been extended to various multimodal, mixed commutative and mixed nonassociative systems, see [21]. Many grammars operate with more complex constituents than just words. For example, displacement grammars [24], which extend Lambek grammars, operate on discontinuous tuples of words.

Especially interesting (to the author) are abstract categorial grammars (ACG) [10]. Unlike Lambek grammars, these are based on a more intuitive and familiar commutative logic, namely, the implicational fragment of linear logic. Yet their expressive power is much stronger [31]. This, however, comes with a certain drawback. The constituents are, basically, just linear λ-terms, and it is not so easy to identify them with any elements of language. We should also add that there exist hybrid type logical grammars [16], which extend ACG, mixing them with Lambek grammars.

Finally, we note that, although the list of existing grammars seems sufficiently long, there exists a very interesting unifying approach [22]: it turns out that many grammatical formalisms can be faithfully represented as fragments of first order multiplicative intuitionistic linear logic MILL1. This provides some common ground on which different systems can be compared. From the author's point of view it is quite remarkable that the unifying logic is, again, commutative.

In this work we propose one more categorial grammar based on a commutative system, namely on classical linear logic. The linear logic grammars (LLG) of this paper can be seen as an extension of ACG to the full multiplicative fragment. Although, as we just noted, the list of different formalisms is already sufficiently long, we think that our work deserves some interest for at least two reasons.

First, unlike the case of ACG, the constituents of LLG are very simple. They are tuples of words with labeled endpoints, which we call multiwords. Multiwords are directly identified as basic elements of language, and apparently they are somewhat easier to deal with than abstract λ-terms. ACG embed into LLG, so at the very least we give a concrete and intuitive representation of ACG. (We do not know whether LLG have strictly stronger expressive power than ACG, or just the same.)

Second, we identify on the class of multiwords a fundamental algebraic structure. This structure is a category (in the mathematical, rather than linguistic, sense of the word), which is symmetric monoidal closed and compact closed. It is this categorical structure that allows us to represent linear λ-calculus and ACG, as well as classical linear logic. Apparently, at least some other formalisms can be represented in this setting as well. Possibly, this can give some common reference for different systems.

We now discuss this in greater detail.

1.1 Algebraic considerations

The algebraic structure underlying linguistic interpretations of Lambek calculus is that of a monoid.

Indeed, the set of words over a given alphabet is a free monoid under concatenation, and the Lambek calculus can be interpreted as a logic of the poset of subsets of this monoid (i.e. of formal languages). Typically, the sequent

A, B ⊢ C

is interpreted as subset inclusion: the concatenation of the languages A and B is a sublanguage of C.
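This subset reading is easy to experiment with. Below is a minimal sketch; the toy languages A, B, C and the function name are ours, purely for illustration:

```python
def concat(L1, L2):
    """Elementwise concatenation of two languages (sets of words)."""
    return {u + v for u in L1 for v in L2}

A = {"the "}
B = {"cat", "dog"}
C = {"the cat", "the dog", "a cat"}

# The sequent A, B ⊢ C holds in this model: A·B is a sublanguage of C.
assert concat(A, B) <= C
# Concatenation is not commutative, so B, A ⊢ C fails here.
assert not concat(B, A) <= C
```

The failure of the second inclusion is exactly why the Lambek setting is noncommutative.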

When constituents of a grammar are more complicated, such as word tuples, there is no unique concatenation, since tuples can be glued together in many ways. Thus the algebra is more complex.

We consider tuples of words with labeled endpoints, which we call multiwords. Multiwords can be conveniently represented as very simple directed graphs with labeled edges and vertices. They are glued together along matching labels on vertices.

For example, a multiword with two components and a one-component multiword glue together and yield the following one-component multiword.

John likes Mary

The same multiword can also be obtained by gluing a three-component multiword with another multiword, all of whose components are empty.

Unfortunately, nothing precludes us from gluing words cyclically, and thus obtaining cyclic sequences of letters with no endpoints: consider gluing a word with a "wrongly oriented" one.

For consistency we have to allow also such cyclic, or singular, multiwords, which can be represented as closed loops.

Multiwords can be organized in a monoidal category, very similar to the category of topological cobordisms (see [2]). Its objects, boundaries, are sets of vertex labels, and morphisms, word cobordisms, are (equivalence classes of) multiwords, composed by gluing.

The monoidal structure ("tensor product") is just disjoint union.

Thus, we shift from a non-commutative monoid of words to a symmetric monoidal category of word cobordisms. (We find it amusing to abbreviate the latter term as cowordism.)

1.2 Adding logic

The category of cowordisms (over a given alphabet) is not only symmetric monoidal, but also compact closed, just as the category of cobordisms. This makes it a model of classical multiplicative linear logic [27].

When interpreting logic in such a setting, logical consequence no longer corresponds to subset inclusion. A sequent

A₁, …, Aₙ ⊢ B,

given together with its derivation, is now a particular cowordism of type

A₁ ⊗ ⋯ ⊗ Aₙ ⊸ B,

which can be explicitly computed from the derivation.

Adding a lexicon, which is a finite set of non-logical axioms, i.e. cowordisms together with their typing specifications, we obtain a linear logic grammar (LLG).

Syntactic derivations from the lexicon translate directly to cowordisms (which are just tuples of words). This gives us a linear logic grammar; its language consists of all words that can be written as compositions of cowordisms in the lexicon and "natural" cowordisms coming from linear logic proofs.

Speaking more generally, with an LLG we get a subcategory of cowordism types generated by the grammar. This subcategory is, in general, no longer compact. It is, however, a categorical model of linear logic and linear λ-calculus.

Compared with the Lambek calculus setting, we shift from a poset of formal languages to a category of cowordism types.

1.3 Some wishful thinking on categorical semantics

LLG are at least as expressive as abstract categorial grammars (over the string signature). Indeed, ACG are based on a conservative fragment of classical linear logic, so they translate directly to our setting. Thus, cowordisms and LLG provide a concrete categorical model of abstract categorial grammars.

In fact, cowordisms are essentially proof-nets, and the passage from ACG to LLG is, basically, a passage from λ-terms to proof-nets. Now, forgetting about LLG, it seems reasonable that any formalism admitting some version of proof-nets has a representation in the category of cowordisms. (This does not necessarily mean that such a representation is useful.) Possibly, this might provide some common, syntax-independent ground, i.e. a model, for different systems. This might be compared with the representation of different systems in MILL1 in [22].

One of the main features making categorial grammars interesting is that they provide a bridge between language syntax and language semantics (see [23]). Semantics is often modeled by means of a commutative logic, most notably linear logic, as in [9]. But the category of cowordisms is itself a symmetric monoidal category of language elements, which is independent of any grammar. It might prove helpful for understanding this bridge.

An interesting approach is that of categorical compositional distributional models of meaning (DisCoCat) [7], [8]. In DisCoCat it is proposed to model and analyze language semantics by a functorial mapping ("quantization") of syntactic derivations in a categorial grammar to the (symmetric) compact closed category FDVec of finite-dimensional vector spaces. The approach has been developed so far mainly on the basis of Lambek grammars or pregroup grammars (see [18]), which are, from the category-theoretical point of view, non-symmetric monoidal closed. On the other hand, the cowordism category is symmetric and compact closed, and in this sense it is a better mirror of FDVec. Thus it seems a more natural candidate for quantization. Possibly, the cowordism representation may help to apply ideas of DisCoCat to LLG or ACG, thus going beyond context-free languages.

1.4 Structure of the paper

The paper is reasonably self-contained. We assume, however, that the reader has some basic acquaintance with categories, in particular, with monoidal categories, see [19] for background.

In the first section we define the category of word cobordisms (cowordisms). In the second section we discuss monoidal closed categories in general, and the monoidal closed structure of cowordism categories in particular. Section 3 introduces linear logic, its categorical semantics and, finally, linear logic grammars. In Section 4, as an example, we show that multiple context-free grammars can be encoded in LLG, and that every LLG with a -free lexicon generates a multiple context-free language. This result is similar to (and stronger than) the known result that all second order ACG generate multiple context-free languages [26]. The fifth section presents the encoding of ACG into LLG. Finally, in the last section we show how an LLG can generate an NP-complete language. The purpose of this last piece is mainly illustrative: we try to convince the reader that the geometric language of cowordisms is indeed intuitive and convenient for analysing language generation.

2 Word cobordisms

2.1 Multiwords

Let Σ be a finite alphabet. We denote the set of all finite words over Σ as Σ*.

For consistency of definitions we will also have to consider cyclic words.

We say that two words in Σ* are cyclically equivalent if they differ by a cyclic permutation of letters. A cyclic word over Σ is an equivalence class of cyclically equivalent words in Σ*.

For w ∈ Σ* we denote the corresponding cyclic word as ⟨w⟩.

Observe that there exists a perfectly well-defined empty cyclic word.
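Cyclic equivalence is easy to decide by comparing canonical rotations. A small sketch; the function name and the least-rotation convention are ours, not the paper's:

```python
def cyclic_canonical(w: str) -> str:
    """Lexicographically least rotation of w: two words are cyclically
    equivalent iff their canonical rotations coincide."""
    return min(w[i:] + w[:i] for i in range(len(w))) if w else w

assert cyclic_canonical("likesJohn") == cyclic_canonical("Johnlikes")
assert cyclic_canonical("ab") != cyclic_canonical("aa")
assert cyclic_canonical("") == ""  # the empty cyclic word is well defined
```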

Definition 1

A regular multiword M over an alphabet Σ is a finite directed graph with edges labelled by words in Σ*, such that each vertex is adjacent to exactly one edge (so that the edge set is a perfect matching).

The left, respectively right, boundary of a multiword M is the set of vertices of the underlying graph that are heads, respectively tails, of some edges.

We denote the left boundary of M as ∂_l M and the right boundary as ∂_r M.

The boundary ∂M of M is the set ∂M = ∂_l M ∪ ∂_r M.

Definition 2

A multiword over the alphabet Σ is a pair M = (M_reg, M_cyc), where M_reg, the regular part, is a regular multiword over Σ, and M_cyc, the singular or cyclic part, is a finite multiset of cyclic words over Σ.

The boundaries ∂_l M, ∂_r M, ∂M of a multiword M are defined as the corresponding boundaries of its regular part M_reg.

The multiword M is acyclic, or regular, if its singular part is empty. Otherwise it is singular.

A multiword can be pictured geometrically as the edge-labelled graph of its regular part together with a bunch of isolated loops labelled by the elements of its cyclic part. The underlying geometric object is no longer a graph, but it is a topological space; it is even a manifold with boundary. In fact, we can equivalently define a multiword as a 1-dimensional compact oriented manifold with boundary (up to a boundary fixing homeomorphism), whose connected components are labelled by cyclic words, if they are closed, and by ordinary words otherwise.

2.1.1 Gluing

It should be clear from the geometric representation how to glue multiwords. We now give an accurate, if boring, definition.

First, we define the disjoint union of multiwords in the most obvious way.

If M = (M_reg, M_cyc) and N = (N_reg, N_cyc) are multiwords, then we define the disjoint union M ⊔ N as the multiword (M_reg ⊔ N_reg, M_cyc + N_cyc), where M_cyc + N_cyc is the multiset union.

Next we define contraction, which corresponds to elementary gluing.

Let M be a multiword, and let x ∈ ∂_l M, y ∈ ∂_r M (so that x is the head of some edge and y is the tail of some edge).

The contraction of x and y in M, denoted ⟨M⟩_{x,y}, is obtained by identifying x with y in the underlying graph and gluing the corresponding edges into one. The words labeling the edges are also glued, i.e. concatenated.

This means the following.

If the vertices x, y are not connected by an edge, then let z be the tail of the unique edge adjacent to x, and let t be the head of the unique edge adjacent to y. Let u be the word labeling the edge (z, x) and v the word labeling the edge (y, t). We construct a new edge-labelled graph by removing x and y together with their adjacent edges and drawing an edge from z to t. The new edge is labelled by the concatenation uv. The contraction ⟨M⟩_{x,y} is the multiword with this regular part and with the cyclic part of M unchanged.

If x and y are connected by an edge, let u be its label. We remove x, y and this edge from the graph, which gives us a new edge-labelled graph, and we add the cyclic word ⟨u⟩ to the cyclic part. The result is the contraction ⟨M⟩_{x,y}.
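To make the definition concrete, here is a minimal Python sketch of regular multiwords and elementary contraction. The class and function names, and the representation of words as token tuples, are our own, not the paper's:

```python
def cyclic(word):
    """Canonical representative (least rotation) of a cyclic word."""
    return min(word[i:] + word[:i] for i in range(len(word))) if word else word

class Multiword:
    def __init__(self, edges, cycles=()):
        self.edges = list(edges)    # triples (tail, head, word)
        self.cycles = list(cycles)  # multiset of cyclic words
        heads = [h for (_, h, _) in self.edges]
        tails = [t for (t, _, _) in self.edges]
        # perfect matching: every vertex meets exactly one edge
        assert len(set(heads + tails)) == len(heads) + len(tails)
        self.left, self.right = set(heads), set(tails)

def contract(m, x, y):
    """Contract a head x (left boundary) with a tail y (right boundary)."""
    assert x in m.left and y in m.right
    z, _, u = next(e for e in m.edges if e[1] == x)  # the edge z -> x
    _, t, v = next(e for e in m.edges if e[0] == y)  # the edge y -> t
    rest = [e for e in m.edges if e[1] != x and e[0] != y]
    if z == y:  # x and y bound one and the same edge: it closes into a loop
        return Multiword(rest, m.cycles + [cyclic(u)])
    return Multiword(rest + [(z, t, u + v)], m.cycles)

# Gluing "John", "likes", "Mary" into a single component:
m = Multiword([(1, 2, ("John",)), (3, 4, ("likes",)), (5, 6, ("Mary",))])
m = contract(contract(m, 2, 3), 4, 5)
assert m.edges == [(1, 6, ("John", "likes", "Mary"))]

# Cyclic gluing: contracting the two ends of a single edge closes a loop.
c = contract(Multiword([(7, 8, ("a",))]), 8, 7)
assert c.edges == [] and c.cycles == [("a",)]
```

The second example is exactly the singular case: the surviving data is a closed loop carrying a cyclic word.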

Note that iterated contractions commute.

Note 1

Let M be a multiword, and let x, x′ ∈ ∂_l M, y, y′ ∈ ∂_r M with x ≠ x′ and y ≠ y′. Then

⟨⟨M⟩_{x,y}⟩_{x′,y′} = ⟨⟨M⟩_{x′,y′}⟩_{x,y}.
In view of the above we can define multiple contractions.

Definition 3

Let M be a multiword. Let

X ⊆ ∂_l M, Y ⊆ ∂_r M,

and let φ: X → Y be a bijection.

The contraction of X and Y along φ in M is defined by

⟨M⟩_{X,Y} = ⟨…⟨⟨M⟩_{x₁,φ(x₁)}⟩_{x₂,φ(x₂)}…⟩_{xₙ,φ(xₙ)},

where x₁, …, xₙ is any enumeration of the elements of X.

(We omit the bijection φ from the notation, because it will be clear from the context.)

Now let two multiwords M, N be given.

Assume that we have subsets

X ⊆ ∂M, Y ⊆ ∂N

and two bijections

φ: X ∩ ∂_l M → Y ∩ ∂_r N, ψ: X ∩ ∂_r M → Y ∩ ∂_l N.

Let M ⊔ N be the disjoint union of M and N.

The gluing of M and N along X and Y is defined as the multiple contraction, in M ⊔ N, of the images of these subsets along the bijection induced by φ and ψ.

2.2 Category of word cobordisms

2.2.1 Cowordisms

We remarked above that multiwords can be represented geometrically as very simple manifolds with boundary. Manifolds with boundary give rise to the category of cobordisms, see [2]. We are now going to define a similar category of word cobordisms. We find it amusing to abbreviate the latter term as cowordism, and we will do so.

Definition 4

A boundary is a finite set equipped with a partition into two disjoint subsets.

Now, we want to look at a multiword as a morphism between boundaries. For that, we need to understand which part of its boundary is the input, and which is the output. This leads to the following definition.

Definition 5

Let X, Y be boundaries, with partitions X = X_l ⊔ X_r, Y = Y_l ⊔ Y_r.

A cowordism

σ: X → Y

over an alphabet Σ from X to Y is a triple

σ = (M, φ, ψ),

where M is a multiword over Σ together with two injective labeling functions

φ: X → ∂M, ψ: Y → ∂M,

whose images partition ∂M, such that ψ preserves the left–right partition (ψ(Y_l) ⊆ ∂_l M, ψ(Y_r) ⊆ ∂_r M), while φ reverses it (φ(X_l) ⊆ ∂_r M, φ(X_r) ⊆ ∂_l M).
A cowordism is regular if its underlying multiword is regular. Otherwise the cowordism is singular.

For our purposes it is necessary to identify cowordisms that differ by an inessential relabeling of boundaries. Therefore we supplement our definition of a cowordism with a definition of cowordism equality.

Definition 6

Two cowordisms (M, φ, ψ) and (M′, φ′, ψ′) between the same boundaries are equal if their singular parts coincide and there is an edge-labeled graph isomorphism ρ of the regular parts, preserving the left and right boundaries, such that

ρ ∘ φ = φ′, ρ ∘ ψ = ψ′.

In the sequel we will systematically abuse notation and denote a cowordism and its underlying multiword by the same letter.

Note, however, that, generally speaking, a cowordism and a multiword are two different structures. In particular, two non-equal multiwords can represent the same cowordism (see the definition of cowordism equality above).

We are going to organise cowordisms into a compact closed category (to be discussed below). Since cowordisms, by definition, have a geometric representation, it is natural to adopt the pictorial language (see [29]) used for such categories.

We can depict an abstract cowordism schematically as a box with incoming and outgoing wires, like the following.

Or, using fewer labels on the wires, like the following.

(Of course, for a concrete cowordism there are as many wires as there are points in the boundaries X and Y.)

2.2.2 Composition

Cowordisms are composed simply by gluing multiwords along matching boundary parts.

In the pictorial language of boxes and wires, given two cowordisms

σ: X → Y, τ: Y → Z,

the composition τ ∘ σ is represented in the most natural way.

An accurate definition is as follows.

Let X, Y, Z be boundaries, and let

σ = (M, φ_X, φ_Y), τ = (N, ψ_Y, ψ_Z)

be cowordisms from X to Y and from Y to Z respectively.

Consider the disjoint union M ⊔ N.

We have the injective maps

Y → ∂M, Y → ∂N,

obtained from restrictions of φ_Y, ψ_Y respectively.

Denote the image of the first map as Y_σ and the image of the second as Y_τ.

The composition τ ∘ σ is defined as the gluing of M and N along Y_σ and Y_τ, identified by means of the bijection ψ_Y ∘ φ_Y⁻¹, i.e. as the corresponding multiple contraction in M ⊔ N.

Restrictions of φ_X to X and of ψ_Z to Z provide the necessary labeling functions, which make the constructed multiword a cowordism from X to Z.

It follows from Note 1 and the definition of cowordism equality that composition is associative.
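In the wire picture, composition is just chain fusion with word concatenation. A simplified sketch; the flat wire representation, the port names and the function names are ours, and we assume orientations have been arranged so that glued ends always meet head-to-tail:

```python
def canon(w):
    """Least rotation: represents a closed loop as a cyclic word."""
    return min(w[i:] + w[:i] for i in range(len(w))) if w else w

def compose(wires_f, wires_g, shared):
    """Glue two sets of directed, word-labelled wires along shared ports.
    Each shared port is the source of one wire and the target of another;
    every other port meets exactly one wire end."""
    wires = list(wires_f) + list(wires_g)
    out_of = {s: (d, w) for (s, d, w) in wires}
    open_wires, loops, seen = [], [], set()
    # follow chains that start at a port surviving the gluing
    for s, d, w in wires:
        if s in shared:
            continue
        while d in shared:
            seen.add(d)
            d, v = out_of[d]
            w += v
        open_wires.append((s, d, w))
    # whatever remains closes up into loops, i.e. cyclic words
    for p in shared:
        if p in seen:
            continue
        q, w = out_of[p]
        seen.update({p, q})
        while q != p:
            q, v = out_of[q]
            seen.add(q)
            w += v
        loops.append(canon(w))
    return open_wires, loops

# f : x -> y carries "ab"; g : y -> z carries "cd"; gluing at y concatenates.
assert compose([("x", "y", "ab")], [("y", "z", "cd")], {"y"}) == \
    ([("x", "z", "abcd")], [])
# Two opposite wires glued at both ends close into the cyclic word "ab".
assert compose([("p", "q", "a")], [("q", "p", "b")], {"p", "q"}) == \
    ([], ["ab"])
```

The second assertion shows why singular multiwords are unavoidable: composition can produce closed loops.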

2.2.3 Identities

In order to construct a category we only need to find identities.

Let X be a boundary.

The identity cowordism id_X: X → X is constructed as follows.

Take two copies of X, an input copy and an output copy. Draw a directed edge from each point of X_l in the input copy to its image in the output copy, and a directed edge from each point of X_r in the output copy to its image in the input copy. Label every constructed edge with the empty word. This gives us an acyclic multiword with the left and right boundaries isomorphic to the corresponding parts of X.
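Since every wire of the identity carries the empty word, gluing with it changes nothing. In the simplified wire notation (ours, not the paper's):

```python
def fuse(w1, w2):
    """Fuse wire (s, m, u) with wire (m, d, v) into a single wire (s, d, uv)."""
    (s, m, u), (m2, d, v) = w1, w2
    assert m == m2, "wires must meet at a common port"
    return (s, d, u + v)

# Identity wires are labelled by the empty word, hence neutral for gluing:
assert fuse(("x", "p", "word"), ("p", "y", "")) == ("x", "y", "word")
assert fuse(("q", "x", ""), ("x", "y", "word")) == ("q", "y", "word")
```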

In the pictorial language, the identity looks as follows.

It is immediate now that the following is well defined.

Definition 7

The category of cowordisms over the alphabet Σ has boundaries as objects and cowordisms over Σ as morphisms.

2.3 Over the empty alphabet

Note that even when the alphabet is empty, the category of cowordisms is nontrivial. In fact, it is then literally the category of oriented 1-dimensional cobordisms.

In the sequel we will use the term cobordism for a cowordism over the empty alphabet.

Given two boundaries X, Y and a cowordism σ: X → Y over some alphabet Σ, we define the pattern of σ as the cobordism from X to Y obtained by erasing all letters from σ.

3 Cowordisms and monoidal closed categories

3.1 Structure of cowordisms category

The category of cowordisms has a rich structure (which it inherits, in fact, from the underlying category of cobordisms).

It is a symmetric monoidal closed, ∗-autonomous and compact closed category, which makes it a model of linear λ-calculus and of classical multiplicative linear logic.

3.1.1 Monoidal structure

First, the operation of disjoint union makes this category monoidal.

The tensor product ⊗ on the category of cowordisms is defined both on objects and on morphisms as the disjoint union.

The monoidal unit is the empty boundary, 1 = ∅.

Obviously, the tensor product of cowordisms is associative up to a natural isomorphism.

In order to avoid very cumbersome notation we will, as is quite customary in the literature, treat the category of cowordisms as strict monoidal. That is, we will write X ⊗ Y ⊗ Z without brackets, as if the associativity isomorphisms were strict equalities. Similarly, we will usually identify 1 ⊗ X and X ⊗ 1 with X. This is legitimate, because any monoidal category is equivalent to a strict monoidal category; see [19], Chapter VII for details.
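Computationally, the tensor product itself is trivial: in the wire picture it is a disjoint union, which only requires tagging ports so that the two halves cannot collide (the tags 0 and 1 are our convention):

```python
def tensor(wires_f, wires_g):
    """Disjoint union of two cowordisms in the wire representation."""
    def tag(wires, t):
        return [((t, s), (t, d), w) for (s, d, w) in wires]
    return tag(wires_f, 0) + tag(wires_g, 1)

f = [("x", "y", "ab")]
assert tensor(f, f) == [((0, "x"), (0, "y"), "ab"),
                        ((1, "x"), (1, "y"), "ab")]
```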

In the pictorial language, given two cowordisms

σ: X → Y, τ: Z → T,

we depict the tensor product σ ⊗ τ as two disjoint boxes.

For an abstract cowordism of the form

σ: X₁ ⊗ ⋯ ⊗ Xₙ → Y,

it is convenient to depict σ as a box with different slots for different tensor factors, as follows.

When the cowordism is of the form

σ: 1 → Y,

it is natural to represent it without wires on the left, as follows.

3.1.2 Symmetry

The above monoidal structure is also symmetric.

The symmetry transformation

s_{X,Y}: X ⊗ Y → Y ⊗ X

is given for any boundaries X, Y by the following cowordism.

Take a copy of X ⊗ Y and a copy of Y ⊗ X. For each x ∈ X_l draw a directed edge from the image of x in X ⊗ Y to its image in Y ⊗ X, and similarly for each y ∈ Y_l. Then for each x ∈ X_r draw a directed edge from the image of x in Y ⊗ X to its image in X ⊗ Y, and similarly for each y ∈ Y_r. Label each constructed edge with the empty word. This gives an acyclic multiword, which is a cowordism from X ⊗ Y to Y ⊗ X in the obvious way.

In the pictorial language, symmetry looks as follows.

Note 2

The tensor product, monoidal unit and symmetry defined above make the category of cowordisms a symmetric monoidal category.

3.1.3 Duality and internal homs

The category of cowordisms also has a well-behaved contravariant duality (−)^⊥, defined by switching left and right.

Let X be a boundary.

The dual X^⊥ of X is defined by

(X^⊥)_l = X_r, (X^⊥)_r = X_l.

On morphisms, duality amounts to relabeling boundary points.

Let σ: X → Y be a cowordism.

By definition, σ is a multiword M together with two labeling functions

φ: X → ∂M, ψ: Y → ∂M.

Let

ι_X: X^⊥ → X, ι_Y: Y^⊥ → Y

be the natural bijections.

Then the triple

σ^⊥ = (M, ψ ∘ ι_Y, φ ∘ ι_X)

is a cowordism from Y^⊥ to X^⊥.

In the pictorial language, given a cowordism σ, the dual cowordism σ^⊥ looks as follows.

Note 3

The duality defined above is a contravariant functor commuting with the tensor product: (X ⊗ Y)^⊥ = X^⊥ ⊗ Y^⊥.

Tensor and duality equip the category of cowordisms with a very rich categorical structure, which we discuss in the next section.

3.2 Zoo of monoidal closed categories

Definition 8

Monoidal closed category is a symmetric monoidal category equipped with a bifunctor , contravariant in the first entry and covariant in the second entry, such that there exists a natural bijection


The functor in the above definition is called internal homs functor.

Definition 9

[4] A ∗-autonomous category is a symmetric monoidal category C equipped with a contravariant functor (−)^⊥, such that there is a natural isomorphism

A ≅ A^⊥⊥

and a natural bijection

C(A ⊗ B, C^⊥) ≅ C(A, (B ⊗ C)^⊥). (2)

Duality equips a ∗-autonomous category with a second monoidal structure. The cotensor product is defined by

A ℘ B = (A^⊥ ⊗ B^⊥)^⊥.

The neutral object for the cotensor product is

⊥ = 1^⊥.

Any ∗-autonomous category is monoidal closed. The internal homs functor is defined by

A ⊸ B = A^⊥ ℘ B.

Note that we have a natural isomorphism

C(A, B) ≅ C(1, A ⊸ B).
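For reference, the ∗-autonomous identities above can be collected in standard linear-logic notation (our transcription):

```latex
A \parr B = (A^{\perp} \otimes B^{\perp})^{\perp},
\qquad \bot = \mathbf{1}^{\perp},
\qquad A \multimap B = A^{\perp} \parr B,
\qquad \mathcal{C}(A, B) \cong \mathcal{C}(\mathbf{1},\, A \multimap B).
```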

Definition 10

[15] A compact closed or, simply, compact category is a ∗-autonomous category for which duality commutes with the tensor product, i.e. such that

(A ⊗ B)^⊥ = A^⊥ ⊗ B^⊥. (3)

For compact categories it is convenient to define internal homs by

A ⊸ B = A^⊥ ⊗ B.
A prototypical example of a compact category is the category of finite-dimensional vector spaces with the usual tensor product and algebraic duality. Note, however, that in this case and, in general, in the algebraic setting, duality is denoted by a star (−)*. Another example of a compact category, widely used in mathematics and important for our discussion, is the category of cobordisms.

Note 4

The category of cowordisms is compact closed (hence monoidal closed and ∗-autonomous).

The proof is left as an exercise.

The compact structure provides a lot of important maps and constructions. A short and readable introduction to the subject can be found, for example, in [1].

We pick some necessary bits in the next section.

3.2.1 Names

Let C be a monoidal closed category.

For any morphism

f: A → B,

correspondence (1) together with the isomorphism

1 ⊗ A ≅ A

yields the morphism

⌜f⌝: 1 → A ⊸ B,

sometimes called the name of f.

In the case of cowordisms, the name of a cowordism σ can be depicted as follows.

3.2.2 Applications

As before, let C be a monoidal closed category.

For any two objects A, B, correspondence (1) composed with symmetry, applied to the identity of A ⊸ B, yields the evaluation morphism

ev: (A ⊸ B) ⊗ A → B.

In the compact closed case, where we have the identifications (3), evaluation is especially simple.

We have the natural pairing map

ε_A: A^⊥ ⊗ A → 1,

usually called the counit, and evaluation can be computed, up to symmetry of the tensor factors, as

ev = id_B ⊗ ε_A.

In the case of cowordisms the pairing has the following shape (remember that A ⊸ B = A^⊥ ⊗ B and that duality switches left and right).

The evaluation ev, accordingly, is pictured as follows.
Now, given two morphisms

f: X → A ⊸ B, g: Y → A,

we can define the application

f(g): X ⊗ Y → B

of f to g as

f(g) = ev ∘ (f ⊗ g).

The following property holds in any monoidal closed category.

Note 5

For any two morphisms

f: A → B, g: Y → A,

it holds that

⌜f⌝(g) = f ∘ g.
In the case of cowordisms, the property is evident from geometric representation.

3.2.3 Partial pairing

Now let C be a ∗-autonomous category.

For any objects A, B, C there is a natural linear distributivity morphism [6]

δ: A ⊗ (B ℘ C) → (A ⊗ B) ℘ C.

In the compact closed case, where cotensor and tensor can be identified, linear distributivity is just associativity of the tensor product.

Using linear distributivity, for any two morphisms

f: X → A ℘ B, g: Y → B^⊥ ℘ C,

we can define the partial pairing

⟨f, g⟩_B: X ⊗ Y → A ℘ C

of f and g over B, by composing f ⊗ g with linear distributivity morphisms and the pairing of B and B^⊥.

In the case of cowordisms, given two cowordisms f and g as above, the partial pairing has the following shape.

Partial pairing can be understood as a symmetrized composition, as the following observation shows.

Note 6

For all morphisms

it holds that

3.3 Categories of cowordism types

We now discuss subcategories of the category of cowordisms which are no longer compact, but are monoidal closed. They will be helpful for understanding the categorial grammars considered in this paper.

Definition 11

Given a boundary X, a cowordism type over an alphabet Σ on the boundary X or, simply, a type on X, is a set of cowordisms over Σ from the unit (the empty boundary) to X.

A set of cowordisms over the alphabet Σ is a cowordism type or, simply, a type, if it is a type on some boundary.

Given a type A, we denote the corresponding boundary as |A|.

Definition 12

Given two cowordism types A, B over the same alphabet, a cowordism

σ: |A| → |B|

is a morphism of types

σ: A → B

if for any τ ∈ A it holds that σ ∘ τ ∈ B.
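In a toy model where a cowordism from the unit to a boundary is identified with the single word it carries, the morphism-of-types condition becomes a one-line check. The example types and the function name below are ours, purely for illustration:

```python
def is_type_morphism(f, A, B):
    """f is a morphism of types A -> B iff it maps every element of A into B."""
    return all(f(a) in B for a in A)

NP = {"John", "Mary"}
S = {a + " likes " + b for a in NP for b in NP}

to_sentence = lambda np: np + " likes Mary"
assert is_type_morphism(to_sentence, NP, S)
assert not is_type_morphism(lambda np: np + " likes", NP, S)
```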

Obviously, morphisms of types compose, and identity cowordisms are morphisms of types. So, types over an alphabet Σ form a category.

Categories of types inherit the symmetric monoidal, and even monoidal closed, structure of the category of cowordisms.

For two types A, B we define the tensor product type A ⊗ B as the type on the tensor product of boundaries |A| ⊗ |B| given by

A ⊗ B = {α ⊗ β | α ∈ A, β ∈ B}.

We define the internal homs type A ⊸ B as the type on the boundary

|A| ⊸ |B|

given by

A ⊸ B = {σ | for all α ∈ A, σ(α) ∈ B}.