Minimum Description Length and Compositionality

01/04/2000
by Wlodek Zadrozny, et al.

We present a non-vacuous definition of compositionality. It is based on the idea of combining the minimum description length principle with the original definition of compositionality (that is, that the meaning of the whole is a function of the meaning of the parts). The new definition is intuitive and allows us to distinguish between compositional and non-compositional semantics, and between idiomatic and non-idiomatic expressions. It is not ad hoc, since it does not make any references to non-intrinsic properties of meaning functions (like being a polynomial). Moreover, it allows us to compare different meaning functions with respect to how compositional they are. It bridges linguistic and corpus-based, statistical approaches to natural language understanding.


1 Introduction

In [12] we have shown that the standard definition of compositionality is formally vacuous; that is, any semantics can easily be encoded as a compositional semantics. We have also shown that when compositional semantics is required to be “systematic”, it is possible to introduce a non-vacuous concept of compositionality. However, a technical definition of systematicity was not given in that paper; only examples of systematic and non-systematic semantics were presented. As a result, although our paper clarified the concept of compositionality, it did not solve the problem of the systematic assignment of meanings. In other words, we showed that the concept of compositionality is vacuous, but we did not replace it with a better definition: a definition that would be both mathematically correct and faithful to the common intuition that there are parts of grammars which seem to have compositional semantics, and others, like idioms, that do not. We present such a non-vacuous definition of compositionality in this chapter.

Compositionality has been defined as the property that the meaning of a whole is a function of the meaning of its parts (cf. e.g. [6], pp. 24-25). A slightly less general definition, e.g. [8], postulates the existence of a homomorphism from syntax to semantics. Although intuitively clear, these definitions are not restrictive enough, and the fact that any semantics can be encoded as a compositional semantics has some strange consequences. We can find, for example, an assignment of meanings to phonemes, or even to the letters of the alphabet (as the cabalists wanted), and ensure that the normal, intuitive meaning of any sentence is a function of the meanings of the phonemes or letters from which that sentence is composed (cf. [12]).

To address these kinds of problems we have several options. We can:
(a) Avoid admitting that there is a problem (e.g. by claiming that compositionality was never intended to be expressible in mathematical terms);
(b) Add additional constraints on the shape or behavior of meaning functions (e.g. that they are ”polynomial”, preserve entailment, etc.);
(c) Re-analyze the concept of compositionality and the associated intuitions: that the meaning of a sentence is derived in a systematic way from the meanings of the parts; that the meanings of the parts have some intuitive simplicity associated with them; and that compositionality is a gradable property, i.e. one way of building a compositional semantics might be better than another.

We will follow course (c). The emphasis will be on simplicity, but the development of ideas will be formal. (The mathematics will be relatively simple). The bottom line will be that compositional semantics can be defined as the simplest semantics that obeys the compositionality principle.

2 Basic concepts and notations

In this section we discuss the issues in representing linguistic information, i.e. the relationship between languages and their models. The first, and simplest, case to discuss is when natural language is treated as a set of words; then, the simplest formal model of a natural language corpus can be the corpus itself. A more complicated model would be a grammar generating the sentences of the corpus; this model is better because it is more compact.

A more interesting case arises when some semantics for the corpus is given. Then, representations become less obvious, and more complicated. Thus to keep the complexity of our presentation under control, we will discuss only very simple cases of natural language constructions. This should be enough to show how to define and build compositional semantics for small language fragments.

Although our methods do not depend on the size and shape of the corpora, we would like to point out that computing compositional semantics for a large, real corpus of natural language sentences would require a separate research project, and certainly goes beyond the aims of this chapter.

The following issues will now be discussed: (1) representing corpora of sentences using grammars; (2) representing meaning functions; (3) the size and expressive power of representations.

2.1 Notation and essential concepts

2.1.1 Sentences, grammars, and meanings

A corpus is an unordered set (bag) of sentences; a sentence is a sequence of symbols from some alphabet.

A class is a set of sequences of symbols from the alphabet. In our notation, denotes a class consisting of and .

The length of an expression is the number of its symbols. To make our computations simpler, we will assume that all symbols of the alphabet are atomic, and hence of length 1; the same holds for variables. Parentheses, commas, and most of the other notational devices also all have length 1; but we will not count semicolons, which we will occasionally use as a typographical device standing for “end of line”. In several cases, we will give the length (in parentheses) together with an expression, e.g. .

We define a (finite state) grammar rule as a sequence of classes. E.g. the rule describes all the combinations . We will go beyond finite state grammars when we discuss compositional semantics, and will introduce an extension of this notation there.

The reader should always remember that, mathematically, a function is defined as a set of pairs . Thus, a function does not have to be given by a formula. A formula is not a function, although it might define one: e.g. a description of one entity, like energy, depending on another, e.g. velocity, is typically given as a formula, which defines a function (a set of pairs).

A meaning function m is a (possibly partial) function that maps sentences (and their parts) into (a representation of) their meanings; typically, some set-theoretic objects like lists of features or functions. A meaning function m is compositional if for all elements s, t in its domain:

m(s ∘ t) = f(m(s), m(t))

We restrict our interest to two-argument functions: ∘ denotes the concatenation of symbols, and f is a function of two arguments. However, the same concept can be defined if expressions are put together by other, not necessarily binary, operations. In the literature, f is often taken to be a composition of functions; but in this chapter it will mostly be used as an operator for constructing a list, where some new attributes are added to m(s) and m(t). This has the advantage of being both conceptually simpler (no need for type raising) and closer to the practice of computational linguistics.
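As a minimal illustration of the compositionality postulate, here is a hedged Python sketch; the lexicon, the attribute lists, and the list-building f are invented for the example, and only the shape of the postulate comes from the text:

```python
# A meaning function over words and two-word phrases, compositional by
# construction: the meaning of a phrase is f applied to the part meanings.

def make_meaning_function(lexicon, f):
    """Return a meaning function m defined on words and two-word phrases."""
    def m(expr):
        words = expr.split()
        if len(words) == 1:
            return lexicon[expr]
        s, t = words
        return f(m(s), m(t))      # m(s . t) = f(m(s), m(t))
    return m

# Hypothetical toy lexicon: each word maps to an attribute list.
lexicon = {"dog": ["noun", "dog"], "runs": ["verb", "run"]}
f = lambda ms, mt: ["sentence"] + ms + mt   # f builds a list, adding an attribute
m = make_meaning_function(lexicon, f)

assert m("dog runs") == ["sentence", "noun", "dog", "verb", "run"]
```

Here f constructs a list rather than composing functions, matching the chapter's preference for list-building operators over type raising.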

2.1.2 Minimum description length

The minimum description length (MDL) principle was proposed by Rissanen [10]. It states that the best theory to explain a set of data is the one which minimizes the sum of

  • the length, in bits, of the description of the theory, and

  • the length, in bits, of data when encoded with the help of the theory.

In our case, the data is the language we want to describe, and the encoding theory is its grammar (which includes the lexicon). The MDL principle justifies the intuition that a more compact grammatical description is better. The issue is what counts as the best encoding. To address it, we will simply be comparing classes of encodings. The formal side of the argument will be kept to a minimum, and the mathematics will be simple: counting symbols.¹

¹ We assume that the corpus contains no errors (noise), so we do not have to worry about defining prior distributions.

Counting symbols instead of bits does not change the line of the MDL argument, given an alternative formulation of the MDL principle ([7], p. 310):

“Given a hypothesis space H, we want to select the hypothesis such that the length of the shortest encoding of [i.e. the data] together with the hypothesis is minimal. In different applications, the hypothesis can be about different things. For example, decision trees, finite automata, Boolean formulas, or polynomials.”


An important aspect of the MDL method is that this complexity measure is invariant with respect to the representation language (because of the invariance of the Kolmogorov complexity on which it is based). The existence of such invariant complexity measures is not obvious; for example, H. Simon ([11], p. 228) wrote: “How complex or simple a structure is depends critically upon the way in which we describe it. Most of the complex structures found in the world are enormously redundant, and we can use this redundancy to simplify their description. But to use it, to achieve this simplification, we must find the right representation.”
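The two-part cost can be sketched in a few lines of Python; the symbol-counting encoding below is an illustrative assumption, not the chapter's exact one:

```python
# A hedged sketch of the two-part MDL cost, counting symbols rather than
# bits, as this chapter does. A "description" is modeled as a list of
# strings; its length is the number of non-blank symbols it contains.

def symbol_length(description):
    """Total number of symbols in a description (spaces are not counted)."""
    return sum(len(line.replace(" ", "")) for line in description)

def mdl_cost(theory, data_given_theory):
    """Two-part cost: the length of the theory plus the length of the
    data when encoded with the help of the theory."""
    return symbol_length(theory) + symbol_length(data_given_theory)

# Hypothetical example: a tiny grammar plus the corpus re-encoded over it.
grammar = ["N = ab cd", "R = N 1"]      # class definition and one rule
encoded_corpus = ["ab", "cd"]           # which N-word each sentence uses
cost = mdl_cost(grammar, encoded_corpus)   # 10 + 4 = 14 symbols
```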

2.2 Encoding a corpus of sentences

Assume that we are given a text in an unknown language (containing lower and uppercase letters and numbers):

(We use the pluses to separate utterances, so there is no order implied.) We are interested in building a grammar describing the text. For a short text, the simplest grammar might in fact be the grammar consisting of the list of all valid sentences:

This grammar has only 25 symbols. However, if a new corpus is presented

The listing grammar would have 49 symbols, and a shorter grammar, with only 39 symbols, could be found:
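Since the concrete symbols of the example are not reproduced here, the following sketch replays the same comparison on a hypothetical corpus: a listing grammar pays for every symbol of every sentence, while a factored grammar shares the common parts.

```python
# Hypothetical corpus: every sentence is one of {ab, cd} followed by a digit.
corpus = ["ab1", "ab2", "ab3", "cd1", "cd2", "cd3"]

# Listing grammar: simply the list of all valid sentences.
listing_size = sum(len(s) for s in corpus)                 # 18 symbols

# Factored grammar: classes A = {ab, cd}, N = {1, 2, 3}, and the rule A N.
classes = {"A": ["ab", "cd"], "N": ["1", "2", "3"]}
rule = ["A", "N"]
factored_size = (sum(len(w) for ws in classes.values() for w in ws)
                 + len(rule))                              # 7 + 2 = 9 symbols

assert factored_size < listing_size
```

As in the text, the advantage of the factored grammar grows with the corpus: adding more sentences of the same shape costs the listing grammar three symbols each and the factored grammar at most one new class element.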

2.3 How to encode semantics?

We will now examine a similar example that includes some simple semantics.

Consider a set of nouns , and a set of verbs , . Let be kick and be bucket; all other noun-verb combinations are intended to have normal, “compositional” meanings. If our corpus were to be the table consisting of all verb-noun combinations:

we could quickly use the previous example to write a simple finite state grammar that describes the corpus:

But in this subsection we are supposed to introduce some semantics. Thus, let our corpus consist of all those 1,000 sentences together with their meanings, which, to keep things as simple as possible, will be reduced to two attributes. Also, for simplicity, we assume that only “kick bucket” has an idiomatic meaning, and all other entries are assigned the meaning consisting of the two-attribute expression . Hence, our corpus will look as follows:

Now, notice that this corpus cannot be encoded by means of a short finite state grammar, because of the dependence of the meanings (i.e. the pair ) on the first two elements of each sentence. We will have to extend our grammar formalism to address this dependence (Section 3).

2.4 On meaning functions

Even though we cannot encode the corpus by a short, finite state grammar, we can easily provide for it a compositional semantics. To avoid the complications of type raising, we will build a homomorphic mapping from syntax to semantics. To do it, it is enough to build meaning functions in a manner ensuring that the meaning of each is composed from the meaning of and the meaning of . Since our corpus is simple, these meaning functions are simple, too: For the verbs the meaning function is given by the table:

For the nouns:

We have represented both meaning functions as tables of symbols. Since this chapter deals with sizes of objects, we compute them for the meaning functions: the size of the first function is , and for the second one it is . Therefore, the meaning function for the whole corpus could be represented as a table with 1,000 entries:

and the size of this table is . Finally, the total size of the tables that describe the compositional interpretation of the corpus is , i.e. roughly . Notice that if we had more verbs and nouns, the tables describing the meaning functions would be even larger.²

² The reader familiar with [12] should notice that the meaning functions obtained by the solution lemma also consist of tables of element-value pairs. It is easy to see that for the corpus we are encoding, the solution lemma produces the same meaning functions. In the other direction, the method for deriving compositional semantics using the minimum description length principle (Sections 3 and 4) is directly applicable to meaning functions obtained by the solution lemma in [12], provided they are finite (which covers the practically interesting cases); it also seems applicable to the infinite case, if it has a finite representation. However, we will not pursue this connection any further. Also, note that we have not counted the cost of encoding the positions of the elements of the table, which would be the logarithm of the total number of symbols in the table. This simplifying assumption does not change anything in the strength of our arguments (as larger tables have longer encodings).
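The size contrast between the tabular meaning function and the lexical tables can be checked with a short sketch; the attribute pairs are illustrative placeholders for the meanings the text leaves unspecified:

```python
# 10 verbs and 100 nouns give 1,000 verb-noun sentences.
verbs = ["v%d" % i for i in range(10)]
nouns = ["n%d" % j for j in range(100)]

# Meaning function as one table: an entry per sentence of the corpus.
full_table = {(v, n): ("verb:" + v, "noun:" + n) for v in verbs for n in nouns}

# Meaning functions as lexical tables: an entry per word, plus one rule
# (not counted here) for combining a verb meaning with a noun meaning.
verb_meanings = {v: "verb:" + v for v in verbs}
noun_meanings = {n: "noun:" + n for n in nouns}

assert len(full_table) == 1000
assert len(verb_meanings) + len(noun_meanings) == 110
```

The 1,000-entry table grows multiplicatively with the lexicon, while the lexical tables grow only additively; this is the asymmetry the MDL principle will exploit in Section 3.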

3 Compositional semantics through the Minimum Description Length principle

In this section we first extend our notation to deal with semantic grammars. Then we apply the minimum description length principle to construct a compact representation of our example corpus. This experience will motivate our new, non-vacuous definition of the notion of compositional semantics given in Section 4.

3.1 Representations

We have seen that it is impossible to efficiently encode our semantic corpus using a finite state grammar. Therefore, we have to make our representation of grammars more expressive (at the price of a slightly bigger interpreter). Namely, we will allow a simple form of unification.

Example. Assume we do not want to generate . We can do it by changing the notation:

The intention is simple: first, we define a class variable () for the class consisting of elements and ; then, we generate all strings using the rule with variable : and ; and finally we substitute for all its possible values, which produces , , and .

More generally, let us assume that we have an alphabet , and a set of (class) variables . A grammar term, denoted by , is either a sequence of symbols from the alphabet or a class variable. By a grammar rule we will understand one of the three expressions

A grammar is a collection of grammar rules. The language generated by the grammar is defined as above.

Thus, new classes are obtained from elements of the alphabet by either the merge operation, which on two classes and produces a new class consisting of the set theoretic union of the two: ; or by concatenating elements of two or more classes. We permit renaming of classes, because we want to be able to express constructions like :
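A hedged sketch of the two operations on classes, merge (set-theoretic union) and concatenation, in Python; the representation of classes as string lists is an assumption, and renaming is omitted:

```python
from itertools import product

def merge(class_a, class_b):
    """Merge rule: the new class is the union of the two classes."""
    return sorted(set(class_a) | set(class_b))

def concatenate(*classes):
    """Concatenation rule: every way of picking one string per class."""
    return ["".join(parts) for parts in product(*classes)]

X = merge(["a"], ["b"])           # class variable X = {a, b}
language = concatenate(X, ["c"])  # the rule "X c"
assert language == ["ac", "bc"]
```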

3.2 An MDL algorithm for encoding semantic corpora

In [4] a greedy algorithm for clustering elements into classes is presented. The algorithm tries³ to minimize the description length of grammars according to the MDL principle. This algorithm would not work properly on our semantic corpus, because Grunwald’s representation language is not expressive enough. However, the representation of grammars we introduced above allows us to use the same algorithm with only minor changes.

³ There is no guarantee that the algorithm will produce the minimum length description.

The basic steps of the greedy MDL algorithm are as follows:

  1. Assign a separate class to each distinct word (symbol) in the corpus, and substitute the class for each word in the corpus. This is the initial grammar .

  2. Compute the total description length (DL) of the corpus, i.e. the sum of the DL of the corpus given the grammar and the DL of the grammar itself.

  3. Compute, for all pairs of classes in the grammar, the difference in DL that would result from a merge of these two classes.

  4. Compute, for all pairs of classes in the grammar, the difference in DL that would result from the construction of a new class given by the concatenation rules.

  5. If one or more operations result in a smaller new DL, perform the operation that produces the smallest new DL, and go to Step 2.

  6. Else, stop.
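The loop above can be sketched as follows; this is a simplified, hedged implementation (sentences are word tuples, the DL is a crude symbol count, and only the merge step is implemented, since Step 4 does not help on corpora as simple as the toy one below):

```python
from itertools import combinations

def dl(grammar, corpus):
    """Crude description length in symbols: the class definitions plus the
    distinct corpus lines rewritten over class names."""
    word_to_class = {w: c for c, ws in grammar.items() for w in ws}
    lines = {tuple(word_to_class[w] for w in line) for line in corpus}
    defs = sum(len(w) + 1 for ws in grammar.values() for w in ws)
    return defs + sum(len(line) for line in lines)

def greedy_mdl(corpus):
    # Step 1: a separate class for each distinct word.
    grammar = {w: {w} for line in corpus for w in line}
    while True:
        base = dl(grammar, corpus)                      # Step 2
        best, best_dl = None, base
        for a, b in combinations(sorted(grammar), 2):   # Step 3: try merges
            merged = {c: ws for c, ws in grammar.items() if c not in (a, b)}
            merged[a] = grammar[a] | grammar[b]
            d = dl(merged, corpus)
            if d < best_dl:
                best, best_dl = merged, d
        if best is None:                                # Step 6: stop
            return grammar
        grammar = best                                  # Step 5: best merge

corpus = [("walk", "dog"), ("walk", "cat"), ("feed", "dog"), ("feed", "cat")]
classes = {frozenset(ws) for ws in greedy_mdl(corpus).values()}
assert classes == {frozenset({"walk", "feed"}), frozenset({"dog", "cat"})}
```

On this toy corpus the algorithm merges the two verbs into one class and the two nouns into another, mirroring on a small scale the behaviour described in the next subsection.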

3.3 Applying the MDL algorithm to encode a semantic corpus

We will now show how the algorithm applies to our corpus of 1,000 sentences. By Step 1, the initial grammar looks as follows:

Initial grammar :

Step 2. Computing the total length: The grammar describes the corpus. The size of the initial grammar is 18,000 symbols (not counting the encoding of the positions of the beginnings of each rule). For all the grammars obtained by the steps of the algorithm, the total length will be the size of the grammar plus the size of the machine that generates languages from grammars. But, since the size of this machine is constant, we can leave it out of our considerations.

Step 3. Merging. Consider the merge operation for two nouns, and the new class , . The resulting new description of the corpus is shorter, since it removes 20 entries of total length 360, and adds two entries of total length 25.


However, the merge operation for two verbs produces a better grammar. The new class , removes 200 entries of total length 3,600, and adds one entry of length 25.


Notice that merging another verb with kick would save only 199 rules, so it will not be done in the initial stages of the application of the algorithm.
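The arithmetic behind this ordering is easy to replay; the entry counts, the entry length of 18 symbols (360/20 = 3600/200), and the 25-symbol class definition are the figures quoted above:

```python
# Each noun occurs in 10 corpus entries, each verb in 100; merging two
# classes removes their entries and adds a short class definition.
noun_merge_saving = 20 * 18 - 25    # two nouns merged: saves 335 symbols
verb_merge_saving = 200 * 18 - 25   # two verbs merged: saves 3575 symbols
assert verb_merge_saving > noun_merge_saving   # so the verbs merge first
```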

Step 4. The reader may check that this step would not reduce the size of the grammar. (This is due to the corpus being so simple, and without substructures worth encoding).

Step 5. The successive merges of ’s ( ) will produce the following grammar:

Grammar :

What happens next depends on whether our algorithm is very greedy; namely, whether we insist that all instances of the merging classes are replaced by the result of the merge. If that is the case, we cannot do the merge , and we will do the merge of the nouns. These merges will produce

Grammar :

This is our final grammar (Step 6), if the algorithm is very greedy. We can see that it is much smaller than the original grammar: its total length is less than 300 symbols (vs. 18,000); but it assumes the existence of a language generator. Interestingly, the grammar resembles compositional semantics as usually given. The rule with and describes the compositional part of the corpus; the rule with and , the idiomatic part; other rules are in between.

3.4 Variations on the MDL algorithm

A similar result is obtained when we do not insist that all instances of merging classes are replaced by the result of the merge. Starting with the grammar

Grammar :


We can see that the merge will decrease the size of the grammar by 99 rules and result in:

Grammar :


The successive merging of nouns will then produce

Grammar :


If, however, we do not do the merge, and proceed instead with the merging of the nouns (e.g. if there were reasons to modify the algorithm), we get:

Grammar :


Finally, if we allow some overgeneralization, we can replace the above grammars with an even shorter grammar:

Grammar :


Here, clearly is the idiomatic element. However, both the idiomatic and the non-idiomatic readings of kick bucket are allowed. (In the previously defined grammars, we can also see the distinction between the idiomatic and non-idiomatic elements.)

4 A non-vacuous definition of compositionality

The fact that the MDL principle can produce an object resembling a compositional semantics is crucial: it allows us to argue for a non-vacuous definition of compositionality.

Assume that we have a corpus of sentences and their parts, given either as a set or generated by a grammar. Let sentences and their parts be collections of symbols put together by some operations; in the simplest and most important case, by concatenation .

Definition. A meaning function m is a compositional semantics for the set if its domain is contained in , and
a. it satisfies the postulate of compositionality: for all s, t in its domain,

m(s ∘ t) = f(m(s), m(t));

b. it is the shortest such encoding, in the sense of the Minimum Description Length principle;
c. it is maximal, i.e. there is no meaning function with a larger domain that satisfies a and b.

To see better what this definition entails, let us consider our semantic corpus again. The set consists of the 10 verbs and 100 nouns and all noun-verb combinations. The compositional function assigns to each word its category e.g. . The question is how to define the operator . Because of the idiom, it cannot be a total function; hence we have to exclude from the domain of the pair . The shortest description of can be given by translating the grammar of Section 3.2. First, map non-idiomatic verbs and nouns into pairs , , . Then, put

Thus defined and correspond to the grammar obtained by the algorithm of Section 3.2 and to the tables of Section 2. This correspondence is not exact, because functions and encode only the systematic, compositional part of the corpus. (But please note this clear distinction between the idiomatic and the compositional parts of the lexicon and the corpus).

However, this description of the two functions is not maximal. We obtain the maximal compositional semantics for by extending the above-defined mapping to all nouns , , and extending the domain of

It is easily checked that this is the shortest (in the sense of the MDL) and maximal assignment of meanings to the elements of the set .⁴ Please compare this mapping with , and also note that now we have a formal basis for saying that (for this corpus) it is the verb kick, and not the noun bucket, that is idiomatic.

⁴ We are assuming that we have to assign the noun and verb categories to the lexical symbols of the corpus.

What are the advantages of defining compositionality using the Minimum Description Length principle?

  1. It brings us back to the original definition of compositionality, but makes it non-vacuous.

  2. It encodes the postulate that meaning functions should be simple.

  3. It allows us to distinguish between compositional and non-compositional semantics by means of systematicity, i.e. the minimality of encodings, as e.g. Hirst [5] wanted.

  4. It makes no reference to non-intrinsic properties of meaning functions (like being a polynomial).

  5. It works for different models of language understanding: pipeline (syntax, semantics, pragmatics), construction grammars (cf. [3]), and even semantic grammars.

  6. It allows us to compare different meaning functions with respect to how compositional they are: we can measure the sizes of their domains and the lengths of their encodings.

Finally, this definition might even satisfy those philosophers of language who regard compositionality not as a formal property but as an unattainable ideal worth striving for. This hope is based on the fact that, given an appropriately rich model of language, its minimum description length is, in general, non-computable, and can only be approximated, never exactly computed.

5 Discussion and Conclusions

Lambdas, approximations, and the minimum description length

Assuming that we have a λ-expression interpreter (e.g. a Lisp program), we could describe the meaning functions of Section 3 as:

The approximate total size of this description is + 66 (the above definitions) + 110 (to describe the domains of the first two functions).

Clearly, the last lambda expression corresponds to an idiomatic meaning. But note that this definition also assigns the non-idiomatic meaning to “kick bucket”. Thus, although much simpler, it does not exactly correspond to the original meaning function. It does, however, correspond to grammar of the previous section. Also, representations that ignore exceptions are more often found in the literature. This point may be worth pursuing: Savitch [9] argues that approximate representation in a more expressive language can be more compact. For approximate representations that overgeneralize, the idiomaticity of an expression can be defined as the existence of a more specific definition of its meaning.
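The last remark can be made concrete with a hedged Python sketch (the attribute tuples and the idiom's meaning are invented placeholders): the compositional rule overgeneralizes, and idiomaticity shows up as the existence of a more specific entry that overrides it.

```python
# Compositional meaning functions for the two lexical categories.
verb_meaning = lambda v: ("verb", v)    # one entry covers all 10 verbs
noun_meaning = lambda n: ("noun", n)    # one entry covers all 100 nouns

# More specific definitions take precedence over the general rule.
idioms = {("kick", "bucket"): ("die",)}

def meaning(verb, noun):
    if (verb, noun) in idioms:          # specific (idiomatic) definition
        return idioms[(verb, noun)]
    return (verb_meaning(verb), noun_meaning(noun))   # compositional rule

assert meaning("kick", "ball") == (("verb", "kick"), ("noun", "ball"))
assert meaning("kick", "bucket") == ("die",)
```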

Bridging linguistic and probabilistic approaches to natural language

The relationship between linguistic principles and the MDL method is not completely surprising. We used the MDL principle in [13] to argue for a construction-based approach to language understanding (cf. [3]). After setting up a formal model based on linguistic and computational evidence, we applied the MDL principle to prove that construction-based representations are at least an order of magnitude more compact than the corresponding lexicalized representations of the same linguistic data. The argument presented there suggests that in building compositional semantics we might be better off when the language is built by means of rich combinatorics (constructions) than by the concatenation of lexical items. However, this hypothesis remains to be proved.

It is known that the most important rules of statistical reasoning, the maximum likelihood method, the maximum entropy method, the Bayes rule, and the minimum description length principle, are all closely related (cf. [7], pp. 275-321). From the material of Sections 3 and 4 we can see that compositionality is closely related to the MDL principle; thus, it is possible to imagine bringing together linguistic and statistical methods for natural language understanding. For example, one could start with the semantic classes of [2] and continue deriving a semantic model for a large corpus using the method of Section 3, with a computational implementation along the lines of [1].

Conclusion

We have redefined the linguistic concept of compositionality as the simplest maximal description of data that satisfies the postulate that the meaning of the whole is a function of the meaning of its parts. By justifying compositionality by the minimum description length principle, we have placed the intuitive idea that the meaning of a sentence is a combination of the meanings of its constituents on a firm mathematical foundation.

This new, non-vacuous definition of compositionality is intuitive and allows us to distinguish between compositional and non-compositional semantics, and between idiomatic and non-idiomatic expressions. It is not ad hoc, since it does not make any references to non-intrinsic properties of meaning functions (like being a polynomial). It works for different models of language understanding. Moreover, it allows us to compare different meaning functions with respect to how compositional they are.

Finally, because of the close relationship between the minimum description length principle and probability, the approach proposed in this chapter should bridge logic-based and statistics-based approaches to language understanding.

References

  • [1] Brown, P.F., V.J. Della Pietra, P.V. de Souza, J.C. Lai, and R.L. Mercer. Class-based n-gram Models of Natural Language. Computational Linguistics, 18(4):467-480, 1992.
  • [2] Dixon, R.M.W. A New Approach to English Grammar on Semantic Principles. Clarendon Press, Oxford, 1991.
  • [3] Fillmore, C.J., P. Kay, and M.C. O’Connor. Regularity and idiomaticity in grammatical constructions. Language, 64(3):501–538, 1988.
  • [4] Grunwald, P. A Minimum Description Length Approach to Grammar Inference. In S. Wermter et al., editors, Symbolic, Connectionist and Statistical Approach to Learning for Natural Language Processing, pages 203-216, Springer, Berlin, 1996.
  • [5] Hirst, G. Semantic interpretation and the resolution of ambiguity. Cambridge University Press, Cambridge, Great Britain, 1987.
  • [6] Keenan, E.L. and L.M. Faltz. Boolean Semantics for Natural Language. D Reidel, Dordrecht, Holland, 1985.
  • [7] Li, M., and P. Vitanyi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, New York, 1993.
  • [8] Partee, B.H., A. ter Meulen, and R.E. Wall. Mathematical Methods in Linguistics. Kluwer, Dordrecht, The Netherlands, 1990.
  • [9] Savitch, W.J. Why it might pay to assume that languages are infinite. Annals of Mathematics and Artificial Intelligence, 8(1-2):17-26, 1993.
  • [10] Rissanen, J. A universal prior for integers and estimation by minimum description length. Annals of Statistics, 11:416-431, 1982.
  • [11] Simon, H. The Sciences of the Artificial. MIT Press, Cambridge, MA, 1981.
  • [12] Zadrozny, W. From compositional to systematic semantics. Linguistics and Philosophy, 17(4):329-342, 1994.
  • [13] Zadrozny, W. The Compactness of Construction Grammars. IBM Research, T.J. Watson Research Center, Report RC 20003 (88493), 1995.