1 Introduction
In recent years, the abundance of text corpora and computing power has allowed the development of techniques to analyse the statistical properties of words. For example, techniques such as latent semantic analysis [Deerwester et al.1990] and its variants, and measures of distributional similarity [Lin1998, Lee1999], attempt to derive aspects of the meanings of words by statistical analysis, while statistical information is often used when parsing to determine sentence structure [Collins1997]. These techniques have proved useful in many applications within computational linguistics and natural language processing
[Schütze1998, McCarthy et al.2004, Grefenstette1994, Lin2003, Bellegarda2000, Choi, Wiemer-Hastings, and Moore2001], arguably providing evidence that they capture something about the nature of words that should be included in representations of their meaning. However, it is very difficult to reconcile these techniques with existing theories of meaning in language, which revolve around logical and ontological representations. The new techniques, almost without exception, can be viewed as dealing with vector-based representations of meaning, placing meaning (at least at the word level) within the realm of mathematics and algebra; conversely, the older theories of meaning dwell in the realm of logic and ontology. It seems there is no unifying theory of meaning to provide guidance to those making use of the new techniques. The problem appears to be a fundamental one in computational linguistics, since the whole foundation of meaning seems to be in question. The older, logical theories often subscribe to a model-theoretic philosophy of meaning [Kamp and Reyle1993, Blackburn and Bos2005]. According to this approach, sentences should be translated to a logical form that can be interpreted as a description of the state of the world. The new vector-based techniques, on the other hand, are often closer in spirit to the philosophy of “meaning as context”, the idea that the meaning of an expression is determined by how it is used. This is an old idea with origins in the philosophy of Wittgenstein [Wittgenstein1953], who said that “meaning just is use”, and Firth [Firth1957], “You shall know a word by the company it keeps”, and in the distributional hypothesis of Harris [Harris1968], that words will occur in similar contexts if and only if they have similar meanings. This hypothesis is justified by the success of techniques such as latent semantic analysis as well as by experimental evidence [Miller and Charles1991].
Whilst the two philosophies are not obviously incompatible — especially since the former applies mainly at the sentence level and the latter mainly at the word level — it is not clear how they relate to each other.
The problem of how to compose vector representations of meanings of words has recently received increased attention [Widdows2008, Clark, Coecke, and Sadrzadeh2008, Mitchell and Lapata2008, Erk and Pado2009, Preller and Sadrzadeh2009, Guevara2011, Baroni and Zamparelli2010], although the problem has been considered in earlier work [Smolensky1990, Landauer and Dumais1997, Foltz, Kintsch, and Landauer1998, Kintsch2001]. A solution to this problem would have practical as well as philosophical benefits. Current techniques such as latent semantic analysis work well at the word level, but we cannot extend them much beyond this, to the phrase or sentence level, without quickly encountering the data-sparseness problem: there are not enough occurrences of strings of words to determine what their vectors should be merely by looking in corpora. If we knew how such vectors should compose, then we would be able to extend the benefits of the vector-based techniques to the many applications that require reasoning about the meaning of phrases and sentences.
This paper describes the results of our efforts to identify a theory that can unite these two paradigms, and includes a summary of work described in the author’s DPhil thesis [Clarke2007]. In addition, we discuss the relationship between this theory and methods of composition that have recently been proposed in the literature, showing that many of them can be considered as falling within our framework.
Our approach in identifying the framework is summarised in Figure 1:

Inspired by the philosophy of meaning as context and by vector-based techniques, we developed a mathematical model of meaning as context, in which the meaning of a string is a vector representing the contexts in which that string occurs in a hypothetical infinite corpus.

The theory on its own is not useful when applied to real-world corpora because of the problem of data sparseness. Instead we examine the mathematical properties of the model, and abstract them to form a framework which retains many of the properties of the model. Implementations of the framework are called context theories, since they can be viewed as theories about the contexts in which strings occur. By analogy with the term “model-theoretic” we use the term “context-theoretic” for concepts relating to context theories; thus we call our framework the context-theoretic framework.

In order to ensure that the framework was practically useful, context theories were developed in parallel with the framework itself. The aim was to be able to describe existing approaches to representing meaning within the framework as fully as possible.
In developing the framework we were looking for specific properties; namely, we wanted it to:

provide some guidelines describing in what way the representation of a phrase or sentence should relate to the representations of the individual words as vectors;

require that information about the probability of a string of words be incorporated into the representation;

provide a way to measure the degree of entailment between strings based on the particular meaning representation;

be general enough to encompass logical representations of meaning;

be able to incorporate the representation of ambiguity and uncertainty, including statistical information such as the probability of a parse or the probability that a word takes a particular sense.
The framework we present is abstract, and hence does not subscribe to a particular method for obtaining word vectors: they may be raw frequency counts, or vectors obtained by a method such as latent semantic analysis. Nor does the framework provide a recipe for how to represent meaning in natural language; instead it places restrictions on the set of possibilities. The advantage of the framework is in ensuring that techniques are used in a way that is well-founded in a theory of meaning. For example, given vector representations of words, there is no single way of combining these to give vector representations of phrases and sentences, but in order to fit within the framework there are certain properties of the representation that need to hold. Any method of combining these vectors in which these properties hold can be considered within the framework and is thus justified according to the underlying theory; in addition, the framework instructs us as to how to measure the degree of entailment between strings according to that particular method. We will attempt to show the broad applicability of the framework by applying it to problems in natural language processing.
The contribution of this paper is as follows:

We define the context-theoretic framework and introduce the mathematics necessary to understand it. The description presented here is cleaner than that of [Clarke2007], and in addition we provide examples which should give intuition for the concepts we describe.

We relate the framework to methods of composition that have been proposed in the literature, namely:

vector addition [Landauer and Dumais1997, Foltz, Kintsch, and Landauer1998]

the tensor product [Smolensky1990, Clark and Pulman2007, Widdows2008]

the multiplicative models of [Mitchell and Lapata2008]

matrix multiplication [Rudolph and Giesbrecht2010, Baroni and Zamparelli2010]

the approach of [Clark, Coecke, and Sadrzadeh2008].

2 Context Theory
In this section, we define the fundamental concept of our concern, the context theory, and discuss its properties.
[Context Theory] A context theory is a tuple $(A, \mathcal{A}, \hat{\cdot}, L, \psi)$, where $A$ is a set (the alphabet), $\mathcal{A}$ is a unital algebra over the real numbers, $\hat{\cdot}$ is a function from $A$ to $\mathcal{A}$, $L$ is an abstract Lebesgue space, and $\psi$ is an injective linear map from $\mathcal{A}$ to $L$.
We will explain each part of this definition, introducing the necessary mathematics as we proceed. We assume the reader is familiar with linear algebra; see [Halmos1974] for definitions that are not included here.
2.1 Algebra over a field
We have identified the algebra over a field as an important construction, since it generalises nearly all the methods of vector-based composition that have been proposed.
[Algebra over a field] An algebra over a field (or simply algebra when there is no ambiguity) is a vector space $\mathcal{A}$ over a field $F$ together with a binary operation $\cdot$ on $\mathcal{A}$ that is bilinear, i.e.
$$(\alpha u + \beta v) \cdot w = \alpha(u \cdot w) + \beta(v \cdot w), \qquad w \cdot (\alpha u + \beta v) = \alpha(w \cdot u) + \beta(w \cdot v),$$
and associative, i.e. $(u \cdot v) \cdot w = u \cdot (v \cdot w)$, for all $u, v, w \in \mathcal{A}$ and all $\alpha, \beta \in F$.¹ An algebra is called unital if it has a distinguished unity element $1$ satisfying $1 \cdot u = u \cdot 1 = u$ for all $u \in \mathcal{A}$. We are generally only interested in real algebras, i.e. the situation where $F$ is the field of real numbers, $\mathbb{R}$.

¹Some authors do not place the requirement that an algebra is associative, in which case our definition would refer to an associative algebra.
The square real-valued matrices of order $n$ form a real unital associative algebra under standard matrix multiplication. The vector operations are defined entrywise. The unity element of the algebra is the identity matrix.
This means that our proposal is more general than that of [Rudolph and Giesbrecht2010], who suggest using matrix multiplication as a framework for distributional semantic composition. The main differences in our proposal are:

We allow dimensionality to be infinite, instead of restricting ourselves to finite-dimensional matrices;

Matrix algebras form a $*$-algebra, whereas we do not currently place this requirement;

We emphasise the order structure that is inherent in real vector spaces when there is a distinguished basis.
The purpose of $\hat{\cdot}$ in the context theory is to associate elements of the algebra with strings of words. Considering only the multiplication of $\mathcal{A}$ (and ignoring the vector operations), $\mathcal{A}$ is a monoid, since we assumed that the multiplication on $\mathcal{A}$ is associative. Then $\hat{\cdot}$ induces a monoid homomorphism from the free monoid $A^*$ to $\mathcal{A}$. We denote the mapped value of a string $x$ by $\hat{x}$, which is defined as follows:
$$\widehat{a_1 a_2 \cdots a_n} = \hat{a}_1 \cdot \hat{a}_2 \cdots \hat{a}_n,$$
where $a_i \in A$ for $1 \le i \le n$, and we define $\hat{\varepsilon} = 1$, where $\varepsilon$ is the empty string. Thus, the mapping defined by $\hat{\cdot}$ allows us to associate an element of the algebra with every string of words.
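To make the homomorphism concrete, here is a minimal sketch in which words are mapped to 2×2 real matrices (one arbitrary choice of unital algebra); the mapping `word_hat` and its values are purely illustrative:

```python
import numpy as np
from functools import reduce

# Hypothetical mapping from words to elements of the algebra,
# here 2x2 real matrices (one concrete choice of unital algebra).
word_hat = {
    "a": np.array([[1.0, 2.0], [0.0, 1.0]]),
    "b": np.array([[0.0, 1.0], [1.0, 3.0]]),
}

def hat(string):
    """Extend the word-level map to strings: the induced monoid
    homomorphism, with the empty string mapped to the unity element."""
    words = string.split()
    return reduce(np.matmul, (word_hat[w] for w in words), np.eye(2))
```

Because matrix multiplication is not commutative, `hat("a b")` and `hat("b a")` differ in general, which is the behaviour we want from an order-sensitive composition.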
The algebra is what tells us how meanings compose. A crucial part of our thesis is that meanings can be represented by elements of an algebra, and that the type of composition that can be defined using an algebra is general enough to describe the composition of meaning in natural language. To go some way towards justifying this, we give several examples of algebras that describe methods of composition that have been proposed in the literature: namely pointwise multiplication [Mitchell and Lapata2008], vector addition [Landauer and Dumais1997, Foltz, Kintsch, and Landauer1998] and the tensor product [Smolensky1990, Clark and Pulman2007, Widdows2008].
         c1   c2   c3
cat       0    2    3
animal    2    1    2
big       1    3    0

Table 1: Possible occurrences of three terms in three contexts.
[Pointwise multiplication] Consider the $n$-dimensional real vector space $\mathbb{R}^n$. We describe a vector $u \in \mathbb{R}^n$ in terms of its components as $u = (u_1, u_2, \ldots, u_n)$ with each $u_i \in \mathbb{R}$. We can define a multiplication on this space by
$$u \cdot v = (u_1 v_1, u_2 v_2, \ldots, u_n v_n).$$
It is easy to see that this satisfies the requirements for an algebra specified above. Table 1 shows a simple example of possible occurrences for three terms in three different contexts, $c_1$, $c_2$ and $c_3$, which may, for example, represent documents. We use this to define the mapping $\hat{\cdot}$ from terms to vectors. Thus, in this example, we have $\widehat{\text{cat}} = (0, 2, 3)$ and $\widehat{\text{animal}} = (2, 1, 2)$. Under pointwise multiplication, we would have
$$\widehat{\text{cat}} \cdot \widehat{\text{animal}} = (0, 2, 6).$$
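Computed with the vectors from Table 1, the pointwise product looks as follows (a sketch using NumPy; the variable names are ours):

```python
import numpy as np

# Context vectors from Table 1 (occurrence counts in contexts c1, c2, c3).
cat    = np.array([0.0, 2.0, 3.0])
animal = np.array([2.0, 1.0, 2.0])

# The algebra product: componentwise (Hadamard) multiplication on R^3.
cat_animal = cat * animal  # (0*2, 2*1, 3*2)
```

Note that this product is commutative, a limitation discussed below.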
One commonly used operation for composing vector-based representations of meaning is vector addition. As noted by [Rudolph and Giesbrecht2010], this can be described using matrix multiplication, by embedding an $n$-dimensional vector $u = (u_1, \ldots, u_n)$ into a matrix of order $n + 1$ whose first row is $(\lambda, u_1, \ldots, u_n)$ and whose remaining entries are $\lambda$ on the diagonal and zero elsewhere,
where $\lambda = 1$. The set of all such matrices, for all real values of $\lambda$, forms a subalgebra of the algebra of matrices of order $n + 1$. A subalgebra of an algebra $\mathcal{A}$ is a sub-vector space of $\mathcal{A}$ which is closed under the multiplication of $\mathcal{A}$. This subalgebra can be equivalently described as follows: [Additive algebra] For two vectors $u = (u_0, u_1, \ldots, u_n)$ and $v = (v_0, v_1, \ldots, v_n)$ in $\mathbb{R}^{n+1}$, we define the additive product by
$$u \cdot v = (u_0 v_0,\ u_0 v_1 + v_0 u_1,\ \ldots,\ u_0 v_n + v_0 u_n).$$
To verify that this multiplication makes $\mathbb{R}^{n+1}$ an algebra, we can directly verify the bilinearity and associativity requirements, or check that it is isomorphic to the subalgebra of matrices discussed above.
Using the table from the previous example, we define $\hat{\cdot}$ so that it maps the 3-dimensional context vectors to $\mathbb{R}^4$, where the first component is $1$, so $\widehat{\text{cat}} = (1, 0, 2, 3)$ and $\widehat{\text{animal}} = (1, 2, 1, 2)$, and
$$\widehat{\text{cat}} \cdot \widehat{\text{animal}} = (1, 2, 3, 5).$$
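The matrix embedding can be checked directly; the sketch below (our own naming, following the construction described above with the first component fixed at 1) shows that multiplying the embedded matrices adds the vectors:

```python
import numpy as np

def embed(v):
    """Embed an n-dimensional vector v into an (n+1) x (n+1) matrix
    whose top row is (1, v) and whose remaining block is the identity."""
    n = len(v)
    m = np.eye(n + 1)
    m[0, 1:] = v
    return m

cat    = np.array([0.0, 2.0, 3.0])
animal = np.array([2.0, 1.0, 2.0])

# Matrix multiplication of the embeddings realises vector addition:
# the product is exactly the embedding of cat + animal.
product = embed(cat) @ embed(animal)
```

The top row of `product` is `(1, 2, 3, 5)`, matching the additive product above.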
Pointwise multiplication and addition are not attractive as methods for composing meaning in natural language since they are commutative, whereas natural language is inherently non-commutative. One obvious method of composing vectors that is not commutative is the tensor product. This method of composition can be viewed as a product in an algebra by considering the tensor algebra, which is formed from direct sums of all tensor powers of a base vector space.
We assume the reader is familiar with the tensor product and direct sum (see [Halmos1974] for definitions); we recall their basic properties here. Let $V_n$ denote a vector space of dimensionality $n$ (note that all vector spaces of a fixed finite dimensionality are isomorphic). Then the tensor product space $V_n \otimes V_m$ is isomorphic to a space of dimensionality $nm$; moreover, given orthonormal bases $\{e_1, \ldots, e_n\}$ for $V_n$ and $\{f_1, \ldots, f_m\}$ for $V_m$, there is an orthonormal basis for $V_n \otimes V_m$ defined by
$$\{e_i \otimes f_j : 1 \le i \le n,\ 1 \le j \le m\}.$$
The multiplicative models of [Mitchell and Lapata2008] correspond to the class of finite-dimensional algebras. Let $V$ be a finite-dimensional vector space. Then every associative bilinear product $\cdot$ on $V$ can be described by a linear function $T$ from $V \otimes V$ to $V$, as required in Mitchell and Lapata’s model. To see this, consider the action of the product on two orthonormal basis vectors $e_i$ and $e_j$ of $V$. The result $e_i \cdot e_j$ is a vector in $V$, so we can define $T(e_i \otimes e_j) = e_i \cdot e_j$. By considering all pairs of basis vectors, we can define the linear function $T$ on the whole of $V \otimes V$.
If the tensor product can loosely be viewed as “multiplying” vector spaces, then the direct sum is like adding them; the space $V_n \oplus V_m$ has dimensionality $n + m$ and has basis vectors
$$\{e_1 \oplus 0, \ldots, e_n \oplus 0, 0 \oplus f_1, \ldots, 0 \oplus f_m\};$$
it is usual to write $e_i \oplus 0$ as $e_i$ and $0 \oplus f_j$ as $f_j$.
[Tensor algebra] If $V$ is a vector space, then we define $T(V)$, the free algebra or tensor algebra generated by $V$, as:
$$T(V) = \mathbb{R} \oplus V \oplus (V \otimes V) \oplus (V \otimes V \otimes V) \oplus \cdots$$
where we assume that the direct sum is commutative. We can think of it as the direct sum of all tensor powers of $V$, with $\mathbb{R}$ representing the zeroth power. In order to make this space an algebra, we define the product on elements of these tensor powers, viewed as subspaces of the tensor algebra, as their tensor product. This is enough to define the product on the whole space, since every element can be written as a sum of tensor powers of elements of $V$. There is a natural embedding from $V$ to $T(V)$, where each element maps to an element in the first tensor power. Thus, for example, we can think of $u$, $u \otimes v$, and $u \otimes v \otimes w$ as elements of $T(V)$, for all $u, v, w \in V$.
This product defines an algebra since the tensor product is a bilinear operation. Taking $V = \mathbb{R}^3$ and using the natural embedding of the context vector of a string as $\hat{\cdot}$, our previous example becomes
$$\widehat{\text{cat}} \otimes \widehat{\text{animal}} = (0, 2, 3) \otimes (2, 1, 2) = (0, 0, 0, 4, 2, 4, 6, 3, 6),$$
where the last step demonstrates how a vector in $\mathbb{R}^3 \otimes \mathbb{R}^3$ can be described in the isomorphic space $\mathbb{R}^9$.
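In coordinates, the tensor product of the two context vectors can be computed with `np.kron`, which writes the result directly in the isomorphic space R^9 (a sketch; names are ours):

```python
import numpy as np

cat    = np.array([0.0, 2.0, 3.0])
animal = np.array([2.0, 1.0, 2.0])

# Tensor product in R^3 (x) R^3, flattened into the isomorphic R^9.
cat_animal = np.kron(cat, animal)

# Unlike pointwise multiplication and addition, this product is
# sensitive to word order.
animal_cat = np.kron(animal, cat)
```

Here `np.kron` stacks the rows of the outer product into a single vector, which is exactly the coordinate description of the tensor product with respect to the product basis.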
2.2 Vector lattices
The next part of the definition specifies an abstract Lebesgue space. This is a special kind of vector lattice, or even more generally, a partially ordered vector space.
[Partially ordered vector space]
A partially ordered vector space $V$ is a real vector space together with a partial ordering $\le$ such that:
if $u \le v$ then $u + w \le v + w$
if $u \le v$ then $\alpha u \le \alpha v$
for all $u, v, w \in V$, and for all $\alpha \ge 0$. Such a partial ordering is called a vector space order on $V$. An element $u$ of $V$ satisfying $u \ge 0$ is called a positive element; the set of all positive elements of $V$ is denoted $V^+$. If $\le$ defines a lattice on $V$ then the space is called a vector lattice or Riesz space.
[Lattice operations on $\mathbb{R}^n$] A vector lattice captures many properties that are inherent in real vector spaces when there is a distinguished basis. In $\mathbb{R}^n$, given a specific basis, we can write two vectors $u$ and $v$ as sequences of numbers: $u = (u_1, u_2, \ldots, u_n)$ and $v = (v_1, v_2, \ldots, v_n)$. This allows us to define the lattice operations of meet $\wedge$ and join $\vee$ as
$$u \wedge v = (\min(u_1, v_1), \ldots, \min(u_n, v_n)), \qquad u \vee v = (\max(u_1, v_1), \ldots, \max(u_n, v_n)),$$
i.e. the componentwise minimum and maximum, respectively. A graphical depiction of the meet operation is shown in figure 2. The vector operations of addition and multiplication by a scalar, which can be defined in a similar componentwise fashion, are nevertheless independent of the particular basis chosen. This makes them particularly suited to physical applications, where it is often a requirement that there is no preferred direction. Conversely, the lattice operations depend on the choice of basis, so the operations as defined above would behave differently if the components were written using a different basis. We argue that it makes sense for us to consider these properties of vectors in the context of computational linguistics since we often have a distinguished basis: namely the one defined by the contexts in which terms occur. Of course it is true that techniques such as latent semantic analysis introduce a new basis which does not have a clear interpretation in relation to contexts; nevertheless they nearly always identify a distinguished basis which we can use to define the lattice operations.
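With respect to the distinguished context basis of Table 1, the lattice operations are just componentwise minima and maxima (a sketch; variable names are ours):

```python
import numpy as np

cat    = np.array([0.0, 2.0, 3.0])
animal = np.array([2.0, 1.0, 2.0])

meet = np.minimum(cat, animal)  # cat "meet" animal: componentwise minimum
join = np.maximum(cat, animal)  # cat "join" animal: componentwise maximum
```

A standard vector-lattice identity, u + v = (u ∧ v) + (u ∨ v), can serve as a sanity check on the componentwise definitions.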
We argue that the mere association of words with vectors is not enough to constitute a theory of meaning. Vector representations allow the measurement of similarity or distance through an inner product or metric; however, we believe it is also important for a theory of meaning to model entailment, a relation which plays an important rôle in logical theories of meaning. In propositional and first-order logic, the entailment relation is a partial ordering; in fact the propositions ordered by entailment form a Boolean algebra, which is a special kind of lattice. It seems natural to consider whether the lattice structure that is inherent in the vector representations used in computational linguistics can be used to model entailment.
We believe our framework is suited to all vector-based representations of natural language meaning, however the vectors are obtained. Given this generality, we can only justify our assumption that the partial order structure of the vector space is suitable to represent the entailment relation by observing that it has the kind of properties we would expect of this relation.
There may, however, be more justification for this assumption in the case where the vectors for terms are simply their frequencies of occurrence in different contexts, so that they are vectors in $\mathbb{R}^n$. In this case, the relation $\hat{x} \le \hat{y}$ means that $y$ occurs at least as frequently as $x$ in every context. This means that $y$ occurs in at least as wide a range of contexts as $x$, and occurs at least as frequently as $x$. Thus the statement “$x$ entails $y$ if and only if $\hat{x} \le \hat{y}$” can be viewed as a stronger form of the distributional hypothesis of [Harris1968].
In fact, this idea can be related to the notion of “distributional generality”, introduced by [Weeds, Weir, and McCarthy2004] (see also [Geffet and Dagan2005]). A term $u$ is distributionally more general than another term $v$ if $v$ occurs in a subset of the contexts that $u$ occurs in. The idea is that distributional generality may be connected to semantic generality. An example of this is the hypernymy or “is a” relation that is used to express generality of concepts in ontologies: for example, the term animal is a hypernym of dog since a dog is an animal. They explain the connection to distributional generality as follows:
Although one can obviously think of counterexamples, we would generally expect that the more specific term dog can only be used in contexts where animal can be used and that the more general term animal might be used in all of the contexts where dog is used and possibly others. Thus, we might expect that distributional generality is correlated with semantic generality…
Our proposal, in the case where words are represented by frequency vectors, can be considered a stronger version of distributional generality, where the additional requirement is on the frequency of occurrences. In practice, this assumption is unlikely to be compatible with the ontological view of entailment. For example, the term entity is semantically more general than the term animal; however, entity is unlikely to occur more frequently in each context, since it is a rarer word. A more realistic foundation for this assumption might be to consider the components for a word to represent the plausibility of observing the word in each context. The question then, of course, is how such vectors might be obtained. Another possibility is to attempt to weight components in such a way that entailment becomes a plausible interpretation for the partial ordering relation.
Even if we allow for such alternatives, however, in general it is unlikely that the relation $\hat{x} \le \hat{y}$ will hold exactly between any two strings, since $\hat{x} \le \hat{y}$ if and only if $x_i \le y_i$ for each component $i$ of the two vectors. Instead, we propose to allow for degrees of entailment. We take a Bayesian perspective on this, and suggest that the degree of entailment should take the form of a conditional probability. In order to define this, however, we need some additional structure on the vector lattice that allows it to be viewed as a description of probability, which we obtain by requiring it to be an “abstract Lebesgue space”.
[Banach lattice] A Banach lattice is a vector lattice $V$ together with a norm $\|\cdot\|$ satisfying $\|u\| \le \|v\|$ whenever $|u| \le |v|$ (where $|u| = u \vee (-u)$), such that $V$ is complete with respect to $\|\cdot\|$.
[Abstract Lebesgue Space] An abstract Lebesgue (or AL) space is a Banach lattice $V$ such that
$$\|u + v\| = \|u\| + \|v\|$$
for all $u, v$ in $V$ with $u, v \ge 0$ and $u \wedge v = 0$.
[$\ell_p$ spaces] Let $u = (u_1, u_2, \ldots)$ be an infinite sequence of real numbers. We can view the $u_i$ as components of the infinite-dimensional vector $u$. We call the set of all such vectors the sequence space; it is a vector space where the operations are defined componentwise. We define a set of norms, the $\ell_p$ norms, on the space of all such vectors by
$$\|u\|_p = \Big( \sum_i |u_i|^p \Big)^{1/p}.$$
The space of all vectors $u$ for which $\|u\|_p$ is finite is called the $\ell_p$ space. Considered as vector spaces, these are Banach spaces, since they are complete with respect to the associated norm, and under the componentwise lattice operations they are Banach lattices. In particular, the $\ell_1$ space is an abstract Lebesgue space under the $\ell_1$ norm.
The finite-dimensional real vector spaces $\mathbb{R}^n$ can be considered as special cases of the sequence spaces (consisting of vectors in which all but $n$ components are zero) and, since they are finite-dimensional, we can use any of the $\ell_p$ norms. Thus, our previous examples, in which $\hat{\cdot}$ mapped terms to vectors in $\mathbb{R}^3$, can be considered as mapping to abstract Lebesgue spaces, if we adopt the $\ell_1$ norm.
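For a finite-dimensional illustration, the ℓp norms of the context vector for cat from Table 1 can be computed as follows (sketch):

```python
import numpy as np

v = np.array([0.0, 2.0, 3.0])  # context vector for "cat" from Table 1

l1   = np.linalg.norm(v, 1)       # |0| + |2| + |3| = 5.0
l2   = np.linalg.norm(v, 2)       # sqrt(0 + 4 + 9)
linf = np.linalg.norm(v, np.inf)  # max(|0|, |2|, |3|) = 3.0
```

The hierarchy of spaces noted above shows up here as the inequality l1 >= l2 >= linf.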
2.3 Degrees of entailment
An abstract Lebesgue space has many of the properties of a measure space, where the set operations of a measure space are replaced by the lattice operations of the vector space. This means that we can think of an abstract Lebesgue space as a vector-based probability space. Here, events correspond to positive elements with norm less than or equal to 1; the probability of an event $u$ is given by its norm $\|u\|$ (which we shall always assume is the $\ell_1$ norm), and the joint probability of two events $u$ and $v$ is $\|u \wedge v\|$.
[Degree of entailment] We consider the degree to which $x$ entails $y$ to be the conditional probability of $\hat{y}$ given $\hat{x}$:
$$\mathrm{Ent}(x, y) = \frac{\|\hat{x} \wedge \hat{y}\|}{\|\hat{x}\|}.$$
If we are only interested in degrees of entailment (i.e. conditional probabilities) and not probabilities, then we can drop the requirement that the norm should be less than or equal to one, since conditional probabilities are automatically normalised. This definition, together with the multiplication of the algebra, allows us to compute the degree of entailment between any two strings according to the context theory.
The vectors given in Table 1 give the following calculation for the degree to which cat entails animal:
$$\mathrm{Ent}(\text{cat}, \text{animal}) = \frac{\|(0, 2, 3) \wedge (2, 1, 2)\|}{\|(0, 2, 3)\|} = \frac{\|(0, 1, 2)\|}{5} = \frac{3}{5}.$$
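The same calculation in code, using the ℓ1 norm and the componentwise meet (a sketch; the function name `entailment_degree` is ours):

```python
import numpy as np

def entailment_degree(x_hat, y_hat):
    """Degree to which x entails y: the l1 norm of the meet of the two
    context vectors, divided by the l1 norm of x's vector (read as the
    conditional probability of y given x). Assumes positive vectors."""
    meet = np.minimum(x_hat, y_hat)
    return meet.sum() / x_hat.sum()

cat    = np.array([0.0, 2.0, 3.0])
animal = np.array([2.0, 1.0, 2.0])
big    = np.array([1.0, 3.0, 0.0])
```

With the Table 1 vectors this gives 3/5 for cat entailing animal; note that the measure is asymmetric in general (compare cat/big in either direction).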
An important question is how this context-theoretic definition of the degree of entailment relates to more familiar notions of entailment.² There are three main ways in which the term entailment is used:

²Thanks are due to the anonymous reviewer who identified this question and related issues.

The model-theoretic sense of entailment, in which a theory $T_1$ entails a theory $T_2$ if every model of $T_1$ is also a model of $T_2$. It was shown in [Clarke2007] that this type of entailment can be described using context theories, where sentences are represented as projections on a vector space.

Entailment between terms in the WordNet hierarchy; for example, the hypernymy or “is a” relation between the terms cat and animal encodes the fact that a cat is an animal. In [Clarke2007] we showed that such relations can be encoded in the partial order structure of a vector lattice.

Human common-sense judgments as to whether one sentence entails or implies another sentence, as used in the Recognising Textual Entailment Challenges [Dagan, Glickman, and Magnini2005].
Our context-theoretic notion of entailment is thus intended to generalise the first two senses of entailment above. In addition, we hope that context theories will be useful in the practical application of recognising textual entailment.
Our definition is more general than the model-theoretic and hypernymy notions of entailment, however, as it allows the measurement of a degree of entailment between any two strings: as an extreme example, one may measure the degree to which “not a” entails “in the”. Whilst this may not be useful or philosophically meaningful, we view it as a practical consequence of the fact that every string has a vector representation in our model, which coincides with current practice in vector-based compositionality techniques.
2.4 Lattice ordered algebras
A large class of context theories make use of a lattice ordered algebra, which merges the lattice ordering of the vector space with the product of the algebra. [Partially ordered algebra] A partially ordered algebra $\mathcal{A}$ is an algebra which is also a partially ordered vector space, and which satisfies $u \cdot v \ge 0$ for all $u, v \ge 0$. If the partial ordering is a lattice, then $\mathcal{A}$ is called a lattice ordered algebra. [Lattice ordered algebra of matrices] The matrices of order $n$ form a lattice ordered algebra under normal matrix multiplication, where the lattice operations are defined as the entrywise minimum and maximum. [Operators on $\ell_p$ spaces] Operators on the $\ell_p$ spaces also form lattice ordered algebras, by the Riesz–Kantorovich theorem [Abramovich and Aliprantis2002], with the lattice operations defined, for positive $u$, by:
$$(S \vee T)(u) = \sup\{S v + T w : v, w \ge 0,\ v + w = u\},$$
$$(S \wedge T)(u) = \inf\{S v + T w : v, w \ge 0,\ v + w = u\}.$$
If $\mathcal{A}$ is a lattice ordered algebra which is also an abstract Lebesgue space, then $(A, \mathcal{A}, \hat{\cdot}, \mathcal{A}, \mathrm{id})$ is a context theory, where $\mathrm{id}$ is the identity map on $\mathcal{A}$. Many of the examples we discuss will be of this form, so we will use the shorthand notation $\langle A, \mathcal{A} \rangle$. It is tempting to adopt this as the definition of a context theory; however, as we will see, this is not supported by our prototypical example of a context theory (which we will introduce in the next section), as in this case the algebra is not necessarily lattice ordered.
3 Context Algebras
In this section we describe the prototypical examples of a context theory, the context algebras. The definition of a context algebra originates in the idea that the notion of “meaning as context” can be extended beyond the word level to strings of arbitrary length. In fact, the notion of a context algebra can be thought of as a generalisation of the syntactic monoid of a formal language: instead of a set of strings defining the language, we have a fuzzy set of strings, or more generally, a real-valued function on a free monoid.
[Real-valued language] Let $A$ be a finite set of symbols. A real-valued language (or simply a language when there is no ambiguity) on $A$ is a function $L$ from $A^*$ to $\mathbb{R}$. If the range of $L$ is a subset of $[0, \infty)$ then $L$ is called a positive language. If the range of $L$ is a subset of $[0, 1]$ then $L$ is called a fuzzy language. If $L$ is a positive language such that $\sum_{x \in A^*} L(x) = 1$ then $L$ is a probability distribution over $A^*$. The following inclusion relation applies amongst these classes of language:
$$\text{probability distributions} \subset \text{fuzzy languages} \subset \text{positive languages} \subset \text{languages}.$$
Since $A^*$ is a countable set, the set of functions from $A^*$ to $\mathbb{R}$ is isomorphic to the sequence space, and we shall treat them equivalently. We denote by $\ell_p(A^*)$ the set of functions on $A^*$ with a finite $\ell_p$ norm, when considered as sequences. There is another hierarchy of spaces given by the inclusion of the $\ell_p$ spaces: $\ell_p \subseteq \ell_q$ if $p \le q$. In particular,
$$\ell_1(A^*) \subseteq \ell_2(A^*) \subseteq \cdots \subseteq \ell_\infty(A^*),$$
where the $\ell_\infty$ norm gives the maximum value of the function and $\ell_\infty(A^*)$ is the space of all bounded functions on $A^*$.
Note that probability distributions are in $\ell_1(A^*)$ and fuzzy languages are in $\ell_\infty(A^*)$. If $L \in \ell_1^+(A^*)$ (the space of positive functions on $A^*$ such that the sum of all values of the function is finite) then we can define a probability distribution $\bar{L}$ over $A^*$ by $\bar{L}(x) = L(x)/\|L\|_1$. Similarly, if $L \in \ell_\infty^+(A^*)$ (the space of bounded positive functions on $A^*$) then we can define a fuzzy language by $\bar{L}(x) = L(x)/\|L\|_\infty$.
Given a finite set of strings $C \subset A^*$, which we may imagine to be a corpus of documents, define $L(x) = 1/|C|$ if $x \in C$, and $L(x) = 0$ otherwise. Then $L$ is a probability distribution over $A^*$.
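This corpus-based distribution is straightforward to sketch in code (the corpus below is an invented toy example):

```python
# A toy "corpus": a finite set of strings C.
corpus = {"the cat sat", "the dog ran", "a cat ran", "a dog sat"}

def L(x):
    """L(x) = 1/|C| if x is in C, and 0 otherwise: a probability
    distribution over the set of all strings."""
    return 1.0 / len(corpus) if x in corpus else 0.0
```

The values are non-negative and sum to one over the (countable) set of all strings, since only the members of the corpus contribute.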
Let $L$ be a language such that $L(x) = 0$ for all but a finite subset of $A^*$. Then $L \in \ell_p(A^*)$ for all $p$.
Let $L$ be the language defined by $L(x) = |x|$, where $|x|$ is the length of (i.e. number of symbols in) the string $x$. Then $L$ is a positive language which is not bounded: for any string $x$ there exists a $y$ such that $L(y) > L(x)$, for example $y = xa$ for any $a \in A$. Let $L'$ be the language defined by $L'(x) = c$ for all $x$, for some constant $0 < c \le 1$. Then $L'$ is a fuzzy language but is not in $\ell_p(A^*)$ for any finite $p$.
We will assume now that $L$ is fixed, and consider the properties of contexts of strings with respect to this language. [Context vectors] Let $L$ be a language on $A$. For $x \in A^*$, we define the context of $x$ as a vector $\hat{x}$, i.e. a real-valued function on pairs of strings:
$$\hat{x}(y, z) = L(y x z) \quad \text{for } y, z \in A^*.$$
Our thesis is centred around these vectors, and it is their properties that form the inspiration for the context-theoretic framework.
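The definition can be made concrete for a toy distribution; in the sketch below strings are tokenised as tuples, and only the finitely many non-zero entries of each context vector are stored (all names, and the toy language, are ours):

```python
def context_vector(x, L):
    """Context vector of the string x with respect to the language L:
    the function (y, z) -> L(y x z), stored sparsely as a dict.
    L is given as a dict from token tuples to real values."""
    vec = {}
    for s, weight in L.items():
        k = len(x)
        for i in range(len(s) - k + 1):
            if s[i:i + k] == x:  # x occurs in s at position i
                y, z = s[:i], s[i + k:]
                vec[(y, z)] = weight  # L(y x z) = L(s)
    return vec

# A toy language: L("a b") = L("c b") = 0.5, zero elsewhere.
L = {("a", "b"): 0.5, ("c", "b"): 0.5}
```

For example, the context vector of "b" takes the value 0.5 at the pairs ("a", empty) and ("c", empty), reflecting the two corpus strings in which "b" occurs.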
The question we are addressing is: does there exist some algebra containing the context vectors of strings in $A^*$ such that $\widehat{xy} = \hat{x} \cdot \hat{y}$, where $\cdot$ indicates multiplication in the algebra? As a first try, consider the vector space in which the context vectors live. Is it possible to define multiplication on the whole vector space such that the condition just specified holds? Consider the language on the alphabet defined by and for all other . Now if we take the shorthand notation of writing the basis vector corresponding to a pair of strings as the pair of strings itself, then
It would thus seem sensible to define multiplication of contexts so that . However we then find
showing that this definition of multiplication does not provide us with what we are looking for. In fact, if there did exist a way to define multiplication on contexts in a satisfactory manner it would necessarily be far from intuitive, as, in this example, we would have to define , meaning the product would have to have a non-zero component derived from the products of context vectors and which do not relate at all to the contexts of . This leads us to instead define multiplication on a subspace of . [Generated subspace] The subspace of is the set defined by
Because of the way we define the subspace, there will always exist some basis where , and we can define multiplication on this basis by where . Defining multiplication on the basis defines it for the whole vector subspace, since we require multiplication to be linear, making the subspace an algebra.

However there are potentially many different bases we could choose, each corresponding to a different subset of , and each giving rise to a different definition of multiplication. Remarkably, this is not a problem:
[Context Algebra] Multiplication on is the same irrespective of the choice of basis . We say defines a basis for when is a basis such that . Assume there are two sets that define corresponding bases and for . We will show that multiplication in basis is the same as in the basis .
We represent two basis elements and of in terms of basis elements of :
for some , and . First consider multiplication in the basis . Note that means that for all . This includes the special case where so
for all . Similarly, we have for all which includes the special case , so for all . Inserting this into the above expression yields
for all which we can rewrite as
Conversely, the product of and using the basis is
thus showing that multiplication is defined independently of what we choose as the basis.
Returning to the previous example, we can see that in this case multiplication is in fact defined on since we can describe each basis vector in terms of context vectors:
thus confirming what we predicted about the product of and : the value is only correct because of the negative correction from . This example also serves to demonstrate an important property of context algebras: they do not satisfy the positivity condition; i.e. it is possible for positive vectors (those with all components greater than or equal to zero) to have a non-positive product. This means they are not necessarily partially ordered algebras under the normal partial order. Compare this to the case of matrix multiplication, for example, where the product of two positive matrices is always positive.
The notion of a context theory is founded on the prototypical example given by context vectors. So far we have shown that multiplication can be defined on the vector space generated by context vectors of strings; however, we have not discussed the lattice properties of the vector space. In fact, does not come with a natural lattice ordering that makes sense for our purposes, but the original space does: it is isomorphic to the sequence space. Thus will form our context theory, where for and is the canonical map which simply maps elements of to themselves, but considered as elements of . There is an important caveat here, however: we required that the vector lattice be an abstract Lebesgue space, which means we need to be able to define a norm on it. The norm on is an obvious candidate, but it is not guaranteed to be finite. This is where the nature of the underlying language becomes important.
We might hope that the most restrictive class of languages we discussed, the probability distributions over , would guarantee that the norm is finite. Unfortunately this is not the case, as the following example demonstrates.
Let be the language defined by
for integer , and zero otherwise, where by we mean repetitions of , so for example , , and . Then is a probability distribution over , since is positive and . However is infinite, since each string for which contributes to the value of the norm, and there are infinitely many such strings.
The problem in the previous example is that the average string length is infinite. If we restrict ourselves to probability distributions over in which the average string length is finite, then the problem goes away.
Let be a probability distribution over such that
is finite, where is the number of symbols in string ; we will call such languages finite average length languages. Then is finite for each . Denote the number of occurrences of string as a substring of string by . Clearly for all . Moreover,
and so is finite for all .
If is finite average length, then , and so is a context theory, where is the canonical map from to . Thus context algebras of finite average length languages provide our prototypical examples of context theories.
3.1 Discussion
The benefit of the context-theoretic framework is in providing a space of exploration for models of meaning in language. Our effort has been in finding principles by which to define the boundaries of this space. Each of the key boundaries, namely bilinearity and associativity of multiplication, and entailment through vector lattice structure, can also be viewed as limitations of the model.
Bilinearity is a strong requirement to place, and has wide-ranging implications for the way meaning is represented in the model. It can be interpreted loosely as follows: components of meaning persist or diminish but do not spontaneously appear. This is particularly counterintuitive in the case of idiom and metaphor in language. It means that, for example, both red and herring must contain some components relating to the meaning of red herring which only come into play when these two words are combined in this particular order. Any other combination would give a zero product for these components. It is easy to see how this requirement arises from a context-theoretic perspective; nevertheless, from a linguistic perspective it is arguably undesirable.
One potential limitation of the model is that it does not explicitly model syntax, but rather syntactic restrictions are encoded into the vector space and product itself. For example, we may assume the word square has some component of meaning in common with the word shape. Then we would expect this component to be preserved in the sentences He drew a square and He drew a shape. However, in the case of the two sentences The box is square and *The box is shape we would expect the second to be represented by the zero vector since it is not grammatical; square can be a noun and an adjective, whereas shape cannot. Distributivity of meaning means that the component of meaning that square has in common with shape must be disjoint with the adjectival component of the meaning of square.
Associativity is also a very strong requirement to place; indeed, Lambek:61 introduced non-associativity into his calculus precisely to deal with examples that were not satisfactorily dealt with by his associative model [Lambek1958].
Whilst we hope that these features or boundaries are useful in their current form, it may be that, with time or for certain applications, there is reason to expand or contract some of them: perhaps because of theoretical discoveries relating to the model of meaning as context, or for practical or linguistic reasons, for example if the model is found to be too restrictive to model certain linguistic phenomena.
4 Applications to Textual Entailment
The only existing framework for textual entailment that we are aware of is that of Glickman:05. However, this framework does not seem to be general enough to deal satisfactorily with many techniques used to tackle the problem, since it requires interpreting the hypothesis as a logical statement.
Conversely, systems that use logical representations of language are often implemented without reference to any framework, and thus deal with the problems of representing the ambiguity and uncertainty inherent in handling natural language in an ad hoc fashion.
Thus it seems what is needed is a framework which is general enough to satisfactorily incorporate purely statistical techniques and logical representations, and in addition provides guidance as to how to deal with ambiguity and uncertainty in natural language. It is this that we hope our context-theoretic framework will provide.
In this section we analyse approaches to the textual entailment problem, showing how they can be related to the context-theoretic framework, and discussing potential new approaches that are suggested by looking at them within the framework. We first discuss some simple approaches to textual entailment based on subsequence matching and measuring lexical overlap. We then look at how Glickman and Dagan’s approach can be considered as a context theory in which words are represented as projections on the vector space of documents. This leads us to an implementation of our own in which we used latent Dirichlet allocation as an alternative approach to overcoming the problem of data sparseness.
4.1 Subsequence Matching and Lexical Overlap
We call a sequence a “subsequence” of if each element of occurs in in the same order, but with the possibility of other elements occurring in between; so, for example, is a subsequence of in . Subsequence matching compares the subsequences of two sequences: the more subsequences they have in common, the more similar they are assumed to be. This idea has been used successfully in text classification [Lodhi et al.2002] and also formed the basis of the author’s entry to the second Recognising Textual Entailment Challenge [Clarke2006].
If is a semigroup, is a lattice-ordered algebra under the convolution product:
where , . [Subsequence matching] Consider the algebra for some alphabet . This has a basis consisting of elements for , where the function that is on and elsewhere. In particular is a unity for the algebra. Define ; then is a context theory. Under this context theory, a sequence completely entails if and only if it is a subsequence of . In our experiments, we have shown that this type of context theory can perform significantly better than straightforward lexical overlap [Clarke2006]. Many variations on this idea are possible, for example using more complex mappings from to .
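The subsequence test underlying this notion of complete entailment can be sketched as follows; the function names are illustrative, and a full system would also weight and count shared subsequences rather than test containment alone:

```python
def is_subsequence(x, y):
    """True iff sequence x occurs in sequence y in order, allowing gaps."""
    it = iter(y)
    # each token of x must be found in what remains of y
    return all(tok in it for tok in x)

def subsequence_entailment(text, hypothesis):
    """Complete entailment under this context theory: 1.0 iff the
    hypothesis token sequence is a subsequence of the text."""
    return 1.0 if is_subsequence(hypothesis.split(), text.split()) else 0.0
```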
[Lexical overlap] The simplest approach to textual entailment is to measure the degree of lexical overlap: the proportion of words in the hypothesis sentence that are contained in the text sentence [Dagan, Glickman, and Magnini2005]. This approach can be described as a context theory in terms of a free commutative semigroup on a set , defined by where in if the symbols making up can be reordered to make . Then define by where is the equivalence class of in . Then is a context theory in which entailment is defined by lexical overlap. More complex definitions of can be used, for example to weight different words by their probabilities.
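A minimal sketch of the lexical overlap measure just described; whitespace tokenisation and the function name are illustrative simplifications:

```python
from collections import Counter

def lexical_overlap(text, hypothesis):
    """Proportion of hypothesis tokens (with multiplicity) that also
    occur in the text."""
    t = Counter(text.lower().split())
    h = Counter(hypothesis.lower().split())
    shared = sum((t & h).values())  # multiset intersection
    return shared / sum(h.values()) if h else 0.0
```

Weighting tokens by their probabilities, as suggested above, would replace the raw counts with weighted ones.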
4.2 Document Projections
Glickman:05 give a probabilistic definition of entailment in terms of “possible worlds” which they use to justify their lexical entailment model based on occurrences of words in web documents. They estimate the lexical entailment probability
to be where and denote the number of documents that the word occurs in and the words and both occur in respectively. From the context-theoretic perspective, we view the set of documents the word occurs in as its context vector. To describe this situation in terms of a context theory, consider the vector space where is the set of documents. With each word we associate an operator on this vector space by
where is the basis element associated with document . is a projection, that is ; it projects onto the space of documents that occurs in. These projections are clearly commutative (they are in fact band projections): projects onto the space of documents in which both and occur.
In their paper, Glickman and Dagan assume that probabilities can be attached to individual words, as we do, although they interpret these as the probability that a word is “true” in a possible world. In their interpretation, a document corresponds to a possible world, and a word is true in that world if it occurs in the document.
They do not, however, determine these probabilities directly; instead they make assumptions about how the entailment probability of a sentence depends on lexical entailment probability. Although they do not state this, the reason is presumably data sparseness: they assume that a sentence is true if all its lexical components are true, which will only happen if all the words occur in the same document. For any sizeable sentence this is extremely unlikely, hence their alternative approach.
It is nevertheless useful to consider this idea from a context-theoretic perspective. The probability of a term being true can be estimated as the proportion of documents it occurs in. This is the same as the context-theoretic probability defined by the linear functional , which we may think of as determined by a vector in given by for all . In general, for an operator on the context-theoretic probability of is defined as
where and and the lattice operations are defined by the Riesz-Kantorovich formula (Example 2.4). The probability of a term is then . More generally, the context-theoretic representation of an expression is . This is clearly a semigroup homomorphism (the representation of is the product of the representations of and ), and thus together with the linear functional defines a context theory for the set of words.
The degree to which entails is then given by . This corresponds directly to Glickman and Dagan’s entailment “confidence”; it is simply the proportion of documents that contain all the terms of which also contain all the terms of .
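The projection view can be illustrated numerically. In the sketch below, the three-document collection and the helper names are assumptions for illustration, with the linear functional taken as expectation under a uniform distribution over documents (for a diagonal operator, the mean of its diagonal):

```python
import numpy as np

docs = ["a b c", "a b", "b c"]  # hypothetical toy document collection

def projection(word, docs):
    """Band projection onto the subspace of documents containing word."""
    return np.diag([1.0 if word in d.split() else 0.0 for d in docs])

def phi(op, n_docs):
    """Context-theoretic probability of a diagonal operator:
    mean of its diagonal under the uniform document distribution."""
    return float(np.trace(op)) / n_docs

# Degree to which "a b" entails "c": among documents containing both
# a and b, the proportion that also contain c.
P_text = projection("a", docs) @ projection("b", docs)
P_both = P_text @ projection("c", docs)
degree = phi(P_both, len(docs)) / phi(P_text, len(docs))
```

Here two of the three documents contain both a and b, and one of those also contains c, so the entailment degree is 0.5.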
4.3 Latent Dirichlet Projections
The formulation in the previous section suggests an alternative approach to that of Glickman and Dagan to cope with the data sparseness problem. We consider the finite data available as a sample from a corpus model ; the vector then becomes a probability distribution over the documents in . In our own experiments, we used latent Dirichlet allocation [Blei, Ng, and Jordan2003] to build a corpus model based on a subset of around 380,000 documents from the Gigaword corpus. Having the corpus model allows us to consider an infinite array of possible documents, and thus we can use our context-theoretic definition of entailment since there is no problem of data sparseness.
Latent Dirichlet allocation (LDA) follows the same vein as Latent Semantic Analysis (LSA) [Deerwester et al.1990] and Probabilistic Latent Semantic Analysis (PLSA) [Hofmann1999]
in that it can be used to build models of corpora in which words within a document are considered to be exchangeable; so that a document is treated as a bag of words. LSA performs a singular value decomposition on the matrix of words and documents which brings out hidden “latent” similarities in meaning between words, even though they may not occur together.
In contrast PLSA and LDA provide probabilistic models of corpora using Bayesian methods. LDA differs from PLSA in that, while the latter assumes a fixed number of documents, LDA assumes that the data at hand is a sample from an infinite set of documents, allowing new documents to be assigned probabilities in a straightforward manner.
Figure 3 shows a graphical representation of the latent Dirichlet allocation generative model, and Figure 4 shows how the model generates a document of length . In this model, the probability of occurrence of a word in a document is considered to be a multinomial variable conditioned on a dimensional “topic” variable . The number of topics is generally chosen to be much smaller than the number of possible words, so that topics provide a “bottleneck” through which the latent similarity in meaning between words becomes exposed.
The topic variable is assumed to follow a multinomial distribution parameterised by a dimensional variable , satisfying
and which is in turn assumed to follow a Dirichlet distribution. The Dirichlet distribution is itself parameterised by a dimensional vector . The components of this vector can be viewed as determining the marginal probabilities of topics, since:
This is just the expected value of , which is given by
The model is thus entirely specified by and the conditional probabilities , which we can assume are specified in a matrix where is the number of words in the vocabulary. The parameters and can be estimated from a corpus of documents by a variational expectation maximisation algorithm, as described by Blei:03.
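The claim above, that the components of the Dirichlet parameter determine the marginal topic probabilities via the Dirichlet mean alpha / sum(alpha), can be checked by sampling; the values of alpha here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 1.0, 1.0])        # illustrative Dirichlet parameter
thetas = rng.dirichlet(alpha, size=200_000)

# Empirical mean of the topic-distribution parameter theta,
# which should be close to alpha / alpha.sum() = [0.5, 0.25, 0.25].
empirical = thetas.mean(axis=0)
```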
Latent Dirichlet allocation was applied by Blei:03 to the tasks of document modelling, document classification and collaborative filtering. They compare latent Dirichlet allocation to several techniques including probabilistic latent semantic analysis; latent Dirichlet allocation outperforms these on all of the applications. Recently, latent Dirichlet allocation has been applied to the task of word sense disambiguation [Cai, Lee, and Teh2007, BoydGraber, Blei, and Zhu2007] with significant success.
Consider the vector space for some alphabet , the space of all bounded functions on possible documents. In this approach, we define the representation of a string to be a projection on the subspace representing the (infinite) set of documents in which all the words in string occur. Again we define a vector for where is the probability of document in the corpus model, we then define a linear functional for an operator on as before by . is thus the probability that a document chosen at random contains all the words that occur in string . In order to estimate we have to integrate over the Dirichlet parameter :
where by we mean that the word occurs in string , and is the probability of observing word in a document generated by the parameter . We estimate this by
where we have assumed a fixed document length . The above formula is an estimate of the probability of a word occurring at least once in a document of length ; the sum over the topic variable gives the probability that the word occurs at any one point in a document given the parameter . We approximated the integral using Monte Carlo sampling to generate values of according to the Dirichlet distribution.
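The Monte Carlo estimate just described can be sketched as follows. The function name is illustrative, and a real system would use the alpha and topic-word matrix estimated from a corpus rather than the toy values in the usage note:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_words_cooccur(word_ids, alpha, beta, doc_len=100, n_samples=2000):
    """Monte Carlo estimate of the probability that every word in
    word_ids occurs at least once in a document of length doc_len,
    under an LDA model with Dirichlet parameter alpha and
    topics-by-vocabulary matrix beta."""
    thetas = rng.dirichlet(alpha, size=n_samples)   # theta ~ Dirichlet(alpha)
    word_probs = thetas @ beta                      # p(w | theta), samples x vocab
    # probability each word occurs at least once in the document
    p_at_least_once = 1.0 - (1.0 - word_probs[:, word_ids]) ** doc_len
    return float(p_at_least_once.prod(axis=1).mean())
```

As a sanity check, a degenerate one-topic model with `beta = [[0.5, 0.5]]` and `doc_len=1` gives each word probability 0.5 of occurring, so two words co-occur with probability 0.25.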
Model                  Accuracy   CWS
Dirichlet ()           0.584      0.630
Dirichlet ()           0.576      0.642
Bayer (MITRE)          0.586      0.617
Glickman (Bar Ilan)    0.586      0.572
Jijkoun (Amsterdam)    0.552      0.559
Newman (Dublin)        0.565      0.600
We built a latent Dirichlet allocation model using Blei:03’s implementation on documents from the British National Corpus, using 100 topics. We evaluated this model on the 800 entailment pairs from the first Recognising Textual Entailment Challenge test set. (We have so far only used data from the first challenge, since we performed the experiment before the other challenges had taken place.) Results were comparable to those obtained by Glickman:05 (see Table 2). In this table, Accuracy is the accuracy on the test set, consisting of 800 entailment pairs, and CWS is the confidence weighted score; see [Dagan, Glickman, and Magnini2005] for the definition. The differences between the accuracy values in the table are not statistically significant because of the small dataset, although all accuracies in the table are significantly better than chance at the 1% level. The accuracy of the model is considerably lower than the state of the art, which is around 75% [BarHaim et al.2006]. We experimented with various document lengths and found very long documents ( and ) to work best.
It is important to note that because the LDA model is commutative, the resulting context algebra must also be commutative, which is clearly far from ideal in modelling natural language.
5 The Model of Clark, Coecke and Sadrzadeh
One of the most sophisticated proposals for a method of composition is that of Clark:08 and the more recent implementation of [Grefenstette et al.2011]. In this section, we will show how their model can be described as a context theory.
The authors describe the syntactic element of their construction using pregroups [Lambek2001], a formalism which simplifies the syntactic calculus of [Lambek1958]. Pregroups can be described in terms of partially ordered monoids: a partially ordered monoid is a monoid with a partial ordering satisfying implies and for all .
[Pregroup] Let be a partially ordered monoid. Then is called a pregroup if for each there are elements and in such that
(1)  
(2)  
(3)  
(4) 
If , we call a reduction of if can be obtained from using only rules (1) and (2) above.
Pregroup grammars are defined by freely generating a pregroup on a set of basic grammatical types. Words are then represented as elements formed from these basic types, for example:
where , and are the basic types for first person singular, statement and object, respectively. It is easy to see that the above sentence reduces to type under the pregroup reductions.
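Such a reduction can be carried out mechanically. In the sketch below, a complex type is a list of (basic type, adjoint order) pairs, with -1 for a left adjoint and +1 for a right adjoint; this encoding and the example type sequence are illustrative assumptions:

```python
def reduce_type(types):
    """Repeatedly apply the pregroup reductions x^l x <= 1 and
    x x^r <= 1: adjacent pairs (b, z), (b, z + 1) cancel."""
    ts = list(types)
    changed = True
    while changed:
        changed = False
        for i in range(len(ts) - 1):
            (a, m), (b, n) = ts[i], ts[i + 1]
            if a == b and n == m + 1:
                del ts[i:i + 2]  # cancel the adjacent adjoint pair
                changed = True
                break
    return ts

# Subject pronoun pi, transitive verb pi^r s o^l, object o:
sentence_type = [("pi", 0), ("pi", 1), ("s", 0), ("o", -1), ("o", 0)]
# reduce_type(sentence_type) leaves only the statement type s.
```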
As Clark:08 note, their construction can be generalised by endowing the grammatical type of a word with a vector nature, in addition to its semantics. We use this slightly more general construction to allow us to formulate it in the context-theoretic framework. We define an elementary meaning space to be the tensor product space where is a vector space representing meanings of words and is a vector space with an orthonormal basis corresponding to the basic grammatical types in a pregroup grammar and their adjoints. We assume that meanings of words live in the tensor algebra space , defined by
For an element in a particular tensor power of , such that , where the are basis vectors of , we can recover a complex grammatical type for as the product , where is the basic grammatical type corresponding to . We will call vectors such as this, which have a single complex type (i.e. they are not formed from a weighted sum of more than one type), unambiguous.
We also assume that words are represented by vectors whose grammatical type is irreducible, i.e. there is no pregroup reduction possible on the type. We define as the vector space generated by all such vectors.
We will now define a product on that will make it an algebra. To do this, it suffices to define the product between two elements which are unambiguous and whose grammatical type is basic, i.e. they can be viewed as elements of . The definition of the product on the rest of the space follows from the assumption of distributivity. We define:
This product is bilinear, since for a particular pair of basis elements, only one of the above two conditions will apply, and both the tensor and inner products are bilinear functions. Moreover, it corresponds to composed and reduced word vectors, as defined in [Clark, Coecke, and Sadrzadeh2008].
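Composition of this kind can be sketched with explicit tensors; the dimensions and random values below are arbitrary illustrative choices, with a transitive verb treated as an order-3 tensor over subject, sentence, and object spaces:

```python
import numpy as np

d_n, d_s = 4, 3                      # noun-space and sentence-space dimensions
rng = np.random.default_rng(1)
subj = rng.random(d_n)               # subject noun vector
obj = rng.random(d_n)                # object noun vector
verb = rng.random((d_n, d_s, d_n))   # verb tensor: subject x sentence x object

# The pregroup reduction pi (pi^r s o^l) o becomes tensor contraction
# of the verb with the subject on its first index and the object on
# its last, yielding a vector in the sentence space:
sentence = np.einsum('i,isj,j->s', subj, verb, obj)
```

The inner products performed by the reduction are exactly the two contractions in the `einsum` call.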
To see how this works on our example sentence above, we assume we have vectors for the meanings of the three words, which we write as . We assume for the purpose of this example that the word like is represented as a product state composed of three vectors, one for each basic grammatical type. This removes any potentially interesting semantics, but allows us to demonstrate the product in a simple manner. We write this as follows: