Symbolic, Distributed and Distributional Representations for Natural Language Processing in the Era of Deep Learning: a Survey

02/02/2017
by   Lorenzo Ferrone, et al.

Natural language and symbols are intimately correlated. Recent advances in machine learning (ML) and in natural language processing (NLP) seem to contradict the above intuition: symbols are fading away, erased by vectors or tensors called distributed and distributional representations. However, there is a strict link between distributed/distributional representations and symbols, the former being an approximation of the latter. A clearer understanding of the strict link between distributed/distributional representations and symbols will certainly lead to radically new deep learning networks. In this paper we present a survey that aims to draw the link between symbolic representations and distributed/distributional representations. This is the right time to revitalize the area of interpreting how symbols are represented inside neural networks.


1 Introduction

Natural language and symbols are intimately correlated. Sounds are transformed into letters or ideograms, and these symbols are composed to obtain words. Words then form sentences, and sentences form texts, discourses, dialogs, which ultimately convey knowledge, emotions, and so on. This composition of symbols into words and of words into sentences follows rules that both the hearer and the speaker know [Chomsky (1957)]. Hence, conceiving natural language understanding systems that are not based on symbols seems to be a heresy.

Recent advances in machine learning (ML) and in natural language processing (NLP) seem to contradict the above intuition: symbols are fading away, erased by vectors or tensors called distributed and distributional representations. In ML, distributed representations are pushing deep learning models [LeCun et al. (2015), Schmidhuber (2015)] towards amazing results in many high-level tasks such as image recognition [He et al. (2016), Simonyan and Zisserman (2014), Zeiler and Fergus (2014)], image generation [Goodfellow et al. (2014)] and image captioning [Vinyals et al. (2015b), Xu et al. (2015)], machine translation [Bahdanau et al. (2014), Zou et al. (2013)], syntactic parsing [Vinyals et al. (2015a), Weiss et al. (2015)] and even game playing at human level [Silver et al. (2016), Mnih et al. (2013)]. In NLP, distributional representations are pursued as a more flexible way to represent the semantics of natural language, the so-called distributional semantics (see [Turney and Pantel (2010)]). Distributed representations are vectors or tensors of real numbers representing the meaning of words, phrases and sentences. Vectors for words are obtained from corpora. Vectors for phrases [Mitchell and Lapata (2008), Baroni and Zamparelli (2010), Clark et al. (2008), Grefenstette and Sadrzadeh (2011), Zanzotto et al. (2010)] and sentences [Socher et al. (2011), Socher et al. (2012), Kalchbrenner and Blunsom (2013)] are instead obtained by composing vectors for words, so as to provide a complete semantic representation. Hence, reasoning with symbols in natural language applications seems to be a relic of an ancient past.

The success of distributed and distributional representations over symbolic approaches is mainly due to the advent of new parallel paradigms that pushed neural networks [Rosenblatt (1958), Werbos (1974)] towards deep learning [LeCun et al. (2015), Schmidhuber (2015)]. Massively parallel algorithms running on Graphic Processing Units (GPUs) [Chetlur et al. (2014), Cui et al. (2015)] crunch vectors, matrices and tensors faster than decades ago. The back-propagation algorithm can now be computed for complex and large neural networks. Symbols are not needed any more during “reasoning”, that is, during the learning of the neural network and its application. They only survive as input and output of these wonderful learning machines.

However, there is a strict link between distributed/distributional representations and symbols, the former being an approximation of the latter [Plate (1994), Plate (1995), Ferrone and Zanzotto (2014), Ferrone et al. (2015)]. The representation of the input and the output of these networks is not that far from their internal representation. The similarity and the interpretation of the internal representation are clearer in image processing. In fact, networks are generally interpreted by visualizing how subparts represent salient subparts of target images. Both input images and subparts are tensors of real numbers. Hence, these networks can be examined and understood. The same does not apply to natural language processing with its symbols. NLP-related networks remain a deep mystery.

A clearer understanding of the strict link between distributed/distributional representations and symbols will certainly lead to radically new deep learning networks. It is then the dawn of a new range of possibilities: understanding which parts of the current symbolic techniques for natural language processing have a sufficient representation in deep neural networks; and, ultimately, understanding whether a more brain-like model – the neural network – is compatible with the methods for syntactic parsing or semantic processing that have been defined in decades of studies in computational linguistics and natural language processing. There is thus a tremendous opportunity to understand whether and how symbolic representations are used and emitted in a brain model.

In this paper we present a survey that aims to draw the link between symbolic representations and distributed/distributional representations. This is the right time to revitalize the area of interpreting how symbols are represented inside neural networks. In our opinion, this survey will help to devise new deep neural networks that can exploit existing and novel symbolic models of classical natural language processing tasks.

The paper is structured as follows: first we give an introduction to the very general concept of representation and to the difference between local and distributed representations [Plate (1995)]. After that, we present each technique in detail. Afterwards, we focus on distributional representations [Turney and Pantel (2010)], which we treat as a specific example of distributed representations. Finally, we discuss more in depth the general issue of compositionality, analyzing three different approaches to the problem: compositional distributional semantics [Clark et al. (2008), Baroni et al. (2014)], holographic reduced representations [Plate (1994), Neumann (2001)], and recurrent neural networks [Kalchbrenner and Blunsom (2013), Socher et al. (2012)].

2 Symbolic and Distributed Representations: Interpretability and Composability

The winning models for learning and searching over textual data have imposed distributed representations as a way to observe symbolic representations, that is, symbols, words, word sequences, syntactic structures and semantic assertions.

Symbolic representations have two important features:

  • interpretability – symbolic representations are interpretable in the sense that we can read them;

  • composability – more complex symbolic representations can be obtained by composing basic symbols with strong combination rules, and basic symbols are still recognizable in the complex representations.

An example of composability is the following. Given the set of basic symbols D = {mouse, cat, a, catches, (, )}, we can compose symbols as sequences, e.g.:

a cat catches a mouse,

or as structures, e.g.:

((a cat) (catches (a mouse)))

If symbols and composing rules are interpretable, sequences or structures of symbols are still interpretable. Moreover, composing symbols can be recovered from these complex representations.

Distributed representations are vectors or tensors in metric spaces which underlie learning models such as neural networks and also some models based on kernel methods [Zanzotto and Dell’Arciprete (2012a)]. Hence, observing textual data as distributed representations is the only way to reason about texts in these metric spaces.

To clarify what distributed representations are, we first describe what local representations are (as referred to in [Plate (1995)]), that is, the common feature vectors used to represent symbols, sequences of symbols [Lodhi et al. (2002), Cancedda et al. (2003)] or symbolic structures [Haussler (1999), Collins and Duffy (2001)] in kernel machines [Cristianini and Shawe-Taylor (2000)] or in searching models such as the vector space model [Salton (1989)]. Given a set of symbols D, a local representation maps the i-th symbol in D to the i-th unit vector e_i in ℝ^n, where n is the cardinality of D. Hence, the i-th unit vector represents the i-th symbol. For example, the above set D may be represented as:

mouse → (1,0,0,0,0,0)   cat → (0,1,0,0,0,0)   a → (0,0,1,0,0,0)
catches → (0,0,0,1,0,0)   ( → (0,0,0,0,1,0)   ) → (0,0,0,0,0,1)

Representing a sequence of symbols with local representations is generally done in two ways: (1) as a sequence of vectors and (2) as a bag-of-symbols. In the first way, a sequence is represented with the sequence of vectors representing the symbols in the sequence; for example, the sequence a cat catches a mouse is:

((0,0,1,0,0,0), (0,1,0,0,0,0), (0,0,0,1,0,0), (0,0,1,0,0,0), (1,0,0,0,0,0))

This first way is used in recurrent neural networks. In the second way, a sequence is represented with one vector, generally obtained with a weighted sum of the vectors representing its symbols; this is generally referred to as a bag-of-symbols. For example, the previous sequence is:

(1, 1, 2, 1, 0, 0)

where vectors have been summed with a weight of 1. This second way is often referred to as the bag-of-words model in information retrieval [Salton (1989)] when the basic symbols are words. The bag-of part of the name stresses that words or other symbols are treated independently, even though they occur in sequences.

A local representation is a wonderful proxy for a symbolic representation. When sequences are sequences of vectors, this is straightforward: sequences of vectors represent sequences as they are, and each vector has a one-to-one mapping to a symbol. Hence, symbolic sequences can be fully reconstructed. When sequences are bag-of-symbols vectors, symbolic sequences cannot be fully reconstructed, but it is still possible to know which symbols were in the sequence. Hence, local representations are a convenient way to map a symbolic representation into a form that can be used in a learning model.
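To make the two encodings concrete, here is a minimal Python/NumPy sketch (ours, not part of the original formulation) that builds the one-hot vectors for the set D and derives both the sequence-of-vectors and the bag-of-symbols representations of a cat catches a mouse; the symbol order is an arbitrary choice.

```python
# Minimal sketch: local (one-hot) representations and the bag-of-symbols
# vector for the example set D and the sequence "a cat catches a mouse".
import numpy as np

D = ["mouse", "cat", "a", "catches", "(", ")"]
one_hot = {s: np.eye(len(D))[i] for i, s in enumerate(D)}

sequence = "a cat catches a mouse".split()

# (1) sequence of vectors: one one-hot vector per symbol, order preserved
seq_of_vectors = [one_hot[s] for s in sequence]

# (2) bag-of-symbols: sum of the one-hot vectors, order is lost
bag = sum(one_hot[s] for s in sequence)

print(bag)  # counts of each symbol: [1. 1. 2. 1. 0. 0.]
```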

However, local representations are extremely inefficient when the size of the symbol set grows and when sparse vectors cannot be used (for example, in deep neural networks). Moreover, these local representations lack the ability to capture interesting properties of datasets. This is where distributed representations come in.

Distributed representations instead hide symbols and sequences of symbols in vectors where each dimension is not a symbol: the whole vector represents a symbol or a sequence of symbols. To give an intuition of how this can be pursued, let's take the above set D. A different way to represent the set is to treat each element not as an unanalyzable entity, uncorrelated to everything else, but as something which possesses some properties, and then use those properties to represent each item. Words and symbols in D can be distinguished with three properties: the number of vowels, the number of consonants and, finally, the number of non-alphabetic characters. Given these properties, one could then devise the following representation:

mouse → (3, 2, 0)   cat → (1, 2, 0)   a → (1, 0, 0)
catches → (2, 5, 0)   ( → (0, 0, 1)   ) → (0, 0, 1)

This is a simple example of a distributed representation. In a distributed representation [Plate (1995), Hinton et al. (1986)] the informational content is distributed (hence the name) among multiple units, and at the same time each unit can contribute to the representation of multiple elements.

A distributed representation has two evident advantages with respect to a local representation: it is more efficient (in the example, the representation uses only 3 numbers instead of 6) and it does not treat each element as being equally different to any other. In fact, mouse and cat in this representation are more similar than mouse and a. In other words, this representation captures by construction something interesting about the set of symbols.

Unfortunately, distributed representations are harder to interpret than symbolic structures and do not have a clear model for composing symbols into larger structures.

We then re-define the two properties Interpretability and Composability for distributed representations. These two properties aim to measure how far distributed representations are from symbolic representations.

Interpretability

Interpretability is the possibility of decoding distributed representations, that is, of extracting the embedded symbolic representations. This is an important characteristic, but it must be noted that it is not a simple yes-or-no classification: it is rather a degree associated with specific representations. In fact, even if each component of a vector representation does not have a specific meaning, this does not mean that the representation is not interpretable as a whole, or that symbolic information cannot be recovered from it. For this reason, we can categorize the degree of interpretability of a representation as follows:

  • human-interpretable – each dimension of a representation has a specific meaning;

  • decodable – the representation may be obscure, but it can be decoded into an interpretable, symbolic representation.

Composability

Composability is the possibility of composing basic distributed representations with strong rules and of decomposing composed representations back with inverse rules. Generally, in NLP, basic distributed representations refer to basic symbols. It is worth noticing that composability is related to, but is not the same concept as, compositionality. Composability describes how vectors or symbols can be composed to represent larger structures. Compositionality instead describes how the semantics of larger structures can be obtained from the semantics of their components. In distributed or, better, in distributional semantic representations [Baroni et al. (2014), Mitchell and Lapata (2010)] these two concepts often coincide.

The two axes of Interpretability and Composability will be used to describe the presented distributed representations, as we are interested in understanding whether or not a representation can be used to represent structures or sequences and whether it is possible to extract back the underlying structure or sequence given a distributed representation. It is clear that a local representation is more interpretable than a distributed representation. Yet, both representations lack composability when sequences or structures are collapsed into vectors or tensors whose size does not depend on the length of the represented sequences or structures. For example, the bag-of-symbols local representation does not take into consideration the order of the symbols in the sequence.

3 Strategies to obtain distributed representations from symbols

There is a wide range of techniques to transform symbolic representations into distributed representations. When combining natural language processing and machine learning, this is a major issue: transforming symbols, sequences of symbols or symbolic structures into vectors or tensors that can be used in learning machines. These techniques generally propose a function that transforms a local representation with a large dimensionality into a distributed representation with a low dimensionality:

f : ℝ^n → ℝ^d,   with d ≪ n

This function is often called encoder.

The techniques described in this section generally start from a collection of vectors obtained from a collection of sequences of symbols. These vectors generally represent sequences or structures with a bag-of-symbols representation. As a running example, we give the following, where bag-of-symbols representations are obtained by simply counting the occurrences of symbols in each sequence. Symbols are words. The collection of sequences, that is the corpus, is reported in Table 1.

a cat catches a mouse
a dog eats a mouse
a dog catches a cat
Table 1: A very small corpus

Hence, the local representation of these sequences is represented by the following matrix:

            s1   s2   s3
a            2    2    2
cat          1    0    1
catches      1    0    1     (1)
mouse        1    1    0
dog          0    1    1
eats         0    1    0

where rows represent symbols and columns are the bag-of-symbols representations of the sequences of symbols in the collection.
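The matrix in Eq. 1 can be reproduced with a few lines of Python/NumPy; the following sketch is only illustrative and assumes the vocabulary order used above.

```python
# Hedged illustration: the word-vs-sequence count matrix of Eq. 1
# for the toy corpus of Table 1.
import numpy as np

corpus = ["a cat catches a mouse",
          "a dog eats a mouse",
          "a dog catches a cat"]

vocab = ["a", "cat", "catches", "mouse", "dog", "eats"]
index = {w: i for i, w in enumerate(vocab)}

# rows = symbols, columns = bag-of-symbols representations of the sequences
X = np.zeros((len(vocab), len(corpus)), dtype=int)
for j, sentence in enumerate(corpus):
    for word in sentence.split():
        X[index[word], j] += 1

print(X)
# [[2 2 2]
#  [1 0 1]
#  [1 0 1]
#  [1 1 0]
#  [0 1 1]
#  [0 1 0]]
```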

We propose to categorize techniques to obtain distributed representations into two broad categories, which show some degree of overlap:

  • representations derived from dimensionality reduction techniques;

  • learned representations

In the rest of the section, we will introduce the different strategies according to the proposed categorization. Moreover, we will emphasize the degree of interpretability of each representation and its related encoding function by answering two questions:

  • Does a specific dimension have a clear meaning?

  • Can we decode an encoded symbolic representation? In other words, assuming a decoding function g, how far is g(f(v)), which should represent the original symbolic representation, from v?

Instead, composability of the resulting representations will be analyzed in Sec. 5.

3.1 Dimensionality reduction

Dimensionality reduction techniques were born in order to deal with high-dimensional data. High-dimensional data present two main challenges. The first is computational in nature: some algorithms scale poorly as the number of dimensions increases, making some problems intractable. The second is theoretical and is usually called the curse of dimensionality [Friedman (1997), Daum and Huang (2003), Keogh and Mueen (2011), Bellman and Corporation (1957)]: the number of samples one needs in order to perform useful inference on a dataset increases exponentially with the dimensionality of the representation.

Dimensionality reduction techniques are thus algorithms that generally extract a linear function which transforms local representations in a space ℝ^n into distributed representations in a lower-dimensional space ℝ^d, while preserving as much as possible the properties of the original space. The properties that algorithms preserve vary among techniques and can be divided into statistical properties and topological properties.

3.1.1 Principal Component Analysis

Principal Component Analysis (PCA) [Markovsky (2012), Pearson (1901)] is a linear method which reduces the number of dimensions by projecting data onto the “best” linear subspace of a given dimension, estimated from a set of data points. The “best” linear subspace is the subspace whose dimensions maximize the variance of the data points in the set. PCA can be interpreted either as a probabilistic method or as a matrix approximation; in the latter case it is usually known as truncated singular value decomposition. We are here interested in describing PCA as a probabilistic method, as this view relates to the interpretability of the derived distributed representations.

As a probabilistic method, PCA finds an orthogonal projection matrix W such that the variance of the projected set of data points is maximized. The set of data points is arranged in a matrix X where each row is a single observation. Hence, the variance that is maximized is the variance of the projected data XW.

More specifically, let's consider the first weight vector w_1, which maps an element x of the dataset into a single number ⟨x, w_1⟩. Maximizing the variance means that w_1 is such that:

w_1 = argmax_{∥w∥=1} Σ_i ⟨x_i, w⟩²

and it can be shown that the optimal value is achieved when w_1 is the eigenvector of XᵀX with the largest eigenvalue. This then produces a projected dataset X w_1.

The algorithm can then compute iteratively the second and further components by first subtracting the components already computed from X:

X − X w_1 w_1ᵀ

and then proceeding as before. However, it turns out that all subsequent components are related to the eigenvectors of the matrix XᵀX, that is, the k-th weight vector w_k is the eigenvector of XᵀX with the k-th largest corresponding eigenvalue.

The encoding matrix for distributed representations derived with a PCA method is the matrix:

W = (w_1 … w_d)

where the w_i are eigenvectors with eigenvalues decreasing with i. Hence, local representations v ∈ ℝ^n are represented as distributed representations y = Wᵀv in ℝ^d.

Hence, vectors y = Wᵀv are human-interpretable, as their dimensions represent linear combinations of dimensions in the original local representation, and these dimensions are ordered according to their importance in the dataset, that is, the variance they account for. Moreover, each dimension is a linear combination of the original symbols. Then, the matrix W reports on which combinations of the original symbols are more important to distinguish data points in the set.

Moreover, vectors y are decodable. The decoding function is:

g(y) = W y

and g(Wᵀv) = v if d is the rank of the matrix X; otherwise it is a degraded approximation (for more details refer to [Fodor (2002), Sorzano et al. (2014)]). Hence, distributed vectors in ℝ^d can be decoded back into the original symbolic representation with a degree of approximation that depends on the distance between d and the rank of the matrix X.
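A minimal NumPy sketch of PCA used as an encoder/decoder is given below; it is our own illustration (data, sizes and variable names are invented) and uses the singular value decomposition, whose right singular vectors are the eigenvectors of XᵀX discussed above.

```python
# Sketch of PCA as an encoder/decoder: rows of X are observations, W collects
# the top-d principal directions, decoding uses W (exact only when d = rank(X)).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))          # 100 local representations in R^6
Xc = X - X.mean(axis=0)                # centre the data

d = 3
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:d].T                           # n x d encoding matrix

v = Xc[0]                              # a local representation
y = W.T @ v                            # encode: distributed representation in R^d
v_hat = W @ y                          # decode: approximation of v

print(np.linalg.norm(v - v_hat))       # reconstruction error (0 only if d = rank)
```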

The compelling limit of PCA is that all the data points have to be available in order to obtain the encoding/decoding matrices. This is not feasible in two cases: first, when the model has to deal with big data; second, when the set of symbols to be encoded is extremely large. In this latter case, local representations cannot be used to produce the matrices needed for applying PCA.

Moreover, PCA is not composable. In fact, PCA-related distributed representations can encode only bags-of-symbols.

3.1.2 Random Projections

Random projection (RP) [Bingham and Mannila (2001), Fodor (2002)] is a technique based on random matrices W ∈ ℝ^{d×n}. Generally, the rows of the matrix W are sampled from a Gaussian distribution with zero mean and normalized so as to have unit length [Johnson and Lindenstrauss (1984)], or generated as even less complex random vectors [Achlioptas (2003)]. Random projections from Gaussian distributions approximately preserve pairwise distances between points (see the Johnson–Lindenstrauss Lemma [Johnson and Lindenstrauss (1984)]), that is, for any pair of vectors x and y:

(1 − ε)∥x − y∥² ≤ ∥Wx − Wy∥² ≤ (1 + ε)∥x − y∥²

where the approximation factor ε depends on the dimension of the projection, namely, to assure that the approximation factor is ε, the dimension d must be chosen such that:

d ≥ C log m / ε²

for a suitable constant C, where m is the number of data points to be projected.

Constraints for building the matrix W can be significantly relaxed to less complex random vectors [Achlioptas (2003)]. Rows of the matrix can be sampled from very simple zero-mean distributions such as:

W_ij = √3 · { +1 with probability 1/6;  0 with probability 2/3;  −1 with probability 1/6 }

without the need to manually ensure unit length of the rows, and at the same time providing a significant speed-up in computation due to the sparsity of the projection.

Unfortunately, vectors Wv are not human-interpretable: even if their dimensions represent linear combinations of dimensions in the original local representation, these dimensions have no clear interpretation or particular properties.

On the contrary, vectors Wv are decodable. The decoding function is:

g(y) = Wᵀy

and WᵀW ≈ I when W is derived using Gaussian random vectors. Hence, distributed vectors in ℝ^d can be approximately decoded back into the original symbolic representation, with a degree of approximation that depends on the dimension d of the projection.
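The following NumPy sketch illustrates the encode/decode cycle of a Gaussian random projection; dimensions and the chosen symbols are arbitrary, and the decoder is simply the transpose, as described above.

```python
# Hedged sketch of random projection: a Gaussian matrix W with unit-length rows
# encodes local vectors, and W^T gives an approximate decoder.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10000, 300                       # large vocabulary, small projected space

W = rng.normal(0.0, 1.0, size=(d, n))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-length rows

v = np.zeros(n)
v[[7, 42, 1000]] = 1.0                  # a sparse bag-of-symbols vector

y = W @ v                               # encode into R^d
v_hat = W.T @ y                         # approximate decode back into R^n

# the largest reconstructed components point back to the original symbols
print(np.argsort(v_hat)[-3:])           # very likely 7, 42 and 1000, in some order
```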

The major advantage of RP with respect to PCA is that the matrix of all the data points is not needed to derive the projection matrix W. Moreover, the matrix W can be produced à-la-carte, starting from the symbols encountered so far in the encoding procedure. In fact, it is sufficient to generate new random Gaussian vectors for new symbols when they appear.

However, RP as described in this section is also not composable. As with PCA-related distributed representations, RP can model only bags-of-symbols.

3.2 Learned representation

Learned representations differ from dimensionality reduction techniques in that: (1) encoding/decoding functions may not be linear; (2) learning can optimize objective functions different from the one targeted by PCA; and (3) solutions are not derived in closed form but are obtained using optimization techniques such as stochastic gradient descent.

Learned representations can be further classified into:

  • task-independent representations learned with a standalone algorithm (as in autoencoders [Socher et al. (2011), Liou et al. (2014)]), which is independent from any task and learns a representation that depends only on the dataset used;

  • task-dependent representations learned as the first step of another algorithm (this is called end-to-end training), usually the first layer of a deep neural network. In this case the new representation is driven by the task.

3.2.1 Autoencoder

Autoencoders are a task independent technique to learn a distributed representation encoder by using local representations of a set of examples [Socher et al. (2011), Liou et al. (2014)]. The distributed representation encoder is half of an autoencoder.

An autoencoder is a neural network that aims to reproduce an input vector x ∈ ℝ^n as output by passing it through one or more hidden layers in ℝ^d. Given enc and dec as the encoder and the decoder, respectively, an autoencoder aims to minimize the following reconstruction loss:

L = Σ_x ∥x − dec(enc(x))∥²

The encoding and decoding modules are two neural networks, which means that they are functions depending on a set of parameters, of the form

enc(x) = s(Wx + b)        dec(y) = s(W′y + b′)

where the parameters of the entire model are θ = (W, W′, b, b′), with W and W′ matrices, b and b′ vectors, and s a function that can be either a sigmoid-shaped non-linearity or, in some cases, the identity function. In some variants the matrices W and W′ are constrained to W′ = Wᵀ. This model is different from PCA due to the target loss function and the use of non-linear functions.

Autoencoders have been further improved with denoising autoencoders [Vincent et al. (2010), Vincent et al. (2008), Masci et al. (2011)], a variant of autoencoders where the goal is to reconstruct the input from a corrupted version. The intuition is that higher-level features should be robust to small noise in the input. In particular, the input gets corrupted via a stochastic function:

x̃ = q(x)

and then one minimizes again the reconstruction error, but with regard to the original (uncorrupted) input:

L = Σ_x ∥x − dec(enc(x̃))∥²

Usually q can be either:

  • adding Gaussian noise: x̃ = x + ε, where ε ∼ N(0, σ²I);

  • masking noise: a given fraction ν of the components of the input is set to 0.
As for interpretability, similarly to random projections, distributed representations obtained with encoders from autoencoders and denoising autoencoders are not human-interpretable, but they are decodable, as this is the very nature of autoencoders.

Moreover, composability is not covered by this formulation of autoencoders.
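A minimal sketch of a (denoising) autoencoder over the one-hot vectors of the six symbols is given below; it assumes the PyTorch library, and the architecture sizes, noise level and training schedule are illustrative choices, not those of any cited work.

```python
# Minimal denoising-autoencoder sketch (assuming PyTorch): encode 6 one-hot
# symbols into R^3 and reconstruct the clean input from a corrupted one.
import torch
import torch.nn as nn

n, d = 6, 3                                   # local size, distributed size

encoder = nn.Sequential(nn.Linear(n, d), nn.Sigmoid())
decoder = nn.Linear(d, n)
model = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

X = torch.eye(n)                              # local (one-hot) representations
for step in range(2000):
    noisy = X + 0.1 * torch.randn_like(X)     # stochastic corruption q(x)
    loss = loss_fn(model(noisy), X)           # reconstruct the clean input
    opt.zero_grad()
    loss.backward()
    opt.step()

z = encoder(X)                                # distributed representations in R^d
x_hat = decoder(z)                            # decoded back (approximately)
```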

3.2.2 Embedding layers

Embedding layers are generally the first layers of more complex neural networks, responsible for transforming an initial local representation into the first internal distributed representation. The main difference with autoencoders is that these layers are shaped by the entire overall learning process, which is generally task-dependent. Hence, these first embedding layers depend on the final task.

It is argued that each layer learns a higher-level representation of its input. This is particularly visible with convolutional networks [Krizhevsky et al. (2012)] applied to computer vision tasks: in suggestive visualizations [Zeiler and Fergus (2014)], the hidden layers are seen to correspond to abstract features of the image, starting from simple edges (in lower layers) up to faces (in the higher ones).

However, these embedding layers produce encoding functions and, thus, distributed representations that are not interpretable when applied to symbols. In fact, these distributed representations are not human-interpretable as dimensions are not clearly related to specific aggregations of symbols. Moreover, these embedding layers do not naturally provide decoders. Hence, this distributed representation is not decodable.

4 Distributional Representations as another side of the coin

Distributional semantics is an important area of research in natural language processing that aims to describe meaning of words and sentences with vectorial representations (see [Turney and Pantel (2010)] for a survey). These representations are called distributional representations.

It is a strange historical accident that two similar-sounding names – distributed and distributional – have been given to two concepts that are often confused. Maybe this has happened because the two concepts are definitely related. We argue that distributional representations are nothing more than a subset of distributed representations, and in fact they can be categorized neatly into the divisions presented in the previous section.

Distributional semantics is based on a famous slogan – “you shall know a word by the company it keeps” [Firth (1957)] – and on the distributional hypothesis [Harris (1964)]: words have similar meanings if they are used in similar contexts, that is, words with the same or similar distributions. Hence, the name distributional as well as the core hypothesis come from a linguistic rather than a computer science background.

Distributional vectors represent words by describing information related to the contexts in which they appear. Put in this way, it is apparent that a distributional representation is a specific case of a distributed representation, and the different name is only an indicator of the context in which these techniques originated. Representations for sentences are generally obtained by combining vectors representing words.

Hence, distributional semantics is a special case of distributed representations with a restriction on what can be used as features in vector spaces: features represent bits of contextual information. Then, the largest body of research is on what should be used to represent contexts and how it should be taken into account. Once this is decided, large matrices representing words in contexts are collected and, then, dimensionality reduction techniques are applied to obtain tractable and more discriminative vectors.

In the rest of the section, we present how to build matrices representing words in contexts, we shortly recap how dimensionality reduction techniques have been used in distributional semantics and, finally, we report on word2vec [Mikolov et al. (2013)], which is a more recent distributional semantic technique based on deep learning.

4.1 Building distributional representations for words from a corpus

The major issue in distributional semantics is how to build distributional representations for words by observing word contexts in a collection of documents. In this section, we will describe these techniques using the example of the corpus in Table 1.

A first and simple distributional semantic representation of words is given by word-vs-document matrices, such as those typical of information retrieval [Salton (1989)]. Word contexts are represented by document indexes. Then, words are similar if they appear similarly across documents. This is generally referred to as topical similarity [Landauer and Dumais (1997)], as words belonging to the same topic tend to be more similar. An example of this approach is given by the matrix in Eq. 1. In fact, this matrix is already a distributional and distributed representation for words, which are represented as vectors in its rows.

A second strategy to build distributional representations for words is to build word-vs-contextual-feature matrices. These contextual features represent proxies for semantic attributes of the modeled words [Baroni and Lenci (2010)]. For example, contexts of the word dog will somehow relate to the fact that a dog has four legs, barks, eats, and so on. In this case, these vectors capture a similarity that is more related to co-hyponymy, that is, words sharing similar attributes are similar. For example, dog is more similar to cat than to car, as dog and cat share more attributes than dog and car. This is often referred to as attributional similarity [Turney (2006)].

A simple example of this second strategy is given by word-to-word matrices obtained by observing n-word windows of target words. For example, a word-to-word matrix obtained for the corpus in Table 1 by considering a 1-word window is the following:

            a   cat  catches  mouse  dog  eats
a           0    2      2       2     2    1
cat         2    0      1       0     0    0
catches     2    1      0       0     1    0     (2)
mouse       2    0      0       0     0    0
dog         2    0      1       0     0    1
eats        1    0      0       0     1    0

Hence, the word cat is represented by the vector (2, 0, 1, 0, 0, 0), and the similarity between cat and dog is higher than the similarity between cat and mouse, as the cosine similarity between the vectors of cat and dog is higher than the cosine similarity between the vectors of cat and mouse.
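The matrix in Eq. 2 and the two cosine similarities can be checked with the following illustrative Python/NumPy sketch; the exact counting convention (symmetric adjacency, no self-counts) is our assumption.

```python
# Hedged sketch: 1-word-window co-occurrence counts for the corpus of Table 1
# and the cosine check that cat is closer to dog than to mouse.
import numpy as np

corpus = ["a cat catches a mouse",
          "a dog eats a mouse",
          "a dog catches a cat"]
vocab = ["a", "cat", "catches", "mouse", "dog", "eats"]
idx = {w: i for i, w in enumerate(vocab)}

C = np.zeros((len(vocab), len(vocab)))
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 1):          # adjacent pairs only (1-word window)
        a, b = idx[words[i]], idx[words[i + 1]]
        C[a, b] += 1
        C[b, a] += 1

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(C[idx["cat"]], C[idx["dog"]]))    # ~0.91
print(cosine(C[idx["cat"]], C[idx["mouse"]]))  # ~0.89
```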

The research on distributional semantics focuses on two aspects: (1) the best features to represent contexts; (2) the best correlation measure among target words and features.

How to represent contexts is a crucial problem in distributional semantics. This problem is strictly correlated to the classical question of feature definition and feature selection in machine learning. A wide variety of features have been tried. Contexts have been represented as sets of relevant words, sets of relevant syntactic triples involving target words [Pado and Lapata (2007), Rothenhäusler and Schütze (2009)] and sets of labeled lexical triples [Baroni and Lenci (2010)].

Finding the best correlation measure between target words and their contextual features is the other issue. Many correlation measures have been tried. The classical measures are term frequency-inverse document frequency (tf-idf) [Salton (1989)] and point-wise mutual information (PMI). These, among other measures, are used to better capture the importance of contextual features for representing the distributional semantics of words.

This first formulation of distributional semantics is a distributed representation that is interpretable. In fact, features represent contextual information which is a proxy for semantic attributes of target words [Baroni and Lenci (2010)].

4.2 Compacting distributional representations

As distributed representations, distributional representations can undergo the process of dimensionality reduction (see Sec. 3.1) with Principal Component Analysis and Random Indexing. This process is used for two reasons. The first is the classical problem of reducing the dimensions of the representation to obtain more compact representations. The second is instead to help the representation focus on more discriminative dimensions. This latter issue relates to feature selection and merging, which is an important step in making these representations more effective on the final task of similarity detection.

Principal Component Analysis (PCA) is largely applied in compacting distributional representations: Latent Semantic Analysis (LSA) is a prominent example [Landauer and Dumais (1997)]. LSA was born in information retrieval with the idea of reducing word-to-document matrices. Hence, in this compact representation, word contexts are documents and distributional vectors of words report on the documents where words appear. This or similar matrix reduction techniques have then been applied to word-to-word matrices.

In Distributional Semantics, random indexing has been used to solve some issues that arise naturally with PCA when working with large vocabularies and large corpora. PCA has some scalability problems:

  • The original co-occurrence matrix is very costly to obtain and store; moreover, it is needed only to be transformed afterwards;

  • Dimensionality reduction is also very costly; moreover, with the dimensions at hand, it can only be done with iterative methods;

  • The entire method is not incremental: if we want to add new words to our corpus, we have to recompute the entire co-occurrence matrix and then re-perform the PCA step.

Random Indexing [Sahlgren (2005)] solves these problems: it is an incremental method (new words can easily be added at any time at low computational cost) which creates word vectors of reduced dimensionality without the need to create the full-dimensional co-occurrence matrix.

Interpretability of compacted distributional semantic vectors is comparable to the interpretability of distributed representations obtained with the same techniques.

4.3 Learning representations: word2vec

Figure 1: word2vec: CBOW model

Recently, the distributional hypothesis has invaded neural networks: word2vec [Mikolov et al. (2013)] uses contextual information to learn word vectors. Hence, we discuss this technique in the section devoted to distributional semantics.

The name word2vec covers two similar techniques, called skip-gram and continuous bag of words (CBOW). Both methods are neural networks: the former takes a word as input and tries to predict its context, while the latter does the reverse, predicting a word from the words surrounding it. With these techniques there is no explicitly computed co-occurrence matrix, nor an explicit association feature between pairs of words; instead, the regularities and distributions of the words are learned implicitly by the network.

We describe only CBOW because it is conceptually simpler and because the core ideas are the same in both cases. The full network is generally realized with two layers W1 and W2 plus a softmax layer to reconstruct the final vector representing the word. In the learning phase, the input and the output of the network are local representations of words. In CBOW, the network aims to predict a target word given its context words. For example, given a sentence of the corpus in Table 1, the network has to predict catches given its context (see Figure 1).

Hence, CBOW offers an encoder W1, that is, a linear word encoder learned from data, mapping local representations of dimension n into distributed representations of dimension d, where n is the size of the vocabulary and d is the size of the distributional vectors. This encoder models contextual information learned by maximizing the prediction capability of the network. A nice description of how this approach relates to previous techniques is given in [Goldberg and Levy (2014)].

Clearly, CBOW distributional vectors are not easily interpretable by humans or machines. In fact, specific dimensions of the vectors do not have a particular meaning and, differently from what happens for autoencoders (see Sec. 3.2.1), these networks are not trained to be invertible.
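For completeness, the following usage sketch shows how CBOW vectors could be trained on the toy corpus of Table 1; it assumes the gensim library (v4 API) and the hyper-parameters are purely illustrative.

```python
# Hedged usage sketch (assuming gensim): CBOW word vectors on the toy corpus.
from gensim.models import Word2Vec

sentences = [["a", "cat", "catches", "a", "mouse"],
             ["a", "dog", "eats", "a", "mouse"],
             ["a", "dog", "catches", "a", "cat"]]

model = Word2Vec(sentences, vector_size=10, window=2,
                 min_count=1, sg=0, epochs=200)   # sg=0 selects CBOW

print(model.wv["cat"])                    # the distributed vector of "cat"
print(model.wv.similarity("cat", "dog"))  # similarity learned from contexts
```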

5 Composing distributed representations

In the previous sections, we described how one symbol or a bag-of-symbols can be transformed into distributed representations, focusing on whether these distributed representations are interpretable. In this section, we want to investigate a second and important aspect of these representations: are they composable as symbolic representations are? And, if these representations are composed, are they still interpretable?

Composability is the ability of a symbolic representation to describe sequences or structures by composing symbols with specific rules. In this process, symbols remain distinct and composing rules are clear. Hence, final sequences and structures can be used for subsequent steps as knowledge repositories.

Composability is an important aspect for any representation and, thus, for a distributed representation. Understanding to what extent a distributed representation is composable and how information can be recovered from it is then a critical issue. In fact, this issue has been strongly posed by Plate [Plate (1995), Plate (1994)], who analyzed how some specific distributed representations encode structural information and how this structural information can be recovered back.

Current approaches for treating distributed/distributional representations of sequences and structures mix two aspects in one model: a “semantic” aspect and a representational aspect. Generally, the semantic aspect is the predominant one and the representational aspect is left aside. By the “semantic” aspect, we refer to the reason why distributed symbols are composed: a final task in neural network applications or the need to give a distributional semantic vector to sequences of words. The latter is the case for compositional distributional semantics [Clark et al. (2008), Baroni et al. (2014)]. By the representational aspect, we refer to the fact that composed distributed representations are in fact representing structures, and that these representations can be decoded back in order to extract what is in these structures.

Although the “semantic” aspect seems to be predominant in models-that-compose, the convolution conjecture [Zanzotto et al. (2015)] hypothesizes that the two aspects coexist and that the representational aspect always plays a crucial role. According to this conjecture, structural information is preserved in any model-that-composes, and structural information emerges back when comparing two distributed representations with the dot product to determine their similarity.

Hence, given the convolution conjecture, models-that-compose produce distributed representations for structures that can be interpreted back. Interpretability is a very important feature in these models-that-compose which will drive our analysis.

In this section we will explore the issues faced with the compositionality of representations and the main “trends”, which correspond somewhat to the categories already presented. In particular, we will start from the work on compositional distributional semantics, then we revise the work on holographic reduced representations [Plate (1995), Neumann (2001)] and, finally, we analyze the recent approaches with recurrent and recursive neural networks. Again, these categories are not entirely disjoint, and methods presented in one class can often be interpreted as belonging to another class.

5.1 Compositional Distributional Semantics

In distributional semantics, models-that-compose have the name of compositional distributional semantics models (CDSMs) [Baroni et al. (2014), Mitchell and Lapata (2010)] and aim to apply the principle of compositionality [Frege (1884), Montague (1974)] to compute distributional semantic vectors for phrases. These CDSMs produce distributional semantic vectors of phrases by composing distributional vectors of words in these phrases. These models generally exploit structured or syntactic representations of phrases to derive their distributional meaning. Hence, CDSMs aim to give a complete semantic model for distributional semantics.

As in distributional semantics for words, the aim of CDSMs is to produce similar vectors for semantically similar sentences, regardless of their length or structure. For example, words and word definitions in dictionaries should have similar vectors, as discussed in [Zanzotto et al. (2010)]. As usual in distributional semantics, similarity is captured with dot products (or similar metrics) among distributional vectors.

The applications of these CDSMs encompass multi-document summarization, recognizing textual entailment [Dagan et al. (2013)] and, obviously, semantic textual similarity detection [Agirre et al. (2013)].

Apparently, these CDSMs are far from composable distributed representations that can be interpreted back. In some sense, by their nature the resulting vectors are meant to forget how they were obtained and to focus on the final distributional meaning of phrases. However, there is some evidence that this is not exactly the case.

The convolution conjecture [Zanzotto et al. (2015)] suggests that many CDSMs produce distributional vectors where structural information and vectors for individual words can still be interpreted. Hence, many CDSMs are composable and interpretable.

In the rest of this section, we will show some classes of these CDSMs and focus on describing how these models are interpretable.

5.1.1 Additive Models

Additive models for compositional distributional semantics are important examples of models-that-compose where the semantic and the representational aspects are clearly separated. Hence, these models can be highly interpretable.

These additive models have been formally captured in the general framework for two-word sequences proposed by Mitchell and Lapata [Mitchell and Lapata (2008)]. The general framework for composing distributional vectors of two-word sequences “u v” is the following:

p = f(u, v; R; K)        (3)

where p is the composition vector, u and v are the vectors for the two words u and v, R is the grammatical relation linking the two words and K is any other additional knowledge used in the composition operation. In the additive model, this equation takes the following form:

p = A_R u + B_R v        (4)

where A_R and B_R are two square matrices depending on the grammatical relation R, which may be learned from data [Zanzotto et al. (2010), Guevara (2010)].

Before investigating whether these models are interpretable, let us introduce a recursive formulation of additive models which can be applied to structural representations of sentences. For this purpose, we use dependency trees. A dependency tree can be defined as a tree whose nodes are words and whose typed links are the relations between two words. The root of the tree represents the word that governs the meaning of the sentence. A dependency tree T is then a word if it is a final node, or it has a root word r and links (R_i, T_i), where T_i is the i-th subtree of the node r and R_i is the relation that links the node r with T_i. The dependency trees of two example sentences are reported in Figure 2. The recursive formulation is then the following:

According to the recursive definition of the additive model, the function f results in a linear combination of terms of the form M_s · w_s, where M_s is a product of relation matrices that represents a piece of the structure and w_s is the distributional meaning of one word in that structure, that is:

f(T) = Σ_{s ∈ S(T)} M_s w_s

where S(T) contains the relevant substructures of T, in this case the link chains. For example, the first sentence in Fig. 2 is represented by a sum of such terms, where each term has a part (the product of matrices M_s) that represents the structure and a part (the vector w_s) that represents the meaning of one word.

Hence, this recursive additive model for compositional semantics is a model-that-composes which, in principle, can be highly interpretable. By selecting matrices A_R such that:

A_Rᵀ A_R ≈ I   and   A_Rᵀ A_{R′} ≈ 0 for R ≠ R′        (5)

it is possible to recover the distributional semantic vectors related to words that are in specific parts of the structure. For example, the distributional vector of the main verb of the sample sentence in Fig. 2 can be approximately recovered by multiplying f(T) by the transpose of the matrix product that encodes the link chain leading to that verb.

In general, matrices derived for compositional distributional semantic models [Guevara (2010), Zanzotto et al. (2010)] do not have this property, but it is possible to obtain matrices with this property by applying the Johnson–Lindenstrauss transform [Johnson and Lindenstrauss (1984)] or similar techniques, as discussed also in [Zanzotto et al. (2015)].
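The following NumPy sketch illustrates the interpretability argument: with nearly orthogonal random matrices playing the role of the A_R in Eq. 5, a word vector can be approximately singled out from the composed vector; relation names, dimensions and vectors are all invented for the example.

```python
# Illustrative sketch: recursive additive composition with nearly orthogonal
# random matrices, followed by approximate recovery of one word vector.
import numpy as np

rng = np.random.default_rng(0)
d = 1000

def relation_matrix(d):
    # Gaussian matrix scaled so that M.T @ M approximates the identity (Eq. 5)
    return rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, d))

A_subj, A_obj = relation_matrix(d), relation_matrix(d)
w_cat, w_mouse, w_catches = (rng.normal(size=d) for _ in range(3))

# additive composition: structure matrices applied to word vectors, then summed
f = w_catches + A_subj @ w_cat + A_obj @ w_mouse

# approximate recovery of the subject via the transpose of its relation matrix
w_subj_hat = A_subj.T @ f
for name, w in [("cat", w_cat), ("mouse", w_mouse), ("catches", w_catches)]:
    cos = w_subj_hat @ w / (np.linalg.norm(w_subj_hat) * np.linalg.norm(w))
    print(name, round(cos, 2))
# "cat" scores markedly higher (around 0.5) than the alternatives (around 0):
# the subject is still recognizable inside the composed vector f
```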

Figure 2: A sentence and its dependency graph

5.1.2 Lexical Functional Compositional Distributional Semantic Models

Lexical Functional Models are compositional distributional semantic models where words are tensors and each type of word is represented by tensors of different order. Composing meaning then means composing these tensors to obtain vectors. These models have a solid mathematical background linking Lambek pregroup theory, formal semantics and distributional semantics [Coecke et al. (2010)]. Lexical Function models are composable distributed representations; yet, in the following, we will examine whether these models produce vectors that may be interpreted.

To determine whether these models produce interpretable vectors, we start from a simple Lexical Function model applied to two-word sequences. This model has been largely analyzed in [Baroni and Zamparelli (2010)], where matrices were considered better linear models to encode adjectives.

In Lexical Functional models over two-word sequences, one of the two words is represented as a tensor of order 2 (that is, a matrix) and the other word is represented by a vector. For example, adjectives are matrices and nouns are vectors [Baroni and Zamparelli (2010)] in adjective-noun sequences. Hence, adjective-noun sequences like “black cat” or “white dog” are represented as:

A_black v_cat        A_white v_dog

where A_black and A_white are matrices representing the two adjectives and v_cat and v_dog are the two vectors representing the two nouns.

These two-word models are partially interpretable: knowing the adjective, it is possible to extract the noun, but not vice-versa. In fact, if matrices for adjectives are invertible, there is the possibility of extracting which nouns have been related to particular adjectives. For example, if A_black is invertible, the inverse matrix A_black⁻¹ can be used to extract the vector of cat from the vector representing “black cat”:

v_cat = A_black⁻¹ (A_black v_cat)

This contributes to the interpretability of this model. Moreover, if matrices for adjectives are built using Johnson–Lindenstrauss transforms [Johnson and Lindenstrauss (1984)], that is, matrices with the property in Eq. 5, it is possible to pack different pieces of sentences in a single vector and, then, select only the relevant information, for example:

A_blackᵀ (A_black v_cat + A_white v_dog) ≈ v_cat

On the contrary, knowing the noun vectors, it is not possible to extract back the adjective matrices. This is a strong limitation in terms of interpretability.
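A small NumPy sketch of the two-word Lexical Function model and of the noun recovery discussed above is given below; adjective matrices and noun vectors are random stand-ins for learned representations.

```python
# Hedged sketch: adjectives as matrices, nouns as vectors, and noun recovery
# by inverting the adjective matrix.
import numpy as np

rng = np.random.default_rng(0)
d = 50

A_black, A_white = rng.normal(size=(d, d)), rng.normal(size=(d, d))   # adjectives
v_cat, v_dog = rng.normal(size=d), rng.normal(size=d)                 # nouns

black_cat = A_black @ v_cat            # composed representation of "black cat"

# knowing the adjective, the noun is recoverable (exactly, if A_black is invertible)
v_cat_hat = np.linalg.inv(A_black) @ black_cat
print(np.allclose(v_cat_hat, v_cat))   # True (up to numerical error)

# the reverse is not possible: v_cat alone does not determine A_black
```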

Lexical Functional models for larger structures are composable but not interpretable at all. In fact, in general these models have tensors in the middle and these tensors are the only parts that can be inverted. Hence, in general these models are not interpretable. However, using the convolution conjecture [Zanzotto et al. (2015)], it is possible to know whether subparts are contained in some final vectors obtained with these models.

5.2 Holographic Representations

Holographic reduced representations (HRRs) are models-that-compose expressly designed to be interpretable [Plate (1995), Neumann (2001)]. In fact, these models encode flat structures representing assertions, and these assertions should then be searchable in order to recover the pieces of knowledge they contain. For example, these representations have been used to encode logical propositions. In this case, each atomic element has an associated vector and the vector for the compound is obtained by combining these vectors. The major concern here is to build encoding functions that can be decoded, that is, it should be possible to retrieve the composing elements from the final distributed vector of the compound.

In HRRs, nearly orthogonal unit vectors [Johnson and Lindenstrauss (1984)] for basic symbols, circular convolution and circular correlation guarantee composability and interpretability. HRRs are the extension of Random Indexing (see Sec. 3.1.2) to structures. Hence, symbols are represented with vectors sampled from a multivariate normal distribution N(0, (1/d)I). The composition function is the circular convolution, indicated as ⊛ and defined as:

(a ⊛ b)_j = Σ_{k=0}^{d−1} a_k b_{j−k}

where subscripts are modulo d. Circular convolution is commutative and bilinear. This operation can also be computed using circulant matrices:

a ⊛ b = A b = B a

where A and B are the circulant matrices of the vectors a and b. Given the properties of the vectors a and b, the matrices A and B have the property in Eq. 5. Hence, circular convolution is approximately invertible with the circular correlation function (⊕) defined as follows:

(a ⊕ b)_j = Σ_{k=0}^{d−1} a_k b_{j+k}

where again subscripts are modulo d. Circular correlation is related to the inverse matrices of circulant matrices, that is, a ⊕ b = Aᵀ b with Aᵀ ≈ A⁻¹. In the decoding with ⊕, parts of the structures can be derived in an approximated way, that is:

a ⊕ (a ⊛ b) ≈ b

Hence, circular convolution and circular correlation allow us to build interpretable representations: for example, having the vectors a and b for two basic symbols, encoding the pair as a ⊛ b and decoding with a ⊕ (a ⊛ b) produces a vector that approximates the original vector b.

The “invertibility” of these representations is important because it allows us not to consider these representations as black boxes.
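The encode/decode cycle of HRRs can be illustrated with the following NumPy sketch, which computes circular convolution and correlation in the frequency domain; the dimension is an arbitrary choice and the decoded vector is only a noisy approximation, as discussed above.

```python
# Minimal HRR sketch: circular convolution composes two random vectors and
# circular correlation decodes one of them back, up to noise.
import numpy as np

rng = np.random.default_rng(0)
d = 2048
a = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)   # symbol vectors ~ N(0, I/d)
b = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)

def cconv(x, y):
    # circular convolution, computed in the frequency domain
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def ccorr(x, y):
    # circular correlation, the approximate inverse of circular convolution
    return np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)))

c = cconv(a, b)           # encode the pair (a, b)
b_hat = ccorr(a, c)       # decode: a noisy version of b

cos = b @ b_hat / (np.linalg.norm(b) * np.linalg.norm(b_hat))
print(round(cos, 2))      # around 0.7, far above chance: b is recognizable
```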

However, holographic representations have severe limitations, as they can encode and decode only simple, flat structures. In fact, these representations are based on circular convolution, which is a commutative function; this implies that the representation cannot keep track of compositions of objects where the order matters, and this phenomenon is particularly important when encoding nested structures.

Distributed trees [Zanzotto and Dell’Arciprete (2012b)] have shown that the principles expressed in holographic representations can be applied to encode larger structures, overcoming the problem of reliably encoding the order in which elements are composed by using the shuffled circular convolution function as the composition operator. Distributed trees are encoding functions that transform trees into low-dimensional vectors that also contain the encoding of every substructure of the tree. Thus, these distributed trees are particularly attractive as they can be used to represent structures in linear learning machines, which are computationally efficient.

Distributed trees and, in particular, distributed smoothed trees [Ferrone and Zanzotto (2014)] represent an interesting middle way between compositional distributional semantic models and holographic representations.

5.3 Compositional Models in Neural Networks

When neural networks are applied to sequences or structured data, they are in fact models-that-compose. However, the resulting models-that-compose are not interpretable. In fact, composition functions are trained on specific tasks and not on the possibility of reconstructing the structured input, except in some rare cases [Socher et al. (2011)]. The inputs of these networks are sequences or structured data where basic symbols are embedded in local representations or in distributed representations obtained with word embedding (see Sec. 4.3). The outputs are distributed vectors derived for specific tasks. Hence, these models-that-compose are not interpretable in our sense, because of their final aim and because non-linear functions are adopted in the specification of the neural networks.

In this section, we revise some prominent neural network architectures that can be interpreted as models-that-compose: the recurrent neural networks [Krizhevsky et al. (2012), He et al. (2016), Vinyals et al. (2015a), Graves (2013)] and the recursive neural networks [Socher et al. (2012)].

5.3.1 Recurrent Neural Networks

Recurrent neural networks form a very broad family of neural network architectures that deal with the representation (and processing) of complex objects. At its core, a recurrent neural network (RNN) is a network which takes as input the current element in the sequence and processes it based on an internal state which depends on previous inputs. At the moment the most powerful network architectures are convolutional neural networks [Krizhevsky et al. (2012), He et al. (2016)] for vision-related tasks and LSTM-type networks for language-related tasks [Vinyals et al. (2015a), Graves (2013)].

A recurrent neural network takes as input a sequence x = (x_1, …, x_T) and produces as output a single vector h_T which is a representation of the entire sequence. At each step t (we can usually think of this as a time step, but not all applications of recurrent neural networks have a temporal interpretation) the network takes as input the current element x_t and the previous output h_{t−1}, and performs the following operation to produce the current output h_t:

h_t = σ(W [x_t ; h_{t−1}] + b)        (6)

where σ is a non-linear function such as the logistic function or the hyperbolic tangent, and [x_t ; h_{t−1}] denotes the concatenation of the vectors x_t and h_{t−1}. The parameters of the model are the matrix W and the bias vector b.

Hence, a recurrent neural network is effectively a learned composition function, which dynamically depends on its current input, on all of its previous inputs and also on the dataset on which it is trained. However, this learned composition function is basically impossible to analyze or interpret in any way. Sometimes an “intuitive” explanation is given about what the learned weights represent, with some weights representing information that must be remembered or forgotten.
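The following NumPy sketch shows the recurrent composition function of Eq. 6 applied to the one-hot encoded sequence a cat catches a mouse; the weights are random, untrained stand-ins, since the point here is only the shape of the computation.

```python
# Bare-bones sketch of the recurrent composition function of Eq. 6.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                                    # input size, hidden size

W = rng.normal(scale=0.1, size=(d, n + d))     # parameters of the model
b = np.zeros(d)

def rnn_step(x_t, h_prev):
    return np.tanh(W @ np.concatenate([x_t, h_prev]) + b)

# one-hot sequence "a cat catches a mouse" in the symbol order used earlier
# (mouse=0, cat=1, a=2, catches=3, ...)
sequence = [np.eye(n)[i] for i in (2, 1, 3, 2, 0)]
h = np.zeros(d)
for x_t in sequence:
    h = rnn_step(x_t, h)

print(h)    # a single vector representing the whole sequence
```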

Even more complex recurrent neural networks such as long short-term memory (LSTM) networks [Hochreiter and Schmidhuber (1997)] have the same problem of interpretability. LSTMs are a recent and successful way for neural networks to deal with longer sequences of input, overcoming some difficulties that RNNs face in the training phase. As with RNNs, an LSTM network takes as input a sequence and produces as output a single vector which is a representation of the entire sequence. At each step t the network takes as input the current element x_t and the previous output h_{t-1}, and performs the following operations to produce the current output h_t and update the internal state c_t:

f_t = \sigma(W_f [x_t ; h_{t-1}] + b_f)
i_t = \sigma(W_i [x_t ; h_{t-1}] + b_i)
o_t = \sigma(W_o [x_t ; h_{t-1}] + b_o)
\tilde{c}_t = \tanh(W_c [x_t ; h_{t-1}] + b_c)
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
h_t = o_t \odot \tanh(c_t)

where \odot stands for element-wise multiplication, and the parameters of the model are the matrices W_f, W_i, W_o, W_c and the bias vectors b_f, b_i, b_o, b_c.
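The gate equations above can be sketched as follows; again this is a minimal illustration in Python/NumPy, with hypothetical dimensions and random initializations rather than the weights of an actual trained LSTM.

```python
import numpy as np

d = 4
rng = np.random.default_rng(1)

def init():
    # one (matrix, bias) pair per gate; random values are purely illustrative
    return rng.standard_normal((d, 2 * d)) * 0.1, np.zeros(d)

(W_f, b_f), (W_i, b_i), (W_o, b_o), (W_c, b_c) = init(), init(), init(), init()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev):
    z = np.concatenate([x_t, h_prev])   # [x_t ; h_{t-1}]
    f_t = sigmoid(W_f @ z + b_f)        # forget gate
    i_t = sigmoid(W_i @ z + b_i)        # input gate
    o_t = sigmoid(W_o @ z + b_o)        # output gate
    c_tilde = np.tanh(W_c @ z + b_c)    # proposed new internal state
    c_t = f_t * c_prev + i_t * c_tilde  # element-wise update of the state
    h_t = o_t * np.tanh(c_t)            # modulated output
    return h_t, c_t

# Encode a (random) three-element sequence; h is the final representation.
h, c = np.zeros(d), np.zeros(d)
for x_t in [rng.standard_normal(d) for _ in range(3)]:
    h, c = lstm_step(x_t, h, c)
```

Even laid out this explicitly, nothing in the vectors h_t or c_t can be traced back to a specific input symbol.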

Generally, the interpretation offered for recurrent neural networks is functional or “psychological” and is not based on the content of the intermediate vectors. For example, an interpretation of the parameters of an LSTM is the following:

  • f_t is the forget gate: at each step it considers the new input and the output computed so far to decide which information in the internal state must be forgotten (that is, set to 0);
  • i_t is the input gate: it decides which positions in the internal state will be updated, and by how much;
  • \tilde{c}_t is the proposed new internal state, which is then used to update the internal state c_t in combination with the previous two gates;
  • o_t is the output gate: it decides how to modulate the internal state to produce the output h_t.

These models-that-compose have high performance on final tasks but are definitely not interpretable.

5.3.2 Recursive Neural Networks

Figure 3: A simple binary tree: [S [cows] [VP [eat] [NP [animal] [extracts]]]]
Figure 4: Recursive Neural Networks

The last class of models-that-compose that we present is the class of recursive neural networks [Socher et al. (2012)]. These networks are applied to data structures such as trees and are in fact applied recursively on the structure. Generally, the aim of the network is a final task such as sentiment analysis or paraphrase detection.

A recursive neural network is then a basic block (see Fig. 4) which is recursively applied on trees like the one in Fig. 3. The formal definition is the following:

p = f(W [c_1 ; c_2])

where f is a component-wise sigmoid function or \tanh, c_1 and c_2 are the vectors of the two children of a node, and W is a matrix that maps the concatenation vector [c_1 ; c_2] back to a vector of the same dimension as c_1 and c_2.

This method deals naturally with recursion: given a binary parse tree of a sentence s, the algorithm creates vector (and matrix) representations for each node, starting from the terminal nodes. Words are represented by distributed representations or local representations. For example, the tree in Fig. 3 is processed by the recursive network in the following way. First, the network in Fig. 4 is applied to the pair (animal, extracts) and the vector for the node NP is obtained. Then, the network is applied to this result and to eat, obtaining the vector for the node VP, and so on.
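The following is a minimal sketch, under purely illustrative assumptions (random word vectors, a single randomly initialized matrix W, f = \tanh), of how the recursive network composes the tree of Fig. 3 bottom-up.

```python
import numpy as np

d = 4
rng = np.random.default_rng(2)
W = rng.standard_normal((d, 2 * d)) * 0.1   # the single composition matrix W

# Hypothetical word vectors (in practice, word embeddings or local representations).
emb = {w: rng.standard_normal(d) for w in ["cows", "eat", "animal", "extracts"]}

def compose(c1, c2):
    """p = tanh(W [c1 ; c2]): compose two child vectors into the parent vector."""
    return np.tanh(W @ np.concatenate([c1, c2]))

# Bottom-up traversal of [S [cows] [VP [eat] [NP [animal] [extracts]]]].
p_np = compose(emb["animal"], emb["extracts"])  # NP node
p_vp = compose(emb["eat"], p_np)                # VP node
p_s  = compose(emb["cows"], p_vp)               # S node: vector for the whole sentence
```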

Recursive neural networks are not easily interpretable, even though they are quite similar to the additive compositional distributional semantic models presented in Sec. 5.1.1. In fact, it is the non-linear function f that makes the final vectors less interpretable.

6 Conclusions

Natural language and symbols are intimately correlated. Thinking of natural language understanding systems that are not based on symbols seems to be heresy. However, recent advances in machine learning (ML) and in natural language processing (NLP) seem to contradict the above intuition: symbols are fading away, erased by vectors or tensors called distributed and distributional representations.

We wrote this survey to show the unsurprising link between symbolic representations and distributed/distributional representations. This is the right time to revitalize the area of interpreting how symbols are represented inside neural networks. In our opinion, this survey will help in devising new deep neural networks that can exploit existing and novel symbolic models for classical natural language processing tasks. We believe that a clearer understanding of the strict link between distributed/distributional representations and symbols will certainly lead to radically new deep learning networks.

References

  • Achlioptas (2003) Dimitris Achlioptas. 2003. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of computer and System Sciences 66, 4 (2003), 671–687.
  • Agirre et al. (2013) Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic Textual Similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity. Association for Computational Linguistics, Atlanta, Georgia, USA, 32–43. http://www.aclweb.org/anthology/S13-1004
  • Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014).
  • Baroni et al. (2014) Marco Baroni, Raffaela Bernardi, and Roberto Zamparelli. 2014. Frege in space: A program of compositional distributional semantics. LiLT (Linguistic Issues in Language Technology) 9 (2014).
  • Baroni and Lenci (2010) Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Comput. Linguist. 36, 4 (Dec. 2010), 673–721. DOI:http://dx.doi.org/10.1162/coli_a_00016 
  • Baroni and Zamparelli (2010) Marco Baroni and Roberto Zamparelli. 2010. Nouns are Vectors, Adjectives are Matrices: Representing Adjective-Noun Constructions in Semantic Space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Cambridge, MA, 1183–1193. http://www.aclweb.org/anthology/D10-1115
  • Belkin and Niyogi (2001) Mikhail Belkin and Partha Niyogi. 2001. Laplacian eigenmaps and spectral techniques for embedding and clustering.. In NIPS, Vol. 14. 585–591.
  • Bellman and Corporation (1957) R. Bellman and Rand Corporation. 1957. Dynamic Programming. Princeton University Press. https://books.google.it/books?id=wdtoPwAACAAJ
  • Bingham and Mannila (2001) Ella Bingham and Heikki Mannila. 2001. Random projection in dimensionality reduction: applications to image and text data. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 245–250.
  • Cancedda et al. (2003) Nicola Cancedda, Eric Gaussier, Cyril Goutte, and Jean Michel Renders. 2003. Word Sequence Kernels. J. Mach. Learn. Res. 3 (March 2003), 1059–1082. http://dl.acm.org/citation.cfm?id=944919.944963
  • Chetlur et al. (2014) Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. 2014. cudnn: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759 (2014).
  • Chomsky (1957) Noam Chomsky. 1957. Aspect of Syntax Theory. MIT Press, Cambridge, Massachusetts.
  • Clark et al. (2008) Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. 2008. A Compositional Distributional Model of Meaning. Proceedings of the Second Symposium on Quantum Interaction (QI-2008) (2008), 133–140.
  • Coecke et al. (2010) Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical Foundations for a Compositional Distributional Model of Meaning. CoRR abs/1003.4394 (2010).
  • Collins and Duffy (2001) Michael Collins and Nigel Duffy. 2001. Convolution Kernels for Natural Language. In NIPS. 625–632.
  • Cristianini and Shawe-Taylor (2000) Nello Cristianini and John Shawe-Taylor. 2000. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press. http://www.amazon.ca/exec/obidos/redirect?tag=citeulike09-20&path=ASIN/0521780195
  • Cui et al. (2015) Henggang Cui, Gregory R Ganger, and Phillip B Gibbons. 2015. Scalable deep learning on distributed GPUs with a GPU-specialized parameter server. Technical Report. CMU PDL Technical Report (CMU-PDL-15-107).
  • Dagan et al. (2013) Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing Textual Entailment: Models and Applications. Morgan & Claypool Publishers. 1–220 pages.
  • Daum and Huang (2003) Fred Daum and Jim Huang. 2003. Curse of dimensionality and particle filters. In Aerospace Conference, 2003. Proceedings. 2003 IEEE, Vol. 4. IEEE, 4_1979–4_1993.
  • Ferrone and Zanzotto (2014) Lorenzo Ferrone and Fabio Massimo Zanzotto. 2014. Towards Syntax-aware Compositional Distributional Semantic Models. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Dublin City University and Association for Computational Linguistics, Dublin, Ireland, 721–730. http://www.aclweb.org/anthology/C14-1068
  • Ferrone et al. (2015) Lorenzo Ferrone, Fabio Massimo Zanzotto, and Xavier Carreras. 2015. Decoding Distributed Tree Structures. In Statistical Language and Speech Processing - Third International Conference, SLSP 2015, Budapest, Hungary, November 24-26, 2015, Proceedings. 73–83. DOI:http://dx.doi.org/10.1007/978-3-319-25789-1_8 
  • Firth (1957) John R. Firth. 1957. Papers in Linguistics. Oxford University Press., London.
  • Fodor (2002) Imola Fodor. 2002. A Survey of Dimension Reduction Techniques. Technical Report.
  • Frege (1884) Gottlob Frege. 1884. Die Grundlagen der Arithmetik (The Foundations of Arithmetic): eine logisch-mathematische Untersuchung über den Begriff der Zahl. Breslau.
  • Friedman (1997) Jerome H Friedman. 1997. On bias, variance, 0/1—loss, and the curse-of-dimensionality. Data mining and knowledge discovery 1, 1 (1997), 55–77.
  • Goldberg and Levy (2014) Yoav Goldberg and Omer Levy. 2014. word2vec Explained: deriving Mikolov et al.’s negative-sampling word-embedding method. arXiv preprint arXiv:1402.3722 (2014).
  • Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. 2672–2680.
  • Graves (2013) Alex Graves. 2013. Generating Sequences With Recurrent Neural Networks. CoRR abs/1308.0850 (2013). http://arxiv.org/abs/1308.0850
  • Grefenstette and Sadrzadeh (2011) Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP ’11). Association for Computational Linguistics, Stroudsburg, PA, USA, 1394–1404. http://dl.acm.org/citation.cfm?id=2145432.2145580
  • Guevara (2010) Emiliano Guevara. 2010. A Regression Model of Adjective-Noun Compositionality in Distributional Semantics. In Proceedings of the 2010 Workshop on GEometrical Models of Natural Language Semantics. Association for Computational Linguistics, Uppsala, Sweden, 33–37. http://www.aclweb.org/anthology/W10-2805
  • Harris (1964) Zellig Harris. 1964. Distributional Structure. In The Philosophy of Linguistics, Jerrold J. Katz and Jerry A. Fodor (Eds.). Oxford University Press, New York.
  • Haussler (1999) David Haussler. 1999. Convolution kernels on discrete structures. Technical Report. University of California at Santa Cruz. http://www.cbse.ucsc.edu/staff/haussler_pubs/convolutions.pdf
  • He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027 (2016).
  • Hinton et al. (1986) G. E. Hinton, J. L. McClelland, and D. E. Rumelhart. 1986. Distributed representations. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations, D. E. Rumelhart and J. L. McClelland (Eds.). MIT Press, Cambridge, MA.
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9, 8 (1997), 1735–1780.
  • Johnson and Lindenstrauss (1984) W. Johnson and J. Lindenstrauss. 1984. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math. 26 (1984), 189–206.
  • Kalchbrenner and Blunsom (2013) Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Convolutional Neural Networks for Discourse Compositionality. Proceedings of the 2013 Workshop on Continuous Vector Space Models and their Compositionality (2013).
  • Keogh and Mueen (2011) Eamonn Keogh and Abdullah Mueen. 2011. Curse of dimensionality. In Encyclopedia of Machine Learning. Springer, 257–258.
  • Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. 1097–1105.
  • Landauer and Dumais (1997) Thomas K. Landauer and Susan T. Dumais. 1997. A Solution to Plato’s Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge. Psychological Review 104, 2 (April 1997), 211–240. http://www.sciencedirect.com/science/article/B6X04-46P4NMC-Y/2/f82804d09e673bd79321d50d30279792
  • LeCun et al. (2015) Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436–444.
  • Liou et al. (2014) Cheng-Yuan Liou, Wei-Chen Cheng, Jiun-Wei Liou, and Daw-Ran Liou. 2014. Autoencoder for words. Neurocomputing 139 (2014), 84 – 96. DOI:http://dx.doi.org/10.1016/j.neucom.2013.09.055 
  • Lodhi et al. (2002) Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. J. Mach. Learn. Res. 2 (March 2002), 419–444. DOI:http://dx.doi.org/10.1162/153244302760200687 
  • Markovsky (2012) Ivan Markovsky. 2012. Low Rank Approximation: Algorithms, Implementation, Applications. (January 2012). http://eprints.soton.ac.uk/273101/
  • Masci et al. (2011) Jonathan Masci, Ueli Meier, Dan Cireşan, and Jürgen Schmidhuber. 2011. Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks. Springer, 52–59.
  • Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. CoRR abs/1301.3781 (2013). http://arxiv.org/abs/1301.3781
  • Mitchell and Lapata (2008) Jeff Mitchell and Mirella Lapata. 2008. Vector-based Models of Semantic Composition. In Proceedings of ACL-08: HLT. Association for Computational Linguistics, Columbus, Ohio, 236–244. http://www.aclweb.org/anthology/P/P08/P08-1028
  • Mitchell and Lapata (2010) Jeff Mitchell and Mirella Lapata. 2010. Composition in Distributional Models of Semantics. Cognitive Science (2010). DOI:http://dx.doi.org/10.1111/j.1551-6709.2010.01106.x 
  • Mnih et al. (2013) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013).
  • Montague (1974) Richard Montague. 1974. English as a Formal Language. In Formal Philosophy: Selected Papers of Richard Montague, Richmond Thomason (Ed.). Yale University Press, New Haven, 188–221.
  • Neumann (2001) Jane Neumann. 2001. Holistic processing of hierarchical structures in connectionist networks. Ph.D. Dissertation. University of Edinburgh.
  • Pado and Lapata (2007) Sebastian Pado and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics 33, 2 (2007), 161–199.
  • Pearson (1901) Karl Pearson. 1901. Principal components analysis. The London, Edinburgh and Dublin Philosophical Magazine and Journal 6, 2 (1901), 566.
  • Plate (1994) T. A. Plate. 1994. Distributed Representations and Nested Compositional Structure. Ph.D. Dissertation. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.5527
  • Plate (1995) T. A. Plate. 1995. Holographic reduced representations. IEEE Transactions on Neural Networks 6, 3 (1995), 623–641. DOI:http://dx.doi.org/10.1109/72.377968 
  • Rosenblatt (1958) Frank Rosenblatt. 1958. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Reviews 65, 6 (November 1958), 386–408. http://www.ncbi.nlm.nih.gov/pubmed/13602029
  • Rothenhäusler and Schütze (2009) Klaus Rothenhäusler and Hinrich Schütze. 2009. Unsupervised Classification with Dependency Based Word Spaces. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics (GEMS ’09). Association for Computational Linguistics, Stroudsburg, PA, USA, 17–24. http://dl.acm.org/citation.cfm?id=1705415.1705418
  • Sahlgren (2005) Magnus Sahlgren. 2005. An introduction to random indexing. In Proceedings of the Methods and Applications of Semantic Indexing Workshop at the 7th International Conference on Terminology and Knowledge Engineering TKE. Copenhagen, Denmark.
  • Salton (1989) G. Salton. 1989. Automatic text processing: the transformation, analysis and retrieval of information by computer. Addison-Wesley.
  • Schmidhuber (2015) Jürgen Schmidhuber. 2015. Deep learning in neural networks: An overview. Neural Networks 61 (2015), 85–117.
  • Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, and others. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529, 7587 (2016), 484–489.
  • Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014).
  • Socher et al. (2011) Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. In Advances in Neural Information Processing Systems 24.
  • Socher et al. (2012) Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic Compositionality Through Recursive Matrix-Vector Spaces. In Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing (EMNLP).
  • Sorzano et al. (2014) Carlos Oscar Sánchez Sorzano, Javier Vargas, and A Pascual Montano. 2014. A survey of dimensionality reduction techniques. arXiv preprint arXiv:1403.2877 (2014).
  • Turney (2006) Peter D. Turney. 2006. Similarity of Semantic Relations. Comput. Linguist. 32, 3 (2006), 379–416. DOI:http://dx.doi.org/10.1162/coli.2006.32.3.379 
  • Turney and Pantel (2010) Peter D. Turney and Patrick Pantel. 2010. From Frequency to Meaning: Vector Space Models of Semantics. J. Artif. Intell. Res. (JAIR) 37 (2010), 141–188.
  • Vincent et al. (2008) Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning. ACM, 1096–1103.
  • Vincent et al. (2010) Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. J. Mach. Learn. Res. 11 (Dec. 2010), 3371–3408. http://dl.acm.org/citation.cfm?id=1756006.1953039
  • Vinyals et al. (2015a) Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015a. Grammar as a Foreign Language. In Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (Eds.). Curran Associates, Inc., 2755–2763. http://papers.nips.cc/paper/5635-grammar-as-a-foreign-language.pdf
  • Vinyals et al. (2015b) Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015b. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 3156–3164.
  • Weiss et al. (2015) David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. arXiv preprint arXiv:1506.06158 (2015).
  • Werbos (1974) Paul Werbos. 1974. Beyond regression: New tools for prediction and analysis in the behavioral sciences. (1974).
  • Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044 2, 3 (2015), 5.
  • Zanzotto and Dell’Arciprete (2012a) F.M. Zanzotto and L. Dell’Arciprete. 2012a. Distributed tree kernels, In Proceedings of International Conference on Machine Learning. Proceedings of the 29th International Conference on Machine Learning, ICML 2012 (2012), 193–200. http://www.scopus.com/inward/record.url?eid=2-s2.0-84867126965&partnerID=40&md5=0d51c0ed7070baf730f887c818a8c177
  • Zanzotto and Dell’Arciprete (2012b) Fabio Massimo Zanzotto and Lorenzo Dell’Arciprete. 2012b. Distributed Tree Kernels. In Proceedings of International Conference on Machine Learning. –.
  • Zanzotto et al. (2015) Fabio Massimo Zanzotto, Lorenzo Ferrone, and Marco Baroni. 2015. When the Whole is Not Greater Than the Combination of Its Parts: A ”Decompositional” Look at Compositional Distributional Semantics. Comput. Linguist. 41, 1 (March 2015), 165–173. DOI:http://dx.doi.org/10.1162/COLI_a_00215 
  • Zanzotto et al. (2010) Fabio Massimo Zanzotto, Ioannis Korkontzelos, Francesca Fallucchi, and Suresh Manandhar. 2010. Estimating Linear Models for Compositional Distributional Semantics. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING).
  • Zeiler and Fergus (2014) Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European Conference on Computer Vision. Springer, 818–833.
  • Zou et al. (2013) Will Y Zou, Richard Socher, Daniel M Cer, and Christopher D Manning. 2013. Bilingual Word Embeddings for Phrase-Based Machine Translation.. In EMNLP. 1393–1398.