Predicting the Semantic Textual Similarity with Siamese CNN and LSTM

Semantic Textual Similarity (STS) is the basis of many applications in Natural Language Processing (NLP). Our system combines convolutional and recurrent neural networks to measure the semantic similarity of sentences. It uses a convolutional network to account for the local context of words and an LSTM to consider the global context of sentences. This combination of networks helps to preserve the relevant information of sentences and improves the calculation of sentence similarity. Our model achieves good results and is competitive with the best state-of-the-art systems.

1 Introduction

Semantic Textual Similarity (STS) is an important task in Natural Language Processing (NLP) applications such as information retrieval, classification, extraction, question answering, and plagiarism detection. The STS task measures the degree of similarity between two texts and can be expressed as follows: given two sentences, a system returns a continuous score on a scale from 1 to 5, with 1 indicating that the semantics of the sentences are completely independent and 5 meaning that they are semantically equivalent.

STS is a difficult problem since languages have numerous ambiguities and synonymous expressions, while sentences may have variable lengths and complex structures. Basic models, e.g. bag-of-words or TF-IDF, are therefore limited: they put aside the role played by word order and ignore syntactic as well as semantic relationships. Recent successes in sentence similarity have been obtained using Neural Networks (RNNs: Recurrent Neural Networks Siamese_LSTM; Kiros; Tai, and CNNs: Convolutional Neural Networks Similarity_Convolutional). Neural Networks (NNs) perform a deep analysis of sentences and words to better take into account both the semantics and the structure of sentences in order to predict sentence similarity.

In this paper, we describe our NN-based technique to measure similarity. First, we use a Siamese CNN to analyze the local context of the words in a sentence and to generate a representation of the relevance of each word and its neighborhood. Then, we use a Siamese LSTM to analyze the entire sentence based on its words and their local contexts. Finally, we predict the semantic similarity of a pair of sentences using the Manhattan distance.

We applied our framework to the SemEval STS data and obtained competitive results, demonstrating that our model provides useful information to improve the sentence analysis.

This paper is organized as follows: we give an overview of related work on STS in Section 2. Next, we detail our approach in Section 3. The experimental setup and results are presented in Sections 4 and 5, respectively. Finally, we give our conclusion and final remarks in Section 6.

2 Related Work

To deal with the STS task, previous studies have resorted to various features (e.g. word overlap, synonyms/antonyms), linguistic resources (e.g. WordNet and pre-trained word embeddings) and a wide assortment of learning algorithms (e.g. Support Vector Regression (SVR), regression functions, and NNs). Among these works, several techniques extract multiple features from sentences and apply regression functions to estimate the similarity scores lai:2014; zhao:2014; bjerva:2014; Severyn. lai:2014 analyzed distinctive word relations (e.g. synonyms, antonyms, and hypernyms) with features based on co-occurrence counts with other words and similarities between image captions. zhao:2014 predicted sentence similarity from syntactic relationships, different content similarities, length, and string features. bjerva:2014 also utilized a regression algorithm to predict STS from various features (WordNet, word overlap, and so forth). Finally, Severyn combined relational syntactic structures with SVR.

The development of NNs has improved the results of many NLP applications and especially the STS task Similarity_Convolutional; Siamese_LSTM; Tsubaki; Rychalska. Architectures such as RNNs and CNNs further improve the semantic analysis and the prediction of sentence relatedness.

RNNs differ from other NN models in their ability to process sequential information. They update a memory cell to make sense of the data read in a sentence over time. Rychalska used a Recursive AutoEncoder (RAE) and a WordNet award system to produce sentence embeddings. They combined these embeddings with a Support Vector Machine (SVM) classifier to compute a semantic relatedness score. Long Short-Term Memory (LSTM) networks enhance RNNs to handle long-term dependencies Siamese_LSTM; greff:2015; Tai. The LSTM architecture is composed of a memory cell and non-linear gating units that update its state over time and regulate the information flow into and out of the cell. Siamese_LSTM used a Siamese LSTM to encode sentences from pre-trained word embedding vectors. Siamese LSTMs use the same weights to encode both sentences and thus produce comparable representations for similar sentences. Then, they predicted the similarity of a pair of sentences using the Manhattan distance between the sentence representations. Tai introduced the Tree-LSTM, a generalization of the LSTM to tree-structured network topologies. They used this Tree-LSTM to encode a pair of sentences and to predict their similarity with an NN that analyzes the distance and the angle between the sentence embeddings.

CNNs have achieved excellent results in classification Kim:2014 and other NLP tasks Collobert:2011. Similarity_Convolutional generated sentence embeddings using a Siamese CNN architecture with various convolution and pooling operations to extract different granularities of information. Their convolution uses filters that analyze entire word embeddings and each dimension of the word embeddings with multiple window sizes. To the output of the convolution operation, they applied several pooling types (max, mean, and min). Finally, they predicted sentence similarity from multiple measurements (horizontal and vertical comparisons) that compare local regions of the sentence representations.

In this work, we combine the ideas examined in Siamese_LSTM and Kim:2014 to produce more accurate semantic sentence embeddings. The next section presents our model and its characteristics with respect to previous work.

3 Our model

A sentence is composed of words, which can form phrases and clauses. Examining a sentence and its components helps us to comprehend its meaning. NNs are structures that can inspect relationships between words from multiple points of view. On the one hand, LSTMs can recognize and process the semantics of a sentence by examining its words through time. They update their state to capture the gist of the sentence (global context) following the order of the words. In this process, LSTMs filter out unimportant data by retaining only the main information. On the other hand, CNNs use layers with convolution filters that are applied to local features Kim:2014. They enable the analysis of a sentence from multiple perspectives (filters). This type of NN is not as sensitive to sentence length as LSTMs, since CNNs examine all the words of the sentence together. Nonetheless, CNNs do not consider the order of words in their analysis, so these structures cannot investigate sequential relationships in the sentence.

Unlike Siamese_LSTM, which only analyzes the general context of words, and Similarity_Convolutional, which does not consider the order of words in the sentences, we analyze words from two perspectives: their general and local contexts. Words are considered through time from the general information of a word (its word embedding) and its specific semantic and syntactic features (local context) based on its previous and following words. We apply a CNN to investigate the local context of each word in a sentence. The CNN analyzes all the words of the local context together and generates their representation as a single structure. Then, we use an LSTM to examine the words of the sentence one by one (Figure 1). Our NN has a Siamese structure Siamese_LSTM; Similarity_Convolutional, i.e. our $CNN_a$ and our $LSTM_a$ share their weights with our $CNN_b$ and our $LSTM_b$, respectively. The following subsection describes our CNN, our LSTM, and our similarity metric to predict the sentence similarity.

Figure 1: Siamese CNN+LSTM to calculate the similarity of a pair of sentences.

3.1 Neural Network Architecture

Kim:2014 trained a simple CNN on top of pre-trained word vectors for the sentence classification task. This simple model, composed of a single convolution layer, achieved excellent results on multiple benchmarks. Inspired by these good results, we use a Siamese CNN to generate a local context for each word in a sentence from its previous and following words. We utilize pre-trained word embeddings (publicly available at code.google.com/p/word2vec) to represent these words. Let $x_i \in \mathbb{R}^d$ be the $d$-dimensional word vector corresponding to the $i$-th word in a sentence. A local context of length $h$ (e.g. $h = 5$), centered on the $i$-th word, is represented as:

$x_{i-\lfloor h/2 \rfloor \,:\, i+\lfloor h/2 \rfloor} = x_{i-\lfloor h/2 \rfloor} \oplus \cdots \oplus x_i \oplus \cdots \oplus x_{i+\lfloor h/2 \rfloor}$   (1)

where $\oplus$ is the concatenation operator. Our convolution operation involves a filter $w \in \mathbb{R}^{hd}$, which is applied to a window of $h$ words to produce a local context. In more detail, our CNN generates the local context $lc_i$ of the word $i$ by:

$lc_i = \tanh\left( w \cdot x_{i-\lfloor h/2 \rfloor \,:\, i+\lfloor h/2 \rfloor} + b \right)$   (2)

where $b$ is a bias term and $\tanh$ is the hyperbolic tangent function. This filter is applied to every window of $h$ words in the sentence to deliver a local context for each word.
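For illustration, the following NumPy sketch (not the authors' code; the number of filters and the zero padding at sentence borders are assumptions made here) computes the local context of one word following equations (1) and (2):

```python
import numpy as np

d = 300           # dimension of the word2vec embeddings
h = 5             # length of the local context (window of words)
n_filters = 4     # number of convolution filters (illustrative value)

rng = np.random.default_rng(0)
sentence = rng.normal(size=(10, d))       # toy sentence of 10 word vectors x_1 .. x_10
W = rng.normal(size=(n_filters, h * d))   # one filter per row, each applied to h words
b = np.zeros(n_filters)                   # bias terms

def local_context(i):
    """Eq. (1): concatenate the window centered on word i (zero padding at the
    sentence borders), then Eq. (2): apply each filter and the tanh function."""
    half = h // 2
    window = [sentence[j] if 0 <= j < len(sentence) else np.zeros(d)
              for j in range(i - half, i + half + 1)]
    x = np.concatenate(window)            # x_{i-2} (+) ... (+) x_{i+2}
    return np.tanh(W @ x + b)             # one value per filter -> local context of word i

print(local_context(3))                   # local context of the 4th word
```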

In order to analyze the general and the local contexts of the word $i$, we concatenate its pre-trained word embedding $x_i$ (general semantic and syntactic features learned on a large corpus) and its local context $lc_i$. Our LSTM updates its state and produces an output at each time step in the sentence using the equations described in Siamese_LSTM. The last output of our LSTM represents the meaning of the sentence.
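The overall encoder can be sketched as follows in PyTorch; this is a hedged illustration of the architecture described above, with layer sizes and names chosen for the example rather than taken from the authors' implementation:

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """CNN local contexts + LSTM global context (illustrative sizes)."""
    def __init__(self, emb_dim=300, n_filters=300, window=5, hidden=50):
        super().__init__()
        # Conv1d with padding keeps one local-context vector per word.
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=window,
                              padding=window // 2)
        self.lstm = nn.LSTM(emb_dim + n_filters, hidden, batch_first=True)

    def forward(self, embeddings):          # embeddings: (batch, seq_len, emb_dim)
        x = embeddings.transpose(1, 2)      # (batch, emb_dim, seq_len) for Conv1d
        local = torch.tanh(self.conv(x))    # (batch, n_filters, seq_len), eq. (2)
        local = local.transpose(1, 2)       # back to (batch, seq_len, n_filters)
        z = torch.cat([embeddings, local], dim=-1)   # general + local context
        _, (h_n, _) = self.lstm(z)          # last hidden state summarizes the sentence
        return h_n[-1]                      # (batch, hidden)

# Siamese use: the same encoder (shared weights) encodes both sentences.
encoder = SentenceEncoder()
h_a = encoder(torch.randn(2, 12, 300))      # toy batch of 2 sentences, 12 words each
h_b = encoder(torch.randn(2, 12, 300))
```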

Diverse similarity metrics (cosine, Euclidean, and Manhattan distances) were tested and we obtained the best results with the Manhattan distance, used as $g(h_a, h_b) = \exp(-\lVert h_a - h_b \rVert_1)$, where $h_a$ and $h_b$ are the last LSTM outputs of the two sentences. Since these scores do not lie in the similarity range (1-5), we apply as a post-processing step a regression method based on local regression with a fixed bandwidth to project our predictions onto the correct scale, similarly to Li2003.
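A minimal sketch of this similarity and of a crude projection onto the 1-5 scale is given below; note that the paper uses local regression for the projection, which is replaced here by a simple linear map for illustration:

```python
import torch

def manhattan_similarity(h_a, h_b):
    """exp(-||h_a - h_b||_1), in (0, 1]; identical representations give 1."""
    return torch.exp(-torch.sum(torch.abs(h_a - h_b), dim=-1))

# Toy sentence representations standing in for the two LSTM outputs.
h_a, h_b = torch.randn(4, 50), torch.randn(4, 50)
sim = manhattan_similarity(h_a, h_b)

# Crude linear stand-in for the post-processing projection onto the 1-5 scale;
# the paper fits a local regression for this step, which is omitted here.
sts_score = 1.0 + 4.0 * sim
```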

4 Experimental Setup

We use the SICK dataset to analyze and test the performance of our system. This dataset contains 9,927 sentence pairs sick and we split it into 4,927/2,000/3,000 pairs for training/validation/test. Each sentence pair is annotated with a relatedness label in [1, 5] corresponding to the average relatedness judged by 10 different individuals. The gold relatedness scores are distributed as follows: 923 pairs within the [1,2) range, 1,373 pairs within the [2,3) range, 3,872 pairs within the [3,4) range, and 3,672 pairs within the [4,5] range.
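As a reference, the split and score distribution above could be reproduced along the following lines; the file name and column names follow the usual SICK distribution and are assumptions here:

```python
import pandas as pd

# SICK is commonly distributed as a tab-separated file (file and column names assumed).
sick = pd.read_csv("SICK.txt", sep="\t")
pairs = sick[["sentence_A", "sentence_B", "relatedness_score"]]

# Split used in the paper: 4,927 training / 2,000 validation / 3,000 test pairs.
train = pairs.iloc[:4927]
valid = pairs.iloc[4927:6927]
test  = pairs.iloc[6927:9927]

# Distribution of gold relatedness scores per interval, as reported in the text
# (the last bin edge is slightly above 5 so that scores of exactly 5 are included).
bins = pd.cut(pairs["relatedness_score"], bins=[1, 2, 3, 4, 5.01], right=False)
print(bins.value_counts().sort_index())
```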

We initialize our CNN and our LSTM weights with small random Gaussian entries. Our CNN uses multiple convolution filters and our LSTM has 50-dimensional hidden representations $h_t$ and memory cells $c_t$. We use a forget bias of 2.5 to model long-range dependencies, the Adadelta method to optimize the parameters, and a learning rate of 0.01. We did not identify any improvement with deep LSTMs because of the small amount of data. Like Siamese_LSTM, we also augmented our training dataset and pre-trained our network using the dataset of the SemEval 2013 STS task.
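A hedged sketch of this training configuration is shown below; the LSTM input size and the initialization standard deviation are assumed values:

```python
import torch
import torch.nn as nn

# 50-dimensional hidden representations and memory cells; the input size
# (word embedding + local context) is an assumed value.
lstm = nn.LSTM(input_size=600, hidden_size=50, batch_first=True)

# Small random Gaussian initialization of the weight matrices (std chosen here).
for name, param in lstm.named_parameters():
    if "weight" in name:
        nn.init.normal_(param, mean=0.0, std=0.1)

# Forget-gate bias of 2.5 to help the model keep long-range dependencies.
# PyTorch stores the LSTM biases as [input, forget, cell, output] gate blocks.
hidden = lstm.hidden_size
for name, param in lstm.named_parameters():
    if "bias" in name:
        param.data[hidden:2 * hidden].fill_(2.5)

# Adadelta with a learning rate of 0.01, as in the paper.
optimizer = torch.optim.Adadelta(lstm.parameters(), lr=0.01)
```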

5 Results

In order to understand the relevance of the local context for sentence similarity, we investigated the original Siamese LSTM without local context and compared it with our method using various lengths for the local context: 3, 5, 7, and 9 (Table 1). The original Siamese LSTM analyzes a sentence considering only the general context of words. As expected, the analysis of both general and local contexts of words improved the sentence analysis, according to the Pearson and Spearman correlation coefficients and the Mean Squared Error (MSE) scores. Neither short nor long local contexts generated the best results, which shows that a short local context (3 words) does not capture enough information about the neighborhood of a word, while long local contexts (7 and 9 words) include irrelevant information.

Method                                            r        ρ        MSE
Siamese LSTM Siamese_LSTM                         0.8822   0.8345   0.2286
Siamese LSTM (publicly available version)*        0.8500   0.7860   0.3017
Siamese CNN (local context: 3) + Siamese LSTM     0.8536   0.7909   0.2915
Siamese CNN (local context: 5) + Siamese LSTM     0.8549   0.7933   0.2898
Siamese CNN (local context: 7) + Siamese LSTM     0.8540   0.7922   0.2911
Siamese CNN (local context: 9) + Siamese LSTM     0.8533   0.7890   0.2923
Non-Linear Similarity Tsubaki                     0.8480   0.7968   0.2904
Constituency Tree LSTM Tai                        0.8582   0.7966   0.2734
Skip-thought+COCO (Kiros et al. 2015)             0.8655   0.7995   0.2561
Dependency Tree LSTM Tai                          0.8676   0.8083   0.2532
ConvNets Similarity_Convolutional                 0.8686   0.8047   0.2606
* We used the public version of Siamese LSTM Siamese_LSTM available at https://github.com/aditya1503/Siamese-LSTM; however, we did not obtain the same results as those described in their paper.
Table 1: Pearson (r) and Spearman (ρ) correlation coefficients, and Mean Squared Error (MSE) for the test set of the STS task.
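For completeness, the three scores reported in Table 1 can be computed with standard libraries; the following sketch is not the original evaluation script:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(predictions, gold):
    """Pearson r, Spearman rho, and MSE, as reported in Table 1."""
    predictions = np.asarray(predictions, dtype=float)
    gold = np.asarray(gold, dtype=float)
    r, _ = pearsonr(predictions, gold)
    rho, _ = spearmanr(predictions, gold)
    mse = np.mean((predictions - gold) ** 2)
    return r, rho, mse
```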

The bottom part of Table 1 compares the results of our system with the best state-of-the-art systems. Although our method did not produce the best results, our system is among the top systems and improves upon the publicly available version of the original Siamese LSTM.

In order to illustrate how our local context acts on the sentence analysis, Table 2 shows word-level similarities for a pair of paraphrases: "Her life spanned years of incredible change for women." and "Mary lived through an era of liberating reform for women." For each pair of words taken from the two sentences, the similarity, measured as a cosine distance (the cosine distance between two vectors $u$ and $v$ is defined by $1 - \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$), is computed either from the general word embeddings (Table 2a) or from the local contexts of length 5 (Table 2b). The first thing to notice is that the two tables have different ranges of values because they each represent a different dimensional space; values should therefore be compared within each table. Table 2a shows that word embeddings preserve general semantic and syntactic relationships between words. In this case, words are closer to words with similar semantics (1-"Her", 2-"Mary" and 2-"women"; 1-"life" and 2-"lived"; 1-"change" and 2-"reform") and/or similar syntactic roles (1-"of" and 2-"for"). Table 2b highlights that the local context of a word carries semantic and syntactic features based on the words in its window; e.g. the nearest contexts to 1-"life" are 2-"Mary", 2-"lived", 2-"through" and 2-"women", since these local contexts are directly (2-"lived") or indirectly (2-"Mary", 2-"through" and 2-"women") semantically related. A similar analysis holds for the syntactic features of the local contexts, e.g. the nearest local contexts of 1-"for" are 2-"lived", 2-"of", 2-"for" and 2-"women". The relevance of the local context is strengthened when we analyze phrasal verbs or multi-word expressions, whose meaning depends strongly on their previous and following words.

Mary lived through an era of liberating reform for women
Her 0.77 0.93 0.90 0.81 1.04 0.92 0.95 0.91 0.80 0.80
life 0.91 0.70 0.89 0.90 0.82 1.00 0.71 0.86 0.88 0.86
spanned 0.88 0.76 0.81 1.01 0.80 0.85 0.92 1.00 0.89 0.93
years 0.88 0.70 0.94 0.88 0.72 0.86 0.92 0.93 0.81 0.86
of 0.93 0.96 0.96 1.09 0.91 0.00 0.99 1.02 0.82 0.91
incredible 0.94 0.89 0.83 0.94 0.84 0.95 0.74 1.04 0.83 0.97
change 0.97 0.90 0.93 0.92 0.85 0.99 0.80 0.67 0.83 0.92
for 0.96 0.97 0.67 0.79 0.89 0.82 0.88 0.92 0.00 0.89
women 0.81 0.96 0.99 0.93 0.92 0.91 0.79 0.88 0.89 0.00
a. Cosine distance between word embeddings.
Mary lived through an era of liberating reform for women
Her 0.06 0.08 0.09 0.11 0.16 0.12 0.13 0.13 0.09 0.08
life 0.10 0.08 0.09 0.12 0.11 0.13 0.13 0.14 0.10 0.10
spanned 0.15 0.14 0.11 0.11 0.18 0.14 0.14 0.16 0.13 0.12
years 0.13 0.11 0.08 0.13 0.10 0.12 0.11 0.16 0.09 0.09
of 0.12 0.11 0.10 0.12 0.11 0.09 0.12 0.14 0.13 0.11
incredible 0.12 0.12 0.13 0.14 0.19 0.13 0.03 0.16 0.14 0.09
change 0.14 0.13 0.18 0.15 0.18 0.15 0.16 0.02 0.15 0.13
for 0.10 0.09 0.10 0.11 0.12 0.08 0.11 0.12 0.04 0.08
women 0.09 0.07 0.09 0.11 0.11 0.08 0.09 0.14 0.07 0.01
b. Cosine distance between local contexts of length 5.
Table 2: Cosine distance measured between word embeddings (a.) and between the local contexts of length 5 (b.) for each pair of words of two paraphrases.
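The word-level distances in Table 2 can be reproduced with the following sketch, given the embedding (or local-context) vectors of the two sentences; variable names are illustrative:

```python
import numpy as np

def cosine_distance_matrix(vectors_a, vectors_b):
    """Cosine distance 1 - (u.v)/(||u|| ||v||) for every pair of row vectors."""
    a = vectors_a / np.linalg.norm(vectors_a, axis=1, keepdims=True)
    b = vectors_b / np.linalg.norm(vectors_b, axis=1, keepdims=True)
    return 1.0 - a @ b.T        # shape: (len(sentence 1), len(sentence 2))

# Rows and columns then correspond to the words of the two paraphrases in Table 2,
# using either their word embeddings (a.) or their local-context vectors (b.).
```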

Table 3 shows four examples of STS scores for multiple levels of similarity. The first pair of sentences is an example of active and passive voice with the same meaning (gold score 4.9). The second case is an example of a positive and a negative sentence (gold score 3.3). The third example is composed of sentences that do not share the same meaning (gold score 1.0). Finally, in the last example, our method helps to capture the semantic relationship between the phrasal verb "wipe off" and the verb "clean". Our approach improves the Siamese LSTM analysis by generating better scores. The local context helps to better identify not only similar sentences but also negation and sentences with different meanings. This local information provides the LSTM with a smoother analysis of words and how they connect in a sentence.

Pair of sentences Gold score Siamese LSTM Our approach
Fish is being cooked by a woman. 4.9 3.84 4.05
A woman is cooking fish.
The bearded man is not sitting on a train. 3.3 3.49 3.35
The bearded man is sitting on a train.
Someone is playing with a toad. 1.0 1.51 1.46
The trumpet is being played by a man.
I will wash up if you wipe off the table. 5.0 3.67 4.08
I will wash up if you clean the table.
Table 3: Examples of semantic textual similarities using the Siamese LSTM and our approach (Siamese CNN (local context: 5) + Siamese LSTM).

To sum up, the local context of words refined the general context analysis. Our approach identified more details about the words and their local as well as general contexts, which usually leads to improved STS scores.

6 Conclusion

STS is an important task for various NLP applications, e.g. Automatic Text Summarization (ATS), question answering, information retrieval, etc. Our system combines CNN and LSTM structures to analyze, identify, and preserve the relevant information in each part of a sentence as well as in the whole sentence. The local context turned out to be useful for obtaining complementary information about a word in a sentence and for improving the sentence analysis. In our experiments, the local context improved the prediction of sentence similarity by reducing the mean squared error and increasing the correlation scores.

We plan to test other methods to analyze the local context Ermakova; Zhu. Unfortunately, the dataset we used for the experiments is of modest size and we did not find larger annotated corpora for this task. Therefore, we also want to conduct extrinsic evaluations by measuring how STS affects ATS systems, depending on whether the original or the modified Siamese LSTM model is used.

Acknowledgments

This work was partially financed by the European Project CHISTERA-AMIS ANR-15-CHR2-0001.

References