Methodology and Results for the Competition on Semantic Similarity Evaluation and Entailment Recognition for PROPOR 2016

09/19/2017
by   Luciano Barbosa, et al.

In this paper, we present the methodology and the results obtained by our team, dubbed Blue Man Group, in the ASSIN (from the Portuguese Avaliação de Similaridade Semântica e Inferência Textual) competition, held at PROPOR 2016 [International Conference on the Computational Processing of the Portuguese Language - http://propor2016.di.fc.ul.pt/]. Our team's strategy consisted of evaluating methods based on semantic word vectors, following two distinct directions: 1) making use of low-dimensional, compact feature sets, and 2) deep learning-based strategies dealing with high-dimensional feature vectors. Evaluation results demonstrated that the first strategy was more promising, so the results from the second strategy were discarded. Considering the best run of each of the six teams, we achieved the best accuracy and F1 values in entailment recognition in the Brazilian Portuguese set, and the best F1 score overall. In the semantic similarity task, our team was ranked second in the Brazilian Portuguese set, and third considering both sets.


1 Introduction

In this work, we present the methodology and results obtained by our team, dubbed Blue Man Group, in the Avaliação de Similaridade Semântica e Inferência Textual (ASSIN) competition, jointly held with the International Conference on the Computational Processing of Portuguese (PROPOR) 2016.

The ASSIN competition assigned two tasks to participants: semantic similarity evaluation and entailment recognition. Given sentences $s_1$ and $s_2$, the first task consists of providing a score ranging from 1 to 5, representing the strength of the semantic relationship between $s_1$ and $s_2$. The second task involves determining whether $s_1$ entails $s_2$ (a sentence $s_1$ entails another sentence $s_2$ if, after reading both and knowing that $s_1$ is true, a person concludes that $s_2$ must also be true). Given these two tasks, researchers are invited to form teams and participate in the competition by developing systems that solve either or both of them, making use of labeled data provided by the organization of the competition, and submitting their results on blind test data, which are used to rank the teams and define the winners. It is worth mentioning that sets with text in Portuguese from both Brazil and Portugal were available, i.e. PT-BR and PT-PT, and teams could choose to submit results for either or both sets.

Our team (Blue Man Group) focused on word vector-based approaches to solve both tasks (see details in Section 3). Considering word vectors created from the entire Portuguese Wikipedia, we followed two distinct directions. In the first, we implemented a state-of-the-art feature set, proposed in [Kenter e de Rijke2015], to train support vector regression/classification models as well as Lasso regression. In the second direction, we exploited deep-learning setups of siamese neural networks. Preliminary evaluations on the training and trial data sets demonstrated that the first direction was more promising, so we decided to report the results of that methodology only.

In total, six teams participated in the competition. Considering the best run of each team, our system performed best in the entailment recognition task, ranking first in both accuracy and F1 for the PT-BR set, and second in accuracy and first in F1 overall. In the semantic similarity evaluation, our best results ranked second in both Pearson correlation and Mean Squared Error (MSE) for the PT-BR set, and second in Pearson and third in MSE overall. For the PT-PT set, the system performed better in entailment recognition, achieving the second-best F1 score, while reaching only fourth place in semantic similarity.

In the remainder of this document we present details on how our system was developed and evaluated.

2 ASSIN Competition

The ASSIN competition (Avaliação de Similaridade Semântica e Inferência Textual) is an evaluation forum for two NLP tasks, semantic similarity and textual entailment recognition, in which registered participants (or teams) could develop systems and submit their results on the data provided by the organizing committee. A large dataset containing pairs of sentences, in both Portugal's and Brazil's variants of Portuguese, was created to allow participants to both develop and evaluate their systems. Participants could submit results for either or both tasks, and for either or both variants of Portuguese. The teams were then ranked by the results of their systems on the evaluation dataset, namely the test set. The tasks, the metrics, and the datasets are explained in detail below.

The ASSIN dataset, containing a total of 10,000 pairs of sentences, can be divided into the following subsets. The Brazilian training set contains 3,000 labelled pairs of sentences collected from Google News, from Brazilian sources. The Portuguese training set also contains 3,000 labelled pairs of sentences collected from Google News, but from Portuguese sources. The Brazilian and Portuguese blind test sets contain 2,000 unlabelled pairs of sentences each, from the same sources. It is worth mentioning that the labels of the test sets were released to the participants only after they had submitted their results.

For the first task, i.e. semantic similarity, semantic relatedness is measured on a scale from 1 to 5, where 1 stands for completely different sentences and 5 for sentences that mean essentially the same thing. The scores in between represent gradual variations between these two extremes. In light of this, the task consists of building a model which, given a pair of sentences $(s_1, s_2)$, predicts the semantic similarity score $\hat{y}$. Given the manually-labeled similarity scores $y_i$, systems are evaluated by means of the Pearson correlation between the predicted scores $\hat{y}_i$ and the gold scores $y_i$, for $i = 1, \ldots, N$, and the Mean Squared Error (MSE).

The second task, i.e. recognizing textual entailment (RTE), consists of determining whether the meaning of the hypothesis is entailed by the text [RTE2011]. That is, supposing $T$ is the text and $H$ is the hypothesis, $T$ entails $H$ if, after reading both and knowing that $T$ is true, a person would conclude that $H$ must also be true. Given that the dataset provided by ASSIN also distinguishes bidirectional entailment cases, or paraphrases, each pair of sentences $T$ and $H$ must be classified into one of the following classes: entailment, paraphrase, and no relation. Given the ground-truth labels, systems are measured by means of accuracy and F1 score.
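For reference, the measures used in both tasks can be reproduced with standard libraries. The sketch below is only an illustration with toy data; the exact averaging scheme used for F1 by the organizers is an assumption here.

```python
# Minimal sketch of the ASSIN evaluation measures with scipy/scikit-learn.
# Data and the F1 averaging scheme are illustrative assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_squared_error, accuracy_score, f1_score

# Semantic similarity: gold scores y and predicted scores y_hat in [1, 5].
y = np.array([4.5, 2.0, 3.25, 5.0])
y_hat = np.array([4.0, 2.5, 3.0, 4.75])
pearson, _ = pearsonr(y, y_hat)
mse = mean_squared_error(y, y_hat)

# Entailment recognition: three classes (entailment, paraphrase, none).
gold = ["entailment", "none", "paraphrase", "none"]
pred = ["entailment", "none", "none", "none"]
acc = accuracy_score(gold, pred)
f1 = f1_score(gold, pred, average="macro")  # averaging scheme assumed

print(pearson, mse, acc, f1)
```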

More details regarding ASSIN are available at [ASS2016].

3 Methodology

As already mentioned, the strategy employed by our team consisted of evaluating word vector-based approaches, where the word vectors represent the semantic meaning of words (see Section 3.1). Two distinct directions were followed. The first, presented in Section 3.2, consists of implementing a state-of-the-art feature set for representing the similarity of pairs of sentences, and using regression models such as support vector regression (SVR) for semantic similarity evaluation, and support vector machines (SVM) for entailment recognition. The second, presented in Section 3.3, exploits deep-learning siamese neural networks, with the goal of learning better representations from the raw data, i.e. the word vectors of the pair of sentences.

3.1 Word vectors

Word vectors (or word embeddings) have been successfully used over the past years to learn useful word representations, encoding the semantic meaning of words by means of continuous vectors [Collobert et al.2011]. In other words, even if two words are written in very distinct ways, if they have similar semantic meaning, their corresponding word vectors should be very similar. These vectors make it possible not only to create NLP methods that rely more on the semantic meaning of words than on their lexical form, but also to take advantage of large text corpora, since word vectors can be learned in an unsupervised fashion.

The learning of word vectors is done in the following way. Given a large corpus of text, word vectors are learned by considering the distributional statistics of words. That is, given a word and its preceding and subsequent words in a sentence, a machine learning model such as a neural network can be trained using the neighbouring words as input and the central word as output (or the reverse, as in the skip-gram formulation used here).

In this work, word vectors have been created with the word2vec tool (http://code.google.com/archive/p/word2vec/), using the entire Portuguese Wikipedia as input. This corpus contains a total of 636,597 lines of text, with 229,658,430 word occurrences and a vocabulary of size 540,638. The word2vec tool was set up with: the skip-gram model; word vector size of 300; maximum skip length between words set to 5; 10 negative samples; hierarchical softmax disabled; sub-sampling threshold for frequent words set to 10e-5; and 15 training iterations.
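For illustration, a roughly equivalent configuration using the gensim library is sketched below. This is an assumption for readability; the vectors in this work were trained with the original word2vec C tool, and the input path and minimum word frequency shown here are illustrative.

```python
# Sketch of a roughly equivalent word2vec setup using gensim (assumed
# substitute for the original word2vec C tool used in this work).
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# "ptwiki.txt" is an assumed path to the pre-processed Portuguese Wikipedia
# dump, one tokenized sentence per line.
sentences = LineSentence("ptwiki.txt")

model = Word2Vec(
    sentences,
    sg=1,             # skip-gram model
    vector_size=300,  # word vector dimensionality
    window=5,         # maximum skip length between words
    negative=10,      # negative samples
    hs=0,             # hierarchical softmax disabled
    sample=1e-4,      # sub-sampling threshold (10e-5 in the text above)
    epochs=15,        # training iterations
    min_count=5,      # assumed minimum word frequency (not stated above)
)
model.wv.save_word2vec_format("ptwiki_vectors.txt")
```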

3.2 Strategy 1: Kenter’s features

3.2.1 Feature set

The feature set proposed in [Kenter e de Rijke2015] consists of extracting a single feature vector, denoted $f$, encoding the semantic similarity of the pair of sentences $s_1$ and $s_2$. In this work, we propose the use of this feature set for both tasks in the competition, i.e. semantic similarity evaluation and entailment recognition.

Given the sets of word vectors $W_1$ and $W_2$, computed from sentences $s_1$ and $s_2$, the feature set is composed of two types of features: 1) semantic networks; and 2) text-level features.

In short, semantic networks are built by considering the distances between pairs of terms that appear in $s_1$ and $s_2$. In this case, two types of networks are built. The first, namely the Saliency-weighted Semantic Network, combines both similarity and inverse document frequency (IDF) to create the links between the nodes, by considering, for each term in $s_1$, the most similar term in $s_2$, i.e. the term with the most similar word vector. The second type of network, referred to as the Unweighted Semantic Network, in contrast, does not rely on IDF, and two different unweighted networks are derived from it: one containing the distances from the word vector of each term to all the others, and the other only the maximum distance. In the end, the information in these networks is used to create histograms, which are concatenated to compose a single feature vector.
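The sketch below illustrates the saliency-weighted network features in a simplified form. It is only an approximation: the exact formulation in [Kenter e de Rijke2015] uses a BM25-style saliency function, whereas here the IDF-weighted best-match cosine similarities are simply binned; all names are illustrative.

```python
# Simplified sketch of the saliency-weighted semantic network histogram.
# The original uses a BM25-style saliency function; here we only bin the
# IDF-weighted best-match cosine similarities to illustrate the idea.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def saliency_weighted_histogram(terms1, terms2, wordvecs, idf,
                                bins=(0.0, 0.15, 0.4, np.inf)):
    """For each term in s1, bin its best cosine match in s2, weighted by IDF."""
    hist = np.zeros(len(bins) - 1)
    for t in terms1:
        if t not in wordvecs:
            continue
        sims = [cosine(wordvecs[t], wordvecs[u]) for u in terms2 if u in wordvecs]
        if not sims:
            continue
        best = max(sims)
        idx = int(np.clip(np.searchsorted(bins, best, side="right") - 1,
                          0, len(hist) - 1))
        hist[idx] += idf.get(t, 1.0)   # weight the bin count by the term's IDF
    return hist
```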

Text-level features are defined in two ways: 1) distances between word vectors, where both the cosine and Euclidean distances are computed between the mean word vectors of $s_1$ and $s_2$; and 2) bins of dimensions, where a histogram is computed from the real values present in the mean word vectors of the pair of sentences.

The boundaries for the aforementioned histograms have been defined in the following way. For the features calculated from the saliency-weighted semantic network, the bins are 0 to .15, .15 to .4, and .4 and above. For the unweighted semantic networks, the bins are -1 to .45, .45 to .8, and .8 and above. And for the bins of dimension, the bins are below .001, .001 to .01, .01 to .02, and .02 and above. Details on how these boundaries have been defined, along with values for other parameters, can be found in [Kenter e de Rijke2015].
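Using the boundaries above, a sketch of the text-level features could look as follows. The choice of binning the entries of both mean vectors is our reading of the description and may differ from the original implementation.

```python
# Sketch of the text-level features: distances between the mean word
# vectors plus the "bins of dimensions" histogram. Which values are binned
# (here, the entries of the two mean vectors) is an assumption.
import numpy as np

def mean_vector(terms, wordvecs, dim=300):
    vecs = [wordvecs[t] for t in terms if t in wordvecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def text_level_features(terms1, terms2, wordvecs):
    m1, m2 = mean_vector(terms1, wordvecs), mean_vector(terms2, wordvecs)
    cos = np.dot(m1, m2) / (np.linalg.norm(m1) * np.linalg.norm(m2) + 1e-12)
    euc = np.linalg.norm(m1 - m2)
    # Bins of dimensions, boundaries: below .001, [.001,.01), [.01,.02), .02+
    edges = [0.001, 0.01, 0.02]
    values = np.concatenate([m1, m2])
    counts = np.bincount(np.searchsorted(edges, values, side="right"),
                         minlength=4)
    return np.concatenate([[cos, euc], counts])   # 2 + 4 = 6 features
```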

The resulting feature set consists of a 15-position vector: 3 features from the histogram of the saliency-weighted semantic network, 2 x 3 from the histograms of the unweighted semantic networks, 2 from the distances between the mean word vectors, and 4 from the bins of dimension. It is worth mentioning that these 15 features can be replicated with other sets of word vectors, but in this work we consider only the word vectors described in Section 3.1.

3.2.2 Support Vector Regression and Support Vector Machines

Support vector machines (SVM), and their corresponding method for regression problems, i.e. Support Vector Regression (SVR), have become popular in recent years given their good performance in a large number of tasks [citation]. SVM and SVR employ the following idea: input vectors, denoted $x$, are non-linearly mapped to a very high-dimensional feature space. In this feature space, a linear decision surface is constructed in order to predict the class value $y$, in the case of classification, or the target real value $y$, in the case of regression. Special properties of the decision surface ensure the high generalization ability of the learning machine [Cortes e Vapnik1995].

For this work, both SVR and SVM have been implemented with the Scikit Learn library (http://scikit-learn.org). For both methods, we used the Gaussian kernel after a few preliminary experiments, and the configuration parameters have been set by means of a grid search with five-fold cross validation.
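A minimal sketch of this setup is shown below. The parameter grid and the dummy data are illustrative assumptions; the paper does not report the exact ranges searched, and in practice the feature matrix comes from the extraction described in Section 3.2.1.

```python
# Minimal sketch of the grid-searched SVR/SVM setup with scikit-learn.
# Parameter ranges and data are illustrative.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR, SVC

rng = np.random.RandomState(0)
X = rng.rand(200, 15)                                         # 15 Kenter features (dummy)
y_similarity = rng.uniform(1, 5, size=200)                    # 1-5 scores (dummy)
y_entailment = rng.choice(["entailment", "paraphrase", "none"], size=200)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}

# Semantic similarity: regression on the 1-5 similarity score.
svr = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5)
svr.fit(X, y_similarity)

# Entailment recognition: 3-class classification.
svm = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
svm.fit(X, y_entailment)

print(svr.best_params_, svm.best_params_)
```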

3.2.3 Lasso

Let $y_i$ denote the response and let $x_{i1}, \ldots, x_{ip}$ denote the $p$ features calculated for each observation $i$. We considered the following regression model:

$$y_i = \beta_0 + \sum_{j=1}^{p} \beta_j x_{ij} + \sum_{j=1}^{p} \sum_{k=j+1}^{p} \beta_{jk}\, x_{ij} x_{ik} + \epsilon_i,$$

where $\epsilon_i$ denotes the error associated with observation $i$. The above model is linear in the features and includes all possible two-way interactions, $x_{ij} x_{ik}$, between pairs of features. Let $\boldsymbol{\beta}$ denote the set of all parameters $\beta_0$, $\beta_j$, and $\beta_{jk}$. By correctly specifying a design matrix $X$ (whose columns are the features and corresponding two-way interactions) we may formulate the above regression in simpler matrix notation:

$$\mathbf{y} = X \boldsymbol{\beta} + \boldsymbol{\epsilon},$$

where $\mathbf{y}$ and $\boldsymbol{\epsilon}$ are the response and error vectors, respectively.

Note that if we were to estimate the above model using the method of least squares, we could easily run into over-fitting due to the large number of parameters to be estimated, namely $1 + p + \binom{p}{2}$ (121 parameters for the $p = 15$ features of Section 3.2.1).

Lasso regression is designed to tackle this potential problem of over-fitting and falls into a class of models called regularized regression. By applying least squares with an additional $\ell_1$-constraint on the parameters, $\|\boldsymbol{\beta}\|_1 \le t$ for some $t > 0$, we are able to guard against over-fitting. This method has the additional advantage of serving as a variable-selection method, since the $\ell_1$-penalty effectively forces some of the parameter estimates to be exactly equal to $0$.
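A sketch of this model in scikit-learn, using PolynomialFeatures to generate the two-way interaction terms, is shown below. The data and the regularization strength are illustrative; in practice the penalty would be chosen by cross-validation.

```python
# Sketch of Lasso regression over the 15 features plus all two-way
# interactions. Data and alpha are illustrative (tune alpha by CV, e.g. LassoCV).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.RandomState(0)
X = rng.rand(200, 15)                      # 15 Kenter features (dummy data)
y = rng.uniform(1, 5, size=200)            # 1-5 similarity scores (dummy)

# degree=2 with interaction_only=True adds all two-way interaction columns,
# giving 15 + 15*14/2 = 120 columns in the design matrix.
model = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    Lasso(alpha=0.01),
)
model.fit(X, y)
print(model.predict(X[:3]))
```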

3.3 Strategy 2: Siamese Networks

Siamese networks have been widely used in image and text processing to learn a similarity metric from data. For the specific tasks proposed in ASSIN, we use siamese networks to learn the similarity between two sentences in Portuguese. Essentially, given a pair of sentences, a siamese network projects each sentence into a new representation space using, for instance, convolutional or recurrent networks. The parameters W of each sentence branch are shared. These representations are then given as input to a pre-defined similarity metric, such as cosine or Euclidean distance, that calculates the similarity between the two representations. During training, the network learns the values of W that minimize a given loss function. In our experiments, we use Mean Squared Error as the loss function, where the error is the difference between the true similarity value given in the training data and the predicted one. Within this framework, we tried different configurations: for instance, to project the sentences we tried convolutional and recurrent networks, and as similarity metrics, cosine and dot product. In the experimental evaluation, we present the siamese networks that obtained the best results over the test set.
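One possible instantiation of this setup is sketched below with an LSTM branch and a cosine similarity output. It is a sketch only: the layer sizes, padding length, and the mapping between the cosine output and the 1-5 similarity scale are assumptions, not details reported in this work.

```python
# Sketch of a siamese LSTM with shared weights and a cosine similarity
# output, trained with MSE. Layer sizes and score rescaling are assumptions.
from tensorflow.keras import Input, Model, layers

EMB_DIM = 300   # word2vec dimensionality (Section 3.1)
MAX_LEN = 50    # assumed maximum sentence length after padding

shared_lstm = layers.LSTM(128)            # same weights W for both branches

sent_a = Input(shape=(MAX_LEN, EMB_DIM))
sent_b = Input(shape=(MAX_LEN, EMB_DIM))
repr_a = shared_lstm(sent_a)
repr_b = shared_lstm(sent_b)

# Cosine similarity between the two sentence representations.
cos_sim = layers.Dot(axes=1, normalize=True)([repr_a, repr_b])

model = Model(inputs=[sent_a, sent_b], outputs=cos_sim)
model.compile(optimizer="adam", loss="mse")

# Training: X1 and X2 hold the word2vec vectors of each sentence pair and
# y_scaled holds the gold 1-5 scores mapped into [-1, 1], e.g. (y - 3) / 2.
# model.fit([X1, X2], y_scaled, epochs=10, batch_size=32)
```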


4 Evaluation Results

In this section, we discuss the results obtained with the methods described in Section 3. For this evaluation, we consider the Trial dataset as the test set and use both the PT-BR and PT-PT training sets. Note that we have removed from PT-BR the samples that also appear in Trial.

A comparison of the results for each method is presented in Table 1. The best results have been achieved with Kenter's features, using either SVR or Lasso for semantic similarity evaluation and SVM for entailment recognition. With SVR, Pearson correlations of 0.51, 0.49, and 0.50 have been reached for the PT-BR, PT-PT, and Overall sets, respectively. In the entailment recognition task, F1 scores of 0.45, 0.50, and 0.51 have been achieved on the same sets, respectively. In addition, we observe that with Lasso the results are very similar to those of SVR.

Configuration | Set | Similarity (Pearson) | Entailment (Acc/F1)
Baseline: Bag of Words | Overall | 0.47 |
Kenter’s features - SVR(M) | PT-BR | 0.51 | 79.60/0.45
Kenter’s features - SVR(M) | PT-PT | 0.49 | 74.20/0.50
Kenter’s features - SVR(M) | Overall | 0.50 | 77.00/0.51
Kenter’s features - Lasso | PT-BR | |
Kenter’s features - Lasso | PT-PT | |
Kenter’s features - Lasso | Overall | |
Multi-layer LSTM + L2 reg | | 0.26 |
Multi-layer LSTM + L2 reg + features | | 0.23 |
CNN | | 0.13 |
Multi-layer LSTM + L2 reg + features + Full Data | | 0.49 |
CNN + Cos | PT-BR | 0.35 |
LSTM + Cos | PT-BR | 0.41 |
LSTM + Cos | Overall | 0.38 |
LSTM + Concat + Kenter’s features | PT-BR | 0.33 |
LSTM + Cos + Kenter’s features | PT-BR | 0.33 |
LSTM + Dot + Kenter’s features | PT-BR | 0.29 |
LSTM (Cos) + BOW (Cos) + Kenter’s features | PT-BR | 0.39 |
CNN (Cos) + BOW (Cos) + Kenter’s features | PT-BR | 0.40 |
BOW (Cos) | PT-BR | 0.34 |
LSTM (Cos) + CNN (Cos) + BOW (Cos) + Kenter’s features | PT-BR | 0.38 |
Table 1: Evaluation results, considering Trial as the test set.

The second strategy, making use of siamese networks, did not achieve good results. The best results with this method were 0.11 points below those from strategy 1. For this reason, we decided to submit results only with Kenter's features: one run with SVR and another with Lasso for semantic similarity, and one run with SVM for entailment recognition.

5 Competition Results

In this section we discuss the results of our methods on the blind test data, and how they compared with those of the other competitors.

In total, six teams participated in the competition. In addition to our team, only two other teams submitted results for both tasks and both the PT-BR and PT-PT sets. Of the remaining three teams, two focused only on the semantic similarity task, considering both sets, and the other one only on the PT-PT set, for both the similarity and entailment recognition tasks.

The best result of each team (each team was allowed to submit up to three different runs), i.e. the best run, is listed in Table 2, and the ranking of each team, considering only the best run, is presented in Table 3. Considering only the best run of each team, we managed to achieve very good results with the PT-BR set and Overall, being far from first place only in the PT-PT set. With PT-BR, we ranked first in both accuracy and F1 for entailment recognition, and second best in semantic similarity evaluation. Beyond the good results, it was surprising that Kenter's features performed better in entailment recognition than in semantic similarity evaluation, since the feature set was originally proposed for the latter task. Overall, we ranked first in F1 and second in accuracy for entailment recognition, while in semantic similarity our team presented the second-best Pearson correlation and the third-best MSE value. In the PT-PT set, we were ranked second in F1 for entailment recognition and third in accuracy, but for semantic similarity only fourth place (tied with another team) was reached.

Team | Sim PT-BR (P/MSE) | RTE PT-BR (Acc/F1) | Sim PT-PT (P/MSE) | RTE PT-PT (Acc/F1) | Sim Overall (P/MSE) | RTE Overall (Acc/F1)
Solo Queue | 0.70/0.38 | - | 0.70/0.66 | - | 0.68/0.52 | -
Reciclagem | 0.59/1.31 | 79.05/0.39 | 0.54/1.10 | 73.10/0.43 | 0.54/1.23 | 75.58/0.40
ASAPP | 0.65/0.44 | 81.65/0.47 | 0.68/0.70 | 78.90/0.58 | 0.65/0.58 | 80.23/0.54
LEC-UNIFOR | 0.62/0.47 | - | 0.64/0.72 | - | 0.62/0.59 | -
L2F/INESC-ID | - | - | 0.73/0.61 | 83.85/0.70 | - | -
Blue Man Group | 0.65/0.44 | 81.65/0.52 | 0.64/0.72 | 77.60/0.61 | 0.63/0.59 | 79.62/0.58
Table 2: Best results of each team in the competition.

One observation worth mentioning is that, in some tasks or sets, the teams that achieved the best results were those that focused on only one task or set. For instance, the Solo Queue team submitted results only for semantic similarity, and they won that task for PT-BR and Overall, ranking second for PT-PT. The L2F/INESC-ID team, on the other hand, submitted results only for PT-PT, for both tasks, and they won both. In our case, we submitted a single method, with almost no difference from one set to another, or from one task to another. As a lesson learned, in a future competition we believe we should invest more in fine-tuning the algorithms to specific tasks and sets.

Team | Sim PT-BR (P/MSE) | RTE PT-BR (Acc/F1) | Sim PT-PT (P/MSE) | RTE PT-PT (Acc/F1) | Sim Overall (P/MSE) | RTE Overall (Acc/F1)
Solo Queue | 1st/1st | - | 2nd/2nd | - | 1st/1st | -
Reciclagem | 5th/5th | 3rd/3rd | 6th/6th | 4th/4th | 5th/5th | 3rd/3rd
ASAPP | 2nd/2nd | 1st/2nd | 3rd/3rd | 2nd/3rd | 2nd/2nd | 1st/2nd
LEC-UNIFOR | 4th/4th | - | 4th/4th | - | 4th/3rd | -
L2F/INESC-ID | - | - | 1st/1st | 1st/1st | - | -
Blue Man Group | 2nd/2nd | 1st/1st | 4th/4th | 3rd/2nd | 2nd/3rd | 2nd/1st
Table 3: Team rankings considering the best run.

In Table 4, we list the results of all methods that we evaluated, considering the labels of the blind test data made available after the competition.

Configuration | Set | Similarity (Pearson) | Entailment (Acc/F1)
Baseline: Bag of Words | Overall | 0.47 |
Kenter’s features - SVR(M) | PT-BR | 0.64 | 81.65/0.52
Kenter’s features - SVR(M) | PT-PT | 0.64 | 77.60/0.61
Kenter’s features - SVR(M) | Overall | 0.63 | 79.62/0.58
Kenter’s features - Lasso | PT-BR | 0.65 |
Kenter’s features - Lasso | PT-PT | 0.63 |
Kenter’s features - Lasso | Overall | 0.63 |
Multi-layer LSTM + L2 reg | | |
Multi-layer LSTM + L2 reg + features | | |
CNN | | |
Multi-layer LSTM + L2 reg + features + Full Data | | |
CNN + Cos | PT-BR | |
LSTM + Cos | PT-BR | |
LSTM + Cos | Overall | |
LSTM + Concat + Kenter’s features | PT-BR | |
LSTM + Cos + Kenter’s features | PT-BR | |
LSTM + Dot + Kenter’s features | PT-BR | |
LSTM (Cos) + BOW (Cos) + Kenter’s features | PT-BR | |
CNN (Cos) + BOW (Cos) + Kenter’s features | PT-BR | |
BOW (Cos) | PT-BR | |
LSTM (Cos) + CNN (Cos) + BOW (Cos) + Kenter’s features | PT-BR | |
Table 4: Competition results, considering the blind test set.

6 Conclusions and Future Work

In this paper we presented the methods developed by our team for the ASSIN competition and compared the results obtained with those of the other teams. We decided to exploit word vector-based approaches, following two distinct strategies. Given the poor results of the second strategy on the evaluation datasets, we pursued in the competition only the method from the first strategy, based on a state-of-the-art feature set for semantic similarity encoding. With this approach, we achieved strong results in entailment recognition, obtaining the best F1 score overall and the best accuracy and F1 score in the PT-BR dataset. In semantic similarity, our best result was second place in the PT-BR set.

The experience of participating in the competition has been very valuable, and we expect to continue working on these problems to improve our method and results. One future direction is to better understand why siamese networks did not perform as well as the first strategy on these problems. We would also like to investigate Kenter's features further, in order to improve this feature set for these tasks.

References

  • [RTE2011] 2011. PASCAL Recognizing Textual Entailment Challenge (RTE-7) at TAC 2011. http://www.nist.gov/tac/2011/RTE/. Accessed: 2016-04-26.
  • [ASS2016] 2016. Avaliação de similaridade semântica e inferência textual. http://propor2016.di.fc.ul.pt/?page_id=381. Accessed: 2016-04-26.
  • [Collobert et al.2011] Collobert, R., J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537.
  • [Cortes e Vapnik1995] Cortes, Corinna and Vladimir Vapnik. 1995. Support-vector networks. Machine Learning, 20(3):273–297, September 1995.
  • [Kenter e de Rijke2015] Kenter, Tom and Maarten de Rijke. 2015. Short text similarity with word embeddings. In CIKM 2015: 24th ACM Conference on Information and Knowledge Management. ACM, October 2015.