Towards Universal Paraphrastic Sentence Embeddings

by John Wieting et al.

We consider the problem of learning general-purpose, paraphrastic sentence embeddings based on supervision from the Paraphrase Database (Ganitkevitch et al., 2013). We compare six compositional architectures, evaluating them on annotated textual similarity datasets drawn both from the same distribution as the training data and from a wide range of other domains. We find that the most complex architectures, such as long short-term memory (LSTM) recurrent neural networks, perform best on the in-domain data. However, in out-of-domain scenarios, simple architectures such as word averaging vastly outperform LSTMs. Our simplest averaging model is even competitive with systems tuned for the particular tasks while also being extremely efficient and easy to use. In order to better understand how these architectures compare, we conduct further experiments on three supervised NLP tasks: sentence similarity, entailment, and sentiment classification. We again find that the word averaging models perform well for sentence similarity and entailment, outperforming LSTMs. However, on sentiment classification, we find that the LSTM performs very strongly, even recording new state-of-the-art performance on the Stanford Sentiment Treebank. We then demonstrate how to combine our pretrained sentence embeddings with these supervised tasks, using them both as a prior and as a black-box feature extractor. This leads to performance rivaling the state of the art on the SICK similarity and entailment tasks. We release all of our resources to the research community with the hope that they can serve as new baselines for further work on universal sentence embeddings.
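The word-averaging model the abstract describes can be sketched in a few lines: each sentence is embedded as the mean of its word vectors, and two sentences are compared by cosine similarity. The tiny word-vector table below is made up purely for illustration; the paper instead uses PARAGRAM vectors trained on PPDB.

```python
import numpy as np

# Toy word vectors for illustration only -- these values are invented,
# not the trained PARAGRAM vectors used in the paper.
word_vectors = {
    "the":   np.array([0.1, 0.0, 0.2, 0.1]),
    "cat":   np.array([0.9, 0.3, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.4, 0.2, 0.1]),
    "runs":  np.array([0.2, 0.9, 0.1, 0.3]),
    "sings": np.array([0.0, 0.1, 0.9, 0.8]),
}

def embed(sentence, vectors, dim=4):
    """Embed a sentence as the unweighted average of its word vectors.

    Out-of-vocabulary words are skipped; an all-OOV sentence maps to zeros.
    """
    vecs = [vectors[w] for w in sentence.lower().split() if w in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# With these toy vectors, the near-paraphrase pair scores higher than
# the pair whose meaning diverges.
sim_near = cosine(embed("the cat runs", word_vectors),
                  embed("the dog runs", word_vectors))
sim_far = cosine(embed("the cat runs", word_vectors),
                 embed("the cat sings", word_vectors))
```

Despite (or because of) having no parameters beyond the word vectors themselves, this composition function is what the paper finds most robust out of domain.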




Related Papers

Revisiting Recurrent Networks for Paraphrastic Sentence Embeddings

We consider the problem of learning general-purpose, paraphrastic senten...

Suffix Bidirectional Long Short-Term Memory

Recurrent neural networks have become ubiquitous in computing representa...

Charagram: Embedding Words and Sentences via Character n-grams

We present Charagram embeddings, a simple approach for learning characte...

Evaluating Compositionality in Sentence Embeddings

An important frontier in the quest for human-like AI is compositional se...

DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations

We present DeCLUTR: Deep Contrastive Learning for Unsupervised Textual R...

Assessing State-of-the-Art Sentiment Models on State-of-the-Art Sentiment Datasets

There has been a good amount of progress in sentiment analysis over the ...

Paraphrase Detection on Noisy Subtitles in Six Languages

We perform automatic paraphrase detection on subtitle data from the Opus...

Code Repositories


Some experimentations with word embeddings
