Exploiting Twitter as Source of Large Corpora of Weakly Similar Pairs for Semantic Sentence Embeddings

10/05/2021
by   Marco Di Giovanni, et al.

Semantic sentence embeddings are usually built in a supervised fashion, by minimizing distances between embeddings of sentence pairs that annotators have labelled as semantically similar. Since large labelled datasets are expensive to produce and rare, particularly for non-English languages, recent studies focus on unsupervised approaches that require only unpaired input sentences. We instead propose a language-independent approach to build large datasets of weakly similar pairs of informal texts, without manual human effort, by exploiting Twitter's intrinsically powerful signals of relatedness: replies and quotes of tweets. We use the collected pairs to train a Transformer model with a triplet-like structure, and we test the generated embeddings on Twitter NLP similarity tasks (PIT and TURL) and on STSb. We also introduce four new sentence-ranking evaluation benchmarks of informal texts, carefully extracted from the initial collections of tweets, showing not only that our best model learns classical Semantic Textual Similarity, but also that it excels on tasks where pairs of sentences are not exact paraphrases. Ablation studies reveal that increasing the corpus size positively influences the results, even at 2M samples, suggesting that larger collections of tweets still do not contain redundant information about semantic similarities.
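The triplet-style training described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: the `TinyEncoder` below is a hypothetical stand-in for the actual Transformer, and the random token batches stand in for a real (anchor tweet, reply/quote, random tweet) triple.

```python
import torch
import torch.nn as nn

# Hypothetical tiny encoder standing in for the paper's Transformer:
# it maps token-id sequences to fixed-size embeddings via mean pooling.
class TinyEncoder(nn.Module):
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, ids):              # ids: (batch, seq_len)
        return self.emb(ids).mean(dim=1)  # -> (batch, dim)

encoder = TinyEncoder()
triplet = nn.TripletMarginLoss(margin=1.0)

# Toy batch: anchor tweet, its reply/quote (weakly similar positive),
# and an unrelated tweet (negative), as random token ids for illustration.
anchor   = torch.randint(0, 1000, (4, 12))
positive = torch.randint(0, 1000, (4, 12))
negative = torch.randint(0, 1000, (4, 12))

# The loss pulls anchor and positive embeddings together while pushing
# the negative away by at least the margin; one shared encoder is used.
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()  # gradients flow into the shared encoder weights
```

In the actual pipeline, positives come for free from Twitter's reply/quote structure, so no manual annotation is needed to form the triplets.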

research
02/07/2022

Comparison and Combination of Sentence Embeddings Derived from Different Supervision Signals

We have recently seen many successful applications of sentence embedding...
research
01/19/2018

A Resource-Light Method for Cross-Lingual Semantic Textual Similarity

Recognizing semantically similar sentences or paragraphs across language...
research
08/27/2019

Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new ...
research
11/15/2017

Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations

We extend the work of Wieting et al. (2017), back-translating a large pa...
research
12/23/2018

Improving Context-Aware Semantic Relationships in Sparse Mobile Datasets

Traditional semantic similarity models often fail to encapsulate the ext...
research
11/26/2019

Tracing State-Level Obesity Prevalence from Sentence Embeddings of Tweets: A Feasibility Study

Twitter data has been shown broadly applicable for public health surveil...
research
05/09/2023

Multilevel Sentence Embeddings for Personality Prediction

Representing text into a multidimensional space can be done with sentenc...
