SufiSent - Universal Sentence Representations Using Suffix Encodings

02/20/2018
by Siddhartha Brahma, et al.

Computing universal distributed representations of sentences is a fundamental task in natural language processing. We propose a method to learn such representations by encoding the suffixes of word sequences in a sentence and training on the Stanford Natural Language Inference (SNLI) dataset. We demonstrate the effectiveness of our approach by evaluating it on the SentEval benchmark, improving on existing approaches on several transfer tasks.
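To illustrate the idea in the abstract, here is a minimal, hypothetical sketch of enumerating the suffix word sequences of a sentence; SufiSent would encode each such suffix (the encoder and pooling details are not given here and are assumptions):

```python
def suffixes(sentence: str) -> list[list[str]]:
    """Return every suffix word sequence of the sentence.

    Each suffix starts at word position i and runs to the end,
    so a sentence of n words yields n suffixes.
    """
    words = sentence.split()
    return [words[i:] for i in range(len(words))]

print(suffixes("the cat sat"))
# [['the', 'cat', 'sat'], ['cat', 'sat'], ['sat']]
```

In the paper's setting, each of these suffixes would be fed to a sentence encoder trained on SNLI, and their representations combined into a single sentence vector.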


Related research

- Unsupervised Learning of Sentence Representations Using Sequence Consistency (08/10/2018)
- SentEval: An Evaluation Toolkit for Universal Sentence Representations (03/14/2018)
- Learning Better Universal Representations from Pre-trained Contextualized Language Models (04/29/2020)
- Metaphor Detection using Deep Contextualized Word Embeddings (09/26/2020)
- Towards Lossless Encoding of Sentences (06/04/2019)
- Self-Adaptive Hierarchical Sentence Model (04/20/2015)
- AMR quality rating with a lightweight CNN (05/25/2020)
