Exploring Asymmetric Encoder-Decoder Structure for Context-based Sentence Representation Learning

10/28/2017
by Shuai Tang, et al.

Context information plays an important role in human language understanding, and it is also useful for machines to learn vector representations of language. In this paper, we explore an asymmetric encoder-decoder structure for unsupervised context-based sentence representation learning. Specifically, we build an encoder-decoder architecture with an RNN encoder and a CNN decoder. We further combine a suite of effective design choices to significantly improve model efficiency while also achieving better downstream performance. Our model is trained on two different large unlabeled corpora, and in both cases transferability is evaluated on a set of downstream language understanding tasks. We empirically show that our model is simple and fast while producing rich sentence representations that excel in downstream tasks.
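
The sketch below illustrates the general idea of such an asymmetric design: a recurrent encoder compresses the current sentence into a fixed-length vector, and a lightweight convolutional decoder reconstructs a context sentence from it. This is a minimal illustration in PyTorch, not the authors' implementation; the choice of GRU, the hidden sizes, kernel sizes, and the single-neighbour reconstruction target are all assumptions made for the example.

    # Hypothetical sketch (not the paper's code): RNN encoder + CNN decoder
    # for context-based sentence representation learning.
    import torch
    import torch.nn as nn

    class RNNEncoder(nn.Module):
        def __init__(self, vocab_size, emb_dim=300, hid_dim=600):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)

        def forward(self, tokens):
            # tokens: (batch, seq_len) -> sentence vector: (batch, hid_dim)
            _, h_n = self.gru(self.embed(tokens))
            return h_n.squeeze(0)

    class CNNDecoder(nn.Module):
        def __init__(self, vocab_size, hid_dim=600, max_len=30):
            super().__init__()
            self.max_len = max_len
            # Expand the sentence vector to one slot per target position,
            # refine with stacked 1-D convolutions, then predict tokens.
            self.expand = nn.Linear(hid_dim, hid_dim * max_len)
            self.convs = nn.Sequential(
                nn.Conv1d(hid_dim, hid_dim, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv1d(hid_dim, hid_dim, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, sent_vec):
            # sent_vec: (batch, hid_dim) -> logits: (batch, max_len, vocab)
            x = self.expand(sent_vec).view(-1, self.max_len, sent_vec.size(1))
            x = self.convs(x.transpose(1, 2)).transpose(1, 2)
            return self.out(x)

    # Unsupervised objective (illustrative): encode a sentence, reconstruct
    # a neighbouring sentence from the encoding.
    encoder, decoder = RNNEncoder(20000), CNNDecoder(20000)
    tokens = torch.randint(0, 20000, (8, 25))   # current sentences
    target = torch.randint(0, 20000, (8, 30))   # e.g. the next sentences
    logits = decoder(encoder(tokens))
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, 20000), target.reshape(-1)
    )

After training, only the encoder would be kept and its sentence vectors fed to downstream tasks; the convolutional decoder exists purely to provide the reconstruction signal during unsupervised training.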

Related research

09/08/2018  Exploiting Invertible Decoders for Unsupervised Sentence Representation Learning
The encoder-decoder models for unsupervised sentence representation lear...

08/19/2021  Sentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text Models
We provide the first exploration of text-to-text transformers (T5) sente...

04/22/2018  Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation
The encoder-decoder dialog model is one of the most prominent methods us...

03/30/2019  Machine translation considering context information using Encoder-Decoder model
In the task of machine translation, context information is one of the im...

05/21/2022  Improvements to Self-Supervised Representation Learning for Masked Image Modeling
This paper explores improvements to the masked image modeling (MIM) para...

03/16/2019  Improving Lemmatization of Non-Standard Languages with Joint Learning
Lemmatization of standard languages is concerned with (i) abstracting ov...

11/07/2017  Theoretical limitations of Encoder-Decoder GAN architectures
Encoder-decoder GAN architectures (e.g., BiGAN and ALI) seek to add an ...
