
Unsupervised Neural Text Simplification

by Sai Surya et al.

This paper presents a first attempt at unsupervised neural text simplification that relies only on unlabeled text corpora. The core framework comprises a shared encoder and a pair of attentional decoders that learn both text simplification and complexification through discriminator-based losses, back-translation, and denoising. The framework is trained on unlabeled text collected from an English Wikipedia dump. Our analysis (both quantitative and qualitative, involving human evaluators) on public test data shows the efficacy of the model in performing simplification at both the lexical and syntactic levels, competitive with existing supervised methods. We open-source our implementation for academic use.
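The architecture described above (a shared encoder feeding a pair of attentional decoders, trained with denoising among other objectives) can be sketched at a high level as follows. This is a minimal illustrative sketch, not the authors' implementation: all module names, layer sizes, the dot-product attention, and the toy word-shuffle noise are assumptions, and the discriminator and back-translation losses are omitted for brevity.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """One encoder shared by both the simplification and complexification paths."""
    def __init__(self, vocab, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hid, batch_first=True)

    def forward(self, tokens):
        # tokens: (batch, seq) -> states: (batch, seq, hid), last: (1, batch, hid)
        states, last = self.rnn(self.embed(tokens))
        return states, last

class AttnDecoder(nn.Module):
    """Decoder with simple dot-product attention over the encoder states."""
    def __init__(self, vocab, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(2 * hid, vocab)

    def forward(self, tokens, enc_states, hidden):
        dec_states, _ = self.rnn(self.embed(tokens), hidden)
        scores = torch.bmm(dec_states, enc_states.transpose(1, 2))  # (B, T, S)
        context = torch.bmm(torch.softmax(scores, dim=-1), enc_states)
        return self.out(torch.cat([dec_states, context], dim=-1))   # (B, T, vocab)

V = 100
enc = SharedEncoder(V)
dec_simple, dec_complex = AttnDecoder(V), AttnDecoder(V)  # one decoder per style

# Denoising step: corrupt a sentence (here, a light word shuffle) and train the
# decoder of the matching style to reconstruct the original from the shared code.
src = torch.randint(0, V, (2, 7))           # toy batch of "complex" sentences
noisy = src[:, torch.randperm(7)]
enc_states, last = enc(noisy)
logits = dec_complex(src, enc_states, last)  # teacher-forced reconstruction
loss = nn.functional.cross_entropy(logits.reshape(-1, V), src.reshape(-1))
```

A back-translation step would analogously decode `src` with `dec_simple`, re-encode the result, and reconstruct `src` with `dec_complex`; the shared encoder is what ties the two directions together.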




TED: A Pretrained Unsupervised Summarization Model with Theme Modeling and Denoising

Text summarization aims to extract essential information from a piece of...

Exploring the Use of an Unsupervised Autoregressive Model as a Shared Encoder for Text-Dependent Speaker Verification

In this paper, we propose a novel way of addressing text-dependent autom...

Bilex Rx: Lexical Data Augmentation for Massively Multilingual Machine Translation

Neural machine translation (NMT) has progressed rapidly over the past se...

Towards Unsupervised Speech-to-Text Translation

We present a framework for building speech-to-text translation (ST) syst...

Unsupervised Controllable Text Formalization

We propose a novel framework for controllable natural language transform...

CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via Cycle Training

Two important tasks at the intersection of knowledge graphs and natural ...

text2sdg: An open-source solution to monitoring sustainable development goals from text

Monitoring progress on the United Nations Sustainable Development Goals ...