Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation

02/27/2022
by   Zhuang Li, et al.
In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for conditional natural language generation with no or only a handful of task-specific labeled examples. To improve compositional generalization, our model performs disentangled representation learning by introducing one prior for the latent content space and another for the latent label space. We show both empirically and theoretically that these conditional priors can already disentangle the representations even without the explicit regularization terms used in prior work. We can also sample diverse content representations from the content space without accessing data from the seen tasks, and fuse them with the representations of novel tasks to generate diverse texts in low-resource settings. Our extensive experiments demonstrate the superior performance of our model over competitive baselines on i) data augmentation in continuous zero/few-shot learning, and ii) text style transfer in both zero-shot and few-shot settings.
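The core idea above, a latent space split into a content part and a label part, each with its own Gaussian prior, can be sketched minimally as follows. This is an illustrative toy (plain numpy, random linear "encoders"), not the authors' implementation; the dimensions, weight names, and the use of standard-normal priors are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    # Standard VAE reparameterization trick: z = mu + sigma * eps
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL(q(z|x) || N(0, I)); always non-negative
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

# Toy encoders: project an input vector into two separate latent spaces,
# one for content and one for the task label.
d_in, d_content, d_label = 16, 8, 4
W_c = rng.standard_normal((d_in, 2 * d_content)) * 0.1
W_l = rng.standard_normal((d_in, 2 * d_label)) * 0.1

x = rng.standard_normal(d_in)

h_c = x @ W_c
mu_c, logvar_c = h_c[:d_content], h_c[d_content:]
z_content = reparameterize(mu_c, logvar_c)

h_l = x @ W_l
mu_l, logvar_l = h_l[:d_label], h_l[d_label:]
z_label = reparameterize(mu_l, logvar_l)

# A decoder would consume the fused representation. For a novel task,
# z_content could instead be sampled directly from the content prior
# and combined with the new task's label representation, which is the
# mechanism the abstract describes for low-resource generation.
z = np.concatenate([z_content, z_label])
kl = kl_to_standard_normal(mu_c, logvar_c) + kl_to_standard_normal(mu_l, logvar_l)
print(z.shape, kl >= 0.0)
```

Keeping separate KL terms for the two latent blocks is what lets each block be matched to its own prior; sampling from the content prior alone then yields new content representations without touching seen-task data.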

