
Learning Implicit Text Generation via Feature Matching

05/07/2020
by   Inkit Padhi, et al.
Duke University
Amazon
IBM

The generative feature matching network (GFMN) is an approach to training implicit generative models for images by matching moments of features extracted from pre-trained neural networks. In this paper, we present new GFMN formulations that are effective for sequential data. Our experimental results demonstrate the effectiveness of the proposed method, SeqGFMN, on three distinct generation tasks in English: unconditional text generation, class-conditional text generation, and unsupervised text style transfer. SeqGFMN is stable to train and outperforms various adversarial approaches to text generation and text style transfer.
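The core idea — matching the first and second moments of extracted features between real and generated batches instead of training a discriminator — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random-projection "feature extractor" stands in for a frozen pre-trained network, and all names and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained feature extractor: a random
# projection followed by ReLU (an illustrative assumption only; the
# paper uses features from actual pre-trained networks).
W = rng.normal(size=(16, 32))

def features(x):
    return np.maximum(x @ W, 0.0)

def feature_matching_loss(real, fake):
    """Moment-matching loss: squared distance between the first and
    second moments (per-dimension mean and variance) of the features
    of a real batch and a generated batch."""
    fr, ff = features(real), features(fake)
    mean_term = np.sum((fr.mean(axis=0) - ff.mean(axis=0)) ** 2)
    var_term = np.sum((fr.var(axis=0) - ff.var(axis=0)) ** 2)
    return mean_term + var_term

real = rng.normal(size=(64, 16))          # batch of "real" samples
fake = rng.normal(loc=0.5, size=(64, 16)) # batch from a "generator"

# The loss is non-negative, zero when the batches coincide, and
# positive when their feature statistics differ.
assert feature_matching_loss(real, real) == 0.0
assert feature_matching_loss(real, fake) > 0.0
```

Because this objective is a fixed, differentiable function of the generator's output (no adversary is updated), training avoids the min-max instability of GANs — which is the stability advantage the abstract claims for SeqGFMN.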

