WakaVT: A Sequential Variational Transformer for Waka Generation

04/01/2021
by   Yuka Takeishi, et al.

Poetry generation has long been a challenge for artificial intelligence. Within Japanese poetry generation, much attention has been paid to Haiku, while Waka has been largely overlooked. To further explore the creative potential of natural language generation systems in Japanese poetry, we propose WakaVT, a novel Waka generation model that automatically produces Waka poems from user-specified keywords. First, an additive mask-based approach is presented to satisfy the form constraint. Second, the Transformer and variational autoencoder architectures are integrated to enhance the quality of the generated content. Specifically, to achieve novelty and diversity, WakaVT employs a sequence of latent variables, which effectively captures word-level variability in Waka data. To improve linguistic quality in terms of fluency, coherence, and meaningfulness, we further propose a fused multilevel self-attention mechanism that properly models the hierarchical linguistic structure of Waka. To the best of our knowledge, we are the first to investigate Waka generation with models based on the Transformer and/or variational autoencoder. Both objective and subjective evaluation results demonstrate that our model significantly outperforms baselines.
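The abstract does not detail the additive mask, but the general idea behind additive attention masking is standard: disallowed positions receive a large negative offset on the attention logits, so softmax assigns them near-zero weight. The sketch below illustrates that mechanism with a simple causal mask; the function names are hypothetical, and the paper's actual mask (which would encode Waka's 5-7-5-7-7 form constraint) is an assumption not reproduced here.

```python
import numpy as np

def additive_causal_mask(seq_len):
    # Upper-triangular additive mask: entry (i, j) is -1e9 for j > i,
    # so token i cannot attend to future positions.
    return np.triu(np.full((seq_len, seq_len), -1e9), k=1)

def masked_attention_weights(logits, mask):
    # Additive masking: add the mask to the raw attention logits,
    # then apply a numerically stable softmax along the last axis.
    scores = logits + mask
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4))
w = masked_attention_weights(logits, additive_causal_mask(4))
# Masked (future) positions receive near-zero attention weight,
# while each row still sums to 1.
```

A form-enforcing mask for Waka would follow the same pattern: instead of a triangular structure, entries corresponding to tokens that would violate the current line's syllable budget would be set to the large negative value.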

