Eval all, trust a few, do wrong to none: Comparing sentence generation models

04/21/2018
by Ondřej Cífka, et al.

In this paper, we study recent neural generative models for text generation that are related to variational autoencoders. These models employ various techniques to match the posterior and prior distributions, which is important for ensuring high sample quality and low reconstruction error. In our study, we follow a rigorous evaluation protocol using a large set of previously used and novel automatic metrics, along with human evaluation of both generated samples and reconstructions. We hope that this protocol will become the standard when comparing neural generative models for text.
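The "matching of posterior and prior distributions" mentioned in the abstract refers to the KL term in the VAE objective. Below is a minimal sketch of such an objective with linear KL annealing, one common matching technique; the function name, tensor shapes, and annealing schedule are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_logits, targets, mu, logvar, step, anneal_steps=10000):
    """ELBO-style loss for a sentence VAE (illustrative sketch).

    recon_logits: (batch, seq_len, vocab) decoder outputs
    targets:      (batch, seq_len) gold token ids
    mu, logvar:   (batch, latent_dim) parameters of q(z|x)
    """
    # Reconstruction term: token-level cross-entropy.
    recon = F.cross_entropy(
        recon_logits.transpose(1, 2), targets, reduction="sum")
    # KL(q(z|x) || p(z)) in closed form for a diagonal Gaussian
    # posterior against the standard normal prior N(0, I).
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    # Linear KL annealing: the weight ramps from 0 to 1 so the decoder
    # cannot simply ignore the latent code early in training.
    beta = min(1.0, step / anneal_steps)
    return recon + beta * kl
```

Models of this family differ mainly in how this matching is enforced, for example by scheduling the KL weight as above or by replacing the KL term with an adversarial regularizer.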


Related research

07/27/2023 · Evaluating Generative Models for Graph-to-Text Generation
Large language models (LLMs) have been widely employed for graph-to-text...

10/13/2020 · Random Network Distillation as a Diversity Metric for Both Image and Text Generation
Generative models are increasingly able to produce remarkably high quali...

05/04/2020 · Distributional Discrepancy: A Metric for Unconditional Text Generation
The goal of unconditional text generation is training a model with real ...

06/22/2022 · Understanding the Properties of Generated Corpora
Models for text generation have become focal for many research tasks and...

04/13/2020 · Reverse Engineering Configurations of Neural Text Generation Models
This paper seeks to develop a deeper understanding of the fundamental pr...

10/28/2021 · Preventing posterior collapse in variational autoencoders for text generation via decoder regularization
Variational autoencoders trained to minimize the reconstruction error ar...

04/01/2021 · Towards creativity characterization of generative models via group-based subset scanning
Deep generative models, such as Variational Autoencoders (VAEs), have be...
