Modern Methods for Text Generation

09/10/2020
by Dimas Munoz Montesinos, et al.

Synthetic text generation is challenging and has had limited success. Recently, a new architecture, called the Transformer, has allowed machine learning models to understand sequential data better, improving tasks such as translation and summarization. BERT and GPT-2, which use Transformers at their core, have shown strong performance on tasks such as text classification, translation and NLI. In this article, we analyse both algorithms and compare their output quality in text generation tasks.
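To illustrate the kind of generation pipeline discussed in the article, the sketch below samples a continuation from a pretrained GPT-2 model using the Hugging Face transformers library. The prompt, model size and decoding parameters are illustrative assumptions, not the authors' exact setup.

# Minimal sketch: sampling text from a pretrained GPT-2 model.
# The prompt and decoding settings below are illustrative assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Modern methods for text generation"
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus (top-p) sampling tends to give more varied continuations
# than greedy decoding.
output_ids = model.generate(
    inputs["input_ids"],
    max_length=60,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))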

Related research

CoNT: Contrastive Neural Text Generation (05/29/2022)
Recently, contrastive learning attracts increasing interests in neural t...

Neural Text Generation: A Practical Guide (11/27/2017)
Deep learning methods have recently achieved great empirical success on ...

Translation-equivariant Image Quantizer for Bi-directional Image-Text Generation (12/01/2021)
Recently, vector-quantized image modeling has demonstrated impressive pe...

Stepwise Extractive Summarization and Planning with Structured Transformers (10/06/2020)
We propose encoder-centric stepwise models for extractive summarization ...

Momentum Calibration for Text Generation (12/08/2022)
The input and output of most text generation tasks can be transformed to...

EEL: Efficiently Encoding Lattices for Reranking (06/01/2023)
Standard decoding approaches for conditional text generation tasks typic...

Distilling the Knowledge of BERT for Text Generation (11/10/2019)
Large-scale pre-trained language model, such as BERT, has recently achie...
