A Graph Total Variation Regularized Softmax for Text Generation

01/01/2021
by   Liu Bin, et al.

The softmax operator is one of the most important functions in machine learning models. When neural networks are applied to multi-category classification, the correlations among different categories are often ignored. For example, in text generation, a language model chooses each new word based only on the words previously selected as its context. In this scenario, the corpus-level link statistics of concurrent words (an analogue of natural patterns of expression) are also valuable in choosing the next word, and can help improve the fluency and smoothness of the generated sentence. To fully exploit such information, we propose a graph softmax function for text generation, so that the final classification result is determined jointly by the language model and the graphical text relationships among words. We use a graph total variation term to regularize the softmax and thereby incorporate the concurrence relationships into the language model, encouraging the total variation of the generated words to be locally small. We apply the proposed graph softmax to GPT-2 for the text generation task. Experimental results demonstrate that the proposed graph softmax achieves better BLEU and perplexity than the standard softmax. Human testers can also easily distinguish text generated with the graph softmax from text generated with the standard softmax.
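The abstract does not spell out the exact objective, but the idea of penalizing the graph total variation of the predicted distribution over a word co-occurrence graph can be illustrated with a minimal PyTorch sketch. The adjacency weights, the trade-off parameter `lam`, and the precise form of the penalty below are assumptions made for illustration; the paper's actual graph softmax formulation is given in the full text.

```python
import torch
import torch.nn.functional as F

def graph_total_variation(probs, adjacency):
    """Graph TV of a probability vector over a word co-occurrence graph:
    sum over word pairs (i, j) of A[i, j] * |p_i - p_j| (up to a constant factor).
    A small value means connected (frequently co-occurring) words get similar probability.
    """
    diff = probs.unsqueeze(-1) - probs.unsqueeze(-2)   # (batch, V, V) pairwise differences
    return (adjacency * diff.abs()).sum(dim=(-2, -1))

def graph_regularized_loss(logits, targets, adjacency, lam=0.1):
    """Cross-entropy plus a graph-TV penalty on the predicted next-word distribution.
    `lam` is a hypothetical trade-off weight; the paper's exact objective may differ.
    """
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=-1)
    tv = graph_total_variation(probs, adjacency).mean()
    return ce + lam * tv

# Toy usage: vocabulary of 5 words, batch of 3 next-word predictions.
V = 5
adjacency = torch.rand(V, V)
adjacency = (adjacency + adjacency.T) / 2              # symmetric co-occurrence weights
adjacency.fill_diagonal_(0)
logits = torch.randn(3, V, requires_grad=True)
targets = torch.tensor([1, 3, 0])
loss = graph_regularized_loss(logits, targets, adjacency)
loss.backward()
```

In this sketch the co-occurrence graph only enters through the regularizer added to the training loss; in the paper the graph information is built into the softmax itself, but the sketch conveys how a total variation term ties the predicted probabilities of concurrent words together.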

