Residual Learning of Neural Text Generation with n-gram Language Model

10/26/2022
by Huayang Li, et al.

N-gram language models (LMs) have been largely superseded by neural LMs, as the latter exhibit better performance. However, we find that n-gram models can achieve satisfactory performance on a large proportion of test cases, indicating that they already capture abundant knowledge of the language at relatively low computational cost. Motivated by this observation, we propose to learn a neural LM that fits the residual between an n-gram LM and the real-data distribution. The combination of n-gram and neural LMs not only allows the neural part to focus on a deeper understanding of language but also provides a flexible way to customize an LM by switching the underlying n-gram model without changing the neural model. Experimental results on three typical language tasks (i.e., language modeling, machine translation, and summarization) demonstrate that our approach consistently attains additional performance gains over popular standalone neural models. We also show that our approach allows for effective domain adaptation by simply switching to a domain-specific n-gram model, without any extra training. Our code is released at https://github.com/ghrua/NgramRes.
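The abstract does not spell out the exact fusion mechanism, but one natural reading of "fitting the residual" is that the neural network produces scores that are added to the n-gram log-probabilities before normalization, so the neural part only needs to model what the n-gram LM gets wrong. The sketch below illustrates that reading; it is a simplified assumption, not the released implementation, and names such as `NeuralResidualLM` and `ngram_log_probs` are illustrative.

```python
# Minimal sketch of combining an n-gram LM with a neural residual LM,
# assuming the fusion is an additive correction to the n-gram
# log-probabilities before the softmax (an assumption, not the paper's
# exact formulation or released code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralResidualLM(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.proj = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens: torch.Tensor,
                ngram_log_probs: torch.Tensor) -> torch.Tensor:
        """tokens: (batch, seq_len) token ids.
        ngram_log_probs: (batch, seq_len, vocab) log p_ngram(x_t | x_<t),
        precomputed by any off-the-shelf n-gram toolkit.
        Returns combined next-token log-probabilities of the same shape."""
        hidden, _ = self.rnn(self.embed(tokens))
        residual_logits = self.proj(hidden)            # neural "residual" scores
        combined = ngram_log_probs + residual_logits   # fuse before normalizing
        return F.log_softmax(combined, dim=-1)


def training_step(model: NeuralResidualLM, tokens, targets, ngram_log_probs):
    """Standard cross-entropy on the combined distribution: the gradient
    only pushes the neural part to correct the n-gram model's errors."""
    log_probs = model(tokens, ngram_log_probs)
    loss = F.nll_loss(log_probs.view(-1, log_probs.size(-1)), targets.view(-1))
    loss.backward()
    return loss
```

Under this reading, the n-gram term enters only as an additive bias on the logits, so swapping in a different (e.g., domain-specific) n-gram model at inference time requires no retraining, which is consistent with the domain-adaptation claim in the abstract.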

