A Random Gossip BMUF Process for Neural Language Modeling

09/19/2019
by Yiheng Huang, et al.

The LSTM language model is an essential component of industrial ASR systems. One important challenge in training an LSTM language model is how to scale out the learning process to leverage big data. A conventional approach, block momentum, provides a blockwise model-update filtering (BMUF) process that stabilizes learning and achieves almost linear speedup with no degradation for speech recognition with DNNs and LSTMs. However, it requires computing a global average over all nodes, and when the number of computing nodes is large, communication latency becomes a serious bottleneck. For this reason, BMUF is not suitable under restricted network conditions. In this paper, we present a decentralized BMUF process in which the model is split into components, and each component is updated by communicating with a few randomly chosen neighbor nodes holding the same component, followed by a BMUF-like process. We apply this method to several LSTM language modeling tasks. Experimental results show that our approach achieves consistently better performance than conventional BMUF. In particular, we obtain a lower perplexity than the single-GPU baseline on the WikiText-103 benchmark using 4 GPUs. In addition, no performance degradation is incurred when scaling to 8 and 16 GPUs. Last but not least, our approach has a much simpler network topology than the centralized one while delivering superior performance.
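The decentralized update described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the component partitioning, the pairwise random-neighbor averaging rule, and the hyperparameters (`lr`, `block_momentum`, `block_lr`) are illustrative assumptions; the real system would run over GPUs with point-to-point communication rather than in-process arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

def gossip_bmuf_step(params, momenta, local_grads, lr=0.1,
                     block_momentum=0.9, block_lr=1.0, rng=rng):
    """One hypothetical round of a random-gossip BMUF process.

    params:      list over nodes of dicts {component_name: ndarray}
    momenta:     same structure; per-node block-momentum buffers
    local_grads: same structure; gradients from each node's local batches
    """
    n_nodes = len(params)
    # 1) Local SGD step on each node (stands in for a block of mini-batches).
    local = [{k: params[i][k] - lr * local_grads[i][k] for k in params[i]}
             for i in range(n_nodes)]
    # 2) Random gossip per component: each node averages with one randomly
    #    chosen neighbor holding the same component -- no global average.
    for k in params[0]:
        for i in range(n_nodes):
            j = rng.choice([n for n in range(n_nodes) if n != i])
            avg = 0.5 * (local[i][k] + local[j][k])
            local[i][k] = avg
            local[j][k] = avg
    # 3) BMUF-like update filtering: smooth the block-level delta with
    #    block momentum before applying it to the node's parameters.
    for i in range(n_nodes):
        for k in params[i]:
            delta = local[i][k] - params[i][k]
            momenta[i][k] = block_momentum * momenta[i][k] + block_lr * delta
            params[i][k] = params[i][k] + momenta[i][k]
    return params, momenta
```

In this sketch each node exchanges one component with a single random peer per round, so per-round communication stays constant as the node count grows, which is the property the paper contrasts with the all-reduce-style global average of conventional BMUF.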


