Adapting GPT, GPT-2 and BERT Language Models for Speech Recognition

07/29/2021
by Xianrui Zheng, et al.

Language models (LMs) pre-trained on massive amounts of text, in particular bidirectional encoder representations from Transformers (BERT), generative pre-training (GPT), and GPT-2, have become a key technology for many natural language processing tasks. In this paper, we present results using fine-tuned GPT, GPT-2, and their combination for automatic speech recognition (ASR). Unlike the unidirectional LMs GPT and GPT-2, BERT is bidirectional, so the direct product of its output probabilities is no longer a valid language prior probability. A conversion method is proposed to compute the correct language prior probability based on bidirectional LM outputs in a mathematically exact way. Experimental results on the widely used AMI and Switchboard ASR tasks showed that the combination of the fine-tuned GPT and GPT-2 outperformed the combination of three neural LMs with different architectures trained from scratch on the in-domain text by up to a 12% relative word error rate reduction (WERR). Furthermore, the proposed conversion for language prior probabilities enables BERT to receive an extra 3% relative WERR, and the combination of BERT, GPT, and GPT-2 results in further improvements.
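To make the rescoring setup concrete, the sketch below scores an N-best list with two pre-trained LMs: GPT-2 gives an exact left-to-right log-probability, while BERT is scored with the common pseudo-log-likelihood approximation (mask each position in turn and sum the masked-token log-probabilities). Note that this pseudo-log-likelihood is only an approximation of a language prior; the paper's contribution is a mathematically exact conversion, which is not reproduced here. The example hypotheses, model checkpoints, and interpolation weight lam are illustrative assumptions, not values from the paper.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForMaskedLM

# Hypothetical N-best list from a first-pass ASR decoder.
hypotheses = [
    "the meeting will start at nine",
    "the meeting will start at mine",
]

gpt2_tok = AutoTokenizer.from_pretrained("gpt2")
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2").eval()

bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def causal_logprob(text):
    """Exact log P(text) under a unidirectional LM (GPT/GPT-2)."""
    ids = gpt2_tok(text, return_tensors="pt").input_ids
    # Shift by one: the model predicts token t from tokens < t.
    logp = gpt2(ids).logits[:, :-1].log_softmax(-1)
    tgt = ids[:, 1:]
    return logp.gather(2, tgt.unsqueeze(-1)).sum().item()

@torch.no_grad()
def bert_pll(text):
    """Pseudo-log-likelihood: sum of log P(token | all other tokens),
    masking one position per forward pass. An approximation, not the
    paper's exact prior conversion."""
    ids = bert_tok(text, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = bert_tok.mask_token_id
        logits = bert(masked.unsqueeze(0)).logits[0, i]
        total += logits.log_softmax(-1)[ids[i]].item()
    return total

# Log-linear interpolation of the two LM scores; the weight is
# illustrative and would be tuned on a dev set, together with the
# first-pass acoustic/LM scores.
lam = 0.5
for hyp in hypotheses:
    score = lam * causal_logprob(hyp) + (1 - lam) * bert_pll(hyp)
    print(f"{score:9.2f}  {hyp}")

In practice the hypothesis with the highest combined score is selected, which is how fine-tuned GPT, GPT-2, and BERT scores can be combined in a second pass over ASR output.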
