
Gated Embeddings in End-to-End Speech Recognition for Conversational-Context Fusion

by Suyoun Kim, et al.

We present a novel conversational-context aware end-to-end speech recognizer based on a gated neural network that incorporates conversational-context, word, and speech embeddings. Unlike conventional speech recognition models, our model learns longer conversational-context information that spans across sentences and is consequently better at recognizing long conversations. Specifically, we propose to use text-based external word and/or sentence embeddings (i.e., fastText, BERT) within an end-to-end framework, yielding a significant improvement in word error rate with better conversational-context representation. We evaluate the models on the Switchboard conversational speech corpus and show that our model outperforms standard end-to-end speech recognition models.
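The core idea of the gated fusion can be illustrated with a small sketch: a learned sigmoid gate decides, per dimension, how much of the conversational-context embedding to mix into the decoder state. This is a minimal NumPy illustration, not the paper's exact parameterization; the function and weight names (`gated_fusion`, `W_g`, `b_g`) are assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(decoder_state, context_emb, W_g, b_g):
    """Gating unit sketch: compute a sigmoid gate from the concatenated
    decoder state and conversational-context embedding, then form a
    convex, element-wise mixture of the two vectors."""
    concat = np.concatenate([decoder_state, context_emb])
    gate = sigmoid(W_g @ concat + b_g)  # each entry lies in (0, 1)
    # gate -> 1 keeps the acoustic decoder state; gate -> 0 leans on context
    return gate * decoder_state + (1.0 - gate) * context_emb
```

Because the gate values lie in (0, 1), each dimension of the fused vector is guaranteed to fall between the corresponding decoder-state and context values, so the context can modulate but never overwhelm the acoustic evidence.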



