Self-and-Mixed Attention Decoder with Deep Acoustic Structure for Transformer-based LVCSR

by Xinyuan Zhou, et al.
Shanghai Normal University
National University of Singapore
Beijing Unisound Information Technology Co., Ltd.
KTH Royal Institute of Technology

The Transformer has shown impressive performance in automatic speech recognition. It uses an encoder-decoder structure with self-attention to learn the relationship between the high-level representation of the source inputs and the embedding of the target outputs. In this paper, we propose a novel decoder structure that features a self-and-mixed attention decoder (SMAD) with a deep acoustic structure (DAS) to improve the acoustic representation of Transformer-based LVCSR. Specifically, we introduce a self-attention mechanism to learn a multi-layer deep acoustic structure for multiple levels of acoustic abstraction. We also design a mixed attention mechanism that simultaneously learns the alignment between different levels of acoustic abstraction and the corresponding linguistic information in a shared embedding space. The ASR experiments on Aishell-1 show that the proposed structure achieves a CER of 4.8%, which is, to the best of our knowledge, the best result reported on this task.
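The mixed attention described in the abstract attends to acoustic and linguistic representations in a single shared embedding space. A minimal sketch of that idea is one scaled dot-product attention pass whose keys and values are the concatenation of the two streams; the function names, shapes, and random inputs below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixed_attention(queries, acoustic, linguistic):
    """Scaled dot-product attention whose keys/values concatenate the
    acoustic and linguistic representations, so a single pass aligns
    each decoder query with both modalities at once."""
    kv = np.concatenate([acoustic, linguistic], axis=0)  # (Ta + Tl, d)
    d = queries.shape[-1]
    scores = queries @ kv.T / np.sqrt(d)                 # (Tq, Ta + Tl)
    weights = softmax(scores, axis=-1)                   # rows sum to 1
    return weights @ kv, weights                         # (Tq, d), (Tq, Ta + Tl)

# Toy example: 3 decoder states, 5 acoustic frames, 4 token embeddings.
rng = np.random.default_rng(0)
d = 8
q  = rng.standard_normal((3, d))   # decoder states (queries)
ac = rng.standard_normal((5, d))   # acoustic frames from one DAS layer
lg = rng.standard_normal((4, d))   # target-token embeddings
out, w = mixed_attention(q, ac, lg)
print(out.shape, w.shape)  # (3, 8) (3, 9)
```

Because the attention weights span both streams, inspecting `w` shows how much each decoder state draws from acoustic versus linguistic context at that step.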



