Self-and-Mixed Attention Decoder with Deep Acoustic Structure for Transformer-based LVCSR

06/18/2020
by   Xinyuan Zhou, et al.

The Transformer has shown impressive performance in automatic speech recognition. It uses an encoder-decoder structure with self-attention to learn the relationship between the high-level representation of the source inputs and the embedding of the target outputs. In this paper, we propose a novel decoder structure that features a self-and-mixed attention decoder (SMAD) with a deep acoustic structure (DAS) to improve the acoustic representation of Transformer-based LVCSR. Specifically, we introduce a self-attention mechanism to learn a multi-layer deep acoustic structure for multiple levels of acoustic abstraction. We also design a mixed attention mechanism that simultaneously learns the alignment between different levels of acoustic abstraction and the corresponding linguistic information in a shared embedding space. The ASR experiments on Aishell-1 show that the proposed structure achieves a CER of 4.8, which is, to the best of our knowledge, the best result obtained on this task.
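The mixed-attention idea described in the abstract can be illustrated with a minimal sketch: decoder queries attend jointly over acoustic and linguistic embeddings that live in one shared space, so a single attention distribution covers both modalities. This is an assumption-laden toy in numpy, not the paper's actual implementation; all names, shapes, and the single-head formulation are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixed_attention(queries, acoustic, text):
    """Single-head scaled dot-product attention in which each decoder
    query attends over the concatenation of acoustic and linguistic
    embeddings (hypothetical sketch of the shared-space mixed attention).
    """
    # One memory holding both modalities along the time axis, so one
    # attention distribution spans acoustic frames and text tokens.
    memory = np.concatenate([acoustic, text], axis=0)   # (Ta+Tt, d)
    d = queries.shape[-1]
    scores = queries @ memory.T / np.sqrt(d)            # (Tq, Ta+Tt)
    weights = softmax(scores, axis=-1)
    return weights @ memory, weights                    # (Tq, d), (Tq, Ta+Tt)

# Toy shapes: 5 decoder steps, 12 acoustic frames, 7 text tokens, dim 16.
rng = np.random.default_rng(0)
q = rng.standard_normal((5, 16))
a = rng.standard_normal((12, 16))
t = rng.standard_normal((7, 16))
out, w = mixed_attention(q, a, t)
print(out.shape, w.shape)  # (5, 16) (5, 19)
```

In a real Transformer decoder this would be multi-headed, batched, and preceded by the self-attention layers that build the multi-layer acoustic abstraction; the sketch only shows how a single attention distribution can align one query stream against two concatenated memories.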


