Attention-Based End-to-End Speech Recognition on Voice Search

07/22/2017
by Changhao Shan, et al.

Recently, there has been increasing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments. In this paper, we explore the use of an attention-based encoder-decoder model for Mandarin speech recognition on a voice search task. We propose a smoothing method for the attention mechanism and compare it with content-based attention and convolutional attention. Moreover, frame skipping is employed for fast training and convergence. On the XiaoMi TV voice search dataset, we achieve a character error rate (CER) of 3.58% and a sentence error rate (SER) of 7.43% without a language model; together with a trigram language model, we reach a CER of 2.81%.
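
The abstract mentions two of the paper's training tricks: attention smoothing, which flattens the attention weights so they can cover a longer context than a sharply peaked softmax, and frame skipping, which subsamples acoustic frames to shorten the encoder input and speed up training. The NumPy sketch below only illustrates these ideas; it assumes smoothing replaces the softmax with an elementwise sigmoid followed by normalization, and the function names, shapes, and exact smoothing form are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def content_attention(query, keys):
        """Standard content-based attention: softmax over dot-product scores.
        query: (d,) decoder state; keys: (T, d) encoder outputs.
        Returns attention weights of shape (T,)."""
        scores = keys @ query                  # (T,)
        scores = scores - scores.max()         # numerical stability
        weights = np.exp(scores)
        return weights / weights.sum()

    def smoothed_attention(query, keys):
        """Illustrative attention smoothing: elementwise sigmoid on the
        scores, then normalization. The weights are flatter than softmax
        weights, so attention can spread over a longer context."""
        scores = keys @ query
        sig = 1.0 / (1.0 + np.exp(-scores))
        return sig / sig.sum()

    def skip_frames(features, skip=3):
        """Frame skipping: keep every `skip`-th acoustic frame to shorten
        the input sequence and speed up training and convergence."""
        return features[::skip]

    # Toy usage: 100 frames of 40-dim filterbank features, 40-dim decoder state.
    feats = skip_frames(np.random.randn(100, 40), skip=3)
    q = np.random.randn(40)
    print(content_attention(q, feats).shape, smoothed_attention(q, feats).shape)

Because the sigmoid does not exponentiate the gaps between scores, the smoothed weights are less peaked than softmax weights, which is the intuition behind using smoothing to attend over longer utterances.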

Related research

- An improved hybrid CTC-Attention model for speech recognition (10/29/2018): Recently, end-to-end speech recognition with a hybrid model consisting o...
- Personalization for BERT-based Discriminative Speech Recognition Rescoring (07/13/2023): Recognition of personalized content remains a challenge in end-to-end sp...
- Attention based end to end Speech Recognition for Voice Search in Hindi and English (11/15/2021): We describe here our work with automatic speech recognition (ASR) in the...
- Explaining the Attention Mechanism of End-to-End Speech Recognition Using Decision Trees (10/08/2021): The attention mechanism has largely improved the performance of end-to-e...
- End-to-end Speech Recognition with Adaptive Computation Steps (08/30/2018): In this paper, we present Adaptive Computation Steps (ACS) algorithm, wh...
- End-to-End Automatic Speech Recognition Integrated With CTC-Based Voice Activity Detection (02/03/2020): This paper integrates a voice activity detection (VAD) function with end...
- CNN-based MultiChannel End-to-End Speech Recognition for everyday home environments (11/07/2018): Casual conversations involving multiple speakers and noises from surroun...