Decoupling recognition and transcription in Mandarin ASR

08/02/2021
by Jiahong Yuan et al.

Much of the recent literature on automatic speech recognition (ASR) takes an end-to-end approach. Unlike English, where the writing system is closely related to sound, Chinese characters (Hanzi) represent meaning, not sound. We propose factoring audio -> Hanzi into two sub-tasks: (1) audio -> Pinyin and (2) Pinyin -> Hanzi, where Pinyin is a system of phonetic transcription for standard Chinese. Factoring the audio -> Hanzi task in this way achieves a 3.9% CER (character error rate) on the Aishell-1 corpus, the best result reported on this dataset so far.
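To make the factoring concrete, below is a minimal sketch of the cascade, assuming two stub models: the class names AudioToPinyin and PinyinToHanzi, the fixed Pinyin output, and the toy lookup table are illustrative placeholders, not the authors' implementation. In the paper each stage would be a trained neural model.

from typing import List


class AudioToPinyin:
    """Sub-task (1): audio -> Pinyin (stub standing in for an acoustic model)."""

    def transcribe(self, audio: bytes) -> List[str]:
        # Hypothetical output for one utterance; a real model would decode
        # tone-marked Pinyin tokens from the audio signal.
        return ["ni3", "hao3"]


class PinyinToHanzi:
    """Sub-task (2): Pinyin -> Hanzi (stub standing in for a seq2seq model)."""

    _TOY_TABLE = {("ni3", "hao3"): "你好"}

    def convert(self, pinyin: List[str]) -> str:
        # A trained model would disambiguate homophones from context;
        # here a toy lookup table keeps the sketch runnable.
        return self._TOY_TABLE.get(tuple(pinyin), "<unk>")


def recognize(audio: bytes) -> str:
    """Factor audio -> Hanzi as audio -> Pinyin followed by Pinyin -> Hanzi."""
    pinyin = AudioToPinyin().transcribe(audio)
    return PinyinToHanzi().convert(pinyin)


if __name__ == "__main__":
    print(recognize(b"\x00"))  # prints: 你好

One appeal of such a decomposition is that the Pinyin -> Hanzi stage operates purely on text, isolating homophone disambiguation from acoustic modeling.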


