Exploring End-to-End Multi-channel ASR with Bias Information for Meeting Transcription
Joint optimization of a multi-channel front-end and automatic speech recognition (ASR) has attracted much interest. While promising results have been reported for various tasks, past studies of its application to meeting transcription were limited to small-scale experiments. It therefore remains unclear whether such a joint framework can be beneficial in a more practical setup, where a massive amount of single-channel training data can be leveraged to build a strong ASR back-end. In this work, we investigate the joint modeling of a mask-based beamformer and Attention-Encoder-Decoder-based ASR in a setting with 75k hours of single-channel data and a relatively small amount of real multi-channel data for model training. We explore effective training procedures, including a comparison of simulated and real multi-channel training data. To guide recognition towards a target speaker and to handle overlapped speech, we also explore various combinations of bias information, such as directions of arrival and speaker profiles. We propose an effective location bias integration method, called deep concatenation, for the beamformer network. In our evaluation on various meeting recordings, we show that the proposed framework achieves a substantial word error rate reduction.
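To make the "deep concatenation" idea concrete, below is a minimal PyTorch sketch of a mask estimation network for the beamformer in which a location bias vector (e.g., an embedding of the target speaker's direction of arrival) is concatenated not only at the input but again at a deeper layer. The class name, layer sizes, BLSTM mask estimator, and the form of the DOA embedding are all illustrative assumptions; the abstract does not specify the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MaskEstimatorWithLocationBias(nn.Module):
    """Hypothetical sketch of "deep concatenation" for a beamformer's
    mask estimation network: a location bias vector (e.g., a DOA
    embedding) is re-injected at every layer, not only at the input."""

    def __init__(self, num_freq_bins: int, bias_dim: int, hidden_dim: int = 256):
        super().__init__()
        # First layer consumes the magnitude spectrum plus the bias vector.
        self.blstm = nn.LSTM(
            input_size=num_freq_bins + bias_dim,
            hidden_size=hidden_dim,
            batch_first=True,
            bidirectional=True,
        )
        # Output layer again receives the bias vector (the "deep" part:
        # concatenation is repeated at this deeper layer as well).
        self.proj = nn.Linear(2 * hidden_dim + bias_dim, num_freq_bins)

    def forward(self, magnitude: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
        # magnitude: (batch, time, freq); bias: (batch, bias_dim)
        bias_t = bias.unsqueeze(1).expand(-1, magnitude.size(1), -1)
        h, _ = self.blstm(torch.cat([magnitude, bias_t], dim=-1))
        mask = torch.sigmoid(self.proj(torch.cat([h, bias_t], dim=-1)))
        return mask  # time-frequency mask for the target speaker

# Usage: estimate a target-speaker mask from one reference channel and a
# (hypothetical) DOA embedding; the mask would then drive the spatial
# statistics of a mask-based beamformer such as MVDR (not shown here).
net = MaskEstimatorWithLocationBias(num_freq_bins=257, bias_dim=16)
mag = torch.randn(2, 100, 257).abs()   # |STFT| magnitudes
doa = torch.randn(2, 16)               # assumed DOA embedding
mask = net(mag, doa)                   # (2, 100, 257), values in (0, 1)
```

Re-injecting the bias at deeper layers keeps the target-speaker conditioning from washing out as features pass through the network, which is one plausible reading of why this integration method works well for overlapped speech.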