Utterance-Wise Meeting Transcription System Using Asynchronous Distributed Microphones
A novel framework for meeting transcription using asynchronous microphones is proposed in this paper. It consists of audio synchronization, speaker diarization, utterance-wise speech enhancement using guided source separation, automatic speech recognition, and duplication reduction. Performing speaker diarization before speech enhancement enables the system to handle overlapped speech without having to compensate for the sampling frequency mismatch between microphones. Evaluation on our real meeting datasets showed that our framework achieved a character error rate (CER) of 28.7% with the distributed microphones, compared with a CER of 38.2% for a monaural microphone placed at the center of the table, and came within 2.1 percentage points of the CER of headset microphone-based transcription.
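The five stages above form a sequential pipeline. The sketch below illustrates that data flow only; every stage body is a hypothetical placeholder (the actual system uses signal-level synchronization, a diarization model, guided source separation, and a trained ASR model, none of which are reproduced here), and all function and class names are illustrative assumptions rather than the authors' API.

```python
# Hypothetical sketch of the utterance-wise transcription pipeline.
# All stage implementations are placeholders; only the ordering of the
# stages (sync -> diarize -> enhance -> ASR -> dedupe) follows the text.
from dataclasses import dataclass


@dataclass
class Utterance:
    speaker: str
    start: float  # seconds
    end: float    # seconds
    text: str = ""


def synchronize(channels):
    # Placeholder: align recordings from asynchronous microphones
    # (e.g. via cross-correlation against a reference channel).
    return channels


def diarize(channels):
    # Placeholder: estimate "who spoke when". Running diarization before
    # enhancement lets overlapped regions be split into per-speaker
    # utterances; dummy output with an overlap between 1.5 s and 2.0 s.
    return [Utterance("spk1", 0.0, 2.0), Utterance("spk2", 1.5, 3.0)]


def enhance(channels, utt):
    # Placeholder for guided source separation (GSS): extract the target
    # speaker's signal for this utterance using the diarization result.
    return channels


def recognize(audio, utt):
    # Placeholder ASR: return a dummy transcript for the utterance.
    return f"transcript of {utt.speaker} [{utt.start:.1f}-{utt.end:.1f}]"


def reduce_duplicates(utterances):
    # Placeholder duplication reduction: drop repeated transcripts, which
    # can arise when the same speech is picked up in several segments.
    seen, kept = set(), []
    for u in utterances:
        if u.text not in seen:
            seen.add(u.text)
            kept.append(u)
    return kept


def transcribe_meeting(channels):
    channels = synchronize(channels)
    utterances = diarize(channels)
    for u in utterances:
        u.text = recognize(enhance(channels, u), u)
    return reduce_duplicates(utterances)


result = transcribe_meeting(channels=[b"mic1", b"mic2", b"mic3"])
for u in result:
    print(u.speaker, u.text)
```

The key design point mirrored here is the stage order: because diarization runs on the (roughly) synchronized signals first, each enhancement and recognition step operates on a single speaker's utterance, so overlapped speech never reaches the ASR stage unseparated.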