Enhance Language Identification using Dual-mode Model with Knowledge Distillation

03/07/2022
by Hexin Liu, et al.

In this paper, we propose to employ a dual-mode framework on the x-vector self-attention (XSA-LID) model with knowledge distillation (KD) to enhance its language identification (LID) performance for both long and short utterances. The dual-mode XSA-LID model is trained by jointly optimizing the full and short modes, whose respective inputs are the full-length speech and a short clip extracted from it by a specific Boolean mask; KD is then applied to further boost performance on short utterances. In addition, we investigate the impact of clip-wise linguistic variability and lexical integrity on LID by analyzing how LID performance varies with the lengths and positions of the mimicked speech clips. We evaluated our approach on the MLS14 data from the NIST 2017 LRE. With the 3 s random-location Boolean mask, our proposed method achieved a 19.23% relative improvement in average cost compared with the XSA-LID model on 3 s speech, with improvements also observed on 10 s and 30 s speech.

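To make the training recipe sketched in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of dual-mode joint training with knowledge distillation. The model interface, the `make_boolean_mask` helper, the temperature, and the KD weight are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of dual-mode training with knowledge distillation.
# Assumes a generic LID classifier `model` that maps a waveform batch
# (batch, frames) to language logits (batch, num_languages). All names
# and hyperparameters here are illustrative, not from the paper's code.
import torch
import torch.nn.functional as F

def make_boolean_mask(num_frames: int, clip_frames: int) -> torch.Tensor:
    """Boolean mask selecting a random-location clip of `clip_frames` frames."""
    start = torch.randint(0, num_frames - clip_frames + 1, (1,)).item()
    mask = torch.zeros(num_frames, dtype=torch.bool)
    mask[start:start + clip_frames] = True
    return mask

def dual_mode_kd_loss(model, speech, labels, clip_frames,
                      temperature=2.0, kd_weight=1.0):
    """Jointly optimize full and short modes; distil full-mode posteriors into the short mode."""
    num_frames = speech.shape[1]

    # Full mode: the entire utterance.
    full_logits = model(speech)

    # Short mode: the clip selected by the random-location Boolean mask.
    mask = make_boolean_mask(num_frames, clip_frames)
    short_logits = model(speech[:, mask])

    # Classification losses for both modes.
    ce_full = F.cross_entropy(full_logits, labels)
    ce_short = F.cross_entropy(short_logits, labels)

    # KD: align short-mode predictions with the (detached) full-mode posteriors.
    kd = F.kl_div(
        F.log_softmax(short_logits / temperature, dim=-1),
        F.softmax(full_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    return ce_full + ce_short + kd_weight * kd
```

In this sketch the full mode acts as the teacher for the short mode within a single shared network, which is one plausible way to realize the joint optimization and distillation described in the abstract.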