Enhance Language Identification using Dual-mode Model with Knowledge Distillation

03/07/2022
by Hexin Liu, et al.

In this paper, we propose to employ a dual-mode framework on the x-vector self-attention (XSA-LID) model with knowledge distillation (KD) to enhance its language identification (LID) performance for both long and short utterances. The dual-mode XSA-LID model is trained by jointly optimizing the full and short modes, whose respective inputs are the full-length speech and a short clip extracted from it by a specific Boolean mask; KD is then applied to further boost performance on short utterances. In addition, we investigate the impact of clip-wise linguistic variability and lexical integrity on LID by analyzing how LID performance varies with the lengths and positions of the mimicked speech clips. We evaluated our approach on the MLS14 data from the NIST 2017 LRE. With the 3 s random-location Boolean mask, our proposed method improved the average cost relative to the XSA-LID model on 3 s, 10 s, and 30 s speech, including a 19.23% relative improvement on 3 s speech.
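The scheme above pairs a full-length input with a short clip selected by a random-location Boolean mask and distills knowledge from the full mode into the short mode. A minimal sketch of these two ingredients, assuming frame-level inputs and a standard temperature-scaled KL distillation loss (the function names and the exact KD formulation are illustrative, not the paper's recipe):

```python
import numpy as np

def random_clip_mask(total_len, clip_len, rng=None):
    """Boolean mask selecting a contiguous clip of clip_len frames
    at a uniformly random location within total_len frames."""
    if rng is None:
        rng = np.random.default_rng()
    start = rng.integers(0, total_len - clip_len + 1)
    mask = np.zeros(total_len, dtype=bool)
    mask[start:start + clip_len] = True
    return mask

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL divergence between the short-mode (student)
    and full-mode (teacher) output distributions; a common KD loss form."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    p = softmax(teacher_logits / T)  # softened teacher distribution
    q = softmax(student_logits / T)  # softened student distribution
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

# Example: mask a 10 s utterance (1000 frames at 100 fps) down to a 3 s clip.
mask = random_clip_mask(1000, 300)
```

During joint training, the same utterance would be fed once in full and once through such a mask, with the KD term pulling the short-mode posteriors toward the full-mode ones.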


