Distilling Multi-Level X-vector Knowledge for Small-footprint Speaker Verification

03/02/2023
by   Xuechen Liu, et al.

Deep speaker models yield low error rates in speaker verification. Nonetheless, this high performance tends to come at the cost of model size and computation time, making such models challenging to run under resource-limited conditions. We focus on small-footprint deep speaker embedding extraction, leveraging knowledge distillation. While prior work on this topic has addressed speaker embedding extraction at the utterance level, we propose to combine embeddings from various levels of the x-vector model (teacher network) to train small-footprint student networks. Results indicate the usefulness of frame-level information, with the student models being 85% smaller than their teacher, depending on the size of the teacher embeddings. Concatenating teacher embeddings yields student networks that reach performance comparable to the teacher while achieving a 75% size reduction from it. The findings and analyses extend to other x-vector variants.
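The multi-level distillation idea above can be sketched as a loss that matches the student's embeddings to the teacher's at both the frame level and the utterance level. This is a minimal illustration, not the paper's implementation: the function name `multi_level_distill_loss`, the MSE mismatch measure, and the weighting parameter `alpha` are assumptions chosen for clarity.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two embedding arrays of equal shape."""
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def multi_level_distill_loss(student_frame, student_utt,
                             teacher_frame, teacher_utt,
                             alpha=0.5):
    """Hypothetical multi-level distillation loss.

    Combines a frame-level term (per-frame embedding mismatch) with an
    utterance-level term (pooled embedding mismatch), weighted by alpha.
    Shapes assumed: frame embeddings (num_frames, dim), utterance
    embeddings (dim,).
    """
    frame_term = mse(student_frame, teacher_frame)
    utt_term = mse(student_utt, teacher_utt)
    return alpha * frame_term + (1.0 - alpha) * utt_term
```

In a real training loop the student would be optimized to minimize this loss against frozen teacher outputs; concatenating several teacher-level embeddings into a single target vector (as the abstract describes) is a straightforward extension via `np.concatenate`.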
