Distilling Knowledge Using Parallel Data for Far-field Speech Recognition

02/20/2018
by   Jiangyan Yi, et al.

To improve the performance of far-field speech recognition, this paper proposes to distill knowledge from a close-talking model to a far-field model using parallel data. The close-talking model is called the teacher model, and the far-field model is called the student model. The student model is trained to imitate the output distributions of the teacher model. This constraint is realized by minimizing the Kullback-Leibler (KL) divergence between the output distributions of the student model and the teacher model. Experimental results on the AMI corpus show that the best student model achieves up to a 4.7% relative word error rate reduction compared with conventionally-trained baseline models.
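The KL-divergence constraint described in the abstract can be written as a standard distillation loss. Below is a minimal, illustrative sketch in PyTorch; the model definitions, feature shapes, and the temperature parameter are assumptions for demonstration and are not taken from the paper.

```python
# Minimal sketch of teacher-student distillation with a KL-divergence loss.
# Assumes parallel close-talking / far-field features; all names are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between the teacher and student output distributions."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # 'batchmean' averages the KL over the batch; scaling by T^2 is a common convention.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Placeholder acoustic models: the teacher sees close-talking features,
# the student sees the parallel far-field features of the same utterances.
teacher_model = torch.nn.Linear(40, 1000)
student_model = torch.nn.Linear(40, 1000)

close_talk_feats = torch.randn(8, 40)
far_field_feats = torch.randn(8, 40)

with torch.no_grad():                      # teacher is fixed during distillation
    teacher_logits = teacher_model(close_talk_feats)
student_logits = student_model(far_field_feats)

loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```

In this setup only the student parameters receive gradients; the teacher's outputs on the close-talking channel serve as soft targets for the far-field student.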
