Domain Adversarial Neural Networks for Dysarthric Speech Recognition
Speech recognition systems have improved dramatically over the last few years; however, their performance degrades significantly on accented or impaired speech. This work explores domain adversarial neural networks (DANN) for speaker-independent speech recognition on the UAS dataset of dysarthric speech. The classification task on 10 spoken digits is performed using an end-to-end CNN that takes raw audio as input. The results are compared to a speaker-adaptive (SA) model as well as speaker-dependent (SD) and multi-task learning (MTL) models. The experiments conducted in this paper show that DANN achieves an absolute recognition rate of 74.91%, an improvement of 12.18% over the baseline, approaching the SA model's recognition rate of 77.65%. We also show that when dysarthric speech data are available, DANN and MTL perform similarly, but when they are not, DANN performs better than MTL.
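The core mechanism behind DANN is the gradient reversal trick: a shared feature extractor feeds both a task classifier and a domain classifier, and the domain-loss gradient is negated (scaled by a factor lambda) before it reaches the feature extractor, pushing the features toward domain invariance. The following is a minimal NumPy sketch of one such training step with linear heads and logistic losses; the architecture, shapes, and the names `W_f`, `w_y`, `w_d`, and `lam` are illustrative assumptions, not the paper's CNN.

```python
import numpy as np

# Toy DANN update (a sketch, not the paper's model): a linear feature
# extractor W_f feeds a label head w_y (e.g. digit class, binarised here)
# and a domain head w_d (e.g. control vs. dysarthric speaker).
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))           # batch of inputs
y = rng.integers(0, 2, size=8)        # task labels
d = rng.integers(0, 2, size=8)        # domain labels

W_f = rng.normal(scale=0.1, size=(4, 3))   # feature extractor
w_y = rng.normal(scale=0.1, size=3)        # label head
w_d = rng.normal(scale=0.1, size=3)        # domain head
lam, lr = 0.5, 0.1                         # reversal strength, step size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass.
h = x @ W_f                  # shared features
p_y = sigmoid(h @ w_y)       # label prediction
p_d = sigmoid(h @ w_d)       # domain prediction

# Logistic-loss gradients, per example.
g_y = (p_y - y)[:, None] * h        # grad w.r.t. w_y
g_d = (p_d - d)[:, None] * h        # grad w.r.t. w_d
dh_y = (p_y - y)[:, None] * w_y     # grad into features from label loss
dh_d = (p_d - d)[:, None] * w_d     # grad into features from domain loss

# Both heads descend their own losses, but the feature extractor sees the
# domain gradient *reversed* (scaled by -lam) -- the core DANN trick.
w_y -= lr * g_y.mean(axis=0)
w_d -= lr * g_d.mean(axis=0)
W_f -= lr * (x.T @ (dh_y - lam * dh_d)) / len(x)
```

In a full implementation this reversal is usually packaged as a gradient reversal layer (identity in the forward pass, gradient negation in the backward pass) so the whole network can be trained with standard backpropagation.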