Domain adversarial learning for emotion recognition

10/24/2019
by Zheng Lian et al.

In practical emotion recognition applications, the speakers encountered at test time are often absent from the training corpus. This mismatch between training and testing speakers degrades the performance of the trained model. To address this problem, the model should focus on emotion-related information while ignoring differences in speaker identity. In this paper, we investigate the use of the domain adversarial neural network (DANN) to extract a representation shared across speakers. The primary task is to predict emotion labels; the secondary task is to learn a common representation in which speaker identities cannot be distinguished. Through a gradient reversal layer, the gradients coming from the secondary task push the representations of different speakers closer together. To verify the effectiveness of the proposed method, we conduct experiments on the IEMOCAP database. Experimental results demonstrate that the proposed framework yields an absolute improvement of 3.48%.
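To make the mechanism concrete, below is a minimal PyTorch sketch of a gradient reversal layer feeding a speaker classifier alongside an emotion classifier. The feed-forward encoder, layer sizes, and the lambda_ scaling factor are illustrative assumptions, not the exact architecture or hyperparameters from the paper.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the gradient flowing back from the speaker classifier.
        return -ctx.lambda_ * grad_output, None


class DANNEmotionModel(nn.Module):
    """Shared encoder with an emotion head (primary task) and a speaker head
    behind a gradient reversal layer (secondary task)."""

    def __init__(self, input_dim, hidden_dim, num_emotions, num_speakers, lambda_=1.0):
        super().__init__()
        self.lambda_ = lambda_
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.emotion_head = nn.Linear(hidden_dim, num_emotions)
        self.speaker_head = nn.Linear(hidden_dim, num_speakers)

    def forward(self, x):
        h = self.encoder(x)
        emotion_logits = self.emotion_head(h)
        # Gradients from the speaker loss are reversed before reaching the encoder,
        # which pushes it toward speaker-invariant representations.
        speaker_logits = self.speaker_head(GradientReversal.apply(h, self.lambda_))
        return emotion_logits, speaker_logits
```

In training, both heads are optimized with standard cross-entropy losses on the same batch; because of the reversed gradient, the speaker head learns to identify speakers while the shared encoder is driven to remove speaker-discriminative information, leaving the emotion head with a more speaker-invariant representation.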
