End-to-end Multimodal Emotion and Gender Recognition with Dynamic Weights of Joint Loss

09/04/2018 ∙ by Myungsu Chae, et al.

Multi-task learning (MTL) is one method for improving the generalizability of multiple tasks. To perform multiple classification tasks with a single neural network model, the losses of the individual tasks must be combined. Previous studies have mostly trained such models with a joint loss using static weights, where the weights between tasks were set uniformly or empirically, without further consideration. In this study, we propose a method for constructing a joint loss with dynamic weights that improves the total performance rather than the performance of any individual task, and we apply this method to an end-to-end multimodal emotion and gender recognition model using audio and video data. This approach yields appropriate weights for each task's loss by the end of training. In our experiments, emotion and gender recognition with the proposed method achieves a lower joint loss, computed as a negative log-likelihood, than the same model trained with static weights. Our proposed model also shows better generalizability than the compared models. To the best of our knowledge, this is the first work to demonstrate the strength of dynamic weights of a joint loss for maximizing total performance in emotion and gender recognition.

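The abstract does not specify which dynamic-weighting scheme is used. A widely used realization of the idea is the learned homoscedastic-uncertainty weighting of Kendall et al. (2018), in which each task's weight is a learnable parameter trained jointly with the network. The PyTorch sketch below illustrates that general approach for a two-task (emotion, gender) setup; it is not the authors' implementation, and the class name `DynamicWeightedLoss`, the class counts, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DynamicWeightedLoss(nn.Module):
    """Combine per-task losses with weights learned during training.

    Minimal sketch of uncertainty-based dynamic weighting (Kendall et al.,
    2018); the paper's exact scheme may differ. Each task t gets a learnable
    log-variance s_t, and the joint loss is
        sum_t exp(-s_t) * L_t + s_t,
    so tasks whose losses are noisier are automatically down-weighted.
    """

    def __init__(self, num_tasks: int = 2):
        super().__init__()
        # One learnable log-variance per task, initialized to 0 (weight = 1).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])  # dynamic task weight
            total = total + precision * loss + self.log_vars[i]
        return total


# Usage sketch: two classification heads (emotion, gender) sharing one
# backbone; the logits and labels below are random placeholders.
if __name__ == "__main__":
    criterion = nn.CrossEntropyLoss()  # per-task negative log-likelihood
    joint = DynamicWeightedLoss(num_tasks=2)

    emotion_logits = torch.randn(8, 7, requires_grad=True)  # 7 emotion classes (assumed)
    gender_logits = torch.randn(8, 2, requires_grad=True)   # 2 gender classes
    emotion_labels = torch.randint(0, 7, (8,))
    gender_labels = torch.randint(0, 2, (8,))

    loss = joint([criterion(emotion_logits, emotion_labels),
                  criterion(gender_logits, gender_labels)])
    loss.backward()  # gradients flow to the model and to the task weights
```

Because the weights are parameters of the loss module, the optimizer adjusts them alongside the network's weights, which matches the abstract's claim that proper per-task weights are obtained by the time training ends.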

Code Repositories

IROS2018_ws: End-to-end multimodal emotion and gender recognition with dynamic weights of joint loss