Multitask Emotion Recognition Model with Knowledge Distillation and Task Discriminator

03/24/2022
by Euiseok Jeong, et al.

Due to the collection of big data and the development of deep learning, research on predicting human emotions in the wild is being actively conducted. We designed a multi-task model that uses the ABAW dataset to predict valence-arousal, expression, and action units from audio data and face images in real-world conditions. To train the model from incomplete labels, we applied a knowledge distillation technique: the teacher model was trained with supervised learning, and the student model was trained using the teacher's outputs as soft labels. As a result, we achieved a score of 2.40 on the Multi-Task Learning validation set.
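The key idea is that a frozen teacher supplies soft labels for every task head, so samples whose ground-truth annotations are missing still provide a training signal for the student. The PyTorch sketch below illustrates this distillation step; the head names, feature dimensions, temperature, and equal loss weighting are illustrative assumptions, not the paper's actual architecture.

```python
# A minimal sketch of soft-label distillation for a multi-task emotion
# model. Head names, dimensions, and loss weights are assumptions made
# for illustration, not the authors' published configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHead(nn.Module):
    """Shared backbone feeding three task-specific heads."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(2048, feat_dim), nn.ReLU())
        self.va_head = nn.Linear(feat_dim, 2)    # valence-arousal regression
        self.expr_head = nn.Linear(feat_dim, 8)  # expression classification
        self.au_head = nn.Linear(feat_dim, 12)   # action-unit detection

    def forward(self, x):
        h = self.backbone(x)
        return self.va_head(h), self.expr_head(h), self.au_head(h)

def distillation_loss(student_out, teacher_out, T=2.0):
    """Train the student to mimic the teacher's per-task soft labels."""
    s_va, s_expr, s_au = student_out
    t_va, t_expr, t_au = teacher_out
    # Regression head: match the teacher's predictions directly.
    va_loss = F.mse_loss(s_va, t_va)
    # Classification head: KL between temperature-softened distributions.
    expr_loss = F.kl_div(
        F.log_softmax(s_expr / T, dim=1),
        F.softmax(t_expr / T, dim=1),
        reduction="batchmean",
    ) * T * T
    # Multi-label AU head: match the teacher's sigmoid probabilities.
    au_loss = F.binary_cross_entropy_with_logits(s_au, torch.sigmoid(t_au))
    return va_loss + expr_loss + au_loss

# Usage: the teacher is frozen; the student trains on its soft labels,
# so no ground-truth annotations are needed for this loss term.
teacher, student = MultiTaskHead(), MultiTaskHead()
teacher.eval()
features = torch.randn(4, 2048)  # placeholder backbone features
with torch.no_grad():
    t_out = teacher(features)
loss = distillation_loss(student(features), t_out)
loss.backward()
```

In practice this distillation term would be combined with supervised losses on whatever ground-truth labels each sample does have, which is what lets training proceed from an incompletely labeled dataset.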

