Multitask Learning to Improve Egocentric Action Recognition

09/15/2019
by   Georgios Kapidis, et al.

In this work we employ multitask learning to exploit the structure that exists in related supervised tasks when training complex neural networks. Multitask learning trains a network for multiple objectives in parallel, improving performance on at least one of them through a shared representation that accommodates more information than it would for a single task. We apply this idea to action recognition in egocentric videos by introducing additional supervised tasks. We learn the verbs and nouns of which the action labels consist, and predict coordinates that capture the hand locations and the gaze-based visual saliency for all frames of the input video segments. This forces the network to explicitly attend to cues from the secondary tasks that it might otherwise miss, resulting in improved inference. Our experiments on EPIC-Kitchens and EGTEA Gaze+ show consistent improvements when training with multiple tasks over the single-task baseline. Furthermore, on EGTEA Gaze+ we outperform the state-of-the-art in action recognition by 3.84% when using hand and gaze estimations as side tasks, without requiring any additional input at test time other than the RGB video clips.
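To make the multitask setup concrete, here is a minimal PyTorch-style sketch of how classification heads (verbs, nouns) and coordinate-regression heads (hands, gaze) can share one backbone representation and be optimized under a weighted joint loss. The layer sizes, class counts, task weights, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHeads(nn.Module):
    """Hypothetical sketch: task-specific heads on a shared feature vector.

    Class counts below (125 verbs, 352 nouns) are assumed EPIC-Kitchens-like
    values for illustration only.
    """
    def __init__(self, feat_dim=512, n_verbs=125, n_nouns=352):
        super().__init__()
        self.verb_head = nn.Linear(feat_dim, n_verbs)  # verb classification
        self.noun_head = nn.Linear(feat_dim, n_nouns)  # noun classification
        self.hand_head = nn.Linear(feat_dim, 4)        # (x, y) for left and right hand
        self.gaze_head = nn.Linear(feat_dim, 2)        # (x, y) gaze-saliency point

    def forward(self, feats):
        # feats: (batch, feat_dim) pooled features from a shared video backbone
        return (self.verb_head(feats), self.noun_head(feats),
                self.hand_head(feats), self.gaze_head(feats))

def multitask_loss(outputs, targets, weights=(1.0, 1.0, 1.0, 1.0)):
    # Weighted sum of per-task losses: cross-entropy for the classification
    # tasks, mean-squared error for the coordinate-regression tasks.
    verb_logits, noun_logits, hand_xy, gaze_xy = outputs
    verb_t, noun_t, hand_t, gaze_t = targets
    return (weights[0] * F.cross_entropy(verb_logits, verb_t)
            + weights[1] * F.cross_entropy(noun_logits, noun_t)
            + weights[2] * F.mse_loss(hand_xy, hand_t)
            + weights[3] * F.mse_loss(gaze_xy, gaze_t))

# Example training step on dummy data
model = MultiTaskHeads()
feats = torch.randn(8, 512)
targets = (torch.randint(0, 125, (8,)), torch.randint(0, 352, (8,)),
           torch.rand(8, 4), torch.rand(8, 2))
loss = multitask_loss(model(feats), targets)
loss.backward()
```

Under this setup, only the classification heads need to be consulted at inference, which is consistent with the abstract's point that no hand or gaze input is required at test time.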


Related research

01/07/2019
Mutual Context Network for Jointly Estimating Egocentric Gaze and Actions
In this work, we address two coupled tasks of gaze prediction and action...

05/31/2020
In the Eye of the Beholder: Gaze and Actions in First Person Video
We address the task of jointly determining what a person is doing and wh...

03/30/2020
Speech2Action: Cross-modal Supervision for Action Recognition
Is it possible to guess human action from dialogue alone? In this work w...

06/19/2018
Modality Distillation with Multiple Stream Networks for Action Recognition
Diverse input data modalities can provide complementary cues for several...

09/08/2019
Multi-Modal Three-Stream Network for Action Recognition
Human action recognition in video is an active yet challenging research ...

11/18/2017
Excitation Backprop for RNNs
Deep models are state-of-the-art for many vision tasks including video a...

05/04/2020
Rolling-Unrolling LSTMs for Action Anticipation from First-Person Video
In this paper, we tackle the problem of egocentric action anticipation, ...
