Joint Encoding of Appearance and Motion Features with Self-supervision for First Person Action Recognition

02/10/2020
by   Mirco Planamente, et al.

Wearable cameras are becoming increasingly popular in several applications, raising the interest of the research community in developing approaches for recognizing actions from a first-person point of view. An open challenge is how to cope with the limited amount of motion information available about the action itself, as opposed to the more investigated third-person action recognition scenario. In manipulation tasks, videos tend to record only part of the movement, making the understanding of the objects being manipulated, and of their context, crucial. Previous works addressed this issue with two-stream architectures: one stream dedicated to modeling the appearance of the objects involved in the action, the other to extracting motion features from optical flow. In this paper, we argue that features from these two information channels should be learned jointly, to better capture the spatio-temporal correlations between them. To this end, we propose a single-stream architecture able to do so, thanks to the addition of a self-supervised block that uses a pretext motion segmentation task to intertwine motion and appearance knowledge. Experiments on several publicly available databases show the power of our approach.
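The core idea of the abstract, a single RGB stream whose features are shaped by an auxiliary self-supervised motion segmentation task, can be sketched in a few lines. The snippet below is a minimal illustration assuming a PyTorch-style setup; the backbone, head sizes, loss weighting, and class count are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch (not the authors' implementation): a single-stream network that
# learns appearance and motion jointly. A shared RGB backbone feeds two heads:
# an action classifier and an auxiliary head trained on a pretext motion
# segmentation task (predicting which pixels move, with targets derived from
# optical flow during training only). All layer sizes below are illustrative.
import torch
import torch.nn as nn


class SingleStreamActionNet(nn.Module):
    def __init__(self, num_classes: int = 61):
        super().__init__()
        # Shared appearance backbone over RGB frames (toy stand-in for a real CNN).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Action classification head on pooled features.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, num_classes)
        )
        # Self-supervised head: per-pixel motion segmentation (moving vs. static).
        self.motion_head = nn.Sequential(
            nn.Conv2d(256, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )

    def forward(self, rgb):
        feats = self.backbone(rgb)
        return self.classifier(feats), self.motion_head(feats)


if __name__ == "__main__":
    model = SingleStreamActionNet(num_classes=61)
    rgb = torch.randn(2, 3, 224, 224)                             # batch of RGB frames
    motion_mask = torch.randint(0, 2, (2, 1, 224, 224)).float()   # pretext targets (e.g. thresholded flow)
    labels = torch.randint(0, 61, (2,))

    logits, motion_logits = model(rgb)
    # Joint objective: supervised action loss plus self-supervised motion loss,
    # so appearance features are forced to encode motion cues as well.
    loss = nn.functional.cross_entropy(logits, labels) \
         + nn.functional.binary_cross_entropy_with_logits(motion_logits, motion_mask)
    loss.backward()
    print(logits.shape, motion_logits.shape, float(loss))
```

At inference time only the classification head is needed, so, unlike a two-stream model, no optical flow has to be computed when the network is deployed.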


Related research

07/26/2018 · Motion Feature Network: Fixed Motion Filter for Action Recognition
Spatio-temporal representations in frame sequences play an important rol...

04/07/2016 · Trajectory Aligned Features For First Person Action Recognition
Egocentric videos are characterised by their ability to have the first p...

12/30/2017 · A Unified Method for First and Third Person Action Recognition
In this paper, a new video classification methodology is proposed which ...

08/23/2023 · MOFO: MOtion FOcused Self-Supervision for Video Understanding
Self-supervised learning (SSL) techniques have recently produced outstan...

02/14/2021 · Learning Self-Similarity in Space and Time as Generalized Motion for Action Recognition
Spatio-temporal convolution often fails to learn motion dynamics in vide...

01/04/2018 · What have we learned from deep representations for action recognition?
As the success of deep models has led to their deployment in all areas o...

07/22/2023 · Two-stream Multi-level Dynamic Point Transformer for Two-person Interaction Recognition
As a fundamental aspect of human life, two-person interactions contain m...
