Efficient Modelling Across Time of Human Actions and Interactions

10/05/2021
by Alexandros Stergiou, et al.

This thesis focuses on video understanding for human action and interaction recognition. We begin by identifying the main challenges of action recognition from videos and reviewing how current methods address them. Based on these challenges, and focusing on the temporal aspect of actions, we argue that the fixed-size spatio-temporal kernels used in 3D convolutional neural networks (CNNs) can be improved to better handle temporal variations in the input. Our contributions enlarge the convolutional receptive fields by introducing spatio-temporal video segments of varying sizes, and discover the relevance of local features over the entire video sequence. The resulting features encapsulate the importance of local features across multiple temporal durations as well as over the full video. We then study how to better handle variations between action classes by enhancing their feature differences across the layers of the architecture. Because features are extracted hierarchically and class-agnostically, variations between relatively similar classes are modelled in the same way as those between very dissimilar classes, so distinctions between similar classes are less likely to be captured. The proposed approach regularises feature maps by amplifying the features that correspond to the class of the video being processed; we thus move away from class-agnostic networks and make early predictions based on this feature amplification mechanism. The proposed approaches are evaluated on several benchmark action recognition datasets and show competitive results, matching the state of the art while being more efficient in terms of GFLOPs. Finally, we present a human-understandable approach that provides visual explanations of the features learned by spatio-temporal networks.
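The idea of enlarging temporal receptive fields with size-varying segments can be illustrated with a minimal NumPy sketch. This is not the thesis implementation: the fixed window sizes and the moving-average pooling below are illustrative stand-ins for learned kernels with varying temporal extents, applied here to per-frame feature vectors and concatenated per frame.

```python
import numpy as np

def multi_temporal_features(x, windows=(1, 3, 5)):
    """Aggregate per-frame features over several temporal extents.

    x: array of shape (T, C) -- one feature vector per frame.
    Returns shape (T, C * len(windows)): each frame's features averaged
    over centred windows of different temporal sizes (edge-padded),
    mimicking parallel branches with varying temporal receptive fields.
    """
    T, _ = x.shape
    outs = []
    for w in windows:
        pad = w // 2
        xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
        # moving average over w frames, centred on each original frame
        pooled = np.stack([xp[t:t + w].mean(axis=0) for t in range(T)])
        outs.append(pooled)
    # concatenate the branches channel-wise, as a multi-branch conv would
    return np.concatenate(outs, axis=1)
```

Each branch sees the same frames at a different temporal scale, so the concatenated output carries both short- and longer-range temporal context for every frame.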
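The class-based feature amplification can likewise be sketched in a few lines. This is a simplified stand-in, not the thesis's regulariser (which operates on intermediate feature maps during training): here we assume pooled clip features, a linear classifier's weights, and a hypothetical amplification factor `gamma`, and scale up the channels that support the clip's own class.

```python
import numpy as np

def class_regularized_amplify(feats, class_weights, label, gamma=0.5):
    """Amplify feature channels associated with the clip's class.

    feats: (C,) pooled clip features.
    class_weights: (num_classes, C) linear classifier weights.
    label: class index of the processed clip.
    gamma: illustrative amplification factor (an assumption, not a
    value from the thesis).
    """
    # channels whose classifier weight supports the target class
    mask = class_weights[label] > 0
    # scale supporting channels by (1 + gamma); leave the rest unchanged
    return feats * np.where(mask, 1.0 + gamma, 1.0)
```

Emphasising class-supporting channels in this way widens the gap between similar classes that would otherwise share near-identical feature responses.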


Related research

- Right on Time: Multi-Temporal Convolutions for Human Action Recognition in Videos (11/08/2020)
- Deep Spatio-temporal Manifold Network for Action Recognition (05/09/2017)
- Learning Class Regularized Features for Action Recognition (02/07/2020)
- Full Resolution Repetition Counting (05/23/2023)
- Learn to cycle: Time-consistent feature discovery for action recognition (06/15/2020)
- Joint Recognition and Segmentation of Actions via Probabilistic Integration of Spatio-Temporal Fisher Vectors (02/04/2016)
- Dynamic Matrix Decomposition for Action Recognition (02/20/2019)
