Efficient Modelling Across Time of Human Actions and Interactions

by Alexandros Stergiou, et al.

This thesis focuses on video understanding for human action and interaction recognition. We start by identifying the main challenges of action recognition from videos and review how current methods address them. Based on these challenges, and focusing on the temporal aspect of actions, we argue that the fixed-size spatio-temporal kernels used in 3D convolutional neural networks (CNNs) can be improved to better deal with temporal variations in the input. Our contributions enlarge the convolutional receptive fields through the introduction of spatio-temporal video segments of varying sizes, and discover the relevance of local features over the entire video sequence. The resulting features encapsulate the importance of local features across multiple temporal durations as well as across the entire video. Subsequently, we study how to better handle variations between action classes by enhancing their feature differences across the layers of the architecture. Hierarchical feature extraction models variations between relatively similar classes in the same way as variations between very dissimilar ones, so distinctions between similar classes are less likely to be captured. The proposed approach regularises feature maps by amplifying the features that correspond to the class of the video being processed. We move away from class-agnostic networks and make early predictions based on this feature amplification mechanism. The proposed approaches are evaluated on several benchmark action recognition datasets and show competitive results: we match the state of the art while requiring fewer GFLOPs. Finally, we present a human-understandable approach that provides visual explanations for the features learned by spatio-temporal networks.
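The class-regularisation idea above can be illustrated with a minimal sketch: per-channel activations are scaled up in proportion to their affinity with the video's class, so class-relevant features are amplified relative to class-agnostic ones. All names (`regularise_features`, `class_affinity`, `gamma`) are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch of class-regularised feature amplification.
# Channels with high affinity to the current class are scaled up,
# sharpening distinctions between similar classes.

def regularise_features(feature_map, class_affinity, gamma=2.0):
    """Amplify channels according to their class affinity.

    feature_map: per-channel activations (list of floats)
    class_affinity: per-channel affinity to the current class, in [0, 1]
    gamma: amplification strength for class-relevant channels
    """
    return [
        f * (1.0 + gamma * a)  # affinity 1 -> scaled by (1 + gamma); affinity 0 -> unchanged
        for f, a in zip(feature_map, class_affinity)
    ]

features = [0.5, 1.0, 0.2]
affinity = [1.0, 0.0, 0.5]
print(regularise_features(features, affinity))  # [1.5, 1.0, 0.4]
```

In a real network such a scaling would be applied to intermediate feature maps during training, with the affinities derived from the ground-truth class, which also enables early class-aware predictions at intermediate layers.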


Right on Time: Multi-Temporal Convolutions for Human Action Recognition in Videos

The variations in the temporal performance of human actions observed in ...

Deep Spatio-temporal Manifold Network for Action Recognition

Visual data such as videos are often sampled from a complex manifold. We p...

Learning Class Regularized Features for Action Recognition

Training Deep Convolutional Neural Networks (CNNs) is based on the notio...

Full Resolution Repetition Counting

Given an untrimmed video, repetitive actions counting aims to estimate t...

Learn to cycle: Time-consistent feature discovery for action recognition

Temporal motion has been one of the essential components for effectively...

Joint Recognition and Segmentation of Actions via Probabilistic Integration of Spatio-Temporal Fisher Vectors

We propose a hierarchical approach to multi-action recognition that perf...

Dynamic Matrix Decomposition for Action Recognition

Designing a technique for the automatic analysis of different actions in...
