Action Localization through Continual Predictive Learning

03/26/2020
by Sathyanarayanan N. Aakur, et al.

Action localization requires locating the action in a video both over time and spatially within each frame. The dominant current approaches are fully supervised and require large amounts of annotated training data in the form of frame-level bounding boxes around the region of interest. In this paper, we present a new approach based on continual learning that uses feature-level predictions for self-supervision and does not require any frame-level bounding-box annotations for training. The approach is inspired by cognitive models of visual event perception, which propose a prediction-based account of event understanding. We use a stack of LSTMs coupled with a CNN encoder, along with novel attention mechanisms, to model the events in the video and to predict high-level features for future frames. The prediction errors are used to continually update the parameters of the model. Although simpler than competing approaches, this self-supervised framework is highly effective at learning robust visual representations for both labeling and localization. The approach operates in a streaming fashion, requiring only a single pass through the video, which makes it amenable to real-time processing. We evaluate it on three datasets (UCF Sports, JHMDB, and THUMOS'13) and show that it outperforms weakly supervised and unsupervised baselines while achieving performance competitive with fully supervised baselines. Finally, we show that the framework generalizes to egocentric videos and obtains state-of-the-art results in unsupervised gaze prediction.
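As a rough illustration of the pipeline described above, the following PyTorch sketch pairs a CNN encoder with a stacked LSTM that predicts the next frame's features; the prediction error is used both to update the predictor online during a single streaming pass and as a crude spatial localization signal. The module sizes, the mean-squared-error loss, the Adam optimizer, and all names here are illustrative assumptions; the paper's actual model additionally uses attention mechanisms that are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvEncoder(nn.Module):
    """Toy CNN encoder that produces a spatial feature map per frame."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2), nn.ReLU(),
        )

    def forward(self, frame):                       # frame: (1, 3, H, W)
        return self.net(frame)                      # (1, C, H/4, W/4)


class FeaturePredictor(nn.Module):
    """Stacked LSTM that predicts the next frame's (flattened) feature map."""
    def __init__(self, feat_dim, hidden=256, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, feat_dim)

    def forward(self, feat_vec, state=None):        # feat_vec: (1, 1, feat_dim)
        out, state = self.lstm(feat_vec, state)
        return self.head(out), state


def stream(frames, encoder, predictor, optimizer):
    """Single pass over a video: predict the next frame's features, update the
    predictor on the prediction error, and keep a per-frame error map as a
    rough spatial localization signal (high error = likely acting region)."""
    state, prev_pred, error_maps = None, None, []
    for frame in frames:                            # frame: (1, 3, H, W)
        feats = encoder(frame)                      # (1, C, h, w)
        flat = feats.flatten(1).unsqueeze(1)        # (1, 1, C*h*w)

        if prev_pred is not None:
            # Error between predicted and observed features drives learning.
            loss = F.mse_loss(prev_pred, flat.detach())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Channel-averaged squared error per spatial location.
            err = (prev_pred.detach().view_as(feats) - feats.detach()) ** 2
            error_maps.append(err.mean(dim=1, keepdim=True))  # (1, 1, h, w)

        prev_pred, state = predictor(flat.detach(), state)
        state = tuple(s.detach() for s in state)    # truncate backprop for streaming
    return error_maps


# Example usage on random tensors standing in for decoded video frames.
H = W = 64
encoder = ConvEncoder()
predictor = FeaturePredictor(feat_dim=32 * (H // 4) * (W // 4))
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-4)
frames = [torch.rand(1, 3, H, W) for _ in range(8)]
error_maps = stream(frames, encoder, predictor, optimizer)
```

Detaching the recurrent state after each step keeps the update strictly online (one gradient step per incoming frame), which is what the single-pass, streaming requirement amounts to in practice.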
