Learning Actor-centered Representations for Action Localization in Streaming Videos using Predictive Learning

Event perception tasks such as recognizing and localizing actions in streaming videos are essential for visual understanding. Progress has primarily been driven by large-scale annotated training data used in a supervised manner. In this work, we tackle the problem of learning actor-centered representations for localizing actions in streaming videos without any training annotations. Inspired by cognitive theories of event perception, we propose a novel self-supervised framework driven by the notion of continual hierarchical predictive learning, which constructs actor-centered features through attention-based contextualization. Extensive experiments on three benchmark datasets show that the approach learns robust representations for localizing actions with only one epoch of training, i.e., the model is trained continually in streaming fashion, one frame at a time, with a single pass through the training videos. The proposed approach outperforms unsupervised and weakly supervised baselines while offering competitive performance to fully supervised approaches. Finally, we show that the model generalizes to out-of-domain data on both the recognition and localization tasks without any finetuning and without significant loss in performance.
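The streaming training regime described above can be illustrated with a minimal sketch: a single-layer next-frame feature predictor updated online, one frame at a time, in a single pass, with the prediction error serving as the learning signal. The linear model, the synthetic rotation dynamics, and all variable names below are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, lr = 8, 0.05

# Synthetic "streaming video" features generated by a fixed rotation,
# standing in for frame-level features from a real encoder.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.kron(np.eye(dim // 2), R)          # ground-truth dynamics (orthogonal)
x = np.ones(dim)
frames = []
for _ in range(300):
    frames.append(x)
    x = A @ x + 0.01 * rng.normal(size=dim)

# Continual predictive learning: one online update per incoming frame,
# a single pass over the stream, no stored replay buffer.
W = 0.1 * rng.normal(size=(dim, dim))     # predictor weights
errors = []
for t in range(len(frames) - 1):
    pred = W @ frames[t]                  # predict the next frame's features
    err = frames[t + 1] - pred            # prediction error ("surprise")
    W += lr * np.outer(err, frames[t])    # one gradient step on squared error
    errors.append(float(np.mean(err ** 2)))
```

In this toy setting, the per-frame prediction error drops as the predictor adapts online; in the full framework, such errors would additionally drive the attention-based contextualization that yields actor-centered features.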




