Pooled Motion Features for First-Person Videos

12/19/2014
by M. S. Ryoo, et al.

In this paper, we present a new feature representation for first-person videos. In first-person video understanding (e.g., activity recognition), it is very important to capture both entire scene dynamics (i.e., egomotion) and salient local motion observed in videos. We describe a representation framework based on time series pooling, which is designed to abstract short-term/long-term changes in feature descriptor elements. The idea is to keep track of how descriptor values change over time and summarize them to represent motion in the activity video. The framework is general, handling any type of per-frame feature descriptor, including conventional motion descriptors like histograms of optical flow (HOF) as well as appearance descriptors from more recent convolutional neural networks (CNNs). We experimentally confirm that our approach clearly outperforms previous feature representations, including bag-of-visual-words and improved Fisher vector (IFV), when using identical underlying feature descriptors. We also confirm that our feature representation outperforms existing state-of-the-art features such as local spatio-temporal features and Improved Trajectory Features (originally developed for third-person videos) when handling first-person videos. Multiple first-person activity datasets were tested under various settings to confirm these findings.
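The abstract describes the core of time series pooling: track how each element of a per-frame descriptor changes over time and summarize those changes into a fixed-length video representation. The sketch below illustrates that idea under stated assumptions: a simple temporal-pyramid layout and a few illustrative pooling operators (max, sum, and accumulated positive/negative temporal gradients). The function name, pyramid depth, and operator choices are hypothetical and not taken from the paper.

```python
# Minimal sketch of time-series pooling over per-frame descriptors.
# Assumptions (not from the paper): a temporal pyramid with 2^level segments
# per level and four pooling operators applied to each descriptor element.
import numpy as np

def pool_time_series(frame_descriptors, num_levels=3):
    """frame_descriptors: (T, D) array, one D-dim descriptor per frame
    (e.g., HOF or CNN features). Returns a fixed-length video vector."""
    X = np.asarray(frame_descriptors, dtype=np.float64)
    T, D = X.shape
    pooled = []
    for level in range(num_levels):
        segments = 2 ** level                      # temporal pyramid: 1, 2, 4, ... segments
        bounds = np.linspace(0, T, segments + 1, dtype=int)
        for s in range(segments):
            seg = X[bounds[s]:bounds[s + 1]]
            if seg.shape[0] == 0:                  # guard against empty segments
                seg = np.zeros((1, D))
            diff = np.diff(seg, axis=0) if seg.shape[0] > 1 else np.zeros((1, D))
            pooled.extend([
                seg.max(axis=0),                   # max pooling per descriptor element
                seg.sum(axis=0),                   # sum pooling per descriptor element
                np.maximum(diff, 0).sum(axis=0),   # accumulated positive temporal changes
                np.maximum(-diff, 0).sum(axis=0),  # accumulated negative temporal changes
            ])
    return np.concatenate(pooled)

# Usage: pool 120 frames of 90-dim HOF-like descriptors into one video vector.
video = np.random.rand(120, 90)
feature = pool_time_series(video)
print(feature.shape)   # 90 dims * 4 operators * (1 + 2 + 4) segments = (2520,)
```

The same pooling applies whether the per-frame descriptors are HOF vectors or CNN activations, which is the sense in which the representation framework is descriptor-agnostic.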

Related research

11/15/2017
A Correlation Based Feature Representation for First-Person Activity Recognition
In this paper, a simple yet efficient feature encoding for first-person ...

02/22/2017
Boosted Multiple Kernel Learning for First-Person Activity Recognition
Activity recognition from first-person (ego-centric) videos has recently...

02/13/2015
Long-short Term Motion Feature for Action Classification and Retrieval
We propose a method for representing motion information for video classi...

10/15/2019
Being the center of attention: A Person-Context CNN framework for Personality Recognition
This paper proposes a novel study on personality recognition using video...

12/30/2017
A Unified Method for First and Third Person Action Recognition
In this paper, a new video classification methodology is proposed which ...

02/19/2018
Learning Representative Temporal Features for Action Recognition
In this paper we present a novel video classification methodology that a...

11/21/2016
Deep Temporal Linear Encoding Networks
The CNN-encoding of features from entire videos for the representation o...
