Manipulated Object Proposal: A Discriminative Object Extraction and Feature Fusion Framework for First-Person Daily Activity Recognition

by Changzhi Luo, et al.

Detecting and recognizing the objects a person interacts with lies at the center of first-person (egocentric) daily activity recognition. However, due to noisy camera motion and frequent changes in viewpoint and scale, most previous egocentric action recognition methods fail to capture and model highly discriminative object features. In this work, we propose a novel pipeline for first-person daily activity recognition that aims at more discriminative object feature representation and object-motion feature fusion. Our object feature extraction and representation pipeline is inspired by the recent success of object hypotheses and detection frameworks based on deep convolutional neural networks. Our key contribution is a simple yet effective manipulated object proposal generation scheme. This scheme leverages motion cues such as motion boundaries and motion magnitude (whereas camera motion is usually treated as "noise" in most previous methods) to generate a more compact and discriminative set of object proposals that are closely related to the objects being manipulated. We then learn more discriminative object detectors from these manipulated object proposals using a region-based convolutional neural network (R-CNN). In addition, we develop a network-based feature fusion scheme that better combines object and motion features. Experiments show that the proposed framework significantly outperforms state-of-the-art recognition performance on a challenging first-person daily activity benchmark.
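The core idea of the manipulated object proposal scheme can be illustrated with a minimal sketch: score each candidate box by the motion magnitude it encloses and keep only the strongest ones. The flow-magnitude map, box format, and top-k selection below are illustrative assumptions for exposition, not the paper's exact method.

```python
import numpy as np

def score_proposals(flow_mag, proposals):
    """Score each (x0, y0, x1, y1) box by the mean motion magnitude it encloses."""
    return np.array([flow_mag[y0:y1, x0:x1].mean() for (x0, y0, x1, y1) in proposals])

def select_manipulated_proposals(flow_mag, proposals, top_k=1):
    """Keep the top_k proposals with the strongest motion cue
    (assumption: manipulated objects coincide with regions of large motion)."""
    order = np.argsort(-score_proposals(flow_mag, proposals))
    return [proposals[i] for i in order[:top_k]]

# Toy example: a 100x100 flow-magnitude map with simulated hand/object motion.
flow_mag = np.zeros((100, 100))
flow_mag[30:60, 30:60] = 5.0              # moving region (manipulated object)
proposals = [(0, 0, 20, 20),              # static background
             (25, 25, 65, 65),            # overlaps the moving region
             (70, 70, 95, 95)]            # static background
kept = select_manipulated_proposals(flow_mag, proposals, top_k=1)
```

In the full pipeline, the flow-magnitude map would come from dense optical flow between consecutive frames, and the surviving proposals would then be fed to an R-CNN detector.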

