Skeleton Focused Human Activity Recognition in RGB Video

by Bruce X. B. Yu, et al.

The data-driven approach of learning an optimal representation from vision features, such as skeleton frames or RGB videos, is currently the dominant paradigm for activity recognition. While single-modal approaches have achieved great improvements with increasingly larger datasets, fusion of different data modalities at the feature level has seldom been attempted. In this paper, we propose a multimodal feature fusion model that utilizes both skeleton and RGB modalities to infer human activity. The objective is to improve recognition accuracy by effectively exploiting the mutually complementary information among the modalities. For the skeleton modality, we propose a graph convolutional subnetwork to learn the skeleton representation. For the RGB modality, we extract spatial-temporal regions of interest from RGB videos and use attention features from the skeleton modality to guide the learning process. The model can be trained either separately or jointly by the back-propagation algorithm in an end-to-end manner. Experimental results on the NTU RGB+D and Northwestern-UCLA Multiview datasets achieve state-of-the-art performance, indicating that the proposed skeleton-driven attention mechanism for the RGB modality increases the mutual communication between data modalities and yields more discriminative features for inferring human activities.
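To make the fusion idea concrete, here is a minimal NumPy sketch of the two-branch pattern the abstract describes: a skeleton branch summarizes joint features, its output drives attention weights over RGB region-of-interest features, and the two pooled features are concatenated for classification. All function names, weight shapes, and the simple mean-pooling stand-in for the graph convolutional subnetwork are hypothetical illustrations, not the authors' architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def skeleton_branch(skel, W_s):
    # Hypothetical stand-in for the graph convolutional subnetwork:
    # pool the joint coordinates and project them to a feature vector.
    return np.tanh(skel.mean(axis=0) @ W_s)

def rgb_branch(rois, skel_feat, W_a):
    # Skeleton-driven attention: score each spatial-temporal ROI
    # against the skeleton feature, then pool ROIs with those weights.
    scores = rois @ (W_a @ skel_feat)   # one score per ROI
    attn = softmax(scores)
    return attn @ rois                  # attention-weighted ROI feature

def fuse_and_classify(skel, rois, params):
    W_s, W_a, W_c = params
    f_skel = skeleton_branch(skel, W_s)
    f_rgb = rgb_branch(rois, f_skel, W_a)
    fused = np.concatenate([f_skel, f_rgb])  # feature-level fusion
    return softmax(fused @ W_c)              # class probabilities

rng = np.random.default_rng(0)
skel = rng.standard_normal((25, 3))      # e.g. 25 joints x 3D coordinates
rois = rng.standard_normal((8, 16))      # e.g. 8 ROI crops x 16-dim features
params = (rng.standard_normal((3, 16)),  # W_s: skeleton projection
          rng.standard_normal((16, 16)), # W_a: attention bilinear map
          rng.standard_normal((32, 10))) # W_c: classifier over fused feature
probs = fuse_and_classify(skel, rois, params)
```

Because both branches produce ordinary feature vectors, the whole pipeline is differentiable, which is what permits the separate-or-joint end-to-end training the abstract mentions.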



