Action Recognition Using Volumetric Motion Representations

11/19/2019
by   Michael Peven, et al.

Traditional action recognition models are constructed around the paradigm of 2D perspective imagery. Though sophisticated time-series models have pushed the field forward, much of the information is still not exploited by confining the domain to 2D. In this work, we introduce a novel representation of motion as a voxelized 3D vector field and demonstrate how it can be used to improve performance of action recognition networks. This volumetric representation is a natural fit for 3D CNNs, and allows out-of-plane data augmentation techniques during training of these networks. Both the construction of this representation from RGB-D video and inference can be run in real time. We demonstrate superior results using this representation with our network design on the open-source NTU RGB+D dataset where it outperforms state-of-the-art on both of the defined evaluation metrics. Furthermore, we experimentally show how the out-of-plane augmentation techniques create viewpoint invariance and allow the model trained using this representation to generalize to unseen camera angles. Code is available here: https://github.com/mpeven/ntu_rgb.


1 Introduction

High-quality, low-cost RGB-D sensors are already commonplace in today’s world. Color, depth, and often articulated pose data (3D body configuration) can be collected with ease in real time. The success of action recognition methods depends on how these multi-modal data sources are used.

Articulated pose data is a popular choice of input to action recognition classifiers, as it is an easy-to-use, low-dimensional representation of a human. Because pose data lies in 3D Cartesian space, out-of-plane data augmentation techniques (3D rotations and translations) can be used to infer novel viewpoints and increase training set size. However, skeletons alone do not provide enough context for activities involving human-object interaction. Furthermore, pose information is often noisy and can only perform as well as the middleware in RGB-D sensors that computes it, which by definition limits performance. Despite the fact that pose is often used for action recognition, it remains to be seen whether this intermediate representation alone is enough to recover a diverse set of human activities from RGB-D data.

The field of action recognition has benefited from the substantial performance improvements of Convolutional Neural Networks (CNNs) over the past decade. These networks, trained on millions of images for object recognition, generalize considerably well when applied to other tasks, including action recognition in video. However, the performance gains for action recognition have not been commensurate with object detection results: pre-trained object recognition networks lack the ability to model the temporal structure present in actions. To deal with this, two-stream convolutional networks [simonyan2014two] use pre-trained networks on a single image in conjunction with 2D optical flow over multiple RGB frames to represent motion. Despite these successes, methods using 2D video lack the important 3D spatial representation obtained from RGB-D sensors. 2D data augmentation approaches are limited either to in-plane operations or to techniques like color jittering that do not represent legitimate physical phenomena. Furthermore, traditional 2D imagery is viewpoint dependent: without context, optical illusions such as “forced perspective” cause an apparent association between overlapping objects that are in fact separated.

In this work, we propose a novel method for classification of actions in video. We design an input representation that takes advantage of 3D motion signals captured from RGB-D sensors without relying on pose data. Using this volumetric input, we investigate an extension to the successful two-stream CNN architecture [simonyan2014two] and experimentally demonstrate how our input representation improves performance over the original 2D formulation. Furthermore, using 3D data augmentation techniques, we show that we can improve performance by generalizing to unseen views. Finally, to capture the long-term structure of human activities, we apply a recurrent neural network, which has shown promise in modeling global temporal signals [ng2015beyond].

To summarize, our contribution is threefold. First, we introduce a novel input to an action recognition network: a volumetric representation of 3D motion, built by projecting 2D optical flow from RGB video into 3D using the z-coordinate captured in a depth-map video. Next, we show that out-of-plane data augmentation (translations and rotations) performed over this voxel grid input helps to create view-point invariance. Finally, we obtain state-of-the-art activity classification results using a two-stream CNN with 3D convolutions over this volumetric representation, and further extend it with a long short-term memory (LSTM) network on the output of the convolutional layers. The two-stream network captures spatial and short-term temporal signals from the video while the LSTM models the longer-term temporal structure.

We have released the code in an open-source repository located at https://github.com/mpeven/ntu_rgb. It contains everything needed for creating the input representations and training the network on these inputs. Also included is a custom OpenGL application for viewing the input representations; it was built to manually validate that the input representation is interpretable, as demonstrated in Figure 1.

The rest of this paper is organized as follows. In Section 2, we discuss related work in the field of action recognition. In Section 3, we introduce our method of representing 3D motion from RGB-D data and define our proposed neural network architecture in detail. Implementation and evaluation details are given in Section 4 with a presentation and discussion of our results.

Figure 1: Our volumetric representation of motion during the ’shaking hands’ action. The red points show the point cloud and the blue vectors represent the 3D motion vectors used as input to the 3D convolutional network. The code to visualize this type of representation from any RGB+D video is provided as an OpenGL application in the released source code.

2 Related Work

A broad range of action recognition methods have been studied in the computer vision community over the past few decades. Previous works related to ours fall under the following categories: i) Convolutional networks for action recognition, ii) Long-term temporal modeling for action recognition, and iii) Input representations for action recognition.

2.1 CNNs for Action Recognition

The success of CNNs in the image domain has led to a variety of methods when applied toward action recognition. To incorporate the temporal information in video, early work investigated 3D CNNs [baccouche2011sequential, ji20133d, karpathy2014large, tran2015learning], which perform 3D convolutions over snippets of images to learn spatiotemporal patterns. However, [karpathy2014large] showed similar performance between a 3D CNN over multiple frames and a 2D CNN over a single frame in a large-scale action recognition setting. This result indicates that the spatiotemporal filters in 3D CNNs may not learn a useful temporal signal without a considerable number of training videos. To correct for this, [sun2015human] propose placing a 1D temporal convolution on top of 2D spatial convolutions. Recent work in [carreira2017quo] shows impressive results using 3D CNNs with a deep temporal receptive field. Interestingly, [carreira2017quo] also showed that explicitly representing motion using 2D optical flow still improved performance, even when using 3D kernels, which are claimed to learn motion signals across video frames.

In this work, we also employ 3D CNNs; however, ours differs in that the three input dimensions are not (x, y, t) (where t is the temporal dimension of a video), but (x, y, z), as the input representations we create lie in 3D Cartesian space. Because the convolutions in our 3D CNN are applied over a volume, it is reasonable to assume a symmetric receptive field. Specifically, this means we avoid having to either estimate or experimentally evaluate the receptive field of convolutions in the temporal dimension, as is done in [carreira2017quo] and [tran2015learning], where the choice may not be optimal when generalizing to a new dataset that has different frame-rates or actions that happen at an unfamiliar pace. Following on the success of 3D CNNs in the object recognition domain in [song2016deep], we investigate their application towards action recognition by using a volumetric motion field.

Our proposed method is most closely related to the two-stream CNNs introduced in [simonyan2014two]. This approach has a spatial CNN, leveraging the success of object recognition networks and pre-trained models, and a temporal CNN that uses perspective-based 2D optical flow to represent motion. The substantial success of two-stream approaches has led to numerous successors [carreira2017quo, feichtenhofer2016convolutional, lea2017temporal, liu2017enhanced, wang2015action, wang2015towards], making it one of the more effective methods for action recognition to date.

2.2 Modeling Longer-term Structure

A limitation of both 3D CNNs and the original two-stream approach is the inability to model the long-term temporal structure present in video. The work introduced in [baccouche2010action] proposes recurrent neural networks (RNNs), specifically LSTM cells, to model this long-term structure. This work is expanded upon in [baccouche2011sequential], where the outputs from a 3D CNN are used as the input to the RNN. The works in [donahue2015long, lea2017temporal, ng2015beyond, singh2016multi, wang2016temporal] all propose alternative methods of modeling a longer-term signal on top of the outputs of either the original two-stream approach or similar techniques that separate spatial and short-term temporal features.

2.3 Input Representations for Action Recognition

With the emergence of low-cost RGB-D sensors, many datasets for action recognition also include frame-wise depth maps and articulated pose data. A variety of methods use more than just the RGB video frames; for example, in [kim2017interpretable, liu2016spatio, liu2017enhanced, rahmani2017learning, shahroudy2016ntu, zhao2017two, zolfaghari2017chained] articulated pose data is used for action recognition, either alone or in addition to RGB and depth video. As the pose output from the Microsoft Kinect is calculated using the depth map only, the works in [haque2016recurrent, liu2017enhanced] ignore this intermediate output and try to learn a representation from depth maps only. These works have shown that the 3D representation from pose or depth contains a useful signal; however, state-of-the-art results for action recognition use RGB data in some context (illustrated in Table 2).

Similar to our work, [wang2017scene] use scene flow to get a representation of 3D motion for action recognition. However, they constrain this representation to a 2D image rather than a volumetric motion field. Furthermore, their work differs from ours in that the dynamics of these motion fields in the temporal dimension are not explicitly modeled. Although the design choices made in [wang2017scene] allow the use of pre-trained object detectors, the approach we take in this work is to model 3D motion in a voxel grid in order to perform both 3D convolutions and out-of-plane augmentation techniques directly on this input.

Figure 2: The network architecture is composed of three main parts: (1) the spatial stream from RGB frames; (2) the local-temporal stream using our 3D motion representation (see Fig. 3 for more detail on the design of this stream); (3) a global-temporal network to model temporal structure over an entire action.

3 Methods Overview

We describe here in detail the methods used to transform raw RGB and depth video (from an RGB-D sensor) into a volumetric representation of 3D motion. Formally, the notation we use is as follows: we consider three representations of each video, the RGB frames, the depth frames, and their voxelized combination. The heights and widths of the RGB and depth videos are denoted (H_rgb, W_rgb) and (H_d, W_d), respectively; x, y, and z denote the Cartesian coordinates in the voxel grid; and T denotes the number of frames in the video. The last dimension of the voxel grid holds the 3D motion vectors that we describe in detail below.

3.1 3D Voxel Flow

First, we run dense optical flow over the RGB video to get an estimate of 2D motion across the video. The output of dense optical flow at frame t is a displacement vector field, where each vector represents the motion of the scene from a pixel (u, v) in frame t to a pixel (u', v') in frame t+1. However, these vectors are in the coordinate system of the RGB image. Using standard camera calibration techniques, we can represent each value in this vector field in terms of the depth image by using the extrinsics between the color and depth cameras and computing a look-up table that maps pixels in the RGB image to pixels in the depth image. Finally, given the intrinsic parameters of the depth camera, we can project each pixel in the depth image to its corresponding 3D camera coordinate:

X = (u - c_x) * d / f_x,    Y = (v - c_y) * d / f_y,    Z = d        (1)

where d is the depth value at pixel coordinates (u, v), and (c_x, c_y) and (f_x, f_y) are the intrinsic parameters (principal point and focal length, respectively) of the depth camera (note: if the depth camera produces invalid data at a particular pixel, we skip it). This allows us to project the 2D vector field from the color image into the 3D camera coordinate system of the depth image. Thus, we have a dense 3D vector field representing motion across frames:

m_t(u, v) = (X', Y', Z') - (X, Y, Z)        (2)

where (X, Y, Z) and (X', Y', Z') are the 3D projections (via Eq. 1) of the two endpoints of the optical flow vector at frames t and t+1.

We rescale the 3D motion vectors to fit within a voxel grid. The size of the voxel grid was chosen to be large enough to clearly represent the scene (see Figure 1) but also kept small enough for computational reasons, as discussed below.
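As a rough illustration of the steps above (and not the released implementation), the following sketch computes dense Farneback flow between consecutive RGB frames, maps both endpoints of each flow vector into the depth image through a pre-computed look-up table, back-projects them with the depth-camera intrinsics (Eq. 1), and takes their difference as the 3D motion vector (Eq. 2). The names rgb_to_depth_lut, fx, fy, cx, and cy are assumptions; only the use of Farneback optical flow is taken from the text (Section 4.2.4).

```python
import cv2
import numpy as np

def flow_to_3d_vectors(rgb_prev, rgb_next, depth_prev, depth_next,
                       rgb_to_depth_lut, fx, fy, cx, cy):
    """Project 2D optical flow into a 3D motion vector field (sketch).

    rgb_to_depth_lut maps each RGB pixel (v, u) to its depth-image pixel
    (u_d, v_d); fx, fy, cx, cy are the depth-camera intrinsics. These
    names and the exact data layout are assumptions for illustration.
    """
    # Dense 2D optical flow between consecutive RGB frames (Farneback).
    gray_prev = cv2.cvtColor(rgb_prev, cv2.COLOR_BGR2GRAY)
    gray_next = cv2.cvtColor(rgb_next, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_prev, gray_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    def to_camera_coords(u, v, depth_map):
        # Pinhole back-projection of a depth pixel to 3D camera coordinates (Eq. 1).
        d = depth_map[v, u]
        if d == 0:                      # invalid depth reading: skip
            return None
        return np.array([(u - cx) * d / fx, (v - cy) * d / fy, d])

    points, vectors = [], []
    h, w = gray_prev.shape
    for v in range(h):
        for u in range(w):
            du, dv = flow[v, u]
            u2, v2 = int(round(u + du)), int(round(v + dv))
            if not (0 <= u2 < w and 0 <= v2 < h):
                continue
            # Map both endpoints of the flow vector into the depth image.
            u_d, v_d = rgb_to_depth_lut[v, u]
            u2_d, v2_d = rgb_to_depth_lut[v2, u2]
            p0 = to_camera_coords(u_d, v_d, depth_prev)
            p1 = to_camera_coords(u2_d, v2_d, depth_next)
            if p0 is None or p1 is None:
                continue
            points.append(p0)
            vectors.append(p1 - p0)     # 3D motion vector (Eq. 2)
    return np.array(points), np.array(vectors)
```

The released code operates on full videos and is vectorized; the nested loop here is only for readability.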

3.2 Voxel Grid Augmentations

The benefit of a volumetric representation is that we can model realistic augmentations, e.g. a change in viewing angle, in comparison to forced 2D perspective augmentation methods. We investigated two such out-of-plane data augmentation techniques on the voxel grid input. The first was to translate all of the voxels in the voxel grid in the x, y, and z directions. The second was an out-of-plane rotation of the motion vectors (rotation about the axis normal to the ground plane).
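A minimal sketch of these two augmentations, assuming the motion field is stored as an array of voxel coordinates with one 3D vector per point; the choice of the y-axis as the ground-normal axis, the function name, and the magnitude limits are assumptions rather than values from the paper.

```python
import numpy as np

def augment_voxel_motion(points, vectors, grid_size,
                         max_shift=4, max_angle_deg=30.0):
    """Random out-of-plane augmentation of a voxelized motion field (sketch).

    points: (N, 3) voxel coordinates; vectors: (N, 3) matching 3D motion
    vectors. The y-axis is assumed to be normal to the ground plane.
    """
    # Rotation about the vertical (y) axis by a random angle.
    theta = np.deg2rad(np.random.uniform(-max_angle_deg, max_angle_deg))
    c, s = np.cos(theta), np.sin(theta)
    rot_y = np.array([[ c, 0., s],
                      [0., 1., 0.],
                      [-s, 0., c]])

    center = np.array(grid_size, dtype=float) / 2.0
    pts = (points - center) @ rot_y.T + center   # rotate positions about grid center
    vecs = vectors @ rot_y.T                     # rotate motion directions as well

    # Random translation of the whole grid.
    shift = np.random.randint(-max_shift, max_shift + 1, size=3)
    pts = pts + shift

    # Keep only voxels that remain inside the grid after augmentation.
    keep = np.all((pts >= 0) & (pts < np.array(grid_size)), axis=1)
    return pts[keep], vecs[keep]
```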

Figure 3: The Local-Temporal Stream. This is the network designed for the volumetric motion representation described in Section 3.1. This input is visualized here as images with a red point-cloud and green motion vectors.

3.3 3D Two-Stream Model

Inspired by the two-stream approach [simonyan2014two] of decomposing a video into a discriminative representation of an action, we wish to segment a video into both spatial and short-term temporal components. This is a powerful approach, as it leverages the success of object recognition networks for the spatial stream and explicit modeling of motion in the temporal stream. Furthermore, the spatial stream is important for picking up on discriminative spatial cues when motion patterns are identical (e.g. mopping vs. sweeping). Equivalently, the temporal motion stream can do the same in the case of identical spatial features (e.g. standing up vs. sitting down). Formally, we would like to represent a video as a snippet S_t of k consecutive frames, i.e. the snippet starting at frame t is S_t = {f_t, f_{t+1}, ..., f_{t+k-1}}. In this approach, the motion field throughout S_t represents a localized motion (gesture) and the color image at the center frame represents the median spatial representation of the snippet.

However, this representation of a video is insufficient: it is unable to model the structure of an action longer than the number of frames in the snippet representing the temporal feature [wang2016temporal]. We solve this issue by sampling multiple segments across the entire video. Formally, if S_i is the i-th input to the temporal network and the video has T frames, we would like to get N sections, each of k frames, where the frames for snippet i are chosen as follows:

S_i = {f_{s_i}, f_{s_i + 1}, ..., f_{s_i + k - 1}}        (3)
where:
s_i = floor((i - 1) * T / N)        (4)
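Reading Eqs. 3-4 as evenly spaced snippet starts gives the following sketch; the exact indexing used in the paper may differ, and the function name and example values are illustrative.

```python
def sample_snippets(num_frames, num_snippets, snippet_len):
    """Evenly spaced snippet sampling over a video (sketch of Eqs. 3-4).

    Returns, for each of the num_snippets snippets, the list of
    snippet_len consecutive frame indices.
    """
    snippets = []
    for i in range(num_snippets):
        start = (i * num_frames) // num_snippets
        start = min(start, num_frames - snippet_len)  # clamp near the video end
        snippets.append(list(range(start, start + snippet_len)))
    return snippets

# Example: a 120-frame video split into 10 snippets of 5 frames each.
print(sample_snippets(120, 10, 5)[:2])   # [[0, 1, 2, 3, 4], [12, 13, 14, 15, 16]]
```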

3.3.1 Spatial Stream

The spatial appearance stream responds to static information in individual video frames. Thus, we build upon the substantial success of residual networks (ResNets) [he2016deep] for object recognition tasks and apply one over the RGB frames sampled from our video. We can leverage models pre-trained on the millions of images in the ImageNet dataset and fine-tune them towards our task. Training and configuration details are specified in Section 4.2. As input we use the center image in each sampled section of the video; for snippets of length k, the inputs to the spatial stream are the center frames of the N snippets defined above.

Stream | Input | Cross-Subject | Cross-View
2D Spatial Stream | 2D images | 81.48% | 80.60%
2D Local-Temporal Stream | 2D optical flow | 71.27% | 71.09%
3D Local-Temporal Stream | 3D motion field | 78.97% | 89.58%
2D 2-Stream | 2D images & 2D optical flow | 88.15% | 84.24%
3D 2-Stream | 2D images & 3D motion field | 89.84% | 94.54%
Table 1: Results of an architecture ablation study on the NTU RGB+D dataset. This table shows the results from the different streams of the network. The first section shows the results from using only the RGB video as input. The second section shows a comparison of the motion stream between a 2D optical flow input and the volumetric input developed here. The final section shows the results of a product of experts voting scheme when combining the output of each stream.

3.3.2 Local-Temporal Stream

The temporal stream of our network is used to learn features from short-term motion patterns. Using the sampling technique described in Section 3.3, we create short snippets from an input video. For each snippet, we build the motion representation using the method described in Section 3.1: we collect the 3D motion vectors at each frame in the snippet, construct a voxel grid for the snippet, and concatenate the motion vectors in the snippet into their respective voxels. The final representation of a single snippet is a tensor of size X × Y × Z × 3k, where X, Y, and Z are the dimensions of the voxel grid and 3k corresponds to the stacked 3D motion vectors across an entire snippet of length k.

We propose a fully 3D CNN (Fig. 3) using this representation of motion as input. 3D convolutions are performed in a sliding-window fashion over each voxel grid. Because these voxel grids are optical-flow-based representations of motion, this can be seen as the 3D analogue of the two-stream temporal network defined in [simonyan2014two]. We consider the effectiveness of this 3D CNN compared to the 2D equivalent, which we describe below.

3.3.3 Global-Temporal Model

Modeling the global structure over compositions of the local features computed above is crucial to discriminate between certain actions. For example, taking off a shoe contains the same spatial and temporal patterns as putting on a shoe, albeit in the opposite order. A solution to this, as described in [baccouche2011sequential, donahue2015long, luo2017unsupervised, ng2015beyond] is to model the global structure of an action using recurrent neural networks. We employ an LSTM over the sequences of activations coming from both the spatial and temporal stream. As we do not wish to investigate methods of feature fusion in this work, we use separate LSTMs for the spatial and temporal stream. Therefore, we avoid attempting to model the relationship between space and time, and as can be seen in Figure 2, these networks can ultimately be trained separately. A product of experts voting scheme is used to produce a final classification across streams.
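A schematic sketch of the per-stream global-temporal model and the product-of-experts vote, using the LSTM sizes reported later in Section 4.2.2; the class and function names are illustrative and this is not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StreamLSTM(nn.Module):
    """LSTM over per-snippet features from one stream (spatial or temporal)."""
    def __init__(self, feat_dim=512, hidden=256, num_classes=60):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.logits = nn.Linear(hidden, num_classes)

    def forward(self, feats):            # feats: (batch, num_snippets, feat_dim)
        out, _ = self.lstm(feats)
        return self.logits(out[:, -1])   # classify from the last hidden state

def product_of_experts(spatial_logits, temporal_logits):
    """Fuse the two streams by multiplying their softmax outputs."""
    p = F.softmax(spatial_logits, dim=1) * F.softmax(temporal_logits, dim=1)
    return p / p.sum(dim=1, keepdim=True)
```

Training each stream's LSTM on its own loss and fusing only at prediction time mirrors the separate training of the two streams shown in Figure 2.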

4 Experiments and Results

The methods introduced above are evaluated here. We first present the evaluation dataset, followed by a description of the implementation and training details. We then describe the method used to evaluate the performance of our 3D representation over the 2D baseline and compare the full two-stream method to current state-of-the-art results. Finally, we show in an ablation study the performance gains from 3D data augmentation techniques.

4.1 NTU RGB+D Action Recognition Dataset

We evaluate our approach on the NTU RGB+D dataset [shahroudy2016ntu], which, to the best of our knowledge, is the largest open-source RGB-D action recognition dataset. This dataset contains RGB, depth, IR, and 3D articulated pose data for each action. There are 56.9 thousand videos split into 60 individual action classes. Of these 60 classes, 40 are daily activities, 9 are health related, and 11 are mutual (two-person) actions. The dataset is collected using 3 separate Kinect V2 cameras positioned at horizontal angles of -45°, 0°, and +45° relative to the subject. The dataset defines two methods for evaluation, which we follow as described in [shahroudy2016ntu]. The first is the cross-subject split, where a portion of the subjects are assigned to a train split and the rest are reserved for testing. The second is the cross-view split, where videos from two of the camera angles are used for training and the other is used for testing. This corresponds to 40,320 training videos and 16,560 testing videos for the cross-subject split and 37,920 and 18,960 for the cross-view split.

Model Pose Depth RGB Cross-Subject Cross-View
Part-Aware LSTM [shahroudy2016ntu] X 62.9% 70.3%
ED-LSTM [luo2017unsupervised] X 66.2% -
LSTM with Trust Gates [liu2016spatio] X 69.2% 77.7%
Res-TCN [kim2017interpretable] X 74.3% 83.1%
DSSCA-SSLM [shahroudy2017deep] X X 74.9% -
D+S CNN [rahmani2017learning] X X 75.2% 83.1%
D-Pose Traversal [weng2018deformable] X 76.8% 84.9%
ESV-CNN [liu2017enhanced] X X 80.0% 87.2%
Chained-MSN [zolfaghari2017chained] X X X 80.8% -
Unsupervised [li2018unsupervised] X X 80.9% 83.4%
TS-RNNCNN [zhao2017two] X X 83.7% 93.6%
CT-DAN [wang2017cooperative] X X 86.4% 89.0%
Glimpse Clouds [baradel2018glimpse] X X 86.6% 93.2%
DA-Net [wang2018dividing] X 88.1% 92.0%
Ours X X 89.8% 94.5%
Table 2: Results of our model compared to state-of-the-art methods on the NTU RGB+D dataset. We present only results from the full 2-stream network, our best performing model.

4.2 Training and Implementation Details

4.2.1 Sampling parameters

As input to the network, k frames were chosen to create each volumetric motion representation, sampled from N segments of the video. In the NTU RGB+D dataset, we found this choice reasonable as it captures nearly 60% of the average video. The voxel grid size X × Y × Z was chosen for a number of reasons. Most importantly, when visualized, the grid is large enough for a human to clearly recognize the actions being performed in the scene. Furthermore, as discussed below, it allows a 3D pooling layer after each of the 4 convolution layers to reduce the final output to a vector of size C, where C is the number of convolution filters in the final layer.

4.2.2 CNN parameters

We selected the size and configuration of the convolutional layers in our temporal stream to be similar to both [simonyan2014two] and [song2016deep]; they are enumerated on the right-hand side of Fig. 3. There are 4 layers of 3D convolutions with stride 1, each followed by 3D batch normalization, a ReLU activation, and max pooling, with the kernel sizes given in Fig. 3. The output after the final pooling operation is a vector that matches the LSTM input size of 512. The global temporal model (seen in Fig. 2) is an LSTM with input size 512 and hidden size 256. Dropout is not applied in this network. The final logits layer has an output size equal to the number of classes (60 in NTU RGB+D). The 2D equivalent model, discussed below, has the exact same configuration except with 2D convolutions and 2D batch normalization. The predictions of the two-stream model use the product-of-experts method [hinton2002training] of multiplying each stream's softmax output; however, gradient calculations for backpropagation were performed before the fusion of the two networks.
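The temporal-stream configuration above can be sketched as follows. Since the kernel sizes and voxel grid dimensions are listed only in Fig. 3, the channel widths, 3×3×3 kernels, 32³ grid, and snippet length k = 5 used here are placeholders, and the final adaptive pooling that collapses whatever spatial resolution remains into a 512-d vector is likewise an assumption.

```python
import torch
import torch.nn as nn

class LocalTemporalStream3D(nn.Module):
    """Sketch of the 3D CNN over a voxelized motion field.

    Input: (batch, 3*k, X, Y, Z) -- stacked 3D motion vectors per snippet.
    Four conv layers with stride 1, each followed by 3D batch norm, ReLU,
    and max pooling, ending in a 512-d feature vector; the specific
    channel widths and kernel sizes below are placeholders.
    """
    def __init__(self, in_channels=15):
        super().__init__()
        chans = [in_channels, 64, 128, 256, 512]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv3d(c_in, c_out, kernel_size=3, stride=1, padding=1),
                       nn.BatchNorm3d(c_out),
                       nn.ReLU(inplace=True),
                       nn.MaxPool3d(kernel_size=2)]
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveMaxPool3d(1)    # collapse remaining spatial dims

    def forward(self, x):                      # x: (B, 3k, X, Y, Z)
        x = self.features(x)
        return self.pool(x).flatten(1)         # (B, 512) snippet feature

# Shape check with a 32^3 grid and snippets of k = 5 frames (3k = 15 channels).
feats = LocalTemporalStream3D()(torch.zeros(2, 15, 32, 32, 32))
print(feats.shape)  # torch.Size([2, 512])
```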

4.2.3 Training details

We use a cross-entropy loss to optimize our models. The Adam optimizer [kingma2014adam] is used with a learning rate of 0.001, reduced by 50% every 10 epochs. We train all models for 40 epochs or until the training accuracy has saturated. For the spatial stream, the ResNet-18 implementation in PyTorch was used. We use pre-trained ImageNet weights; however, the weights of the first half of the network were frozen to avoid overfitting. All experiments were run on a machine with dual Titan RTX GPUs. The batch size was 8 (videos) when training the motion stream and 128 for the spatial stream; these were chosen because of GPU memory limitations. Training took on average 33 minutes per epoch for the motion stream and 6 minutes per epoch for the spatial stream (each epoch being a full pass over the training split). Inference over the entire test set took an average of 12 minutes, approximately 23 videos per second, which greatly exceeds real-time performance as it corresponds to over 1,000 frames per second. In order to determine stopping criteria during training, 5% of the training samples were reserved as a validation set.
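A sketch of the spatial-stream training setup just described; exactly which layers constitute the frozen "first half" of ResNet-18 is an assumption (here, everything through layer2), and the variable names are illustrative.

```python
import torch
import torch.nn as nn
import torchvision

# Spatial stream: ImageNet-pretrained ResNet-18 with the early layers frozen.
resnet = torchvision.models.resnet18(pretrained=True)
resnet.fc = nn.Linear(resnet.fc.in_features, 60)          # 60 NTU RGB+D classes
for name, param in resnet.named_parameters():
    # "First half" frozen -- here taken to mean everything up to layer2.
    if name.startswith(('conv1', 'bn1', 'layer1', 'layer2')):
        param.requires_grad = False

# Adam with lr 0.001, halved every 10 epochs, as described above.
optimizer = torch.optim.Adam(
    [p for p in resnet.parameters() if p.requires_grad], lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
criterion = nn.CrossEntropyLoss()
```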

4.2.4 Data augmentation and pre-processing

During training, random translations, as described in Section 3.2, are applied up to the total size of the grid, and random out-of-plane rotations of the voxels and motion vectors are applied about the axis normal to the ground plane. We examine the performance of these techniques in an ablation study over the NTU RGB+D dataset (Table 3). For the spatial stream over color images, the images are pre-processed before being passed through ResNet by resizing them to (224, 224) and normalizing the color values with the same mean and standard deviation used for the pre-trained ImageNet models. During training, we augment images through random crops, random color jitter, and random rotations. The random parameters of these pre-processing and augmentation techniques are consistent across an individual sample (video), varying only between samples and across training iterations. During inference, no augmentation is applied, only rescaling and normalization. To obtain the 2D optical flow images, the off-the-shelf implementation of [farneback2003two] in the OpenCV toolkit is used.
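The spatial-stream pre-processing and augmentation can be sketched with torchvision transforms; the crop scale, jitter strength, and rotation range below are placeholders, since the exact magnitudes are not given in the text.

```python
import torchvision.transforms as T

# Normalization constants used with ImageNet-pretrained backbones.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD  = [0.229, 0.224, 0.225]

# Training-time augmentation for the spatial stream (placeholder magnitudes).
train_transform = T.Compose([
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.RandomRotation(degrees=10),
    T.ToTensor(),
    T.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])

# Inference: only rescaling and normalization, no augmentation.
test_transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
```

Note that to keep the random parameters consistent across all frames of one video, as described above, they would have to be sampled once per video and reused; the per-image transforms shown here do not do that on their own.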

4.3 Experimental Results and Analysis

An ablation experiment designed to evaluate our input representation is presented in Table 1. We first investigate the results from the spatial and temporal streams alone. Additionally, we compare the 3D representation of motion in our temporal stream to its 2D equivalent. We can see that the individual spatial and temporal streams achieve competitive results on their own, with performance rivaling or exceeding many of the state-of-the-art results presented in Table 2. However, the best results come from the full 2-stream model, showing an increase in accuracy of up to 14% above its respective individual streams. This corroborates that the motion and spatial streams are discriminative on disjoint sets of features and that the full two-stream approach is the most effective.

Using a 3D representation of motion shows a significant performance increase in comparison to the 2D equivalent. The largest increase comes in the cross-view split with an 18.5% performance increase in classification accuracy. We hypothesize these results show that out-of-plane data augmentation techniques with a 3D representation better capture geometric (view-point) invariance than the 2D equivalent. To substantiate these claims, we design an ablative experiment of the augmentation presented in Table 3 which we discuss in detail below. Of note, we can see that there is no significant difference between the results of using a 3D representation without data augmentation techniques and its 2D equivalent. This shows that a 3D representation alone is not enough.

4.4 Comparison with the State of the Art

Table 2 presents the results of previous works compared to our proposed 3D two-stream approach. For the sake of comparison, we also include the modalities the various methods use for training their respective models. Although different modalities were used in the methods we compare against, we feel it is fair to compare across all of them, as all outputs from an RGB-D sensor (depth, RGB, and pose) are available for download. These results show that our methodology improves over the previous state-of-the-art results on this dataset.

4.5 Exploration - Data Augmentation

We conduct further experiments to understand the difference in performance when using 3D data augmentation techniques. Table 3 presents the results of this ablation study. Applying random translations to the voxel grids yields an increase in classification accuracy on each evaluation metric (5.5% cross-subject, 8.2% cross-view). Applying random out-of-plane rotations to the volumetric representation boosts performance further (by 2.3% and 9.5%, respectively). It is interesting to note that the overall performance gain on the cross-view split (17.73%) is much higher than on the cross-subject split (7.79%). Because the cross-view split is evaluated on a different camera angle, we believe this illustrates that out-of-plane rotations and translations during training help the network become invariant to view-point, as evidenced by this significant performance gain.

Augmentation Technique | Cross-Subject | Cross-View
No augmentation | 71.18% | 71.85%
Translations | 76.70% | 80.08%
Translations + Rotations | 78.97% | 89.58%
Table 3: Results on NTU RGB+D: augmentation ablation study. This shows classification accuracy when using only the volumetric motion representation (the temporal stream of the network) with the various augmentation techniques applied to the input.

The volumetric representation created in this work comes from a single RGB-D camera. This is limiting because this type of video, often referred to as 2.5D, can only capture partial surfaces from the depth map, i.e. the camera-facing surface. Without complete surface reconstruction, data augmentation through virtually rotating and translating the scene is limited by missing information. However, our results show a clear benefit from applying these techniques nonetheless, evidenced by the significant performance gains shown in Table 3.

5 Conclusions

We propose a method for activity recognition using a volumetric representation of 3D motion. The method achieves state-of-the-art performance on both the cross-subject and cross-view evaluation metrics of the NTU RGB+D dataset. A novel representation of motion is created from RGB-D video by projecting a dense optical flow field, calculated over RGB frames, into a 3D voxel grid using the corresponding depth map. A two-stream convolutional network is applied over short snippets of video and an LSTM is used to model the temporal structure across snippets. In the two-stream network, the spatial stream uses a pre-trained object recognition network over the RGB frames, and for the temporal stream we define a 3D CNN formulation over the volumetric 3D motion field. In our experiments, we show that this 3D representation outperforms the equivalent 2D representation. Furthermore, we show that the out-of-plane data augmentation techniques made possible by a 3D representation can significantly improve performance. Future work will investigate approaches to fuse the temporal and spatial streams to allow learning of complementary filters across streams.

References