RSA: Randomized Simulation as Augmentation for Robust Human Action Recognition

12/03/2019
by   Yi Zhang, et al.

Despite the rapid growth in datasets for video activity, stable robust activity recognition with neural networks remains challenging. This is in large part due to the explosion of possible variation in video – including lighting changes, object variation, movement variation, and changes in surrounding context. An alternative is to make use of simulation data, where all of these factors can be artificially controlled. In this paper, we propose the Randomized Simulation as Augmentation (RSA) framework which augments real-world training data with synthetic data to improve the robustness of action recognition networks. We generate large-scale synthetic datasets with randomized nuisance factors. We show that training with such extra data, when appropriately constrained, can significantly improve the performance of the state-of-the-art I3D networks or, conversely, reduce the number of labeled real videos needed to achieve good performance. Experiments on two real-world datasets NTU RGB+D and VIRAT demonstrate the effectiveness of our method.




1 Introduction

Recognizing human actions robustly from video is an important task in computer vision with many real-world applications. Recent advances in action recognition are partly due to the creation of large-scale labeled video datasets [15, 5, 33]. However, performance on action recognition benchmarks is still not as good as object recognition in static images. One reason for this inferior performance is the extremely high intra-class variation -- the same action looks very different when performed by different people in different environments and captured from different camera viewpoints [27, 28]. Factors such as lighting, appearance, and camera viewpoint, which should not affect recognizing an action, are called nuisance factors. When collecting real-world video data, different nuisance factors are often entangled together, e.g. people often play soccer during the day on grass fields. It is therefore often difficult or impractical to collect and annotate a large real-world video dataset that sufficiently covers the space of all nuisance factors. Current action recognition benchmarks are far from capturing all the variations of a single action, so models trained on these datasets are prone to be biased towards intrinsic dataset-specific features.

Figure 1: Illustration of our main idea. We use simulated videos to augment real training data to improve model robustness to nuisance factors, e.g. novel viewpoint, change of background and human appearance.

In this paper we explore the possibility of augmenting real videos with simulated data to improve action recognition performance, particularly through robustness to nuisance factors, e.g. camera viewpoint, actor appearance and background. To this end, we propose the Randomized Simulation as Augmentation (RSA) framework, in which we render actions in a virtual world where nuisance factors can be controlled. By varying the rendering parameters, we can generate a large number of action sequences covering a wide variation of nuisance factors. Adding these data to model training reduces the possibility of overfitting and model bias. This is in the same spirit as domain randomization [31, 42, 39, 24].

Rendering synthetic actions with computer graphics engines requires 3D human motion to drive 3D human models. For each action, we identify three possible ways to obtain the corresponding 3D human motion. First, we can search existing large-scale motion capture libraries, i.e. the CMU MoCap Library [4] and Human3.6M [13], which contain high-quality motions. A more scalable way is to obtain human motion from videos. Since 2D human pose estimation is relatively accurate, we can obtain 3D motion from videos with depth information, i.e. RGB-D videos. Recently, monocular 3D human pose estimation has become increasingly accurate, which supports extracting human motion directly from RGB video. Furthermore, the temporal consistency in videos can be exploited to enhance pose estimation.

Utilizing synthetic data requires dealing with the gap between the synthetic and real domains, i.e. the reality gap. Since 1) renderers from computer graphics are not perfect and 2) it is often impractical to model every possible nuisance factor in the generation process, synthetic data will still have discrepancies from real data. This limits the potential for learning generalizable features from the synthetic domain. For deep models with large capacity, there is the additional complication that training jointly on real and simulated data can result in learning separate models for the two domains [22]. We explore two possible solutions to this problem: 1) pretraining the network on synthetic data and then finetuning on real data; 2) domain adversarial training [9, 43], which minimizes the discrepancy between the distributions of features extracted from the two domains. We find that both methods outperform a baseline joint training method as well as training on real data alone.

To support our methodology, we perform extensive experiments on two real-world datasets, NTU RGB+D [17] and VIRAT [23]. We show that augmentation with simulated data boosts the performance of a state-of-the-art action recognition model while improving its robustness to various nuisance factors. We also show that using synthetic data significantly reduces the number of labeled real videos needed to achieve good performance.

Our contributions can be summarized as follows:

  • We propose an effective framework to augment real videos with large-scale rendered synthetic videos to improve robustness to nuisance factors in human action recognition.

  • We describe methods for effectively handling the domain gap between synthetic and real domains.

  • We demonstrate the effectiveness of our approach on two real-world datasets in terms of both improving performance of a state-of-the-art model and greatly reducing the need for labeled real data.

Figure 2: Overview of the RSA framework. Given real human action videos, we obtain corresponding 3D human motions for driving synthetic human models in a virtual environment. We render synthetic human action videos with randomized parameters that cover far more variation than the original dataset. Through transferable training with both real and synthetic data, the model achieves improved robustness to nuisance factors.

2 Related Work

Human Action Recognition.

Human action recognition has benefited greatly from recent advances in deep convolutional neural networks (DCNNs). Karpathy et al. [15] demonstrated on a large dataset that DCNNs significantly improve action recognition accuracy over feature-based baselines. Further improvements have been achieved through better temporal modeling [46, 53] and the addition of an optical flow branch [34]. Another branch of methods learns spatio-temporal features directly with 3D convolutions [40, 38]. Carreira and Zisserman [5] showed the effectiveness of inflating the filters of 2D CNNs pretrained on large-scale 2D image datasets as the initialization for 3D convolutional layers. To reduce the large computational cost of 3D convolutions, recent work [26, 41, 48] has focused on different ways to replace 3D convolutions with 2D convolutions. Skeleton-based models [45, 35, 49] for human action recognition reason over dynamic human skeletons; currently their performance is not on par with RGB-based methods [49].

Synthetic Data for Computer Vision. Synthetic data has already been successfully applied in many computer vision tasks, such as stereo [21], optical flow [3, 21], semantic segmentation [29, 30], human part segmentation [44], for which the groundtruth is either difficult or expensive to obtain in real world.

Motivated by the fact that existing benchmarks lack the variety required to properly assess the performance of video analysis algorithms, Gaidon et al. [8] propose using synthetic data as a proxy to assess real-world performance on high-level computer vision tasks. The ability to control nuisance factors in a virtual world makes it possible to study the robustness of vision algorithms [51]. Kar et al. [14] parameterize a synthetic dataset generator with a neural network and generate data to match content distributions for different tasks. [6] is the first work to investigate using virtual worlds to generate synthetic videos for action recognition, procedurally generating realistic actions from a library of atomic motions. Puig et al. [25] use sequences of atomic actions and interactions to generate actions in a simulated household environment. Our work goes beyond these by randomizing the nuisance factors in synthetic data to improve model robustness and boost performance in the real domain.

Domain Adaptation. Computer vision models trained on a source domain often suffer a performance drop when tested on domains with different marginal distributions. This phenomenon is known as domain shift, which domain adaptation (DA) methods seek to alleviate. One family of DA methods directly minimizes a distance, e.g. maximum mean discrepancy [18, 19, 50] or the CORAL distance [37], between the feature distributions of the source and target data. Domain adversarial training [9, 43] encourages learning features that are indistinguishable between domains. Pixel-level DA methods [2, 11] align the distributions in pixel space rather than feature space. For video domain adaptation, aligning the temporal dynamics between the source and target domains has been shown to be effective [36]. For adaptation from the synthetic to the real domain, domain randomization [31, 42, 39, 24] randomizes all irrelevant factors in the virtual world, forcing the model to focus on the essential structure of the problem.

3 Method

We present the Randomized Simulation as Augmentation (RSA) framework for robust action recognition in this section. We first formulate our method from a probabilistic perspective. We then describe our implementation in detail, followed by how we handle the domain gap between the synthetic and real domains.

3.1 Formulation

We use a simplified generative model, in which an action is defined by human motion only, for generating human action videos while taking nuisance factors into consideration. Formally, we denote by V an RGB video. We formulate the generation of a human action video as a function V = G(m, θ), where the random variable a denotes the underlying action to generate, m denotes the human motion, which contains a sequence of 3D human poses, and θ = (θ_1, ..., θ_n) is a set of pairwise independent random variables representing the nuisance factors, e.g. human appearance, surrounding environment and camera viewpoint, which are also independent of a and m. We obtain the following generative model for human action videos,

p(V | a) = ∫ 1[V = G(m, θ)] p(m | a) p(θ) dm dθ,    (1)

where 1[·] is the indicator function. Therefore, the optimal discriminative recognizer is

â(V) = argmax_a p(a | V),    (2)

which is what we approximate with a deep neural network and is not affected by nuisance factors.
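As a sanity check of this formulation, the toy simulation below (all names and values are illustrative, not from the paper) defines a discrete generator, samples nuisance factors uniformly, and verifies empirically that the optimal recognizer's prediction depends only on the motion, not on the nuisance variable:

```python
import random
from collections import Counter

# Toy generator G(m, theta): a "video" is just the observable pair (m, theta).
def G(m, theta):
    return (m, theta)

# Each action a has its own set of motions (disjoint here for clarity).
MOTIONS = {0: ["wave_a", "wave_b"], 1: ["kick_a", "kick_b"]}
THETAS = ["day", "night", "grass", "indoor"]  # nuisance: uniform

random.seed(0)
samples = []
for _ in range(10000):
    a = random.randint(0, 1)
    m = random.choice(MOTIONS[a])   # m ~ p(m | a)
    theta = random.choice(THETAS)   # theta ~ uniform, independent of a and m
    samples.append((a, G(m, theta)))

# Empirical posterior counts for p(a | V), and argmax_a p(a | V) of Eq. (2).
counts = Counter(samples)
def recognize(video):
    return max((0, 1), key=lambda a: counts[(a, video)])

# The prediction is unchanged when only the nuisance factor changes.
for theta in THETAS:
    assert recognize(("kick_a", theta)) == 1
    assert recognize(("wave_b", theta)) == 0
```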

3.2 Randomized Simulation as Augmentation for Robust Human Action Recognition

It is costly and inefficient to sample enough data from the real generator, i.e. to collect and annotate real videos, to sufficiently cover the variation of nuisance factors. We propose to use simulation in a 3D virtual world to augment the variation of nuisance factors. The whole framework is shown in Fig. 2. The key idea is to use uniform distributions over all possible values of the nuisance factors θ. This is desirable because 1) the distributions of some nuisance factors should inherently be uniform, e.g. camera viewpoints; 2) even though other nuisance factors have non-uniform distributions in the real world, e.g. the height of humans, such prior distributions should not be exploited too strongly by a robust recognition model.

Specifically, we use a 3D renderer to approximate the real generator G. As shown in Fig. 3, the renderer uses human motion sequences to drive 3D human skeletal meshes in a virtual environment.

For the purposes of simulation, we need a motion model, which we learn from existing datasets. For an action category a, let D_a denote the training examples from an existing dataset. The most direct way is to estimate 3D human motion from D_a, by which we obtain a motion set M_a. For datasets where RGB-D videos are provided, we can simply perform 2D human joint estimation, which is relatively accurate, and reconstruct the 3D pose with the help of depth information. We subsequently assume m takes values from M_a uniformly. Furthermore, we use the action category name to query similar 3D human motions from existing large-scale motion capture libraries such as the CMU MoCap Library [4].

We consider three main nuisance factors -- human appearance, background environment and camera viewpoint -- each controlled by rendering parameters following an approximately uniform distribution. We obtain a random human mesh from MakeHuman [1] by randomly sampling a set of parameters that define a humanoid. Unlike previous work [6], our method does not require realistic human textures. Instead, we use images randomly sampled from the MSCOCO dataset [16] as textures. The human action is rendered in a simple virtual environment consisting of a floor and a background sky sphere; their textures are sampled from the same dataset. For the camera, we fix the resolution and field of view. As shown in Fig. 3, the camera always faces the human model and is controlled by three parameters, i.e. distance to the human model, azimuth and elevation, from which a random camera viewpoint is sampled for each video.
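As a concrete sketch of this per-video randomization (the numeric ranges and function names are illustrative assumptions, not the paper's actual values), each nuisance parameter can be drawn uniformly, with the spherical camera coordinates converted to a position facing the human model at the origin:

```python
import math
import random

def sample_render_params(num_textures, rng=random.Random(0)):
    """Draw one set of randomized nuisance parameters for a video.

    The numeric ranges below are illustrative assumptions only.
    """
    distance = rng.uniform(2.0, 6.0)            # metres from the human model
    azimuth = rng.uniform(0.0, 2.0 * math.pi)   # full circle around the actor
    elevation = rng.uniform(0.0, math.pi / 3)   # up to 60 degrees above ground
    return {
        # Camera position in Cartesian coordinates; the camera then points
        # at the human model placed at the origin.
        "camera_xyz": (
            distance * math.cos(elevation) * math.cos(azimuth),
            distance * math.cos(elevation) * math.sin(azimuth),
            distance * math.sin(elevation),
        ),
        # Uniformly chosen texture indices for actor, floor and sky.
        "actor_texture": rng.randrange(num_textures),
        "floor_texture": rng.randrange(num_textures),
        "sky_texture": rng.randrange(num_textures),
    }

params = sample_render_params(num_textures=1000)
x, y, z = params["camera_xyz"]
# The camera stays on a spherical shell above the ground plane.
assert 2.0 <= math.sqrt(x * x + y * y + z * z) <= 6.0 and z >= 0.0
```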

Figure 3: The virtual environment for generating synthetic data. A virtual camera records the action from different viewpoints. Textures are applied to the sky, ground, vehicle and actor.

3.3 Transfer from Synthetic to Real

Since the graphics renderer we use is only an approximation to the real-world generator G, there are still discrepancies between real and synthetic action videos; thus, a model trained only on synthetic data will not generalize well to real data. This problem is called the reality gap. Directly training networks on a mixture of synthetic and real data may not help the model achieve better performance on the real domain, because the large capacity of deep networks enables them to handle the two domains separately, as shown in Fig. 8.

To overcome the reality gap, we investigate two strategies: finetuning and domain adversarial training [9, 43]. For finetuning, the action recognition network is pretrained on the synthetic videos and then finetuned on real videos with a small learning rate. Domain adversarial training seeks a shared latent space in which data from the two domains are indistinguishable, usually through minimax optimization of a GAN loss [10].

Our domain adversarial training process is illustrated in Fig. 4. Let p_s and p_t be the distributions of the source and target domains. F denotes the module extracting spatio-temporal features in an action recognition network C, e.g. the layers before the final convolutional layer in an I3D network [5]. A discriminator D is added to predict whether a feature comes from the source or the target domain, which corresponds to maximizing the adversarial loss L_adv,

L_adv = E_{x∼p_s}[log D(F(x))] + E_{x∼p_t}[log(1 − D(F(x)))].    (3)

The objective for the action recognition model is to bring the feature representations of the two domains closer while performing correct classification. Therefore, the final optimization is

min_{F,C} max_D  L_cls + λ L_adv,    (4)

where L_cls denotes the cross-entropy loss for action classification on the two domains and λ is a weight balancing the classification task and adversarial training. We perform training in an alternating manner, i.e. freezing the weights of D when optimizing F and C, and vice versa.
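As a small self-contained illustration (the discriminator probabilities and λ below are made-up values), the adversarial loss of Eq. (3) and the combined objective of Eq. (4) can be evaluated from batches of discriminator outputs:

```python
import math

def adversarial_loss(d_source, d_target):
    """GAN loss of Eq. (3): each entry is D's predicted probability that a
    feature comes from the source (synthetic) domain."""
    eps = 1e-12  # numerical guard for log(0)
    term_s = sum(math.log(p + eps) for p in d_source) / len(d_source)
    term_t = sum(math.log(1.0 - p + eps) for p in d_target) / len(d_target)
    return term_s + term_t

def combined_objective(cls_loss, adv_loss, lam=0.1):
    """Eq. (4): the recognition network minimizes classification loss plus
    lambda times the adversarial loss, while D maximizes the adversarial loss."""
    return cls_loss + lam * adv_loss

# A perfectly confused discriminator (p = 0.5 everywhere) yields
# log(0.5) + log(0.5) = -2 log 2: the value at complete domain alignment.
aligned = adversarial_loss([0.5] * 4, [0.5] * 4)
assert abs(aligned - (-2.0 * math.log(2.0))) < 1e-9

# A confident, correct discriminator pushes the loss towards 0 (its maximum),
# which the feature extractor counteracts by making the domains indistinguishable.
assert adversarial_loss([0.99] * 4, [0.01] * 4) > aligned
```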

Figure 4: Domain adversarial training for the I3D network. A GAN loss is added during training to align the synthetic and real domains in feature space.

3.4 Synthetic Data Generation Pipeline

We developed two pipelines for data generation in our RSA framework to maximize the utilization of available 3D human resources. They are based on Unreal Engine and Blender respectively, and each has distinct strengths. The first pipeline can utilize more high-quality commercial resources for building complex environments and human-object interaction. The second is better suited to controlling the low-level rendering process, such as manipulating human meshes and rendering parameters.

For the Unreal Engine pipeline, we developed code for randomizing object textures, changing the appearance of any 3D human model, and controlling the rotation of each bone of a human skeleton. A Python interface is provided which connects the virtual environment with other learning algorithms. The whole pipeline can be released as a binary executable for reproducing and sharing our work. Our code for obtaining images and ground truths is based on UnrealCV [47].

For the Blender pipeline, human meshes from MakeHuman are utilized. Animation data captured with a Microsoft Kinect [52] can be used to drive the human mesh in Blender. We also implemented a full pipeline to randomize the other rendering factors.

4 Experiments

In this section we present the datasets used for evaluation, implementation details, and experimental results showing the effectiveness of our proposed RSA framework. We also provide detailed analyses illustrating why our method works.

4.1 Datasets for Evaluation

NTU RGB+D dataset [32, 17]. This is a large controlled human action dataset with a variety of camera views, capture environments and human subjects. The dataset is collected from 106 distinct subjects and contains 114,480 sequences in total. There are 120 classes of human actions, including daily one-person actions, two-person interactions and health-related activities. There are 32 different setups, each with a different location and background. Actions are captured by three cameras simultaneously, covering the front, side and 45-degree views. Depth and 3D skeletal data are also captured by the Microsoft Kinect.

Highly Disjoint Train/test split for NTU RGB+D. This controlled real-world dataset enables us to evaluate the robustness of action recognition models to nuisance factors. We split the dataset so that the train and test splits share no environments, human subjects or camera viewpoints. For environments, we use setups 7, 10, 11, 13, 21, 22, 23, 24 and 25 for testing and the rest for training. We do not use the cross-setup evaluation split because the same environments appear in both its train and test sets. For human subjects, we select subject IDs for training following the cross-subject evaluation in [17]. For camera viewpoints, we use only camera view 1 (C001) for training and test on the others. In this work, we focus on a subset of 31 classes which do not require reasoning about human-object interactions. This results in a training set of 3,950 videos and a test set of 2,670 videos.
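Such a split can be realized as a simple metadata filter. In the sketch below, the setup and camera rules follow the text above, while the training-subject set is a hypothetical placeholder (the actual IDs follow the cross-subject protocol of [17]):

```python
# Setups held out for testing, as listed in the split description.
TEST_SETUPS = {7, 10, 11, 13, 21, 22, 23, 24, 25}
# Hypothetical placeholder for the cross-subject training IDs.
TRAIN_SUBJECTS = {1, 2, 4, 5, 8}
TRAIN_CAMERA = 1  # only C001 is used for training

def assign_split(setup, subject, camera):
    """Return 'train', 'test', or None (clip unused) so that the two
    splits share no setup, subject or camera viewpoint."""
    if setup not in TEST_SETUPS and subject in TRAIN_SUBJECTS and camera == TRAIN_CAMERA:
        return "train"
    if setup in TEST_SETUPS and subject not in TRAIN_SUBJECTS and camera != TRAIN_CAMERA:
        return "test"
    return None  # mixed clip: would leak a nuisance factor across splits

assert assign_split(setup=1, subject=2, camera=1) == "train"
assert assign_split(setup=7, subject=3, camera=2) == "test"
assert assign_split(setup=7, subject=2, camera=1) is None
```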

VIRAT dataset [23]. This is a video surveillance dataset containing videos with cluttered backgrounds and various camera viewpoints, covering 19 classes of human actions and human-vehicle interactions. The videos are collected at multiple sites distributed throughout the USA, including five scenes named 0000, 0002, 0400, 0401 and 0500. The scenes are shown in Fig. 5; the videos in these scenes share the same actions but have different backgrounds and viewpoints. The VIRAT dataset is challenging due to 1) low-resolution actions, i.e. each action only occupies a small region of the whole frame; and 2) the limited data available for training. We focus on the task of classifying video clips that have already been cropped both spatially and temporally using ground truth annotations.

Highly Disjoint Train/test split for VIRAT. The differences between the five scenes enable us to create new splits suitable for evaluating the robustness of action recognition models to nuisance factors. We split the dataset based on a Leave-One-Scene-Out (LOSO) criterion, using videos from one scene for testing and the rest for training. In our experiment, the left-out scene is chosen to keep the balance between the training and testing sets. In the new split, we focus on six typical yet challenging classes of activities in VIRAT: Opening, Open Trunk, Entering, Exiting, Close Trunk and Closing. The training set contains 534 video clips and the testing set contains 363 video clips.

Figure 5: The Leave-One-Scene-Out (LOSO) setting for the five scenes in VIRAT dataset; the left four scenes are used for training and the right one is left out for testing.

4.2 Implementation Details

Network architecture. For all experiments, we use the I3D network [5] as our base action recognition network, initialized by inflating the weights of a 2D Inception network pretrained on ImageNet [7]. We rescale the shorter side of all input videos to 256 pixels. For training, we randomly sample 32 consecutive frames temporally and crop a patch from each frame spatially. A random left-right flip is applied as 2D data augmentation. At test time, we spatially sample three patches evenly along the longer side of the video and temporally sample three clips evenly from the full video. For domain adversarial training, we use a simple discriminator with three 3D convolutional layers, each followed by a 3D BatchNorm layer [12] and a Leaky ReLU activation.
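The training-time clip sampling described above can be sketched with NumPy; the 224-pixel patch size is an assumption, as this excerpt does not state the crop size:

```python
import numpy as np

def sample_training_clip(video, clip_len=32, crop=224, rng=np.random.default_rng(0)):
    """Randomly sample 32 consecutive frames, take a random spatial crop,
    and apply a random left-right flip. crop=224 is an assumed patch size.

    video: array of shape (T, H, W, 3) whose spatial sides are >= crop.
    """
    t, h, w, _ = video.shape
    t0 = rng.integers(0, t - clip_len + 1)   # temporal start frame
    y0 = rng.integers(0, h - crop + 1)       # spatial crop offsets
    x0 = rng.integers(0, w - crop + 1)
    clip = video[t0:t0 + clip_len, y0:y0 + crop, x0:x0 + crop]
    if rng.random() < 0.5:                   # random horizontal flip
        clip = clip[:, :, ::-1]
    return clip

# A dummy video with shorter side rescaled to 256 pixels.
video = np.zeros((64, 256, 320, 3), dtype=np.uint8)
clip = sample_training_clip(video)
assert clip.shape == (32, 224, 224, 3)
```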

Human Motion Model Acquisition. For the NTU RGB+D dataset, 3D skeleton data computed by the Kinect is already provided. Using the predefined Kinect skeleton, we calculate the local rotations of all joints in each frame and write the animations in the BVH file format. We then import the generated BVH files into Blender and use these animations to drive the 3D human models. For the VIRAT dataset, we use a low-cost motion capture device to collect a set of human motions for each action class.
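The BVH export step can be sketched as follows; the single-joint hierarchy and channel layout here are toy placeholders, far simpler than the full Kinect skeleton:

```python
def write_bvh(frames, frame_time=1.0 / 30.0):
    """Serialize per-frame joint rotations into a minimal BVH string.

    frames: list of (z, x, y) Euler rotations in degrees for a single toy
    joint; a real exporter would emit the full Kinect joint hierarchy.
    """
    lines = [
        "HIERARCHY",
        "ROOT Hips",
        "{",
        "  OFFSET 0.0 0.0 0.0",
        "  CHANNELS 3 Zrotation Xrotation Yrotation",
        "  End Site",
        "  {",
        "    OFFSET 0.0 1.0 0.0",
        "  }",
        "}",
        "MOTION",
        f"Frames: {len(frames)}",
        f"Frame Time: {frame_time:.6f}",
    ]
    for z, x, y in frames:
        lines.append(f"{z:.4f} {x:.4f} {y:.4f}")
    return "\n".join(lines)

bvh = write_bvh([(0.0, 0.0, 0.0), (10.0, 0.0, -5.0)])
assert "MOTION" in bvh and "Frames: 2" in bvh
```

Such a file can then be imported into Blender's BVH importer to drive an armature.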

Figure 6: Comparison of real video frames and our synthetic video frames. The top two rows are for the NTU RGB+D dataset and the bottom two rows are for the VIRAT dataset. The NTU RGB+D examples are from kicking something and shoot at basket; the VIRAT examples are from Open Trunk.

4.3 Experimental Results on NTU RGB+D

In this section we evaluate RSA on the NTU RGB+D dataset. We generate a total of 14631 synthetic videos, which is about 4 times larger than the real training set, to augment the real training set. We compare three methods for learning from our generated synthetic data: 1) direct joint training (Joint training) as our baseline; 2) pretraining on synthetic videos and finetuning on real videos (RSA-F); and 3) domain adversarial training (RSA-DA).

As shown in Table 1, RSA improves action recognition accuracy compared to training purely on real videos. The classification accuracy improves from 83.93% to 88.69% (+4.76) with finetuning, and to 85.02% (+1.09) with adversarial training. This suggests that with RSA the state-of-the-art I3D model achieves increased robustness to nuisance factors. Due to the reality gap, the model trained only on synthetic data achieves an accuracy of just 47.57%. Direct joint training performs worse than real-only training, showing that handling the reality gap carefully is important.

In the rest of this section, we provide more experiments for a better understanding of our framework, including the effect of reduced real training data, ablation studies on individual nuisance factors, and visualizations of features learned with different ways of handling the reality gap.

Model Accuracy
Real-only 83.93
Synthetic-only 47.57
Joint training 82.92
RSA-F 88.69
RSA-DA 85.02
Table 1: Results on NTU RGB+D dataset in accuracy (%).
Figure 7: Performance boost with RSA on NTU RGB+D dataset using less real data.
Model All Real Data 1/2 Real Data 1/8 Real Data
Subject Background Camera Subject Background Camera Subject Background Camera
Real-only 94.50 93.11 87.32 92.00 86.48 82.28 64.04 51.41 54.14
RSA-F 95.48 94.08 90.27 93.17 91.86 87.77 87.92 83.33 81.06
RSA-DA 93.74 90.66 88.18 92.06 89.52 85.47 87.48 82.08 79.13
Table 2: Results for individual nuisance factors on the NTU RGB+D dataset. We evaluate model robustness to human subject changes (Subject), different backgrounds (Background) and different camera viewpoints (Camera). Experiments are performed with different amounts of real videos for training.

Effectiveness with fewer training videos. To evaluate how much improvement RSA can achieve with less real training data, we perform experiments on NTU RGB+D with only a subset of the real videos available for training. As shown in Fig. 7, consistent performance boosts are achieved by adding RSA. As the amount of labeled real data decreases, I3D performance drops as expected, due to an increased risk of overfitting. With the help of our proposed method, a similar level of performance (only a small drop in accuracy) is achieved with only 1/8 of the real training data, whereas real-only performance drops significantly. This shows that our framework can substantially reduce the number of labeled real videos needed to achieve good performance.

Improvement for individual nuisance factors. To study how well our model handles each nuisance factor, we create new test sets varying one nuisance factor at a time. For example, to evaluate robustness to different human subjects, we keep the camera view and setup the same as in training and choose videos performed by the human subjects in the test set. Results are shown in Table 2. Our method improves robustness to all three nuisance factors, especially when training data is insufficient. Among the three factors evaluated, camera viewpoint is the one I3D is least robust to, and it can be greatly improved by our method.

Visualization for Handling the Reality Gap. We visualize the feature distributions of data from both domains learned by different models in 2D space using t-SNE [20]. Fig. 8 shows the feature embeddings for models trained with direct joint training, finetuning and domain adversarial training. Compared with joint training, domain adversarial training achieves better alignment between the synthetic and real domains while learning more discriminative features, i.e. features form separate class clusters. This can explain why domain adversarial training achieves better results than joint training. As shown in Fig. 8(b), after finetuning on real videos, I3D focuses on the real domain and is no longer able to handle synthetic data well.

(a) Joint Training
(b) RSA-F
(c) RSA-DA
Figure 8: Feature embeddings in 2D space using t-SNE for models trained with different strategies for handling the reality gap. Blue: real domain; red: synthetic domain. (Better viewed in color.) (a) Joint training with synthetic and real data achieves moderate alignment of the two domains, but the learned features are not very discriminative. (b) Finetuning on real data yields highly discriminative features on the real domain. (c) Domain adversarial training aligns the two domains better than joint training while keeping the features in separate clusters.

4.4 Experimental Results on VIRAT

In this section, we evaluate RSA on the VIRAT dataset using the same three methods for learning from synthetic data as on NTU RGB+D. VIRAT is quite different from NTU RGB+D: it contains less data and is more difficult, since the human and vehicle are far from the camera and the surveillance footage is blurry, making activities hard to recognize, whereas NTU RGB+D is collected indoors with clearly visible actions. Therefore, model accuracy on VIRAT is generally lower than on NTU RGB+D. We experiment on VIRAT to show our method's ability to handle a difficult dataset, for which we generated 7,582 synthetic videos (14 times the size of the real training data).

As shown in Table 3, RSA greatly improves accuracy compared with training only on real data. While the joint training baseline hardly improves accuracy, finetuning lifts it from 46.01% to 54.82% (+8.81) and domain adversarial training lifts it to 55.10% (+9.09). Domain adversarial training works better than finetuning here, unlike on NTU RGB+D, probably because the amount of real data in VIRAT is insufficient for finetuning.

In the following, we provide experiments showing the influence of nuisance factors and a detailed analysis of the improvement brought by our method.

Model Accuracy
Real-only 46.01
Synthetic-only 33.61
Joint training 46.28
RSA-F 54.82
RSA-DA 55.10
Table 3: Results on VIRAT dataset in accuracy (%).

Influence of nuisance factors. We performed experiments on the original train/test split of VIRAT and compare with the LOSO split to quantitatively show the effect of nuisance factors. The training and testing sets in the original split share similar backgrounds, viewpoints and vehicle appearances, while in the LOSO split the two sets have little overlap on these factors. Results are in Table 5: dramatic variation of the nuisance factors poses a challenge to the state-of-the-art action recognition network, as I3D performance drops from 52.97% to 46.01% (−6.96).

Improvement on each class. We evaluate the performance of each action class using the F1 score, defined as the harmonic mean of precision and recall. The results are shown in Table 4. Domain adversarial training brings improvement to all classes, while finetuning improves most classes except Open Trunk. A large improvement is achieved on the Entering class, which is difficult for the model trained on real data only. The model trained with joint training improves on some classes while performing poorly on Exiting.
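For reference, the per-class F1 score used here is the harmonic mean of precision and recall; a minimal sketch computing it from per-class counts:

```python
def f1_score(tp, fp, fn):
    """F1 = 2PR / (P + R), from per-class counts of true positives,
    false positives and false negatives."""
    if tp == 0:
        return 0.0  # no true positives means precision or recall is 0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Equal precision and recall of 0.5 give F1 = 0.5.
assert f1_score(tp=5, fp=5, fn=5) == 0.5
```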

We also find the interesting fact that the model trained only on synthetic data outperforms the model trained on real data on some classes, such as Entering and Opening. This suggests that the model can learn useful features from synthetic data with randomized nuisance factors despite the reality gap.

Model           Average  Closing  Close Trunk  Entering  Exiting  Open Trunk  Opening
Real-only 44.4 55.7 38.5 23.2 52.1 40.8 39.1
Synthetic-only 32.3 33.7 27.8 38.0 24.0 18.9 41.6
Joint training 44.8 53.0 41.9 27.8 30.4 47.8 51.9
RSA-F 51.8 61.2 39.1 44.4 60.6 40.0 58.1
RSA-DA 52.0 65.8 43.6 46.0 56.7 42.4 54.0
Table 4: F1 score of each class of activity on VIRAT real test set in LOSO setup.
Split Accuracy
Original 52.97
LOSO 46.01
Table 5: Comparison of classification accuracy for original and LOSO split of VIRAT dataset. I3D network used.

5 Conclusion

In this work, we propose the RSA framework, which augments real-world videos with randomized synthetic data to train more robust human action recognition networks. The domain gap between synthetic and real data is carefully handled so that the model can learn transferable features. We demonstrate on two real-world datasets that our method boosts the performance of the state-of-the-art I3D network for human action recognition and improves model robustness to nuisance factors. Furthermore, we show that our method can reduce the amount of labeled real-world data needed to achieve good performance, potentially saving a large amount of human labor in collecting and annotating large-scale video datasets. In future work, we plan to extend our method to simulate more complex human-object interactions and to explore the free extra ground truth available from simulation, e.g. depth and segmentation masks, for better action recognition models.

Acknowledgement: This work is supported by IARPA via DOI/IBC contract No. D17PC00342. We thank Tae Soo Kim, Mike Peven, Jiteng Mu and Jialing Lyu for discussions.
