HalluciNet-ing Spatiotemporal Representations Using 2D-CNN

Spatiotemporal representations learnt using 3D convolutional neural networks (CNN's) are currently the state-of-the-art for action related tasks. However, 3D-CNN's are notoriously memory and compute intensive. 2D-CNN's, on the other hand, are much lighter on computing resource requirements and are faster. However, 2D-CNN's performance on action related tasks is generally inferior to that of 3D-CNN's. Also, whereas 3D-CNN's simultaneously attend to appearance and salient motion patterns, 2D-CNN's are known to take shortcuts and recognize actions just from attending to the background, which is not very meaningful. Taking inspiration from the fact that we, humans, can intuit how actors will act and objects will be manipulated through years of experience and a general understanding of how the world works, we suggest a way to combine the best attributes of 2D- and 3D-CNN's: we propose to hallucinate spatiotemporal representations, as computed by 3D-CNN's, using a 2D-CNN. We believe that requiring the 2D-CNN to "see" into the future provides a stronger supervisory signal and encourages it to gain a deeper understanding of actions and of how scenes evolve. The hallucination task is treated as an auxiliary task, while the main task is any other action related task, such as action recognition. Thorough experimental evaluation shows that the hallucination task indeed helps improve performance on action recognition, action quality assessment, and dynamic scene recognition. From a practical standpoint, being able to hallucinate spatiotemporal representations without an actual 3D-CNN enables deployment in resource-constrained scenarios such as lower-end phones and edge devices, and/or with lower bandwidth. This translates to wider adoption of Video Analytics Software as a Service (VA SaaS), e.g., automated physiotherapy options for financially challenged demographics.

1 Introduction

Figure 1: Concept. Instead of computing spatiotemporal representations using computationally expensive 3D-CNN’s, we propose to approximate them using a 2D-CNN from a single still image. We hypothesize that this hallucination task offers stronger supervision, which helps with action related tasks, while bringing down the computational and communication costs.

Spatiotemporal representations are densely packed with information regarding the appearance and salient motion patterns occurring in video clips, as illustrated in Fig. 2. Due to this representational power, they are currently the best performing models on action related tasks like action recognition [42, 21, 14, 6], action quality assessment [31, 30], skills assessment [5], and action detection [12]. This representational power comes at the cost of increased computational complexity [60, 50, 58, 13]. Hadidi et al. [13] recently conducted an exhaustive comparison of various CNN’s from the perspective of computational cost; we cite some of their findings in Table 1 to show how much costlier 3D-CNN’s are than 2D-CNN’s. For further analysis regarding the deployment of CNN’s on edge devices, we refer readers to the extensive study reported in [13]. Such high compute resource requirements leave 3D-CNN’s unsuitable for deployment in resource-constrained scenarios.

Figure 2: Visualization of the C3D model; illustration taken from [42] with permission. Notice that the model learns to capture appearance from the first few frames and salient motion from the rest (every second row), unlike optical flow (every third row), which responds to all moving pixels. Not all moving pixels are useful, and attending to useless pixels might adversely affect performance on the target task.

2D-CNN’s are generally used for learning and extracting spatial features pertaining to a single frame/image. As such, typical 2D-CNN’s, by design, do not take into account any motion information. Some works [48, 49, 33, 10] have addressed this by using optical flow. Optical flow fires at all pixels that have moved/changed (refer to Fig. 2), which means it will also pick up cues from irrelevant activity happening in the background. A 3D-CNN, on the other hand, will attend to salient motion patterns characteristic of an action class. In fact, 2D-CNN’s can also find shortcuts to recognize actions: instead of recognizing an action meaningfully from the foreground, a 2D-CNN may pick up enough cues from the background, as reported in [20, 15]. These kinds of shortcuts might get the job done, but they are not very meaningful. However, 2D-CNN’s have the advantage of being computationally lightweight, which makes them suitable for deployment on edge devices.

Metric                     3D-CNN (C3D)   2D-CNN (VGG)
FLOP/param                 734            112
Time per inference (ms)    200            90
Table 1: Computational cost of 3D-CNN’s vs 2D-CNN’s (figures from [13]).

In a nutshell, 2D-CNN’s have the advantage of being computationally less expensive, while 3D-CNN’s extract spatiotemporal features that have more representational power. In our work, we propose a way to combine the best of both worlds. Our inspiration comes from the observation that, given an image of a scene, humans can predict how the scene around them would evolve. They are able to do so because they have a general understanding of how other people are expected to behave and how objects would move or be manipulated. Building machines/computer vision systems with such capabilities has been a long standing goal. To this end, we propose to hallucinate spatiotemporal representations, as computed by a 3D-CNN, using a 2D-CNN and a single still frame (see Fig. 1).

Conceptually, encouraging a 2D-CNN to predict spatiotemporal representations pertaining to 16 frames from a single frame provides a strong supervisory signal, which helps the 2D-CNN gain a deeper understanding of actions and of how a given scene evolves with time.

Practically, predicting spatiotemporal representations, instead of actually computing them, comes in handy in the following situations:

  • Resource-constrained scenarios: many computer vision efforts in areas like automated physiotherapy, which are targeted at low-income groups, make use of 3D-CNN’s. Low-income users are more likely to have devices with limited computational resources, which are not suitable for running 3D-CNN’s; in these cases, we can simply hallucinate the spatiotemporal representations.

  • Limited and/or expensive bandwidth: VA SaaS is increasingly being employed, and the bandwidth used for communication between clients and the cloud is usually limited and expensive. Using our method, we can hallucinate information pertaining to multiple frames (e.g., 16 frames) from just one frame, which reduces the transmission load accordingly (one frame transmitted instead of 16).

We propose to use the hallucination task as an auxiliary task alongside an action related main task, such as action recognition or action quality assessment. Experimentally, we show that incorporating the hallucination loss during training helps in the following four cases: action recognition, fine-grained action recognition, action quality assessment, and dynamic scene recognition.

2 Related Work

Our work sits close to predicting features (present or future, from the same or different modalities), efficient/lightweight approaches, and knowledge distillation. We briefly visit the works that are closest to ours, and compare and contrast our approach against them.

Predicting features:

Wang et al. [51] treat actions as transformations from a precondition to an effect. Essentially, they propose to learn the CNN parameters and transformation matrices such that the product of the features of the precondition (initial) frames and the transformation matrix produces the features corresponding to the effect (future) frames.

Hoffman et al. [17] proposed to hallucinate the depth modality from the RGB modality, and showed improvements over single modalities and over simple fusion of modalities.

Vondrick et al. [44] propose to learn to predict the future in feature space. Their approach also allows for multi-modal predictions. Given the current video frame, they predict the representation of a future frame. However, the future frame representation is computed using a 2D-CNN pretrained on datasets like ImageNet or Places, which belong to a non-human-action domain.

Capturing information in future frames:

Many works like [56, 23, 48, 49, 33, 9, 47, 24, 44, 45, 2, 46] have focused on capturing information in multiple future frames.

While Vondrick et al. [45] propose a framework to generate future frames by disentangling foreground and background, Vondrick and Torralba [46] propose to disentangle low-level details and high-level semantics with the use of a transformer. Learning to generate future frames helps a network learn useful representations that transfer well to other tasks like action recognition. However, our goal is not to predict a pixel-perfect future, but rather to make predictions at a semantic level. Instead of generating future frames, a few works like [48, 49, 33, 10] focus on learning to predict optical flow (very short term motion information) from static images. Gao et al. [10] propose a better optical flow encoding method than previous works [48, 49, 33]. Their approach, by design, requires an encoder and a decoder. Our approach, on the other hand, does not require a decoder, which helps reduce the computational load on resource-constrained edge devices. Moreover, our approach learns to hallucinate spatiotemporal representations corresponding to a stack of 16 frames, as opposed to motion information between two consecutive frames as in [48, 49, 33, 10]. As can be seen in Fig. 2, optical flow attends to all kinds of motion, even irrelevant background motion, while spatiotemporal representations attend only to action-relevant salient motion patterns. Through experiments, we confirm the benefits of hallucinating and using spatiotemporal representations over optical flow prediction. Bilen et al. [2] introduce a novel, compact representation of videos, called the ‘dynamic image’. Dynamic images can be thought of as a summary of a video in a single image. Computing a dynamic image requires processing all the corresponding frames, whereas in our case, hallucinating requires processing just a single image.

Efficient spatiotemporal feature computation:

Numerous works have developed approaches to make video processing more efficient, either through using less evidence [59, 1, 3, 39] or through more efficient processing [40, 43, 35, 60, 52, 50, 27, 28].

TSD [59] distills a long video sequence into a very short one, and is aimed at VA SaaS scenarios where bandwidth is limited or expensive. Bhardwaj et al. [1] propose to learn a student recurrent neural network that can classify a video using fewer frames. Our goal is to hallucinate 3D-CNN representations using a 2D-CNN from a single frame. We also discuss our intuition that stronger supervision can be provided artificially, without any manual annotation effort, through our hallucination loss.

The works in [3, 39] focus on predicting optical flow stream features from the raw RGB image stream. While their methods can get rid of the optical flow stream, they still process all the frames through the RGB stream. Our approach enjoys the benefits of processing fewer frames, reduced computational load, and stronger supervision.

3D convolutions can be factorized into 2D convolutions (spatial) followed by 1D convolutions (along the temporal dimension). This concept has been studied in numerous works [40, 43, 35, 60, 52], and better designs have been developed that take advantage of this factorization. 3D-CNN’s inherently have a larger number of trainable parameters than their 2D counterparts, because of which they might be prone to overfitting [60]. To address this, [60] proposed to use 2D convolutions along with 3D convolutions. Tran et al. [43] explore many 3D-CNN variants and observe that by replacing 3D convolutions with (2+1)D convolutions, more non-linearities can be introduced into the CNN, which may allow it to learn more complex functions. Xie et al. [52] found that 3D convolutions in the bottom layers might be redundant, and may be replaced with 2D convolutions, keeping 3D convolutions in the top layers for better temporal reasoning. Following this design, they obtained better results with lower complexity.
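
Since the factorization itself can be shown concretely, here is a hedged, generic PyTorch sketch of a (2+1)D block of the kind these works build on. The helper name conv2plus1d, the kernel sizes, and the channel split are illustrative choices, not any specific paper's block design.

```python
import torch.nn as nn

def conv2plus1d(in_ch, mid_ch, out_ch):
    """A 3x3x3 3D convolution factorized into a 1x3x3 spatial conv followed by a 3x1x1 temporal conv."""
    return nn.Sequential(
        nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1)),   # spatial
        nn.ReLU(inplace=True),  # the extra non-linearity the factorization makes available
        nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0)),  # temporal
    )
```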

Lee et al. [27] introduce MFNet, in which spatiotemporal information is extracted from the feature maps of consecutive appearance blocks and used along with appearance information. This reduces the computational cost in comparison to two-stream approaches like [37]. In a concurrent work, Lin et al. [28] introduce a novel Temporal Shift Module (TSM) that allows information exchange among consecutive frames just by shifting channels, which gives strong temporal modeling ability at no additional computational cost.

While these works aim to either use less visual evidence or process it more efficiently, our solution of hallucinating spatiotemporal representations from a single image using a 2D-CNN addresses both problems, and additionally provides stronger supervision.

3 Best of Both Worlds

Let us consider the visualization [57] of the C3D model [42] shown in Fig. 2, particularly the instance of a gymnast on a balance beam, to understand what a 3D-CNN actually learns to capture. We notice that C3D fires at pixels belonging to the body of the gymnast, and captures the cartwheel performed by the athlete over the span of 16 frames.

What would happen if a 2D-CNN were asked to hallucinate the C3D features pertaining to 16 frames just from looking at a single frame, i.e., the starting frame?

In order to complete the hallucination task, the 2D-CNN will, for example, have to:

  • learn to identify that there’s an actor in the scene and localize them

  • spatially segment the actors and objects

  • identify that the ongoing event is a balance-beam gymnastics event, and that the actor is a gymnast

  • identify that the gymnast is about to attempt a cartwheel

  • predict how she would be moving while attempting the cartwheel

  • approximate the position the gymnast would be in after 16 frames, etc.

Figure 3: Approach. The teacher network is a 3D-CNN (e.g., C3D [42]), which computes a spatiotemporal representation from a 16-frame clip, while the student network is a 2D-CNN (e.g., AlexNet [25]). The student network is a multitask network, jointly optimized for action recognition and for hallucinating the spatiotemporal representation (computed by the teacher network) from a single frame.

Here we have discussed only the case of the action class balance-beam, but readers can imagine the same for other classes. This is a lot of semantic detail to be predicted from a single frame. In a typical action recognition task, the network would be provided with just the action class label, which may be considered a weak supervisory signal. Incorporating the hallucination task during training is equivalent to artificially providing dense labels, a much stronger supervisory signal. Joint actor-action segmentation datasets [54] aim to provide such detailed annotations, and actor-action segmentation is an actively pursued research direction [18, 11, 55, 19, 53]. However, following our proposition, we can get detailed supervision of a similar flavor (though not exactly the same) for free, which saves tremendous annotation effort. The hallucination loss encourages the network to focus on actors and objects and to develop a better general understanding of actions and how objects are manipulated. 2D-CNN’s will now be less likely to take shortcuts (recognizing actions from the background while ignoring the actual actor and the action being performed [20, 15]), as spatiotemporal features cannot be hallucinated from the background. Moreover, the ability to simply hallucinate spatiotemporal representations allows us to replace 3D-CNN’s with their 2D counterparts in resource-constrained scenarios.

To gain the benefits described above, we propose the hallucination task. Note that another way to do this would be to predict future frames in pixel space. However, we are interested in predicting at a semantic level; perfect per-pixel reconstruction is not our goal. So rather than making predictions in pixel space, we propose to make predictions in feature space.

The hallucination task can also be seen as distilling knowledge from a teacher network $\mathcal{T}$ (3D-CNN) to a student network $\mathcal{S}$ (2D-CNN), where $\mathcal{T}$ is pretrained and then kept frozen, while the parameters of $\mathcal{S}$ are learnt. Let $\phi_\mathcal{T}$ and $\phi_\mathcal{S}$ represent the mid-level representations from $\mathcal{T}$ and $\mathcal{S}$, respectively, and let $f_i$ be the $i$-th video frame:

$$\phi_\mathcal{T} = \mathcal{T}(f_i, \ldots, f_{i+15}) \qquad (1)$$
$$\phi_\mathcal{S} = \mathcal{S}(f_i) \qquad (2)$$

The hallucination loss, $\mathcal{L}_{hallucinate}$ (Eq. 3), encourages $\phi_\mathcal{S}$ to regress to $\phi_\mathcal{T}$ by minimizing the Euclidean distance between them:

$$\mathcal{L}_{hallucinate} = \lVert \phi_\mathcal{S} - \phi_\mathcal{T} \rVert_2^2 \qquad (3)$$

The hallucination task is not the only goal. In addition to bringing down the computational cost, we would also like to improve performance on action related tasks. To this end, we incorporate the hallucination task as an auxiliary task alongside the actual action related main task, such as action recognition. The main task loss (e.g., classification loss), $\mathcal{L}_{main}$, is used in conjunction with the hallucination loss, the idea being that the hallucination loss helps with the main task. The overall loss can be expressed as

$$\mathcal{L} = \mathcal{L}_{main} + \alpha\,\mathcal{L}_{hallucinate} \qquad (4)$$

where $\alpha$ is a loss balancing factor. Our approach is presented in Fig. 3, and its realization is very straightforward.
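
To make the setup concrete, below is a minimal PyTorch sketch of a student network with a hallucination head and of the joint objective in Eq. 4. It is an illustrative reconstruction, not the authors' released code: the names HalluciNetStudent, hall_dim, and joint_loss, the use of torchvision's vgg11_bn as the 2D backbone, the concatenation-based fusion of hallucinated features (the better-performing variant per Sec. 4.1), and the sigmoid squashing applied before the hallucination loss (mentioned in Sec. 4.1) are assumptions based on the paper's description.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class HalluciNetStudent(nn.Module):
    """2D-CNN student with a hallucination head and a classification head."""
    def __init__(self, num_classes, hall_dim=2048):  # 2048 matches the ResNeXt-101 teacher features
        super().__init__()
        backbone = models.vgg11_bn(pretrained=True)   # ImageNet-pretrained (use weights=... on newer torchvision)
        self.features = backbone.features             # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 512                                # channel count of vgg11_bn's last conv block
        self.hallucinator = nn.Linear(feat_dim, hall_dim)               # predicts phi_T from one frame
        self.classifier = nn.Linear(feat_dim + hall_dim, num_classes)   # fuses hallucinated features

    def forward(self, frame):
        x = self.pool(self.features(frame)).flatten(1)    # spatial features of a single frame
        phi_s = self.hallucinator(x)                      # hallucinated spatiotemporal representation
        logits = self.classifier(torch.cat([x, phi_s], dim=1))
        return logits, phi_s

def joint_loss(logits, labels, phi_s, phi_t, alpha=50.0):
    """L = L_main + alpha * L_hallucinate (Eq. 4); sigmoid squashing as described in Sec. 4.1."""
    l_main = nn.functional.cross_entropy(logits, labels)
    l_hall = nn.functional.mse_loss(torch.sigmoid(phi_s), torch.sigmoid(phi_t))
    return l_main + alpha * l_hall
```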

4 Experiments

We hypothesized that incorporating the hallucination task would help by providing a deeper understanding of actions. We evaluate the effect of incorporating the hallucination task on the following action related tasks:

  1. Action recognition (Sec. 4.1)

  2. Detailed/Fine-grained action recognition (Sec. 4.2)

  3. Action quality assessment (Sec. 4.3)

  4. Dynamic scene recognition (Sec. 4.4)

Choice of networks:

In principle, any 2D- and 3D-CNN can be used as the student and teacher network, respectively. We choose ResNeXt-101 [14] as our teacher network, and VGG11-bn as our student model. Unless mentioned otherwise, assume that the teacher network is pretrained on the UCF-101 dataset and kept frozen, while the student model is pretrained on the ImageNet dataset [4]. We call the network trained with the hallucination loss HalluciNet, and the one trained without it the (vanilla) 2D-CNN.

Which layer to hallucinate?

We choose to hallucinate the activations of the last bottleneck group of ResNeXt-101, which are 2048-dimensional. Representations from shallower layers would have higher dimensionality and would be less semantically mapped.
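
As a hedged illustration of how such targets can be extracted, the sketch below registers a forward hook on the final block of a 3D-CNN and average-pools its activations into a clip-level vector. torchvision's r3d_18 is used only as a runnable stand-in for the ResNeXt-101 teacher (its final block outputs 512 channels rather than 2048), and the module name layer4 is an assumption to adapt to whichever teacher implementation is actually used.

```python
import torch
from torchvision.models.video import r3d_18

# Stand-in teacher: a pretrained 3D-CNN (the paper's teacher is ResNeXt-101, 2048-D).
teacher = r3d_18(pretrained=True).eval()

feats = {}
teacher.layer4.register_forward_hook(                 # hook on the last residual block
    lambda module, inp, out: feats.update(phi_t=out.mean(dim=[2, 3, 4]).detach())
)                                                     # pool over (T, H, W) -> clip-level vector

clip = torch.randn(1, 3, 16, 112, 112)                # a 16-frame clip: (B, C, T, H, W)
with torch.no_grad():
    teacher(clip)
phi_t = feats["phi_t"]                                # target the student learns to hallucinate
```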

Implementation details:

We use PyTorch [32] to implement all the networks. Network parameters are optimized using the Adam optimizer [22] with a starting learning rate of 0.0001. $\alpha$ in Eq. 4 is set to 50, unless specified otherwise. Further experiment-specific details are given along with each experiment. We will make our code publicly available.
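
For completeness, a minimal training-step sketch under these settings follows; it reuses the hypothetical HalluciNetStudent and joint_loss names from the sketch in Sec. 3 and assumes the teacher target phi_t is precomputed with the same dimensionality as the student's hallucination head.

```python
import torch

student = HalluciNetStudent(num_classes=101)               # e.g., UCF-101 classes
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

def train_step(frame, labels, phi_t, alpha=50.0):
    """One optimization step on a batch of single frames with precomputed teacher targets."""
    logits, phi_s = student(frame)                           # frame: (B, 3, 224, 224)
    loss = joint_loss(logits, labels, phi_s, phi_t, alpha)   # Eq. 4
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```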

Performance baselines:

Our baseline is a 2D-CNN with the same architecture as the student, but trained without the hallucination loss. In addition, we compare against other methods, which we specify in each experiment.

4.1 Action recognition

In the first experiment, we evaluate whether the hallucination task helps with general action recognition. We compare against approaches that predict dense optical flow [49] or motion [10] from a static image.

Datasets:

We consider the UCF-101 [38] and HMDB-51 [26] action recognition datasets. To be consistent with the literature, we adopt its experimental protocol: the central frames of the train and test samples are used for reporting performance, and these splits are named UCF-static and HMDB-static, as in [10].

Metric:

We report top-1 clip-level accuracy (in %).

We considered the two cases shown in Fig. 4, and found that fusing the hallucinated representations yielded better results. We therefore use that configuration in the remainder of the work.

Figure 4: Fusing representations.
Figure 5: Hallucination loss curve on UCF101 dataset.

First, we show the evolution of the hallucination loss in Fig. 5. Its gradual decrease clearly shows that the 2D-CNN is learning to hallucinate the spatiotemporal representations. The starting value of the loss is small because the loss is computed after passing the activations through a sigmoid layer.

We summarize the performance on the action recognition task in Table 2. On both datasets, incorporating the hallucination task helps. Our HalluciNet outperforms prior approaches [49, 10] on UCF101. On HMDB51, our HalluciNet yields better results than [49], but [10] works better than ours. However, our method has the advantage of being computationally lighter than [10], as it does not use a flow image generator network.

Method UCF-static HMDB-static
App stream [10] 63.60 35.10
App stream ensemble [10] 64.00 35.50
Motion stream [10] 24.10 13.90
Motion stream [49] 14.30 04.96
App + Motion [10] 65.50 37.10
App + Motion [49] 64.50 35.90
Ours 2D-CNN 64.05 34.23
Ours HalluciNet 66.30 36.58
Table 2: Action recognition results and comparison.

4.2 Detailed action recognition

Figure 6: Detailed action recognition and action quality assessment models.
Method CNN Type Frames P AS RT SS TW
C3D-AVG [30] 3D 96 96.32 99.72 97.45 96.88 93.20
MSCADC [30] 3D 16 78.47 97.45 84.70 76.20 82.72
Nibali et al. [29] 3D 16 74.79 98.30 78.75 77.34 79.89
Ours 2D-CNN 2D 6 92.63 99.72 86.40 87.25 84.70
Ours HalluciNet 2D 6 93.77 99.72 94.33 88.67 86.97
Table 3: Performance comparison on the detailed action recognition task. Frames denotes the number of frames the corresponding method sees. P, AS, RT, SS, TW stand for position, armstand, rotation type, number of somersaults, and number of twists.

We need suitable tasks to evaluate the utility of hallucinating the future. Evaluating performance on the ubiquitous task of recognizing actions in typically used datasets, like the UCF-101 action recognition dataset, might not be sufficient. We need a task where the student network is required to hallucinate the future in order to “fill the holes” in the input visual datastream. Fine-grained or detailed action recognition is a good candidate task.

Task description:

In Olympic diving, athletes attempt many different types of dives. In a general action recognition dataset like UCF101, all these dives would be grouped under a single action class, Diving. However, these dives differ from each other in subtle ways. Each dive has the following five components: a) position (legs straight or bent?); b) whether the dive starts from an armstand or not; c) rotation direction (backwards, forwards, etc.); d) the number of somersaults; e) the number of twists. Different combinations of these components produce a unique type of dive. The task is to predict all five components of a dive using very few frames.

Why is this task more suitable?

Unlike general action recognition datasets like UCF-101 [38] or Kinetics [21], the diving samples in this dataset vary only subtly from one another. Furthermore, the cues needed to differentiate or recognize a dive are distributed across the entire action sequence. To make the dive classification task more suitable for our case, we ask the network to classify a dive correctly using only a few frames. In particular, every 16th frame is shown to the student network. We truncate diving samples to 96 frames, so out of 96 frames, the student network sees only 6, based on which it needs to classify the dive.
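
As a small illustrative sketch (the index origin is our assumption), this sparse sampling amounts to:

```python
# Every 16th frame of a dive sample truncated to 96 frames -> 6 frames for the student.
frame_indices = list(range(0, 96, 16))   # [0, 16, 32, 48, 64, 80]
assert len(frame_indices) == 6
```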

Dataset:

For this task, we use a recently released Diving dataset, MTL-AQA [30], which has 1059 training and 353 test samples.

Training procedure:

  1. We take a teacher network pretrained on UCF-101 dataset, and a student network pretrained on ImageNet dataset.

  2. First, we pretrain the ImageNet-pretrained student network again, on UCF-101 action recognition along with spatiotemporal representation hallucination, using the loss function in Eq. 4 with $\alpha$ set to 50. For the vanilla 2D-CNN, we do not use the hallucination loss.

  3. Finally, the student network is trained to classify dives. Since we gather evidence over six frames, we use an LSTM [16] to aggregate this evidence (a minimal sketch of this aggregation module follows this list). The LSTM is single-layered, with a 256-dimensional hidden state. The LSTM’s hidden state from the last time step is passed through separate linear layers, one for each property of a dive. The student network is trained end-to-end for 20 epochs using the Adam solver with a constant learning rate of 0.0001.
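
A hedged sketch of such an aggregation module is shown below. The class name DiveClassifier and the per-property class counts are illustrative placeholders rather than values from the paper; only the single-layer LSTM, the 256-D hidden state, and the five separate linear heads follow the description above.

```python
import torch
import torch.nn as nn

class DiveClassifier(nn.Module):
    """Aggregates per-frame student features over 6 frames and predicts 5 dive properties."""
    def __init__(self, feat_dim, num_classes=(4, 2, 4, 8, 8)):   # placeholder class counts
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, 256, num_layers=1, batch_first=True)
        # one head each for position, armstand, rotation type, #somersaults, #twists
        self.heads = nn.ModuleList(nn.Linear(256, n) for n in num_classes)

    def forward(self, per_frame_feats):           # per_frame_feats: (B, 6, feat_dim)
        _, (h_n, _) = self.lstm(per_frame_feats)
        h = h_n[-1]                               # hidden state of the last time step
        return [head(h) for head in self.heads]   # one logit tensor per dive property
```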

The results of our models are summarized in Table 3, where we also compare them with other state-of-the-art 3D-CNN based approaches [29, 30]. Our HalluciNet outperforms its 2D-CNN counterpart on four out of five fields (and ties on the fifth). The difference in performance is larger for RT, SS, and TW than for P, because position (legs straight or bent) may be equally identifiable from a single image or a clip, whereas RT, SS, and TW are more difficult to predict with a plain 2D-CNN without hallucination. In comparison, our HalluciNet has been trained to forecast the short-term future, and hence excels in situations that involve longer-term dynamics. Our HalluciNet even outperforms 3D-CNN based approaches that use more frames (MSCADC [30] and Nibali et al. [29]). C3D-AVG outperforms HalluciNet, but it is computationally very expensive and uses 16x more frames.

4.3 Assessing the quality of actions

Method CNN Type Frames Sp. Corr.
Pose+DCT [34] - 96 26.82
C3D-SVR [31] 3D 96 77.16
C3D-LSTM [31] 3D 96 84.89
C3D-AVG-STL [30] 3D 96 89.60
MSCADC-STL [30] 3D 16 84.72
Ours 2D-CNN 2D 6 82.18
Ours HalluciNet 2D 6 83.33
Table 4: Performance on AQA task.

Action quality assessment (AQA) is another task that can bring out the utility of hallucinating spatiotemporal representations from still images using a 2D-CNN. In AQA, the task is to measure or quantify how well an action was performed. Good examples of AQA are judging Olympic events like diving, gymnastics, and figure skating.

Dataset:

MTL-AQA [30], same as in Sec. 4.2.

Metric:

Consistent with literature, we report Spearman’s rank correlation (in %).

We follow the same training procedure as in Sec. 4.2, except that for the AQA task we train with an L2 loss, as it is a regression task. We train for 20 epochs with Adam as the solver, and anneal the learning rate by a factor of 10 every 5 epochs.
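
A minimal sketch of this training configuration is given below; the helper name aqa_training_setup is ours, and the regressor model itself (student backbone plus LSTM head) is assumed to be built as in Sec. 4.2.

```python
import torch
import torch.nn as nn

def aqa_training_setup(model: nn.Module):
    """L2 regression loss, Adam at lr 1e-4, and LR annealed by 10x every 5 epochs (20 epochs total)."""
    criterion = nn.MSELoss()                                   # L2 loss on the predicted quality score
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
    return criterion, optimizer, scheduler
```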

The results are presented in Table 4. Incorporating the hallucination task helps improve performance on the AQA task. Our HalluciNet outperforms C3D-SVR and is close to MSCADC. Although C3D-AVG performs best on the AQA task, this experiment still supports the advantage of using the hallucination task.

4.4 Dynamic scene recognition

Dataset:

Feichtenhofer et al. introduced the YUP++ dataset for the task of dynamic scene recognition in [8]. It has a total of 20 scene classes. The use of this dataset to evaluate the utility of inferred motion was suggested in [10]. In [8], 10% of the samples are used for training, while the remaining 90% are used for testing. Gao et al. [10] form their own split, called ‘static-YUP++’.

Method Accuracy
SFA [41] 56.90
BoSE [7] 77.00
T-CNN [36] 50.60
Ours 2D-CNN 77.50
Ours HalluciNet 80.46
Table 5: Dynamic Scene recognition on YUP++.

Protocol:

For training and testing purposes, we consider the central frame of each sample.

We conduct the following two experiments, and set $\alpha$ in Eq. 4 to 1 in both:

  1. To evaluate the utility of the hallucination task for dynamic scene recognition, we compare our methods with [41, 7, 36]. For a fair comparison, we use the split used in [41, 7, 36]. The results are summarized in Table 5. HalluciNet improves over our vanilla 2D-CNN, and also outperforms the spacetime-energy based approach (BoSE), the slow feature analysis (SFA) approach, and the temporal CNN (T-CNN). T-CNN might be the closest comparison because it uses a stack of 10 optical flow frames; yet our HalluciNet outperforms it by a large margin.

  2. To compare our approach with [10], we use their split for a fair comparison. The results are summarized in Table 6. On static-YUP++, HalluciNet outperforms the other approaches that use motion information, by a large margin even when they use groundtruth motion information.

Method Accuracy
Appearance [10] 74.30
GT Motion [10] 55.50
Inferred Motion [10] 30.00
Appearance ensemble [10] 75.20
Appearance + Inferred Motion [10] 78.20
Appearance + GT Motion [10] 79.60
Ours 2D-CNN 79.84
Ours HalluciNet 88.10
Table 6: Dynamic scene recognition on static-YUP++.

5 Conclusion

3D-CNN’s extract richer spatiotemporal features than 2D-CNN’s, but at a considerably higher computational cost, while 2D-CNN’s have the benefit of being computationally much lighter. Since neural networks are universal function approximators, we propose a simple solution: approximate (hallucinate) the spatiotemporal representations computed by a 3D-CNN using a 2D-CNN. Hallucinating spatiotemporal representations, instead of actually computing them, brings down the computational cost and makes deployment on edge devices feasible, in addition to lowering the communication bandwidth requirement. Besides these practical benefits, the hallucination loss also provides a stronger supervisory signal.

References

  • [1] S. Bhardwaj, M. Srinivasan, and M. M. Khapra (2019) Efficient video classification using fewer frames. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 354–363. Cited by: §2, §2.
  • [2] H. Bilen, B. Fernando, E. Gavves, A. Vedaldi, and S. Gould (2016) Dynamic image networks for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3034–3042. Cited by: §2, §2.
  • [3] N. Crasto, P. Weinzaepfel, K. Alahari, and C. Schmid (2019) MARS: Motion-Augmented RGB Stream for Action Recognition. In CVPR, Cited by: §2, §2.
  • [4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §4.
  • [5] H. Doughty, W. Mayol-Cuevas, and D. Damen (2019) The pros and cons: rank-aware temporal attention for skill determination in long videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7862–7871. Cited by: §1.
  • [6] C. Feichtenhofer, H. Fan, J. Malik, and K. He (2019) Slowfast networks for video recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6202–6211. Cited by: §1.
  • [7] C. Feichtenhofer, A. Pinz, and R. P. Wildes (2014) Bags of spacetime energies for dynamic scene recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2681–2688. Cited by: item 1, Table 5.
  • [8] C. Feichtenhofer, A. Pinz, and R. P. Wildes (2017) Temporal residual networks for dynamic scene recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4728–4737. Cited by: §4.4.
  • [9] C. Finn, I. Goodfellow, and S. Levine (2016) Unsupervised learning for physical interaction through video prediction. In Advances in neural information processing systems, pp. 64–72. Cited by: §2.
  • [10] R. Gao, B. Xiong, and K. Grauman (2018) Im2flow: motion hallucination from static images for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5937–5947. Cited by: §1, §2, item 2, §4.1, §4.1, §4.1, §4.4, Table 2, Table 6.
  • [11] K. Gavrilyuk, A. Ghodrati, Z. Li, and C. G. Snoek (2018) Actor and action video segmentation from a sentence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5958–5966. Cited by: §3.
  • [12] B. Ghanem, J. C. Niebles, C. Snoek, F. C. Heilbron, H. Alwassel, V. Escorcia, R. Khrisna, S. Buch, and C. D. Dao (2018) The activitynet large-scale activity recognition challenge 2018 summary. arXiv preprint arXiv:1808.03766. Cited by: §1.
  • [13] R. Hadidi, J. Cao, Y. Xie, B. Asgari, T. Krishna, and H. Kim (2019) Characterizing the deployment of deep neural networks on commercial edge devices. In IEEE International Symposium on Workload Characterization, Cited by: §1.
  • [14] K. Hara, H. Kataoka, and Y. Satoh (2018) Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet?. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6546–6555. Cited by: §1, §4.
  • [15] Y. He, S. Shirakabe, Y. Satoh, and H. Kataoka (2016) Human action recognition without human. In European Conference on Computer Vision, pp. 11–17. Cited by: §1, §3.
  • [16] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. Neural computation 9 (8), pp. 1735–1780. Cited by: item 3.
  • [17] J. Hoffman, S. Gupta, and T. Darrell (2016) Learning with side information through modality hallucination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 826–834. Cited by: §2.
  • [18] J. Ji, S. Buch, A. Soto, and J. Carlos Niebles (2018) End-to-end joint semantic segmentation of actors and actions in video. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 702–717. Cited by: §3.
  • [19] V. Kalogeiton, P. Weinzaepfel, V. Ferrari, and C. Schmid (2017) Joint learning of object and action detectors. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4163–4172. Cited by: §3.
  • [20] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei (2014) Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1725–1732. Cited by: §1, §3.
  • [21] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. (2017) The kinetics human action video dataset. arXiv preprint arXiv:1705.06950. Cited by: §1, §4.2.
  • [22] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.
  • [23] K. M. Kitani, B. D. Ziebart, J. A. Bagnell, and M. Hebert (2012) Activity forecasting. In European Conference on Computer Vision, pp. 201–214. Cited by: §2.
  • [24] H. S. Koppula and A. Saxena (2015) Anticipating human activities using object affordances for reactive robotic response. IEEE transactions on pattern analysis and machine intelligence 38 (1), pp. 14–29. Cited by: §2.
  • [25] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: Figure 3.
  • [26] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre (2011) HMDB: a large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision (ICCV), Cited by: §4.1.
  • [27] M. Lee, S. Lee, S. Son, G. Park, and N. Kwak (2018) Motion feature network: fixed motion filter for action recognition. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 387–403. Cited by: §2, §2.
  • [28] J. Lin, C. Gan, and S. Han (2019) Tsm: temporal shift module for efficient video understanding. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7083–7093. Cited by: §2, §2.
  • [29] A. Nibali, Z. He, S. Morgan, and D. Greenwood (2017) Extraction and classification of diving clips from continuous video footage. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 94–104. Cited by: §4.2, Table 3.
  • [30] P. Parmar and B. T. Morris (2019) What and how well you performed? a multitask learning approach to action quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 304–313. Cited by: §1, §4.2, §4.2, §4.3, Table 3, Table 4.
  • [31] P. Parmar and B. Tran Morris (2017) Learning to score olympic events. In proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 20–28. Cited by: §1, Table 4.
  • [32] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §4.
  • [33] S. L. Pintea, J. C. van Gemert, and A. W. Smeulders (2014) Déja vu. In European Conference on Computer Vision, pp. 172–187. Cited by: §1, §2, §2.
  • [34] H. Pirsiavash, C. Vondrick, and A. Torralba (2014) Assessing the quality of actions. In European Conference on Computer Vision, pp. 556–571. Cited by: Table 4.
  • [35] Z. Qiu, T. Yao, and T. Mei (2017) Learning spatio-temporal representation with pseudo-3d residual networks. In proceedings of the IEEE International Conference on Computer Vision, pp. 5533–5541. Cited by: §2, §2.
  • [36] K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. In Advances in neural information processing systems, pp. 568–576. Cited by: item 1, Table 5.
  • [37] K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. In Advances in neural information processing systems, pp. 568–576. Cited by: §2.
  • [38] K. Soomro, A. R. Zamir, and M. Shah (2012) UCF101: a dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402. Cited by: §4.1, §4.2.
  • [39] J. C. Stroud, D. A. Ross, C. Sun, J. Deng, and R. Sukthankar (2018) D3D: distilled 3d networks for video action recognition. arXiv preprint arXiv:1812.08249. Cited by: §2, §2.
  • [40] L. Sun, K. Jia, D. Yeung, and B. E. Shi (2015) Human action recognition using factorized spatio-temporal convolutional networks. In Proceedings of the IEEE international conference on computer vision, pp. 4597–4605. Cited by: §2, §2.
  • [41] C. Theriault, N. Thome, and M. Cord (2013) Dynamic scene classification: learning motion descriptors with slow features analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2603–2610. Cited by: item 1, Table 5.
  • [42] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri (2015) Learning spatiotemporal features with 3d convolutional networks. In Proceedings of the IEEE international conference on computer vision, pp. 4489–4497. Cited by: Figure 2, §1, Figure 3, §3.
  • [43] D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun, and M. Paluri (2018) A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 6450–6459. Cited by: §2, §2.
  • [44] C. Vondrick, H. Pirsiavash, and A. Torralba (2016) Anticipating visual representations from unlabeled video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 98–106. Cited by: §2, §2.
  • [45] C. Vondrick, H. Pirsiavash, and A. Torralba (2016) Generating videos with scene dynamics. In Advances In Neural Information Processing Systems, pp. 613–621. Cited by: §2, §2.
  • [46] C. Vondrick and A. Torralba (2017) Generating the future with adversarial transformers. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1020–1028. Cited by: §2, §2.
  • [47] J. Walker, C. Doersch, A. Gupta, and M. Hebert (2016) An uncertain future: forecasting from static images using variational autoencoders. In European Conference on Computer Vision, pp. 835–851. Cited by: §2.
  • [48] J. Walker, A. Gupta, and M. Hebert (2014) Patch to the future: unsupervised visual prediction. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 3302–3309. Cited by: §1, §2, §2.
  • [49] J. Walker, A. Gupta, and M. Hebert (2015) Dense optical flow prediction from a static image. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2443–2451. Cited by: §1, §2, §2, §4.1, §4.1, Table 2.
  • [50] H. Wang, J. Lin, and Z. Wang (2019) Design light-weight 3d convolutional networks for video recognition temporal residual, fully separable block, and fast algorithm. arXiv preprint arXiv:1905.13388. Cited by: §1, §2.
  • [51] X. Wang, A. Farhadi, and A. Gupta (2016) Actions ~ transformations. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 2658–2667. Cited by: §2.
  • [52] S. Xie, C. Sun, J. Huang, Z. Tu, and K. Murphy (2018) Rethinking spatiotemporal feature learning: speed-accuracy trade-offs in video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 305–321. Cited by: §2, §2.
  • [53] C. Xu and J. J. Corso (2016) Actor-action semantic segmentation with grouping process models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3083–3092. Cited by: §3.
  • [54] C. Xu, S. Hsieh, C. Xiong, and J. J. Corso (2015) Can humans fly? action understanding with multiple classes of actors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2264–2273. Cited by: §3.
  • [55] Y. Yan, C. Xu, D. Cai, and J. J. Corso (2017) Weakly supervised actor-action segmentation via robust multi-task ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1298–1307. Cited by: §3.
  • [56] J. Yuen and A. Torralba (2010) A data-driven approach for event prediction. In European Conference on Computer Vision, pp. 707–720. Cited by: §2.
  • [57] M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818–833. Cited by: §3.
  • [58] H. Zhang, Y. Li, P. Wang, Y. Liu, and C. Shen (2018) RGB-d based action recognition with light-weight 3d convolutional networks. arXiv preprint arXiv:1811.09908. Cited by: §1.
  • [59] Z. Zhang, Z. Kuang, P. Luo, L. Feng, and W. Zhang (2018) Temporal sequence distillation: towards few-frame action recognition in videos. In 2018 ACM Multimedia Conference on Multimedia Conference, pp. 257–264. Cited by: §2, §2.
  • [60] Y. Zhou, X. Sun, Z. Zha, and W. Zeng (2018) Mict: mixed 3d/2d convolutional tube for human action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 449–458. Cited by: §1, §2, §2.