Video commands the lion's share of internet traffic, at 70% and rising. Most cell phone cameras now capture high-resolution video in addition to still images. Many real-world data sources are video-based, ranging from warehouse inventory systems to self-driving cars and autonomous drones. Video is also arguably the next frontier in computer vision, as it captures a wealth of information that still images cannot convey. Videos carry more emotion, allow us to predict the future to a certain extent, provide temporal context, and give us better spatial awareness. Unfortunately, very little of this information is currently exploited.
We argue that the reason is two-fold. First, videos have a very low information density: 1h of 720p video can be compressed from 222GB raw to 1GB. In other words, videos are filled with repetitive patterns that drown out the 'true', interesting signal. This redundancy makes it harder for CNNs to extract meaningful information, and makes training much slower. Second, with only RGB images, learning temporal structure is difficult. A vast body of literature attempts to process videos as RGB image sequences, either with 2D CNNs, 3D CNNs, or recurrent neural networks (RNNs), but has yielded limited success [16, 40]. Using precomputed optical flow almost always boosts the performance.
To address these issues, we exploit the compressed representation developed for the storage and transmission of videos rather than operating on the RGB frames (teaser). These compression techniques (e.g., MPEG-4, H.264) leverage the fact that successive frames are usually similar. They retain only a few frames completely, and reconstruct the other frames from the complete ones using offsets, called motion vectors, plus a residual error. Our model consists of multiple CNNs that operate directly on the motion vectors and residuals, in addition to a small number of complete images.
Why is this better? First, video compression removes up to two orders of magnitude of superfluous information, making interesting signals prominent. Second, the motion vectors in video compression provide us the motion information that lone RGB images lack. Furthermore, the motion signals already exclude spatial variations: e.g., two people performing the same action in different clothes or under different lighting conditions exhibit the same motion signals. This improves generalization, and the lowered variance further simplifies training. Third, with compressed video we account for the correlation in video frames, i.e., a spatial view plus small changes over time, instead of treating frames as i.i.d. images. Constraining the data in this structure helps us tackle the curse of dimensionality. Last but not least, our method is also much more efficient, as we only look at the true signals instead of repeatedly processing near-duplicates. Efficiency is also gained by avoiding decompression: video is usually stored or transmitted in compressed form, so access to the motion vectors and residuals is free.
On the action recognition datasets UCF-101, HMDB-51, and Charades, our approach significantly outperforms all other methods that train on traditional RGB images. Our approach is simple and fast, using neither RNNs, complicated fusion, nor 3D convolutions. It is 4.6 times faster than the state-of-the-art 3D CNN Res3D, and 2.7 times faster than ResNet-152. When combined with scores from a standard temporal-stream network, our model outperforms state-of-the-art methods on all of these datasets.
2 Related Work
In this section we provide a brief overview of video action recognition and video compression.
2.1 Action Recognition
Traditionally, for video action recognition, the community utilized hand-crafted features such as Histograms of Oriented Gradients (HOG) or Histograms of Optical Flow (HOF), both sparsely and densely sampled. While early methods considered independent interest points across frames, smarter aggregation based on dense trajectories has since been used [41, 42, 25]. Some of these traditional methods are competitive even today, such as iDT, which corrects for camera motion.
In the past few years, deep learning has brought significant improvements to video understanding [16, 6]. However, the improvements mainly stem from improvements in deep image representations. Modeling of temporal structure remains relatively simple: most algorithms subsample a few frames and perform average pooling to make final predictions [33, 44]. RNNs [6, 48], temporal CNNs, and other feature aggregation techniques [11, 44] on top of CNN features have also been explored. However, while introducing new computational overhead, these methods do not necessarily outperform simple average pooling. Some works explore 3D CNNs to model the temporal structure [40, 39]. Nonetheless, this results in an explosion of parameters and computation time while only marginally improving performance.
More importantly, evidence suggests that these methods are not sufficient to capture all temporal structure: the use of pre-computed optical flow almost always boosts performance [33, 44, 9, 2]. This emphasizes the importance of using the right input representation and the inadequacy of RGB frames. Finally, note that all of these methods require the raw video frame by frame and cannot exploit the fact that video is stored in a compressed format.
2.2 Video Compression
The need for efficient video storage and transmission has led to highly efficient video compression algorithms, such as MPEG-4, H.264, and HEVC, some of which date back to the 1990s. Most video compression algorithms leverage the fact that successive frames are usually very similar. We can efficiently store one frame by reusing contents from another frame and storing only the difference.
Most modern codecs split a video into I-frames (intra-coded frames), P-frames (predictive frames), and zero or more B-frames (bi-directional frames). I-frames are regular images and are compressed as such. P-frames reference previous frames and encode only the 'change'. Part of the change, termed motion vectors, is represented as the movement of blocks of pixels from the source frame to the target frame at time $t$, which we denote by $\mathcal{T}^{(t)}$. Even after this compensation for block movement, there can be a difference between the original image and the predicted image at time $t$. We denote this residual difference by $\Delta^{(t)}$. Putting it together, a P-frame at time $t$ comprises only the motion vectors $\mathcal{T}^{(t)}$ and the residual $\Delta^{(t)}$. This gives the recurrence relation for reconstructing P-frames as
$$I^{(t)}_i = I^{(t-1)}_{i - \mathcal{T}^{(t)}_i} + \Delta^{(t)}_i$$
for all pixels $i$, where $I^{(t)}$ denotes the RGB image at time $t$. The motion vectors and residuals are then passed through a discrete cosine transform (DCT) and entropy-encoded.
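As a toy illustration of this recurrence, the following sketch reconstructs a P-frame from a reference frame, block-wise motion vectors, and a residual. The block size, array layout, and function name are our own simplifying assumptions; real codecs use sub-pixel motion compensation and DCT-coded residuals.

```python
import numpy as np

def reconstruct_p_frame(prev_frame, motion_vectors, residual, block=16):
    """Toy P-frame reconstruction: each `block`x`block` patch is copied
    from `prev_frame` at the offset given by its motion vector, then the
    residual is added -- i.e., I(t)_i = I(t-1)_{i - T(t)_i} + Delta(t)_i."""
    h, w, _ = prev_frame.shape
    out = np.zeros_like(prev_frame)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion_vectors[by // block, bx // block]
            sy, sx = by - dy, bx - dx  # source location in reference frame
            out[by:by + block, bx:bx + block] = prev_frame[sy:sy + block,
                                                           sx:sx + block]
    return out + residual
```

With zero motion and a zero residual, the P-frame decodes to an exact copy of its reference frame, which is the degenerate case of the recurrence.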
A B-frame may be viewed as a special P-frame, where motion vectors are computed bi-directionally and may reference a future frame, as long as there are no cycles in the referencing. Both B- and P-frames capture only what changes in the video, and are easier to compress owing to their smaller dynamic range. See viz_accu for a visualization of the motion estimates and the residuals. Modeling arbitrary decoding orders is beyond the scope of this paper; we focus on videos encoded using only backward references, namely I- and P-frames.
Features from Compressed Data.
Some prior works have utilized signals from compressed video for detection or recognition, but only as non-deep features [47, 36, 38, 15]. To the best of our knowledge, this is the first work to consider training deep networks on compressed videos. MV-CNN applies distillation to transfer knowledge from an optical flow network to a motion vector network. However, unlike our approach, it does not consider the general setting of representation learning on compressed video: it still needs the entire decompressed video as an RGB stream, and it requires optical flow as additional supervision.
Equipped with this background, next we will explore how to utilize the compressed representation, devoid of redundant information, for action recognition.
3 Modeling Compressed Representations
Our goal is to design a computer vision system for action recognition that operates directly on the stored compressed video. The compression is designed solely to minimize the size of the encoding, so the resulting representation has very different statistical and structural properties than the images of a raw video. It is not clear whether successful deep learning techniques can be adapted to compressed representations in a straightforward manner. So we ask: how do we feed a compressed video into a computer vision system, specifically a deep network?
Feeding I-frames into a deep network is straightforward, since they are just images. How about P-frames? From viz_accu we can see that motion vectors, though noisy, roughly resemble optical flow. As modeling optical flow with CNNs has proven effective, it is tempting to do the same for motion vectors. The third row of viz_accu visualizes the residuals. We can see that they roughly give us a motion boundary in addition to a change of appearance, such as a change in lighting conditions. Again, CNNs are well suited for such patterns. The outputs of the corresponding CNNs for the image, motion vectors, and residual have different properties. To combine them, we tried various fusion strategies, including mean pooling, maximum pooling, concatenation, convolutional pooling, and bilinear pooling, on both middle layers and the final layer, but with limited success.
Digging deeper, one can argue that the motion vectors and residuals alone do not contain the full information of a P-frame: a P-frame depends on its reference frame, which might itself be a P-frame. This chain continues all the way back to a preceding I-frame. Treating each P-frame as an independent observation clearly violates this dependency. A simple strategy to address this is to reuse features from the reference frame and only update them given the new information. This recurrent definition screams for RNNs to aggregate features along the chain. However, preliminary experiments suggest that the elaborate modeling effort is in vain (see supplementary material for details). The difficulty arises from the long chain of dependencies among P-frames. To mitigate this issue, we devise a novel yet simple back-tracing technique that decouples individual P-frames.
Decoupled Model. To break the dependency between consecutive P-frames, we trace all motion vectors back to the reference I-frame and accumulate the residual on the way. In this way, each P-frame depends only on the I-frame but not other P-frames.
accu illustrates the back-tracing technique. Given a pixel at location $i$ in frame $t$, let $\mathcal{J}^{(t)}_i := i - \mathcal{T}^{(t)}_i$ be the referenced location in the previous frame. Then the accumulated motion vectors $\mathcal{D}^{(t)}$ and the accumulated residuals $\mathcal{R}^{(t)}$ at frame $t$ are
$$\mathcal{D}^{(t)}_i = \mathcal{T}^{(t)}_i + \mathcal{D}^{(t-1)}_{\mathcal{J}^{(t)}_i}, \qquad \mathcal{R}^{(t)}_i = \Delta^{(t)}_i + \mathcal{R}^{(t-1)}_{\mathcal{J}^{(t)}_i},$$
respectively, with $\mathcal{D}^{(0)} := 0$ and $\mathcal{R}^{(0)} := 0$ at the I-frame, so that each pixel of frame $t$ is expressed as an offset and a residual with respect to the I-frame alone. This can be calculated efficiently in linear time through a simple feed-forward algorithm, accumulating motion and residuals as we decode the video. Each P-frame now has a different dependency
as shown in new_dependency. Here P-frames depend only on the I-frame and can be processed in parallel.
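A minimal numpy sketch of this feed-forward accumulation follows. Pixel-wise integer motion maps and edge clipping are our own simplifying assumptions; real codecs define motion per block with sub-pixel precision.

```python
import numpy as np

def accumulate(motions, residuals):
    """Back-trace motion vectors to the I-frame and accumulate residuals
    in a single forward pass over the P-frames.
    motions[t][y, x] is the (dy, dx) offset of pixel (y, x) in P-frame t
    relative to frame t-1; residuals[t] is the per-pixel residual.
    Returns accumulated motions D[t] and residuals R[t], each expressed
    with respect to the I-frame (frame 0)."""
    H, W = residuals[0].shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    D_prev = np.zeros((H, W, 2), dtype=int)   # I-frame: zero motion
    R_prev = np.zeros_like(residuals[0])      # I-frame: zero residual
    D_list, R_list = [], []
    for T, delta in zip(motions, residuals):
        # location referenced in the previous frame (clipped at borders)
        ry = np.clip(ys - T[..., 0], 0, H - 1)
        rx = np.clip(xs - T[..., 1], 0, W - 1)
        # chain the previous accumulated motion/residual through it
        D = T + D_prev[ry, rx]
        R = delta + R_prev[ry, rx]
        D_list.append(D)
        R_list.append(R)
        D_prev, R_prev = D, R
    return D_list, R_list
```

Because each step only looks up the previous frame's accumulated arrays, the whole pass is linear in the number of frames, matching the feed-forward scheme described above.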
A nice side effect of the back-tracing is robustness. The accumulated signals contain longer-term information, which is more robust to noise or camera motion. viz_accu shows the accumulated motion vectors and residuals respectively. They exhibit clearer and smoother patterns than the original ones.
Proposed Network. model shows a graphical illustration of the proposed model. The input to our model is an I-frame followed by $T$ P-frames, i.e. $(I^{(0)}, \mathcal{D}^{(1)}, \mathcal{R}^{(1)}, \ldots, \mathcal{D}^{(T)}, \mathcal{R}^{(T)})$. For notational simplicity, we set $\mathcal{D}^{(0)} := 0$ and $\mathcal{R}^{(0)} := 0$ for the I-frame. Each input source is modeled by its own CNN, yielding features $\phi_{\mathrm{RGB}}(I^{(0)})$, $\phi_{\mathrm{motion}}(\mathcal{D}^{(t)})$, and $\phi_{\mathrm{residual}}(\mathcal{R}^{(t)})$.
While the I-frame features are used as is, the P-frame features need to incorporate the information from the I-frame. There are several reasonable candidates for such a fusion, e.g., maximum, multiplicative, or convolutional pooling. We also experimented with transforming RGB features according to the motion vectors. Interestingly, we found a simple summing of scores to work best (see supplementary material for details). This gives us a model that is easy to train and flexible at inference time.
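The score-level fusion can be sketched as follows. Function and variable names are illustrative only; the per-stream CNNs are abstracted away as precomputed class-score arrays.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def coviar_predict(iframe_scores, motion_scores, residual_scores):
    """Late fusion by score summation: each stream's per-frame class
    scores, shape (num_frames, num_classes), are simply added, then
    averaged over the sampled frames for a video-level prediction."""
    fused = iframe_scores + motion_scores + residual_scores
    return softmax(fused.mean(axis=0))
```

Because fusion happens only at the score level, each stream's CNN can be trained independently, which is what makes the decoupled training described above possible.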
Implementation. Note that most of the information is stored in I-frames, and we only need to learn the update for P-frames. We thus focus most of the computation on I-frames, and use a much smaller and simpler model to capture the updates in P-frames. This yields significant savings in computation, since in modern codecs most frames are P-frames.
Specifically, we use ResNet-152 (pre-activation) to model I-frames, and ResNet-18 (pre-activation) to model the motion vectors and residuals. This offers a good trade-off between speed and accuracy. For video-level tasks, we use Temporal Segments to capture long-term dependency, i.e., the feature at each step is the average of features across segments during training.
4 Experiments
We now validate for action recognition that (i) compressed video is a better representation (ablation), leading to (ii) good accuracy (accuracy) and (iii) high speed (speed). Note, however, that the proposed method can in principle be applied effortlessly to other tasks such as video classification, object detection, or action localization. We pick action recognition due to its wide range of applications and strong baselines.
Datasets and Protocol. We evaluate our method, Compressed Video Action Recognition (CoViAR), on three action recognition datasets: UCF-101, HMDB-51, and Charades. UCF-101 and HMDB-51 contain short trimmed videos, each annotated with one action label. Charades contains longer untrimmed videos, each annotated with one or more action labels and their intervals (start time, end time). UCF-101 contains 13,320 videos from 101 action categories. HMDB-51 contains 6,766 videos from 51 action categories. Each dataset has three training/testing splits; we report the average performance over the three testing splits unless otherwise stated. The Charades dataset contains 9,848 videos split into 7,985 training and 1,863 test videos, covering 157 action classes.
During testing we uniformly sample 25 frames, each with horizontal flips plus 5 crops, and average the scores for the final prediction. On UCF-101 and HMDB-51 we use temporal segments and perform the averaging before the softmax, following TSN. On Charades we use mean average precision (mAP) and weighted average precision (wAP) to evaluate performance, following previous work.
Training Details. Following TSN, we resize UCF-101 and HMDB-51 videos to a fixed resolution. As Charades contains both portrait and landscape videos, we resize them to a common resolution. Our models are pre-trained on the ILSVRC 2012-CLS dataset and fine-tuned using Adam with a batch size of 40. The learning rate starts at 0.001 for UCF-101/HMDB-51 and 0.03 for Charades, and is divided by 10 when accuracy plateaus. Pre-trained layers use a 100× smaller learning rate. We apply color jittering and random cropping for data augmentation, following TSN. Where available, we select hyper-parameters on splits other than the tested one. We use MPEG-4 encoded videos, which have on average 11 P-frames for every I-frame. Optical flow models use TV-L1 flow.
4.1 Ablation Study
Here we study the benefits of using compressed representations over RGB images. We focus on UCF-101 and HMDB-51, as they are two of the most well-studied action recognition datasets. ablation presents a detailed analysis. On both datasets, training on compressed videos significantly outperforms training on RGB frames. In particular, it provides 5.8% and 2.7% absolute improvement on HMDB-51 and UCF-101 respectively.
Quite surprisingly, while residuals account for a very small fraction of the data, they alone achieve good accuracy. Motion vectors alone do not perform as well, as they contain no spatial details. However, they offer information orthogonal to what still images provide; when added to the other streams, they significantly boost performance. Note that we use only I-frames as full images, which is a small subset of all frames, yet CoViAR achieves good performance.
Accumulated Motion Vectors and Residuals. Our back-tracing technique not only simplifies the dependencies but also results in clearer patterns to model. This improves performance, as shown in ablation_accu. On the first split of UCF-101, our accumulation technique provides a 5.6% improvement for the motion-vector stream and a 0.4% improvement (4.2% error reduction) for the full model. Performance of the residual stream also improves, by 0.9% (4.3% error reduction).
Visualizations. In tsne, we qualitatively study the RGB and compressed representations of two videos of the same action in t-SNE space. In RGB space the two videos are clearly separated, while in motion-vector and residual space they overlap. This suggests that an RGB-image-based model needs to learn the two patterns separately, while a compressed-video-based model sees a shared representation for videos of the same action, making training and generalization easier.
In addition, note that the two directions of the RGB trajectories overlap, showing that RGB images cannot distinguish upward from downward motion. Compressed signals, on the other hand, preserve motion: the trajectories form circles instead of going back and forth along the same path.
4.2 Speed and Efficiency
Our method is efficient because the computation on each I-frame is shared across multiple frames, and the computation on P-frames is much cheaper. flops compares the CNN computational cost of our method with state-of-the-art 2D and 3D CNN architectures. Since the P- and I-frame computational costs differ in our model, we report the average GFLOPs over all frames. As shown in the table, CoViAR is 2.7 times faster than ResNet-152 and 4.6 times faster than Res3D, while being significantly more accurate.
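To see where the saving comes from, consider a back-of-the-envelope estimate. The per-network GFLOPs below are approximate, commonly cited figures for 224×224 inputs, not the paper's measured numbers, but the resulting ratio illustrates the averaging over I- and P-frames.

```python
# Approximate per-frame inference costs in GFLOPs (our assumption).
resnet152 = 11.3   # I-frame CNN
resnet18 = 1.8     # each P-frame CNN (one motion + one residual stream)
p_per_i = 11       # average P-frames per I-frame for MPEG-4 (see Sec. 4)

# Average cost per frame over one I-frame plus its P-frames.
avg_gflops = (resnet152 + p_per_i * 2 * resnet18) / (1 + p_per_i)
speedup_vs_rgb = resnet152 / avg_gflops
print(avg_gflops, speedup_vs_rgb)
```

Under these assumptions the average cost works out to roughly 4.2 GFLOPs per frame, about 2.7 times cheaper than running ResNet-152 on every frame, which is consistent with the speedup reported above.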
A more detailed speed analysis is presented in fps. The preprocessing time of the two-stream methods, i.e. optical flow computation, is measured on a Tesla P100 GPU with an implementation of the TV-L1 flow algorithm from OpenCV. Our preprocessing, i.e. the calculation of the accumulated motion vectors and residuals, is measured on Intel E5-2698 v4 CPUs. CNN time is measured on the same P100 GPU. We can see that the optical flow computation is the bottleneck for two-stream networks, even with low-resolution videos. Our preprocessing is much faster despite our CPU-only implementation.
For CNN time, we consider both settings, where (i) we forward multiple CNNs at the same time, and (ii) we do it sequentially. In both settings, our method is significantly faster than traditional methods. Overall, our method can be up to 100 times faster than traditional methods with multi-threaded preprocessing, running at 1,300 frames per second. accuracy_speed summarizes the results: CoViAR achieves the best efficiency and good accuracy, while requiring far less data.
We now compare the accuracy of CoViAR with state-of-the-art models in results. For fair comparison, here we focus on models using the same pre-training dataset, ILSVRC 2012-CLS . While pre-training using Kinetics yields better performance , since it is larger and more similar to the datasets used in this paper, those results are not directly comparable.
From the upper part of the table, we can see that our model significantly outperforms traditional RGB-image based methods. C3D , Res3D , P3D ResNet , and I3D  consider 3D convolution to learn temporal structures. Karpathy et al.  and TLE  consider more complicated fusions and pooling. MV-CNN  apply distillation to transfer knowledge from an optical-flow-based model. Our method uses much faster 2D CNNs plus simple late fusion without additional supervision, and still significantly outperforms these methods.
Two-stream Network. Most state-of-the-art models use the two-stream framework, i.e. one stream trained on RGB frames and the other on optical flows. It is natural to ask: What if we replace the RGB stream by our compressed stream? Here we train a temporal-stream network using 7 segments with BN-Inception , and combine it with our model by late fusion. Despite its simplicity, this achieves very good performance as shown in add_of.
The lower part of results compares our method with state-of-the-art models using optical flow. CoViAR outperforms all of them. LRCN , Composite LSTM Model , and LSTM  use RNNs to model temporal dynamics. ActionVLAD  and TLE  apply more complicated feature aggregation. iDT+FT  is based on hand-engineered features. Again, our method simply trains 2D CNNs separately without any complicated fusion or RNN and still outperforms these models.
Finally, we evaluate our method on the Charades dataset (charades). As Charades provides frame-level annotations, we train our network to predict the labels of each frame; at test time we average the scores of the sampled frames as the final prediction. Our method again outperforms other models trained on RGB images. Note that Sigurdsson et al. use additional annotations, including objects, scenes, and intentions, to train a conditional random field (CRF) model; our model requires only action labels. When using optical flow, CoViAR outperforms all other state-of-the-art methods. The results on Charades demonstrate the effectiveness of CoViAR for both video-level and frame-level predictions.
[Figure: t-SNE panels — Video 1, Video 2, Joint space.]
5 Conclusion
In this paper, we propose to train deep networks directly on compressed videos. This is motivated by the practical observation that video compression is essentially free on all modern cameras, thanks to hardware-accelerated video codecs, and that video is often directly available in its compressed form. In other words, decompressing the video is actually an inconvenience.
We demonstrate that, quite surprisingly, this is not a drawback but rather a virtue. In particular, video compression removes irrelevant information from the data, rendering the representation more robust; after all, compression is not meant to affect the content that humans consider pertinent. Secondly, the increased relevance and reduced dimensionality make computation much more efficient: we are able to use much simpler networks for motion vectors and residuals. Finally, the accuracy of the model actually improves when using compressed data, yielding a new state of the art.
In short, our method is both faster and more accurate, while being simpler to implement than previous works.
We would like to thank Ashish Bora for helpful discussions. This work was supported in part by Berkeley DeepDrive and an equipment grant from Nvidia.
-  S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadarajan, and S. Vijayanarasimhan. YouTube-8M: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675, 2016.
-  J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017.
-  N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
-  A. Diba, V. Sharma, and L. Van Gool. Deep temporal linear encoding networks. CVPR, 2017.
-  J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR, 2015.
-  C. Feichtenhofer, A. Pinz, and R. Wildes. Spatiotemporal residual networks for video action recognition. In NIPS, 2016.
-  C. Feichtenhofer, A. Pinz, and R. P. Wildes. Spatiotemporal multiplier networks for video action recognition. In CVPR, 2017.
-  C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In CVPR, 2016.
-  R. Girdhar and D. Ramanan. Attentional pooling for action recognition. In NIPS, 2017.
-  R. Girdhar, D. Ramanan, A. Gupta, J. Sivic, and B. Russell. Actionvlad: Learning spatio-temporal aggregation for action classification. In CVPR, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
-  V. Kantorov and I. Laptev. Efficient feature extraction, encoding and classification for action recognition. In CVPR, 2014.
-  A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In ICCV, 2011.
-  I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In CVPR, 2008.
-  D. Le Gall. Mpeg: A video compression standard for multimedia applications. Communications of the ACM, 1991.
-  C.-Y. Ma, M.-H. Chen, Z. Kira, and G. AlRegib. TS-LSTM and temporal-inception: Exploiting spatiotemporal dynamics for activity recognition. arXiv preprint arXiv:1703.10667, 2017.
-  L. v. d. Maaten and G. Hinton. Visualizing data using t-SNE. JMLR, 2008.
-  M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. ICLR, 2016.
-  Cisco Visual Networking Index. Forecast and methodology, 2016-2021, white paper. San Jose, CA, USA, 2016.
-  X. Peng, C. Zou, Y. Qiao, and Q. Peng. Action recognition with stacked fisher vectors. In ECCV, 2014.
-  M. Pollefeys, D. Nistér, J.-M. Frahm, A. Akbarzadeh, P. Mordohai, B. Clipp, C. Engels, D. Gallup, S.-J. Kim, P. Merrell, et al. Detailed real-time urban 3d reconstruction from video. IJCV, 2008.
-  Z. Qiu, T. Yao, and T. Mei. Learning spatio-temporal representation with pseudo-3d residual networks. In ICCV, 2017.
-  I. E. Richardson. Video codec design: developing image and video compression systems. John Wiley & Sons, 2002.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 2015.
-  Y. Shi, Y. Tian, Y. Wang, W. Zeng, and T. Huang. Learning long-term dependencies for action recognition with a biologically-inspired deep network. In ICCV, 2017.
-  G. A. Sigurdsson, S. Divvala, A. Farhadi, and A. Gupta. Asynchronous temporal fields for action recognition. In CVPR, 2017.
-  G. A. Sigurdsson, G. Varol, X. Wang, A. Farhadi, I. Laptev, and A. Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In ECCV, 2016.
-  K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014.
-  K. Soomro, A. Roshan Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. In CRCV-TR-12-01, 2012.
-  N. Srivastava, E. Mansimov, and R. Salakhudinov. Unsupervised learning of video representations using lstms. In ICML, 2015.
-  O. Sukmarg and K. R. Rao. Fast object detection and segmentation in MPEG compressed domain. In TENCON, 2000.
-  L. Sun, K. Jia, K. Chen, D.-Y. Yeung, B. E. Shi, and S. Savarese. Lattice long short-term memory for human action recognition. In ICCV, 2017.
-  B. U. Töreyin, A. E. Cetin, A. Aksay, and M. B. Akhan. Moving object detection in wavelet compressed video. Signal Processing: Image Communication, 2005.
-  D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015.
-  D. Tran, J. Ray, Z. Shou, S.-F. Chang, and M. Paluri. ConvNet architecture search for spatiotemporal feature learning. arXiv preprint arXiv:1708.05038, 2017.
-  H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Dense trajectories and motion boundary descriptors for action recognition. IJCV, 2013.
-  H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
-  H. Wang, M. M. Ullah, A. Klaser, I. Laptev, and C. Schmid. Evaluation of local spatio-temporal features for action recognition. In BMVC, 2009.
-  L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016.
-  Y. Wang, M. Long, J. Wang, and P. S. Yu. Spatiotemporal pyramid network for video action recognition. In CVPR, 2017.
-  S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NIPS, 2015.
-  B.-L. Yeo and B. Liu. Rapid scene analysis on compressed video. IEEE Transactions on circuits and systems for video technology, 1995.
-  J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, 2015.
-  C. Zach, T. Pock, and H. Bischof. A duality based approach for realtime tv-l 1 optical flow. Pattern Recognition, 2007.
-  B. Zhang, L. Wang, Z. Wang, Y. Qiao, and H. Wang. Real-time action recognition with enhanced motion vector CNNs. In CVPR, 2016.
Appendix A RNN-Based Models
Given the recurrent definition of P-frames, one can use an RNN to model a compressed video. In preliminary experiments, we evaluate a variant using Conv-LSTMs.
The architecture is identical to CoViAR except that (i) it uses the original $\mathcal{T}^{(t)}$ and $\Delta^{(t)}$ instead of the accumulated $\mathcal{D}^{(t)}$ and $\mathcal{R}^{(t)}$, because here we want to model the original dependency, and (ii) it uses a Conv-LSTM to aggregate the CNN features instead of average pooling. Formally, let
$$x^{(t)} := \max\left(\phi_{\mathrm{motion}}\big(\mathcal{T}^{(t)}\big),\; \phi_{\mathrm{residual}}\big(\Delta^{(t)}\big)\right)$$
denote the max-pooled P-frame feature at time $t$. The Conv-LSTM takes the input sequence
$$\left(\phi_{\mathrm{RGB}}\big(I^{(0)}\big),\; x^{(1)},\; \ldots,\; x^{(T)}\right).$$
Here the number of channels of $\phi_{\mathrm{RGB}}(I^{(0)})$ is reduced by a convolution so that its dimensionality matches that of $x^{(t)}$. We use -dimensional hidden states and kernels for the Conv-LSTM. Due to memory constraints, we subsample one of every two P-frames to reduce the sequence length.
lstm presents the results. Even though the Conv-LSTM model outperforms traditional RGB-based methods, the decoupled CoViAR achieves the best performance. We also try adding the input of Conv-LSTM to its output as a skip connection, but it leads to worse performance (Conv-LSTM-Skip).
Appendix B Feature Fusion
We experiment with different ways of combining the P-frame features with the I-frame features. In particular, we evaluate maximum, mean, and multiplicative fusion, concatenation of feature maps, and late fusion (summing softmax scores). For maximum, mean, and multiplicative fusion, we perform convolution on the I-frame feature maps before fusion so that their dimensionality matches the P-frame features.
fusion summarizes the results; we find that late fusion works best for CoViAR. Note that late fusion allows training a decoupled model, while the other strategies require training multiple CNNs jointly. The ease of training may also contribute to late fusion's superior performance.
Appendix C CoViAR without Temporal Segments
Appendix D Confusion Matrix
confusion_ours and confusion_rgb show the confusion matrices of CoViAR and the model using only RGB images respectively, on UCF-101. confusion_diff shows the difference between their predictions. We can see that CoViAR corrects many mistakes made by the RGB-based model (off-diagonal purple blocks in confusion_diff). For example, while the RGB-based model gets confused about the similar actions of Cricket Bowling and Cricket Shot, our model better distinguishes between them.