Video Jigsaw: Unsupervised Learning of Spatiotemporal Context for Video Action Recognition

08/22/2018 · Unaiza Ahsan, et al. · Georgia Institute of Technology, Carnegie Mellon University

We propose a self-supervised learning method to jointly reason about spatial and temporal context for video recognition. Recent self-supervised approaches have used spatial context [9, 34] as well as temporal coherency [32], but combining the two has required extensive preprocessing such as tracking objects through millions of video frames [59] or computing optical flow to determine frame regions with high motion [30]. We propose to combine spatial and temporal context in one self-supervised framework without any heavy preprocessing. We divide multiple video frames into grids of patches and train a network to solve jigsaw puzzles on these patches from multiple frames. The network is thus trained to correctly identify the position of a patch within a video frame as well as the position of a patch over time. We also propose a novel permutation strategy that outperforms random permutations while significantly reducing computational and memory requirements. We use our trained network for transfer learning tasks such as video activity recognition and demonstrate the strength of our approach on two benchmark video action recognition datasets without using a single frame from these datasets for unsupervised pretraining of our proposed video jigsaw network.


1 Introduction

Unsupervised representation learning of visual data is a much needed line of research, as it does not require manually labeled large-scale datasets. This is especially true for classification tasks in video, where the annotation process is tedious and sometimes hard to agree upon (where an action begins and ends, for example) [46]. One proposed solution to this problem is self-supervised learning, where auxiliary tasks are designed to exploit the inherent structure of unlabeled datasets and a network is trained to solve those tasks. Self-supervised tasks that exploit spatial context include predicting the location of one patch relative to another [9], solving a jigsaw puzzle of image patches [34], and predicting an image's color channels from grayscale [61, 28], among others. Self-supervision tasks on video data include video frame tuple order verification [32], sorting video frames [30] and tracking objects over time while training a Siamese network for similarity-based learning [58]. Video data involves not just spatial context but also rich temporal structure in an image sequence. Attempts to combine the two have resulted in multi-task learning approaches [10] that yield some improvement over a single network. This work proposes a self-supervised task that jointly exploits spatial and temporal context in videos by dividing multiple video frames into patches and shuffling them to form a jigsaw puzzle. The network is trained to solve this puzzle, which requires reasoning over space and time.

There are several studies that empirically validate that the earliest visual cues captured by infants' brains are surface motions of objects [51]. These then develop into perception involving local appearance and texture of objects [51]. Studies have also pointed out that objects' motion and their temporal transformations are important for the human visual system to learn the structure of objects [15, 60]. Motivated by these studies, there is recent work on unsupervised video representation learning via tracking objects through videos and training a Siamese network to learn a similarity metric on these object patches [58]. However, the prerequisite of this approach is to track millions of objects through videos and extract the relevant patches. Keeping this in mind, we propose to learn such a structure of objects and their transformations over time by designing a self-supervised task which solves jigsaw puzzles comprising multiple video frame patches, without needing to explicitly track objects over time. Our proposed method, trained on a large-scale video activity dataset, also does not require optical-flow-based patch mining, and we show empirically that a large unlabeled video dataset combined with a simple permutation sampling approach is enough to learn an effective unsupervised representation. Figure 1 shows our proposed approach, which we call video jigsaw. Our contributions in this paper are:

Figure 1: Video Jigsaw Task: The first row shows a tuple of frames from the action "high jump". The second row shows how we divide each frame into a 2x2 grid of patches. The third row shows a random permutation of the 12 patches, which are input to the network. The final row shows the assembled jigsaw puzzle.
  1. We propose a novel self-supervised task which divides multiple video frames into patches, creates jigsaw puzzles out of these patches, and trains a network to solve them.

  2. Our work exploits both spatial and temporal context in one joint framework without requiring explicit object tracking in videos or optical flow based patch mining from video frames.

  3. We propose a permutation strategy that constrains the sampled permutations and outperforms random permutations while being memory efficient.

  4. We show via extensive experimental evaluation the feasibility and effectiveness of our approach on video action recognition.

  5. We demonstrate the domain transfer capability of our proposed video jigsaw networks: our best self-supervised model is trained on Kinetics [24] video frames, yet achieves competitive results on the UCF101 [50] and HMDB51 [27] datasets.

2 Related Work

Unsupervised representation learning is a well studied problem in the literature for both images and videos. The goal is to learn a representation that is simpler in some way: it can be low-dimensional, sparse, and/or independent [17]. One way to learn such a representation is to use a reconstruction objective. Autoencoders [20] are neural networks designed to reconstruct their input and produce it as their output. Denoising autoencoders [56] train a network to undo random corruption of the input data. Other methods that use reconstruction to estimate the latent variables that can explain the observed data include Deep Boltzmann Machines [45], stacked autoencoders [29, 5] and Restricted Boltzmann Machines (RBMs) [21, 49]. Classical work (before deep learning) involved hand-designing features and feature aggregation for applications such as object discovery in large datasets [48, 44] and mid-level feature mining [8, 47, 54].

Unsupervised learning from videos includes many learning variants such as video frame prediction [60, 33, 52, 55, 18], but we argue that predicting pixels is a much harder task, especially if the end goal is to learn high-level motion and appearance changes in frames for activity recognition. Other unsupervised representation learning approaches include exemplar CNNs [11], CliqueCNNs [4] and unsupervised similarity learning by clustering [3].

Unsupervised representations are generally learned to make another learning task (of interest) easier [17]. This forms the basis of another line of work that has emerged, called ‘self-supervised learning’ [9, 32, 61, 28, 58, 10, 59, 35]. Self-supervised learning aims to find structure in the unlabeled data by designing auxiliary tasks and pseudo labels to learn features that can explain the factors of variation in the data. These features can then be useful for the target task; in our case, video action recognition. Self-supervised learning can exploit several cues, some of which are spatial context and temporal coherency. Other self-supervised learning tasks on videos use cues like ego-motion [63, 22, 1] as a supervisory signal and other modalities beyond raw pixels such as audio [37, 36] and robot motion [2, 40, 41, 42]. We briefly cover relevant literature from the spatial, temporal and combined contextual cues for self-supervised learning.

Spatial Context:

These methods typically sample patches from images or videos; supervised tasks are designed around the arrangement of these patches and pseudo labels are constructed. Doersch et al. [9] divide an image into a 3x3 grid, sample two patches from the image and train a network to predict the location of the second patch relative to the first. This prediction task requires no labels but learns an effective image representation. Noroozi and Favaro [34] also divide an image into a 3x3 grid, but they input all patches to a Siamese-like network in which the patches are shuffled, and the task is to solve the resulting jigsaw puzzle. They report that with just 100 permutations, their network learns a representation that produces good results when finetuned on PASCAL VOC 2007 [13] for object detection and classification. Pathak et al. [39] devise an inpainting auxiliary task where blocks of pixels are removed from an image and the task is to predict the missing pixels. A related task is image colorization [61, 28], where the network is trained to predict the color of an image, which is available as a 'free signal' with images. Zhang et al. [62] modify the autoencoder architecture to predict raw data channels as their self-supervised task and use the learnt features for supervised tasks.

Figure 2: Our full video jigsaw network training pipeline.

Temporal Coherency:

These methods use temporal coherency as a supervisory signal and exploit abundant unlabeled video data instead of just images. Wang and Gupta [58] use detection and tracking methods to extract object patches from videos and train a Siamese network with the prior that objects in nearby frames are similar whereas other random object patches are dissimilar. Misra et al. [32] devise a sequence verification task in which tuples of video frames are shuffled and the network is trained on the binary task of discriminating correctly ordered from shuffled frames. Fernando et al. [14] design a task where frames in correct temporal order and in shuffled order are encoded and passed to a network that is trained to predict the odd encoding out, the odd one being the temporally shuffled sequence. Lee et al. [30] extract high-motion tuples of four frames via optical flow and shuffle them; their network learns to predict the permutation from which the frames were sampled. Our work is closely related to approaches that shuffle video frames and train a network to learn the permutations. A key difference between our work and Lee et al. [30] is that they use only a single 80 x 80 patch from a video frame and shuffle it with three other patches from different frames, whereas we sample a grid of patches from each frame and shuffle them with multiple patches from the other frames. Instead of the binary tuple verification task of Misra et al. [32], our self-supervised task is to predict the exact permutation of the patches, much like the jigsaw puzzle task of Noroozi and Favaro [34], only on videos. Some recent approaches have used temporal coherency-based self-supervision on video sequences to model fine-grained human poses and activities [31] and animal behavior [7]. Our model is not specialized for motor skill learning like [7] and we do not require bounding boxes for humans in the video frames as in [31].

Combining Multiple Cues:

Since our approach combines spatial and temporal context into a single task, it is pertinent to mention recent approaches that combine multiple supervisory cues. Doersch and Zisserman [10] combine four self-supervised tasks in a multi-task training framework: context prediction [9], colorization [61], exemplar-based learning [12] and motion segmentation [38]. Their experiments show that naively combining different tasks does not yield improved results, so they propose a lasso regularization scheme to capture only useful features from the trained network. Our work does not require a complex model for combining the spatial and temporal context prediction tasks for self-supervised learning. Wang et al. [59] train a Siamese network to recognize whether an object patch belongs to a similar category (but a different object) or to the same object seen later in time. This work attempts to combine spatial and temporal context but requires preprocessing to discover the tracked object patches. Our work constructs the spatiotemporal task from video frames automatically, without requiring graph construction or visual detection and tracking. There is also recent work on using synthetic imagery and its 'free annotations' to learn visual representations by combining multiple self-supervised tasks [43]. A related approach to ours is that of [53], where the authors devise two tasks for the network to train on in a multi-task framework: a spatial placement task, where the network learns to identify whether an image patch overlaps with a person bounding box, and an ordering task, where the network is trained to identify the correct sequence of two frames in a Siamese setting, much like [32]. The key difference between their work and ours is that our network does not do multi-task learning and predicts a much richer set of labels (the shuffled configuration of patches) rather than a binary classification.

3 The Video Jigsaw Puzzle Problem

We present the video jigsaw puzzle task in this section. Our goal is to create a task that forces a network to learn not only the part-based appearance of complex activities but also how those parts change over time. For this, we divide each video frame into a 2x2 grid of patches. For a tuple of three video frames, this results in 12 patches per video. We number the patches from 1 to 12 and shuffle them; note that there are 12! ways to shuffle these patches. We use a small but diverse subset of these permutations, selecting them based on their Hamming distance from the previously sampled permutations [34]. We use two sampling strategies in our experiments, which we describe in more detail below. The network is trained to predict the correct order of the patches. Our video jigsaw task is illustrated in Figure 1.
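To make the construction concrete, the following is a minimal Python/NumPy sketch of how one video jigsaw instance could be assembled, assuming 3 frames per tuple and a 2x2 grid per frame (12 patches); the function and variable names are illustrative and not the authors' implementation.

```python
import numpy as np

FRAMES_PER_TUPLE = 3
GRID = 2                          # 2x2 grid per frame
PATCHES_PER_FRAME = GRID * GRID   # 4 patches per frame, 12 per tuple

def frame_to_patches(frame):
    """Split an HxWxC frame into its 4 grid patches (2x2 grid)."""
    h, w = frame.shape[0] // GRID, frame.shape[1] // GRID
    return [frame[i * h:(i + 1) * h, j * w:(j + 1) * w]
            for i in range(GRID) for j in range(GRID)]

def make_puzzle(frames, permutation):
    """Shuffle the 12 patches of a 3-frame tuple according to one sampled
    permutation; the training label is the index of that permutation."""
    patches = [p for f in frames for p in frame_to_patches(f)]  # 12 patches
    return [patches[idx] for idx in permutation]

# Example: shuffle the 12 patch positions with a random permutation.
frames = [np.random.rand(128, 128, 3) for _ in range(FRAMES_PER_TUPLE)]
perm = np.random.permutation(FRAMES_PER_TUPLE * PATCHES_PER_FRAME)
puzzle = make_puzzle(frames, perm)
```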

3.1 Training Video Jigsaw Network

Our training strategy follows a line of recent work on self-supervised learning on large-scale image and video datasets [34, 30]. The self-supervised task is constructed by defining pseudo labels, in our case the permuted order of the patches. Each patch, after preprocessing, is input to one stream of a multi-stream Siamese-like network. Each stream, up to the first fully connected layer, shares parameters and operates independently on its frame patch. After the first fully connected layer (fc6), the feature representations are concatenated and input to another fully connected layer (fc7). The final fully connected layer transforms the features into an N-dimensional output, where N is the number of permutations. A softmax over this output returns the most likely permutation from which the frame patches were sampled. Our detailed training network is shown in Figure 2.
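The following is a PyTorch-style sketch of such a multi-stream network; the authors use CaffeNet in Caffe, so the layer sizes and names here are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class VideoJigsawNet(nn.Module):
    """Sketch of a multi-stream jigsaw network: a shared conv trunk + fc6
    per patch, concatenation, fc7, and an N-way permutation classifier."""
    def __init__(self, num_permutations, num_patches=12):
        super().__init__()
        self.num_patches = num_patches
        self.features = nn.Sequential(            # shared across all streams
            nn.Conv2d(3, 96, kernel_size=11, stride=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(3),
        )
        self.fc6 = nn.Sequential(nn.Flatten(),
                                 nn.Linear(384 * 3 * 3, 512), nn.ReLU(inplace=True))
        self.fc7 = nn.Sequential(nn.Linear(512 * num_patches, 4096),
                                 nn.ReLU(inplace=True))
        self.classifier = nn.Linear(4096, num_permutations)  # N-way output

    def forward(self, patches):
        # patches: (batch, num_patches, 3, H, W), already shuffled according
        # to a sampled permutation; the target is the permutation index.
        feats = [self.fc6(self.features(patches[:, i]))
                 for i in range(self.num_patches)]
        return self.classifier(self.fc7(torch.cat(feats, dim=1)))
```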

Figure 3: Our proposed permutation sampling strategy. We randomly permute the patches within each frame in a tuple, then we permute the frames. Since the number of patches per frame is 4, there are 4! = 24 unique ways to shuffle the patches within a frame. We repeat this for all frames in the tuple and finally select the top N permutations based on Hamming distance. This strategy preserves spatial coherence, preserves diversity between permutations, takes a fraction of the time and memory of the algorithm of [34], and results in comparable or better performance on the transfer learning tasks.
Input: number of permutations N, patches per frame p, number of frames f
Output: permutation matrix P, with one sampled permutation of the p·f patch positions per row
Algorithm 1 Sampling Permutations with Spatial Coherence (the procedure is described in the explanation below and sketched in code after it)

3.2 Generating Video Jigsaw Puzzles

We describe here the strategy used to generate puzzles from the video frame patches. Noroozi and Favaro [34] propose to generate permutations of image patches by maximizing the Hamming distance between each sampled permutation and the subsequently sampled ones. They iterate over all possible permutations of their 9 patches until they end up with the desired number N of permutations (1,000 in their case). In our case, since each video frame is divided into 4 patches and there are 3 frames in a tuple, it is not possible to sample permutations from all 12! (roughly 4.8 x 10^8) possibilities in this way due to memory constraints. To reimplement the approach of [34], we devise a computationally heavy but memory-efficient means of generating permutations from these 12! possibilities; more details are given in the supplementary material. This way, we generate the Hamming-distance-based permutations as suggested by [34].

The permutation sampling approach described above treats all video frame patches as one giant image: a patch belonging to the first frame may be shuffled to the last frame's position (to maximize the Hamming distance between permutations). We treat this sampling approach as an (expensive) baseline and propose another sampling strategy that minimizes compute and memory costs and can scale to any number of permutations. We generate permutations with a 2x2 grid per frame. Our proposed approach forces the sampled permutations not only to obey the Hamming distance criterion but also to respect spatial coherence in the video frames. This scales down the computational and memory requirements dramatically while giving similar or better performance on transfer learning tasks. Our permutation sampling approach is given in Algorithm 1 and presented visually in Figure 3.

Explanation of Algorithm 1:

With the constraint of spatial coherence, i.e., patches belonging to the same frame are constrained to stay together, the full space of candidate permutations (hashes) has (4!)^3 * 3! = 82,944 elements: an ordering of the four patches within each of the three frames, followed by an ordering of the frames themselves. The first hash is generated randomly; each subsequent hash is picked by maximizing, over the full space, the average Hamming distance to the previously generated hashes. To keep memory usage bounded, we divide the full space into subsets and iterate through them: each subset contains all permutations of the patches within the first frame combined with one particular arrangement of the patches of the remaining frames. From each subset we keep the best hash together with its distance score, and once the full space has been traversed, the best of these per-subset candidates is chosen as the new hash. For memory efficiency, it suffices to build the matrix of within-first-frame permutations only once, since it is reused in every iteration; the patch arrangements of the remaining frames are generated on the fly by taking the corresponding within-frame permutation and adding the appropriate patch-index offset to each of its elements. A code sketch of this procedure follows.
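Below is a minimal NumPy sketch of the spatially coherent sampling idea, assuming 3 frames and 4 patches per frame; for clarity it enumerates the full 82,944-element space directly rather than in the memory-saving subsets described above, so it illustrates the strategy rather than reproducing the authors' exact algorithm.

```python
import itertools
import numpy as np

P_FRAME, N_FRAMES = 4, 3   # 2x2 grid per frame, 3 frames per tuple

def coherent_space():
    """All permutations of the 12 patch positions in which the patches of a
    frame stay together: permute within each frame, then permute the frames."""
    within = list(itertools.permutations(range(P_FRAME)))         # 24 per frame
    frame_orders = list(itertools.permutations(range(N_FRAMES)))  # 6 frame orders
    space = []
    for w0 in within:
        for w1 in within:
            for w2 in within:
                blocks = [np.array(w0), np.array(w1) + P_FRAME,
                          np.array(w2) + 2 * P_FRAME]
                for order in frame_orders:
                    space.append(np.concatenate([blocks[f] for f in order]))
    return np.stack(space)                                        # (82944, 12)

def sample_permutations(n):
    """Greedily pick n permutations that maximize the average Hamming
    distance to the permutations selected so far."""
    space = coherent_space()
    chosen = [space[np.random.randint(len(space))]]               # first: random
    for _ in range(n - 1):
        dists = (space[:, None, :] != np.stack(chosen)[None]).sum(-1).mean(1)
        chosen.append(space[int(dists.argmax())])
    return np.stack(chosen)                                       # (n, 12)

P = sample_permutations(100)   # e.g. a set of N = 100 permutations
```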

4 Experiments

In this section we describe in detail our experiments on video action recognition using the video jigsaw network and a comprehensive ablation study, justifying our design choices and conclusions. The datasets we use for training the video jigsaw network are UCF101 [50] and Kinetics [24]. The datasets we evaluate on are UCF101 [50] and HMDB51 [27] for video action recognition.

4.1 Datasets

UCF101 [50] is a benchmark video action recognition dataset consisting of 101 action categories and 13,320 videos; around 9.5k videos are used for training and 3.5k for testing. HMDB51 [27] consists of around 7,000 videos of 51 action categories, of which 70% belong to the training set and 30% to the test set. The Kinetics dataset [24] is a large-scale human action video dataset consisting of 400 action categories with more than 400 videos per action category.

4.2 Video Jigsaw Network Training

Tuple Sampling Strategy

For our unsupervised pretraining step on UCF101, we use the frame tuples (4 frames per tuple) provided by the authors of [30]. They extracted optical-flow-based regions from these frame tuples and used them in their temporal sequence sorting task [30]; we do not use the optical-flow-based regions but only the tuples as a whole. From each four-frame tuple we further sample temporally ordered three-frame sub-tuples, which yields around 900,000 frame tuples from the UCF101 dataset to train our video jigsaw network on. In the Kinetics dataset, each video clip is 10 seconds long, and we create one tuple per video by sampling three fixed frame positions from each clip. The reason we do not sample further (as we did for UCF101) is simply that Kinetics is very large and diverse, with more than 400 videos per class, which is not true of UCF101. Note that we do not use any further preprocessing to generate the frame tuples for our video jigsaw network; previous approaches have used expensive detection and tracking [58] or optical flow computation to sample high-motion patches [30].
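As an illustration, the sketch below expands one four-frame tuple into temporally ordered three-frame sub-tuples; taking all four ordered subsets, as shown here, is an assumption consistent with the roughly fourfold growth from the tuples of [30] to around 900,000 tuples, not a detail confirmed by the text.

```python
import itertools

def expand_tuple(four_frame_tuple):
    """Return the temporally ordered 3-frame sub-tuples of a 4-frame tuple."""
    return list(itertools.combinations(four_frame_tuple, 3))

print(expand_tuple(('fa', 'fb', 'fc', 'fd')))
# [('fa','fb','fc'), ('fa','fb','fd'), ('fa','fc','fd'), ('fb','fc','fd')]
```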

Implementation Details

We use the Caffe [23] deep learning framework for all our experiments and CaffeNet [26] as our base network, with one stream per patch (12 streams for the 12 patches in a tuple). Our video jigsaw puzzles are generated on the fly according to the permutation matrix P generated before training begins. Each row of P corresponds to a unique permutation of the patches: the video frame patches are shuffled according to the permutation sampled from P and input to the network, and the network is trained to predict the index in P from which the permutation was sampled. Each video frame is cropped and divided into a 2x2 grid of cells; each cell is larger than the final patch, and we randomly sample a patch (64 x 64 pixels unless stated otherwise) from within it. This strategy ensures that the network cannot learn the location of the patches from low-level appearance and texture details. We normalize each patch independently of the others to have zero mean and unit standard deviation; this also prevents the network from learning low-level details (so-called 'network shortcuts' in the self-supervision literature). Each patch is input to the multi-stream video jigsaw network depicted in Figure 2. We train the network with stochastic gradient descent (SGD), using a fixed batch size and an initial learning rate that is decreased by a factor of 10 every 128,000 iterations. Each layer in our network is initialized with Xavier initialization [16]. We train for 500,000 iterations (approximately 80 epochs) on a Titan X GPU; training converges in around 62 hours.

Progressive Training Approach

We borrow principles from curriculum learning [6] and train our video jigsaw network on an easy jigsaw puzzle task first, then on a harder one. We define an easy jigsaw puzzle task as one with a smaller number of permutations N, since the network has to learn fewer configurations of the patches in the video frames. So instead of starting from scratch for a larger N, we initialize that network's weights with the weights of a network trained with a smaller N.
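A hedged sketch of this progressive initialization, using PyTorch and a stand-in model rather than the actual video jigsaw network: the weights of the small-N network are copied over, and only the final N-way classification layer is re-initialized because its output size changes.

```python
import torch.nn as nn

def make_net(num_permutations):
    # Placeholder trunk + N-way classifier standing in for the jigsaw network.
    return nn.Sequential(
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, num_permutations),
    )

easy_net = make_net(100)    # assume this has already been trained on the easy task
hard_net = make_net(1000)   # harder task: more permutations to classify
shared = {k: v for k, v in easy_net.state_dict().items()
          if not k.startswith('2.')}             # '2.' = the 100-way output layer
hard_net.load_state_dict(shared, strict=False)   # keep the fresh 1000-way head
```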

Avoiding Network Shortcuts

As noted in recent self-supervised approaches [9, 34, 35], it is imperative to counter the network's tendency to learn patch locations from low-level details, for example due to chromatic aberration. Typical solutions are channel swapping [30], color normalization [34], leaving a gap between sampled patches, and training with a percentage of images in grayscale rather than color [35]; all of these aim to make the patch location learning task harder for the network. Our video jigsaw network incorporates these techniques to avoid network shortcuts: each patch is sampled from within a larger window so that a gap is left around it, around half of the video frames are randomly projected to grayscale, and we normalize each sampled patch independently. Using these techniques lowers the video jigsaw puzzle solving accuracy, but the transfer learning accuracy increases.
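The following is a small sketch of this preprocessing, assuming 64 x 64 patches jittered inside larger grid cells, roughly 50% random grayscale projection, and per-patch normalization; the exact crop and window sizes are illustrative.

```python
import numpy as np

PATCH = 64

def sample_patch(cell, rng):
    """Randomly sample a PATCH x PATCH crop from a larger grid cell,
    leaving a gap around the sampled patch."""
    y = rng.integers(0, cell.shape[0] - PATCH + 1)
    x = rng.integers(0, cell.shape[1] - PATCH + 1)
    return cell[y:y + PATCH, x:x + PATCH].astype(np.float32)

def preprocess_patch(patch, rng):
    if rng.random() < 0.5:                        # random grayscale projection
        gray = patch.mean(axis=2, keepdims=True)
        patch = np.repeat(gray, 3, axis=2)
    # normalize each patch independently to zero mean and unit std
    return (patch - patch.mean()) / (patch.std() + 1e-6)

rng = np.random.default_rng(0)
cell = np.random.rand(85, 85, 3)                  # one grid cell (size illustrative)
p = preprocess_patch(sample_patch(cell, rng), rng)
```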

Choice of Video Jigsaw Training Dataset

As mentioned, we train video jigsaw networks on the UCF101 and Kinetics datasets. Our results with the two datasets are shown in Table 1, which reports the video jigsaw task accuracy (VJ Acc) and the finetuning accuracy on UCF101 (Finetune Acc) for pretraining on each dataset; N is the number of permutations. Two things can be noted from the table. Pretraining on Kinetics results in worse video jigsaw solving performance but better generalization and transfer learning: our finetuning results are consistently better with Kinetics pretraining than with UCF101 pretraining. This shows that a large-scale, diverse dataset like Kinetics generalizes to a completely different dataset (UCF101). One possible reason behind the reduced performance of UCF101 pretraining is that we oversample from it, which makes it easy for the video jigsaw network to learn low-level details of the video frame appearances and rapidly decrease the training loss without yielding good transfer learning performance. To test this hypothesis, we use a reduced version of the UCF101 tuples (without any oversampling), comprising just 200,000 frame tuples, and train video jigsaw networks for N = 500 and N = 1000. The results are shown in Table 2: even without oversampling, UCF101-based pretraining does not perform as well as Kinetics.

Pretraining Dataset   VJ Acc (%), N=100   Finetune Acc (%), N=100   VJ Acc (%), N=250   Finetune Acc (%), N=250
UCF101                97.6                44.0                      84.6                42.6
Kinetics              61.6                44.6                      44.0                49.0
Table 1: Comparison between the UCF101 and Kinetics datasets for video jigsaw training
Pretrained On              VJ Acc (%), N=500   Finetune Acc (%), N=500   VJ Acc (%), N=1000   Finetune Acc (%), N=1000
Kinetics                   40.3                49.2                      29.4                 54.7
UCF101 (no oversampling)   63.3                46.5                      58.0                 46.4
Table 2: Comparison between Kinetics and the original UCF101 frame tuples as the pretraining dataset for the video jigsaw network

Choice of Number of Permutations

We vary the number of permutations N that the video jigsaw network has to learn, starting with N = 100 and going up to N = 1000. As we increase the number of permutations (see Table 3), the network finds it harder to learn the configuration of the patches, but generalization improves. This experiment is run with the video jigsaw network trained on the Kinetics dataset.

No. of permutations (N)   VJ Acc (%)   Finetuning Acc (%)
100                       61.6         44.6
250                       44.0         49.0
500                       47.6         48.1
1000                      29.4         54.7
Table 3: As we increase N, the video jigsaw accuracy decreases but the finetuning accuracy increases

Permutation Generation Strategy

We compare the performance of our proposed permutation strategy, which enforces spatial coherence between permuted patches, with the sampling approach of [34]. We show results for this comparison in Figure 4. As the bar chart shows, for various numbers of permutations, our spatial-coherence-preserving method either outperforms the original permutation generation strategy of [34] or is comparable to it, while being many times faster to generate.

Figure 4: Comparison between the permutation strategy proposed by [34] and our proposed sampling approach on the video jigsaw task (VJ Acc) and the finetuning task on UCF101 (FN Acc) for various numbers of permutations N. Our approach consistently performs better than or comparably to the approach of [34] while saving memory and computational costs. Figure is best viewed in color.

Patch Size

We also compare the performance of our video jigsaw method trained with different frame patch sizes. Table 4 shows that the finetuning accuracy increases with patch size but does not improve much beyond a patch size of 80 x 80.

Patch size (pixels)   Finetuning Acc (%)
64                    54.7
80                    55.4
100                   54.1
Table 4: Effect of patch size on the video jigsaw finetuning accuracy on the UCF101 dataset

4.3 Finetuning for Action Recognition

Once the video jigsaw network is trained, we use its convolutional layers' weights to initialize a standard CaffeNet [26] architecture and finetune it on the UCF101 and HMDB51 datasets. For UCF101, we sample 25 equidistant frames per video and compute frame-based accuracy as our evaluation measure. For HMDB51, we sample one frame per second from each video and use these frames for the finetuning experiment. With our best model and parameters (pretrained on the Kinetics dataset), results are given in Table 5 for test split 1 of both UCF101 and HMDB51.
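A small sketch of this evaluation protocol, with classify_frame and the video list as illustrative placeholders:

```python
import numpy as np

def equidistant_indices(num_frames, k=25):
    """Indices of k equidistant frames in a video of num_frames frames."""
    return np.linspace(0, num_frames - 1, k).astype(int)

def frame_based_accuracy(videos, classify_frame):
    """Classify 25 equidistant frames per video and report per-frame accuracy."""
    correct = total = 0
    for frames, label in videos:            # frames: list of images, label: class id
        for idx in equidistant_indices(len(frames)):
            correct += int(classify_frame(frames[idx]) == label)
            total += 1
    return correct / total
```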

Pretraining UCF101 Acc (%) HMDB51 Acc (%)
random 40.0 16.3
ImageNet (with labels) 67.7 28.0
Fernando [14] 60.3 32.5
Hadsell [19] 45.7 16.3
Mobahi [33] 45.4 15.9
Wang and Gupta [58] 40.7 15.6
Misra [32] 50.9 19.8
Lee [30] 56.3 22.1
Vondrick [57] 52.1 -
Video Jigsaw Network (ours) 55.4 27.0
Table 5: Finetuning results on UCF101 and HMDB51 of our proposed video jigsaw network (pretrained on the Kinetics dataset with N = 1000 permutations), compared to state-of-the-art approaches. Note that all these results are computed using the CaffeNet architecture. Our method gives superior or comparable performance to state-of-the-art unsupervised learning + finetuning approaches that use RGB frames for training

Table 5 shows our video jigsaw pretraining approach outperforming recent unsupervised pretraining approaches when finetuning on the HMDB51 dataset. On UCF101, our finetuning accuracy is comparable to the state of the art. The method of Fernando et al. [14] uses a different input from ours (stacks of frame differences), whereas we use RGB frames to form the jigsaw puzzles; all other approaches operate on RGB video frames or frame patches, so we can compare with them fairly. The methods of Lee et al. [30] and Misra et al. [32] are pretrained on UCF101, whereas our best network is trained on Kinetics. This again shows the domain transfer capability of a large-scale dataset like Kinetics compared to UCF101. Our method achieves this without any expensive tracking [58] or optical-flow-based patch or frame mining [32, 30]; all our approach requires is a large-scale, diverse, unlabeled video dataset. We used only three frames per video from the Kinetics dataset, hence only about 400,000 tuples for our video jigsaw training. We believe that using a larger dataset would lead to better performance, given that our approach is already close to the state of the art. Another point to note is that methods which perform well on UCF101, such as Lee et al. [30] and Misra et al. [32], do not perform as well on HMDB51, whereas our method generalizes well even though it is pretrained on a completely different dataset.

Method Supervision Classification
ImageNet 1000 class labels 78.2%
Random [39] none 53.3%
Doersch [9] ImageNet context 55.3%
Jigsaw Puzzle [34] ImageNet context 67.6%
Counting [35] ImageNet context 67.7%
Wang and Gupta [58] 100k videos, VOC2012 62.8%
Agrawal [1] egomotion (KITTI, SF) 54.2%
Misra [32] UCF101 videos 54.3%
Lee [30] UCF101 videos 63.8%
Pathak [38] MS COCO + segments 61.0%
Video Jigsaw Network (ours) Kinetics videos 63.6%
Table 6: PASCAL VOC 2007 classification results compared with other methods. Other results taken from [35] and [30]

4.4 Results on PASCAL VOC 2007 Dataset

The PASCAL VOC 2007 dataset consists of 20 object classes with 5,011 images in the train set and 4,952 images in the test set. Multiple objects can be present in a single image, and the classification task is to detect whether an object is present in a given image or not. We evaluate our video jigsaw network on this dataset by initializing a CaffeNet with our video jigsaw network's trained convolutional layers' weights. The fully connected layers' weights are randomly sampled from a Gaussian distribution with zero mean and 0.001 standard deviation. Our finetuning scheme follows the one suggested by [25]. Our classification results on the PASCAL VOC 2007 test set are shown in Table 6.
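A hedged sketch of this initialization step in PyTorch (the authors use Caffe); target_net stands for the CaffeNet-style classifier being finetuned and jigsaw_conv_state for the trained convolutional weights:

```python
import torch.nn as nn

def init_for_finetuning(target_net, jigsaw_conv_state):
    """Load trained conv weights and re-initialize fully connected layers
    from a zero-mean Gaussian with standard deviation 0.001."""
    target_net.load_state_dict(jigsaw_conv_state, strict=False)  # conv layers only
    for m in target_net.modules():
        if isinstance(m, nn.Linear):
            nn.init.normal_(m.weight, mean=0.0, std=0.001)
            nn.init.zeros_(m.bias)
```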

Our trained network generalizes well not only across datasets but also across tasks. Our video jigsaw network is trained on Kinetics videos and not on object-centric images, yet it performs competitively against state-of-the-art image-based self-supervised approaches and outperforms most of the video-based self-supervised methods.

4.5 Visualization Experiments

We show the first 40 conv1 filter weights of our best video jigsaw model in Figure 5; they show oriented edges learned by the model, even though no activity labels were used during training. We also perform a qualitative retrieval experiment with the video jigsaw model finetuned on the PASCAL VOC dataset; results are shown in Figure 6. The retrieved images match the query image, which qualitatively shows that our model, trained on unlabeled videos, is able to identify objects in still images.

Figure 5: Visualization of first 40 learned conv1 filters of our best performing video jigsaw model
Figure 6: Retrieval Experiment on PASCAL VOC dataset using our model

5 Conclusion

We propose a self-supervised learning task in which spatial and temporal context are exploited jointly. Our framework does not depend on heavy preprocessing steps such as object tracking or optical-flow-based patch mining. We demonstrate via extensive experimental evaluation that our approach performs competitively on video activity recognition, outperforming the state of the art in self-supervised video action recognition on the HMDB51 dataset. We also propose a permutation generation strategy that respects spatial coherence and show that, even when shuffling 12 patches across frames, diverse permutations can be generated extremely efficiently with our approach.

References

  • [1] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. In Computer Vision (ICCV), 2015 IEEE International Conference on, pages 37–45. IEEE, 2015.
  • [2] P. Agrawal, A. V. Nair, P. Abbeel, J. Malik, and S. Levine. Learning to poke by poking: Experiential learning of intuitive physics. In Advances in Neural Information Processing Systems, pages 5074–5082, 2016.
  • [3] M. A. Bautista, A. Sanakoyeu, and B. Ommer. Deep unsupervised similarity learning using partially ordered sets. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [4] M. A. Bautista, A. Sanakoyeu, E. Tikhoncheva, and B. Ommer. Cliquecnn: Deep unsupervised exemplar learning. In Advances in Neural Information Processing Systems, pages 3846–3854, 2016.
  • [5] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In Advances in neural information processing systems, pages 153–160, 2007.
  • [6] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48. ACM, 2009.
  • [7] B. Brattoli, U. Buchler, A.-S. Wahl, M. E. Schwab, and B. Ommer. Lstm self-supervision for detailed behavior analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6466–6475, 2017.
  • [8] C. Doersch, A. Gupta, and A. A. Efros. Mid-level visual element discovery as discriminative mode seeking. In Advances in neural information processing systems, pages 494–502, 2013.
  • [9] C. Doersch, A. Gupta, and A. A. Efros. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pages 1422–1430, 2015.
  • [10] C. Doersch and A. Zisserman. Multi-task self-supervised visual learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2051–2060, 2017.
  • [11] A. Dosovitskiy, P. Fischer, J. T. Springenberg, M. Riedmiller, and T. Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9):1734–1747, 2016.
  • [12] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller, and T. Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems, pages 766–774, 2014.
  • [13] M. Everingham, S. A. Eslami, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes challenge: A retrospective. International journal of computer vision, 111(1):98–136, 2015.
  • [14] B. Fernando, H. Bilen, E. Gavves, and S. Gould. Self-supervised video representation learning with odd-one-out networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5729–5738. IEEE, 2017.
  • [15] P. Földiák. Learning invariance from transformation sequences. Neural Computation, 3(2):194–200, 1991.
  • [16] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.
  • [17] I. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio. Deep learning, volume 1. MIT press Cambridge, 2016.
  • [18] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised learning of spatiotemporally coherent metrics. In Proceedings of the IEEE international conference on computer vision, pages 4086–4093, 2015.
  • [19] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In Computer vision and pattern recognition, 2006 IEEE computer society conference on, volume 2, pages 1735–1742. IEEE, 2006.
  • [20] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. science, 313(5786):504–507, 2006.
  • [21] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. Parallel distributed processing: Explorations in the microstructure of cognition, 1(282-317):2, 1986.
  • [22] D. Jayaraman and K. Grauman. Learning image representations tied to ego-motion. In Proceedings of the IEEE International Conference on Computer Vision, pages 1413–1421, 2015.
  • [23] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
  • [24] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
  • [25] P. Krähenbühl, C. Doersch, J. Donahue, and T. Darrell. Data-dependent initializations of convolutional neural networks. arXiv preprint arXiv:1511.06856, 2015.
  • [26] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • [27] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 2556–2563. IEEE, 2011.
  • [28] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. In European Conference on Computer Vision, pages 577–593. Springer, 2016.
  • [29] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In Advances in neural information processing systems, pages 801–808, 2007.
  • [30] H.-Y. Lee, J.-B. Huang, M. Singh, and M.-H. Yang. Unsupervised representation learning by sorting sequences. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 667–676. IEEE, 2017.
  • [31] T. Milbich, M. Bautista, E. Sutter, and B. Ommer. Unsupervised video understanding by reconciliation of posture similarities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4394–4404, 2017.
  • [32] I. Misra, C. L. Zitnick, and M. Hebert. Shuffle and learn: unsupervised learning using temporal order verification. In European Conference on Computer Vision, pages 527–544. Springer, 2016.
  • [33] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 737–744. ACM, 2009.
  • [34] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pages 69–84. Springer, 2016.
  • [35] M. Noroozi, H. Pirsiavash, and P. Favaro. Representation learning by learning to count. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5898–5906, 2017.
  • [36] A. Owens, P. Isola, J. McDermott, A. Torralba, E. H. Adelson, and W. T. Freeman. Visually indicated sounds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2405–2413, 2016.
  • [37] A. Owens, J. Wu, J. H. McDermott, W. T. Freeman, and A. Torralba. Ambient sound provides supervision for visual learning. In European Conference on Computer Vision, pages 801–816. Springer, 2016.
  • [38] D. Pathak, R. Girshick, P. Dollar, T. Darrell, and B. Hariharan. Learning features by watching objects move. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2701–2710, 2017.
  • [39] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2536–2544, 2016.
  • [40] L. Pinto, J. Davidson, and A. Gupta. Supervision via competition: Robot adversaries for learning tasks. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 1601–1608. IEEE, 2017.
  • [41] L. Pinto, D. Gandhi, Y. Han, Y.-L. Park, and A. Gupta. The curious robot: Learning visual representations via physical interactions. In European Conference on Computer Vision, pages 3–18. Springer, 2016.
  • [42] L. Pinto and A. Gupta. Learning to push by grasping: Using multiple tasks for effective learning. In Robotics and Automation (ICRA), 2017 IEEE International Conference on, pages 2161–2168. IEEE, 2017.
  • [43] Z. Ren and Y. J. Lee. Cross-domain self-supervised multi-task feature learning using synthetic imagery. arXiv preprint arXiv:1711.09082, 2017.
  • [44] B. C. Russell, W. T. Freeman, A. A. Efros, J. Sivic, and A. Zisserman. Using multiple segmentations to discover objects and their extent in image collections. In Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on, volume 2, pages 1605–1614. IEEE, 2006.
  • [45] R. Salakhutdinov and H. Larochelle. Efficient learning of deep boltzmann machines. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 693–700, 2010.
  • [46] G. A. Sigurdsson, O. Russakovsky, and A. Gupta. What actions are needed for understanding human actions in videos? arXiv preprint arXiv:1708.02696, 2017.
  • [47] S. Singh, A. Gupta, and A. A. Efros. Unsupervised discovery of mid-level discriminative patches. In Computer Vision–ECCV 2012, pages 73–86. Springer, 2012.
  • [48] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering objects and their location in images. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 1, pages 370–377. IEEE, 2005.
  • [49] P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report, University of Colorado at Boulder, Department of Computer Science, 1986.
  • [50] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human action classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
  • [51] E. S. Spelke. Principles of object perception. Cognitive science, 14(1):29–56, 1990.
  • [52] N. Srivastava, E. Mansimov, and R. Salakhudinov. Unsupervised learning of video representations using lstms. In International conference on machine learning, pages 843–852, 2015.
  • [53] O. Sumer, T. Dencker, and B. Ommer. Self-supervised learning of pose embeddings from spatiotemporal relations in videos. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 4308–4317. IEEE, 2017.
  • [54] J. Sun and J. Ponce. Learning discriminative part detectors for image classification and cosegmentation. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 3400–3407. IEEE, 2013.
  • [55] G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In European conference on computer vision, pages 140–153. Springer, 2010.
  • [56] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103. ACM, 2008.
  • [57] C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. In Advances In Neural Information Processing Systems, pages 613–621, 2016.
  • [58] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. arXiv preprint arXiv:1505.00687, 2015.
  • [59] X. Wang, K. He, and A. Gupta. Transitive invariance for self-supervised visual representation learning. arXiv preprint arXiv:1708.02901, 2017.
  • [60] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural computation, 14(4):715–770, 2002.
  • [61] R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. In European Conference on Computer Vision, pages 649–666. Springer, 2016.
  • [62] R. Zhang, P. Isola, and A. A. Efros. Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1058–1067, 2017.
  • [63] Y. Zhou and T. L. Berg. Temporal perception and prediction in ego-centric video. In Proceedings of the IEEE International Conference on Computer Vision, pages 4498–4506, 2015.