Slow and steady feature analysis: higher order temporal coherence in video

06/15/2015 ∙ by Dinesh Jayaraman, et al. ∙ The University of Texas at Austin

How can unlabeled video augment visual learning? Existing methods perform "slow" feature analysis, encouraging the representations of temporally close frames to exhibit only small differences. While this standard approach captures the fact that high-level visual signals change slowly over time, it fails to capture *how* the visual content changes. We propose to generalize slow feature analysis to "steady" feature analysis. The key idea is to impose a prior that higher order derivatives in the learned feature space must be small. To this end, we train a convolutional neural network with a regularizer on tuples of sequential frames from unlabeled video. It encourages feature changes over time to be smooth, i.e., similar to the most recent changes. Using five diverse datasets, including unlabeled YouTube and KITTI videos, we demonstrate our method's impact on object, scene, and action recognition tasks. We further show that our features learned from unlabeled video can even surpass a standard heavily supervised pretraining approach.


1 Introduction

Visual feature learning with deep neural networks has yielded dramatic gains for image recognition tasks in recent years [23, 38]. While the main techniques involved in these methods have been known for some time, a key factor in their recent success is the availability of large human-labeled image datasets like ImageNet [6]. Deep convolutional neural networks (CNNs) designed for image recognition typically have millions of parameters, necessitating notoriously large training databases to avoid overfitting.

Intuitively, however, visual learning should not be restricted to sets of category-labeled exemplars. Taking human learning as an obvious example, children build up visual representations through constant observation and action in the world. This hints that machine-learned representations would also be well served to exploit long-term video observations, even in the absence of deliberate labels. Indeed, researchers in cognitive science find that temporal coherence plays an important role in visual learning. For example, altering the natural temporal contiguity of visual stimuli hinders translation invariance in the inferior temporal cortex [27], and functions learned to preserve temporal coherence share behaviors observed in complex cells of the primary visual cortex [4].

Our goal is to exploit unlabeled video, as might be obtained freely from the web, to improve visual feature learning. In particular, we are interested in improving learned image representations for visual recognition tasks.

Figure 1: From unlabeled videos, we learn “steady features” that exhibit consistent feature transitions among sequential frames.

Prior work leveraging video for feature learning focuses on the concept of slow feature analysis (SFA). First formally proposed in [43], SFA exploits temporal coherence in video as “free” supervision to learn image representations invariant to small transformations. In particular, SFA encourages the following property: in a learned feature space, temporally nearby frames should lie close to each other, i.e., for a learned representation $\mathbf{z}$ and adjacent video frames $x_t$ and $x_{t+1}$, one would like $\mathbf{z}(x_t) \approx \mathbf{z}(x_{t+1})$. The rationale behind SFA rests on a simple observation: high-level semantic visual concepts associated with video frames typically change only gradually as a function of the pixels that compose the frames. Thus, representations useful for recognizing high-level concepts are also likely to possess this property of “slowness”. Another way to think about this is that scene changes between temporally nearby frames are usually small and represent label-preserving transformations. A slow representation will tolerate minor geometric or lighting changes, which is essential for high-level visual recognition tasks. The value of exploiting temporal coherence for recognition has been repeatedly verified in ongoing research, including via modern deep convolutional neural network implementations [31, 3, 15, 47, 13, 42].

However, existing approaches require only that high-level visual signals change slowly over time. Crucially, they fail to capture how the visual content changes over time. In contrast, our idea is to incorporate the steady visual dynamics of the world, learned from video. For instance, if trained on videos of walking people, slow feature-based approaches would only require that images of people in nearby poses be mapped close to one another. In contrast, we aim to learn a feature space in which frames from a novel video of a walking person would follow a smooth, predictable trajectory. A learned steady representation capturing such dynamics would be influenced not only by object motions, but also other types of visual transformations. For instance, it would capture how colors of objects in the sunlight change over the course of a day, or how the views of a static scene change as a camera moves around it.

To this end, we propose steady feature analysis—a generalization of slow feature learning. The key idea is to impose higher order temporal constraints on the learned visual representation. Beyond encouraging temporal coherence, i.e., small feature differences between nearby frame pairs, we would like to encourage consistent feature transitions across sequential frames. In particular, to preserve second order slowness, we look at triplets of temporally close frames $x_t$, $x_{t+1}$, $x_{t+2}$, and encourage the learned representation to have $\mathbf{z}(x_{t+2}) - \mathbf{z}(x_{t+1}) \approx \mathbf{z}(x_{t+1}) - \mathbf{z}(x_t)$. We develop a regularizer that uses contrastive loss over tuples of frames to achieve such mappings with CNNs. Whereas slow feature learning insists that the features not change too quickly, the proposed steady learning insists that—in whichever way the features are evolving—they continue to evolve in that same way in the immediate future. See Figure 1.

We hypothesize that higher-order temporal coherence could provide a valuable prior for recognition by embedding knowledge of the rich dynamics of the visual world into the feature space. We empirically verify this hypothesis using five datasets for a variety of recognition tasks, including object instance recognition, large-scale scene recognition, and action recognition from still images. In each case, by augmenting a small set of labeled exemplars with unlabeled video, the proposed method generalizes better than both a standard discriminative CNN as well as a CNN regularized with existing slow temporal coherence metrics [15, 31]. Our results reinforce that unsupervised feature learning from unconstrained video is an exciting direction, with promise to offset the large labeled data requirements of current state-of-the-art computer vision approaches by exploiting virtually unlimited unlabeled video.

2 Related Work

To build a robust object recognition system, the image representation must incorporate some degree of invariance to changes in pose, illumination, and appearance. While invariance can be manually crafted, such as with spatial pooling operations or gradient descriptors, it may also be learned. One approach often taken in the convolutional neural network (CNN) literature is to pad the training data by systematically perturbing raw images with label-preserving transformations (e.g., translation, scaling, intensity scaling, etc.) [37, 39, 8]. A good representation will ensure that the jittered versions originating from the same content all map close by in the learned feature space.

In a similar spirit, unlabeled video is an appealing resource for recovering invariance. The simple fact that things typically cannot change too quickly from frame to frame makes it possible to harvest sets of sequential images whose learned representations ought not to differ substantially. Slow feature analysis (SFA) [43, 17] leverages this notion to learn features from temporally adjacent video frames.

Recent work uses CNNs to explore the power of learning slow features, also referred to as “temporally coherent” features [31, 3, 47, 13, 42]. The existing methods either produce a holistic image embedding [31, 3, 13, 15], or else track local patches to learn a localized representation [47, 48, 42]. Most methods exploit the learned features for object recognition [31, 47, 3, 42], while others employ them for dimensionality reduction [15] or video frame retrieval [13]. In [31], a standard deep CNN architecture is augmented with a temporal coherence regularizer, then trained using video of objects on clean backgrounds rotating on a turntable. The method of [3] builds on this concept, proposing the use of decorrelation to avoid trivial solutions to the slow feature criterion, with applications to handwritten digit classification. The authors of [13] propose injecting an auto-encoder loss and explore training with unlabeled YouTube video. Building on SFA subspace ideas [43], researchers have also examined slow features for action recognition [46], facial expression analysis [45], future prediction [40], and temporal segmentation [32, 28].

Related to all the above methods, we aim to learn features from unlabeled video. However, whereas all the past work aims to preserve feature slowness, our idea is to preserve higher order feature steadiness. Our learning objective is the first to move beyond adjacent frame neighborhoods, requiring not only that sequential features change gradually, but also that they change in a similar manner in adjacent time intervals.

Another class of methods learns transformations [30, 29, 34]. Whereas the above feature learning methods (and ours) train with unlabeled video spanning various unspecified transformations, these methods instead train with pairs of images for which the transformation is known and/or consistent. Then, given a novel input, the model can be used to predict its transformed output. Rather than use learned transformations for extrapolation like these approaches, our goal is to exploit transformation patterns in unlabeled video to learn features that are useful for recognition.

Aside from inferring the transformation that implicitly separates a pair of training instances, another possibility is to explicitly predict the transformation parameters. Recent work considers how the camera’s ego-motion (e.g., as obtained from inertial sensors, GPS) can be exploited as supervision during CNN training [18, 2]. These methods also lack the higher-order relationships we propose. Furthermore, they require training data annotated with camera/ego-pose parameters, which prevents them from learning with “in the wild” videos (like YouTube) for which the camera was not instrumented with external sensors to record motor changes. In contrast, our method is free to exploit arbitrary unlabeled video data.

Several recent papers [5, 41, 14] have trained unsupervised image representations targeting specific narrow tasks. [5] learn efficient generative codes to synthesize images, while [41] learn features to predict pixel-level optical flow maps for video frames. Contemporary with an earlier version of our work [19], [14] proposed to learn features that vary linearly in time, for the specific task of extrapolating future video frames given a pair of past frames. They report qualitative results for toy video frame synthesis. While our formulation also encourages collinearity in the feature space, our aim is to learn generally useful features from real videos without supervision, and we report results on natural image scene, object, and action recognition tasks.

3 Approach

Given auxiliary raw unlabeled video, we wish to learn an embedding amenable to a supervised classification task. We pose this as a feature learning problem in a convolutional neural network, where the hidden layers of the network are tuned not only with the backpropagation gradients from a classification loss, but also with gradients computed from the unlabeled video that exploit its temporal steadiness.

3.1 Notation and framework overview

A supervised training dataset $\mathcal{D}_s = \{(x_i, y_i)\}_{i=1}^{N_s}$ provides target class labels $y_i$ for images $x_i$ (represented in pixel space). The unsupervised training dataset $\mathcal{D}_u = \{x_t\}_{t=1}^{N_u}$ consists of $N_u$ ordered video frames, where $x_t$ is the video frame at time instant $t$. (For notational simplicity, we will describe our method assuming that the unsupervised training data is drawn from a single continuous video, but it is seamless to train instead with a batch of unlabeled video clips.)

Importantly, we do not assume that the video necessarily stems from the same categories or even the same domain as images in $\mathcal{D}_s$. For example, in results we will demonstrate cases where $\mathcal{D}_s$ and $\mathcal{D}_u$ consist of natural scene images and autonomous vehicle video, respectively; or Web photos of human actions and YouTube video spanning dozens of distinct activities. The idea is that training with diverse unlabeled video should allow the learner to recover fundamental cues about how objects move, how scenes evolve over time, how occlusions occur, how illumination varies, etc., independent of their specific semantic content.

The full image-pixels-to-class label classifier we learn will have the compositional form $\hat{y}(x) = c(\mathbf{z}_{\theta}(x))$, where $\mathbf{z}_{\theta}$ is a $D$-dimensional feature map operating on images in the pixel space, and $c$ takes as input the feature map $\mathbf{z}_{\theta}(x)$ and outputs the class estimate. We learn a linear classifier $c$ represented by a weight matrix $W$ with rows $w_1, \dots, w_K$, one per class. At test time, a novel image $x$ is classified as $\hat{y}(x) = \arg\max_k w_k^{\top} \mathbf{z}_{\theta}(x)$.
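This test-time rule is easy to sketch. Below is a minimal numpy version; the function name and shapes are illustrative placeholders, not the paper's code, and the feature map is assumed to be precomputed:

```python
import numpy as np

def classify(W, z):
    """Predict a class label from a learned feature vector.

    W : (K, D) weight matrix, one row w_k per class.
    z : (D,) feature vector z_theta(x) for a test image.
    Returns the index k maximizing the score w_k . z.
    """
    scores = W @ z          # one linear score per class
    return int(np.argmax(scores))
```

For example, with a 2-class weight matrix whose rows are axis-aligned, a feature vector dominated by its second coordinate is assigned to class 1.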

To learn the classifier, we optimize an objective function of the form:

$\min_{W, \theta} \; L_s(W, \theta) + \lambda R_u(\theta)$,   (1)

where $L_s$ represents the supervised classification loss, $R_u$ represents an unsupervised regularization loss term, and $\lambda$ is the regularization hyperparameter. The parameter vector $\theta$ is common to both losses because they are both computed on the learned feature space $\mathbf{z}_{\theta}$. The supervised loss is a softmax loss:

$L_s(W, \theta) = -\frac{1}{N_s} \sum_{i=1}^{N_s} \log p_{y_i}(x_i)$,   (2)

where $p_{y_i}(x_i)$ is the softmax probability of the correct class $y_i$ for image $x_i$, and $N_s$ is the number of labeled training instances in $\mathcal{D}_s$.
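A minimal numpy sketch of this softmax loss, assuming the classifier scores have already been computed (illustrative code, not the paper's implementation):

```python
import numpy as np

def softmax_loss(scores, labels):
    """Mean negative log-likelihood of the correct classes.

    scores : (N, K) classifier scores for N labeled images.
    labels : (N,) integer class labels y_i.
    """
    # Subtract the row-wise max for numerical stability.
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Pick out the log-probability of each correct class.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With uniform scores over two classes the loss is exactly log 2, and it approaches zero as the correct class dominates.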

In the following, we first discuss how the unsupervised regularization loss $R_u$ may be constructed to exploit temporal smoothness in video (Sec 3.2). Then we generalize this to exploit temporal steadiness and other higher order coherence (Sec 3.3). Sec 3.4 then shows how a neural network corresponding to $\mathbf{z}_{\theta}$ may be trained to minimize Eq (1) above.

3.2 Review: First-order temporal coherence

As discussed above, slow feature analysis (SFA) [43] seeks to learn image features that vary slowly over the frames of a video, with the aim of learning useful invariances. This idea of exploiting “slowness” or “temporal coherence” for feature learning has been explored in the context of neural networks  [31, 15, 3, 47, 13]. We briefly review that underlying objective before introducing the proposed higher order generalization of temporal coherence.

A temporal neighbor pair dataset $\mathcal{D}_2$ is first constructed from the unlabeled video $\mathcal{D}_u$, as follows:

$\mathcal{D}_2 = \{\langle (i, j), p_{ij} \rangle : p_{ij} = \mathbb{1}(0 \le j - i \le T)\}$,   (3)

where $T$ is the temporal neighborhood size, and the subscript 2 signifies that the set consists of pairs. $\mathcal{D}_2$ indexes image pairs $(x_i, x_j)$ with neighbor-or-not binary annotations $p_{ij}$, automatically extracted from the video. We discuss the setting of $T$ in results. In general, one wants the time window spanned by $T$ to include motions that are small enough to be label-preserving, so that correct invariances are learned; in practice this is typically on the order of a second or less.
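The pair mining step can be sketched as follows. This is a simplified version for a single video that enumerates all pairs exhaustively; in practice one would subsample negatives to a target ratio (the function name is illustrative):

```python
def mine_pairs(num_frames, T):
    """Build a pair set D2 from one video of `num_frames` frames.

    Frames i, j with 0 < j - i <= T are labeled neighbors (p = 1);
    all other pairs are candidate negatives (p = 0).
    Returns a list of ((i, j), p) tuples for i < j.
    """
    pairs = []
    for i in range(num_frames):
        for j in range(i + 1, num_frames):
            p = 1 if (j - i) <= T else 0
            pairs.append(((i, j), p))
    return pairs
```

For a 4-frame video with T = 1, this yields three positive pairs (adjacent frames) and three negatives.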

With this dataset, the SFA property translates as $\mathbf{z}_{\theta}(x_i) \approx \mathbf{z}_{\theta}(x_j)$ whenever $p_{ij} = 1$. A simple formulation of this as an unsupervised regularizing loss would be as follows:

$R(\theta) = \sum_{(i,j) \in \mathcal{D}_2^{+}} d(\mathbf{z}_{\theta}(x_i), \mathbf{z}_{\theta}(x_j))$,   (4)

where $d(\cdot, \cdot)$ is a distance measure (e.g., $\ell_1$ in [31] and $\ell_2$ in [15]), and $\mathcal{D}_2^{+}$ denotes the subset of “positive” neighboring frame pairs, i.e., those for which $p_{ij} = 1$. This loss by itself admits problematic minimizers such as a constant feature map, which corresponds to $R(\theta) = 0$. Such solutions may be avoided by a contrastive [15] version of the loss function that also exploits “negative” (non-neighbor) pairs:

$R_2(\theta) = \sum_{(i,j)} p_{ij}\, d(\mathbf{z}_i, \mathbf{z}_j) + (1 - p_{ij}) \max(\delta - d(\mathbf{z}_i, \mathbf{z}_j), 0)$,   (5)

where $\mathbf{z}_i$ denotes $\mathbf{z}_{\theta}(x_i)$ and $\delta$ is a margin. As shown above, the contrastive loss penalizes distance between $\mathbf{z}_i$ and $\mathbf{z}_j$ when the pair are neighbors ($p_{ij} = 1$), and encourages distance between them when they are not ($p_{ij} = 0$), up to the margin $\delta$.
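For a single pair, this contrastive loss can be sketched in numpy. The sketch uses squared Euclidean distance for d; as noted above, the choice of distance varies across prior work:

```python
import numpy as np

def contrastive_loss(z_i, z_j, p, delta=1.0):
    """Contrastive slow-feature loss for one frame pair.

    Neighbors (p = 1) are pulled together; non-neighbors (p = 0)
    are pushed at least `delta` apart. Uses squared Euclidean
    distance for d (an illustrative choice).
    """
    d = np.sum((z_i - z_j) ** 2)
    return d if p == 1 else max(delta - d, 0.0)
```

Identical features incur zero loss as a positive pair but the full margin penalty as a negative pair; well-separated negatives incur no loss.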

3.3 Higher-order temporal coherence

The slow feature formulation of Eq (5) encourages feature maps that produce small first-order temporal derivatives in the learned feature space: $d\mathbf{z}/dt \approx \mathbf{0}$. This first-order temporal coherence is restricted to learning to ignore small jitters in the visual signal.

Our idea is to model higher order temporal coherence in the unlabeled video, so that the features can further capture rich structure in how the visual content changes over time. In the general case, this means we want a regularizer that encourages higher order derivatives to be small: $d^n\mathbf{z}/dt^n \approx \mathbf{0}$. Accordingly, we need to generalize from pairs of temporally close frames to tuples of frames.

In this work, we focus specifically on learning steady features—the second-order case, which can be encoded with triplets of frames, as we will see next. In a nutshell, whereas slow learning insists that the features not change too quickly, steady learning insists that feature changes in the immediate future remain similar to those in the recent past.

Figure 2: “Siamese” network configuration (shared weights for the layer stacks) with portions corresponding to the 3 terms $L_s$, $R_2$ and $R_3$ in our objective. $R_2$ and $R_3$ compose the unsupervised loss $R_u$ in Eq (1). $L_s$ is the supervised loss for recognition in static images.

First, we create a triplet dataset $\mathcal{D}_3$ from the unlabeled video as:

$\mathcal{D}_3 = \{\langle (l, m, n), p_{lmn} \rangle : p_{lmn} = \mathbb{1}(0 \le m - l = n - m \le T)\}$.   (6)

$\mathcal{D}_3$ indexes image triplets $(x_l, x_m, x_n)$ with binary annotations $p_{lmn}$ indicating whether they are in-sequence, evenly spaced frames in the video, within a temporal neighborhood $T$. In practice, we select “negatives” ($p_{lmn} = 0$) from triplets whose first two frames are temporal neighbors but whose third frame falls well outside the neighborhood, to provide a buffer and avoid noisy negatives.
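A simplified sketch of triplet mining under these rules. The exact negative-sampling window is a detail deferred to the paper's text; the `buffer` parameter below is an illustrative assumption for how far beyond the neighborhood the negative's third frame is pushed:

```python
def mine_triplets(num_frames, T, buffer=2):
    """Build a triplet set D3 from one video (illustrative sketch).

    Positives: in-sequence, evenly spaced triplets (l, l+s, l+2s)
    with spacing s <= T. Negatives: the first gap is a valid
    neighborhood step, but the third frame lies beyond a buffer of
    buffer * T frames, to avoid noisy negatives.
    Returns a list of ((l, m, n), p) tuples.
    """
    triplets = []
    for l in range(num_frames):
        for s in range(1, T + 1):            # positive spacings
            if l + 2 * s < num_frames:
                triplets.append(((l, l + s, l + 2 * s), 1))
        m = l + 1
        n = m + buffer * T + 1               # third frame outside buffer
        if n < num_frames:
            triplets.append(((l, m, n), 0))
    return triplets
```

Every positive triplet produced this way is evenly spaced, matching the indicator in Eq (6).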

We construct our steady feature analysis regularizer using these triplets, as follows:

$R_3(\theta) = \sum_{(l,m,n)} p_{lmn}\, d(\mathbf{z}_m - \mathbf{z}_l,\, \mathbf{z}_n - \mathbf{z}_m) + (1 - p_{lmn}) \max(\delta - d(\mathbf{z}_m - \mathbf{z}_l,\, \mathbf{z}_n - \mathbf{z}_m), 0)$,   (7)

where $\mathbf{z}_l$ is again shorthand for $\mathbf{z}_{\theta}(x_l)$ and the loss takes the contrastive form defined above. For positive triplets—meaning those occurring in sequence and within a temporal neighborhood—the above loss penalizes distance between the adjacent pairwise feature difference vectors. For negative triplets, it encourages this distance, up to a maximum margin distance $\delta$. Effectively, $R_3$ encourages the feature representations of positive triplets to be collinear, i.e., $\mathbf{z}_n - \mathbf{z}_m \approx \mathbf{z}_m - \mathbf{z}_l$. See Figure 1.
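The per-triplet steady loss is the same contrastive form applied to the two adjacent difference vectors. A minimal numpy sketch (squared Euclidean distance assumed, as in the pair sketch earlier; not the paper's code):

```python
import numpy as np

def steady_loss(z_l, z_m, z_n, p, delta=1.0):
    """Second-order 'steady' contrastive loss for one frame triplet.

    Compares the adjacent feature difference vectors: positive
    triplets (p = 1) should have equal (collinear) differences,
    negative triplets (p = 0) should differ by at least `delta`.
    """
    d = np.sum(((z_m - z_l) - (z_n - z_m)) ** 2)
    return d if p == 1 else max(delta - d, 0.0)
```

A perfectly collinear positive triplet (features 0, 1, 2 on a line) incurs zero loss, while the same triplet as a negative pays the full margin.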

Our final optimization objective combines the first and second order losses (Eq (5) and (7)) into the unsupervised regularization term:

$R_u(\theta) = R_2(\theta) + \lambda' R_3(\theta)$,   (8)

where $\lambda'$ controls the relative impact of the two terms. Recall this regularizer accompanies the classification loss in the main objective of Eq (1).

Beyond second-order coherence:

The proposed framework generalizes naturally to the $n$-th order, by defining a regularizer analogously to Eq (7) using a contrastive loss over $n$-th order discrete derivatives, computed over recursive differences on $(n+1)$-tuples of sequential frames. While in principle higher $n$ would more thoroughly exploit patterns in video, there are potential practical drawbacks. As $n$ grows, the number of samples would likely also need to grow to cover the space of $n$-frame motion patterns, requiring more training time, compute power, and memory. Besides, discrete $n$-th derivatives computed over large $n$-frame time windows may grow less reliable, assuming steadiness degrades over longer temporal windows in typical visual phenomena. Given these considerations, we focus on second-order steadiness combined with slowness, and find that slow and steady does indeed win the race (Sec 4). The empirical question of applying $n > 2$ is left for future work.
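The recursive differencing that produces the $n$-th order discrete derivative can be sketched in a few lines (an illustrative helper, not part of the paper's code):

```python
import numpy as np

def nth_difference(features):
    """n-th order discrete derivative of a sequence of (n+1)
    feature vectors, computed by recursive differencing.

    features : (n+1, D) array of sequential frame features.
    Returns a (D,) vector; driving it toward zero enforces
    n-th order temporal coherence.
    """
    f = np.asarray(features, dtype=float)
    while f.shape[0] > 1:
        f = np.diff(f, axis=0)   # one differencing step per order
    return f[0]
```

A linearly evolving (collinear) feature sequence has zero second difference, while a "quadratic" sequence like 0, 1, 4 does not.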

Equivariance-inducing property of $R_3$:

While first-order coherence encourages invariance, the proposed second-order coherence may be seen as encouraging the more general property of equivariance. $\mathbf{z}$ is equivariant to an image transformation $g$ if there exists some “simple” function $f_g$ such that $\mathbf{z}(g x) \approx f_g(\mathbf{z}(x))$. Equivariance has been found to be useful for visual representations [16, 36, 26, 18]. To see how feature steadiness is related to equivariance, consider a video with frames $\{x_t\}$. Given a small temporal neighborhood, frames $x_t$ and $x_{t+1}$ must be related by a small transformation $g$ (small because of the first order temporal coherence assumption), i.e., $x_{t+1} = g x_t$. Assuming second order coherence of video, this transformation itself remains approximately constant in a small temporal neighborhood, so that, in particular, $x_{t+2} \approx g x_{t+1}$.

Now, for equivariant features $\mathbf{z}$, by the definition of equivariance and the observations above, $\mathbf{z}(x_{t+1}) \approx f_g(\mathbf{z}(x_t))$ and $\mathbf{z}(x_{t+2}) \approx f_g(\mathbf{z}(x_{t+1}))$. Further, given that $g$ is a small transformation, $f_g$ is well-approximated in a small neighborhood by its first order Taylor approximation, so that: (1) $\mathbf{z}(x_{t+1}) \approx \mathbf{z}(x_t) + \mathbf{c}_g$, and (2) $\mathbf{z}(x_{t+2}) \approx \mathbf{z}(x_{t+1}) + \mathbf{c}_g$, for a constant offset $\mathbf{c}_g$, which together yield $\mathbf{z}(x_{t+2}) - \mathbf{z}(x_{t+1}) \approx \mathbf{z}(x_{t+1}) - \mathbf{z}(x_t)$. In other words, under the realistic assumption that natural videos evolve smoothly, within small temporal neighborhoods, feature equivariance is equivalent to the second order temporal coherence formulated in Eq (7), with $(l, m, n)$ set to $(t, t+1, t+2)$ respectively. This connection between equivariance and the second order temporal coherence induced by $R_3$ helps motivate why we can expect our feature learning scheme to benefit recognition.
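The smoothness argument above can be checked numerically on a toy example: along a smooth feature trajectory sampled at a small time step eps, the first difference shrinks like eps while the second difference shrinks like eps squared, so sequential features become nearly collinear. All names below are illustrative, and the sinusoidal trajectory is a stand-in for features of a steadily transforming scene:

```python
import numpy as np

def second_difference_norm(z0, z1, z2):
    """Norm of the second-order discrete difference of three features."""
    return np.linalg.norm((z2 - z1) - (z1 - z0))

def traj(t):
    """A smooth toy feature trajectory standing in for z(x_t)."""
    return np.array([np.sin(t), np.cos(t)])

# Sample three sequential features a small time step apart.
eps = 1e-2
z0, z1, z2 = traj(0.0), traj(eps), traj(2 * eps)
first = np.linalg.norm(z1 - z0)             # O(eps)
second = second_difference_norm(z0, z1, z2)  # O(eps**2)
```

Here the second difference is roughly a hundred times smaller than the first, matching the expected eps-to-eps-squared scaling.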

3.4 Neural networks for the feature maps

We use a convolutional neural network (CNN) architecture to represent the feature mapping function $\mathbf{z}_{\theta}$. The parameter vector $\theta$ represents the CNN’s learned layer weight matrices. See Sec 4.1 and Supp for architecture choices.

To optimize Eq (1) with the regularizer in Eq (8), we employ standard mini-batch stochastic gradient descent (as implemented in [20]) in a “Siamese” setup, with 6 replicas of the stack $\mathbf{z}_{\theta}$, as shown in Fig 2: 1 stack for $L_s$ (input: supervised training samples from $\mathcal{D}_s$), 2 for $R_2$ (input: temporal neighbor pairs from $\mathcal{D}_2$) and 3 for $R_3$ (input: triplets from $\mathcal{D}_3$). The shared layers are initialized to the same random values and modified by the same gradients (the sum of the gradients of the 3 terms) in each training iteration, so they remain identical throughout. See Supp for details.

Task            | Img/frame dims | #Classes | Recog. Task | #Train | #Test | Unsup. Input Type | #Pairs (1:3) | #Triplets (1:1)
NORB→NORB       | 96×96×1        | 25       | object      | 150    | 8100  | pose-reg. images  | 50,000       | 75,000
KITTI→SUN       | 32×32×1        | 397      | scene       | 2382   | 7940  | car-mounted video | 100,000      | 100,000
HMDB→PASCAL-10  | 32×32×3        | 10       | action      | 50     | 2000  | web video         | 100,000      | 100,000

Datasets    | NORB | KITTI | HMDB
sfa-1 [31]  | 0.95 | 31.04 | 2.70
sfa-2 [15]  | 0.91 | 8.39  | 2.27
ssfa (ours) | 0.53 | 7.79  | 1.78

Table 1: Left: Statistics for the unsupervised and supervised datasets ($\mathcal{D}_u \to \mathcal{D}_s$) used in the recognition tasks (positive to negative ratios for pairs and triplets indicated in headers). Right: Sequence completion normalized correct candidate rank. Lower is better. (See Sec 4.2.)

4 Experiments

We test our approach using five challenging public datasets for three tasks—object, scene, and action recognition—spanning 432 categories. We also analyze its ability to learn higher order temporal coherence with a sequence completion task.

4.1 Experimental setup

Our three recognition tasks (specified by the names of the unsupervised and supervised datasets as $\mathcal{D}_u \to \mathcal{D}_s$) are NORB→NORB object recognition, KITTI→SUN scene recognition and HMDB→PASCAL-10 single-image action recognition. Table 1 (left) summarizes key dataset statistics.

Supervised datasets $\mathcal{D}_s$:

(1) NORB [25] has 972 images each of 25 toys against clean backgrounds captured over a grid of camera elevations and azimuths. (2) SUN [44] contains Web images of 397 scene categories. (3) PASCAL-10 [9] is a still-image human action recognition dataset with 10 categories. For all three datasets, we use few labeled training images (see Table 1), since unsupervised regularization schemes should have most impact when labeled data is scarce [18, 31]. This is an important scenario, given the “long tail” of categories lacking ample labeled exemplars.

Unsupervised datasets $\mathcal{D}_u$:

(1) NORB consists of pose-registered turntable images (not video), but it is straightforward to generate the pairs and triplets for $\mathcal{D}_2$ and $\mathcal{D}_3$ assuming smooth motions in the annotated pose space. We mine these pairs and triplets from among the 648 images per class that are not used for testing. (2) KITTI [10] has videos captured from a car-mounted camera in a variety of locations around the city of Karlsruhe. Scenes are largely static except for traffic, but there is large and systematic camera motion. (3) HMDB [24] contains 6849 short Web and movie video clips containing 51 diverse actions. We select 1000 clips at random. While some videos include camera motion (e.g., to follow an athlete running), most have stationary cameras and small human pose-change motions. The time window $T$ is a hyperparameter of both our method and the existing SFA methods. We fix $T$ separately for KITTI and HMDB, based on cross-validation for best performance by the SFA baselines.

Baselines:

We compare our slow-and-steady feature analysis approach (ssfa) to four methods, including two key existing methods for learning from unlabeled video. The three unsupervised baselines are: (1) unreg: an unregularized network trained only on the supervised training samples $\mathcal{D}_s$. (2) sfa-1: an SFA approach proposed in [31] that uses the $\ell_1$ distance for $d(\cdot, \cdot)$ in Eq (5). (3) sfa-2: another SFA variant [15] that sets the distance function $d(\cdot, \cdot)$ to the $\ell_2$ distance in Eq (5). The sfa methods train with the unlabeled pairs, while ssfa trains with both the pairs and triplets.

These comparisons are most crucial to gauge the impact of the proposed approach versus the state of the art for feature learning with unlabeled video. However, we are also interested to what extent learning from unlabeled video can even start to compete with methods learned from heavily labeled data (which costs substantial human effort). Thus, we also compare against a supervised pretraining and finetuning approach denoted sup-ft (details in Sec 4.3).

Network architectures:

For the NORB→NORB task, we use a fully connected network architecture: input → 25 hidden units → ReLU nonlinearity → $D$=25 features. For the other two tasks, we resize images to 32×32 to allow fast and thorough experimentation with standard CNN architectures known to work well with tiny images [1], producing $D$=64-dimensional features. Recognition tasks on 32×32 images are much harder than with full-sized images, so these are highly challenging tasks. All networks are optimized with Nesterov-accelerated stochastic gradient descent until validation classification loss converges or begins to increase. Optimization hyperparameters are selected greedily through cross-validation in the following order: base learning rate, $\lambda$, and $\lambda'$ (starting from $\lambda = \lambda' = 0$). The relative scales of the margin parameters of the contrastive losses in Eq (5) and Eq (7) are validated per dataset. See Supp for more details on the 32×32 architecture, data pre-processing and optimization.

Figure 3: Sequence completion examples from all three video datasets. In each instance, a query pair is presented on the left, and the top three completion candidates as ranked by our method are presented on the right. Ground truth frames are marked with black highlights.

4.2 Quantifying steadiness

First we use a sequence completion task to analyze how well the desired steadiness property is induced in the learned features. We compose a set of sequential triplets from the pool of test images, formed similarly to the positives in Eq (6). At test time, given the first two images of each triplet, the task is to predict what the third looks like.

We apply our ssfa to infer the missing triplet item as follows. Recall that our formulation encourages sequential triplets to be collinear in the feature space. As a result, given the features $\mathbf{z}(a)$ and $\mathbf{z}(b)$ of the first two frames, we can extrapolate the feature of the third frame as $\mathbf{z}^*(c) = 2\mathbf{z}(b) - \mathbf{z}(a)$. To backproject $\mathbf{z}^*(c)$ to the image space, we identify an image closest to it in feature space. Specifically, we take a large pool of candidate images, map them all to their features via $\mathbf{z}_{\theta}$, and rank them in increasing order of distance from $\mathbf{z}^*(c)$. The normalized rank of the correct candidate is now a measure of sequence completion performance. See Supp for details.
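The extrapolate-and-rank procedure can be sketched directly from the collinearity property (a minimal numpy version; function and variable names are illustrative):

```python
import numpy as np

def complete_sequence(z_a, z_b, candidate_feats):
    """Rank candidate images for sequence completion.

    Extrapolates the third feature as z* = z_b + (z_b - z_a),
    i.e., continuing the feature-space line, then ranks the
    candidate pool by Euclidean distance to z*.
    candidate_feats : (N, D) features of the candidate images.
    Returns candidate indices, closest first.
    """
    z_star = 2 * z_b - z_a
    dists = np.linalg.norm(candidate_feats - z_star, axis=1)
    return np.argsort(dists)
```

With query features at the origin and at (1, 0), the candidate at (2, 0) lies exactly on the extrapolated line and is ranked first.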

Tab 1 (right) reports the mean normalized rank over all query pairs; lower is better. Clearly, our ssfa regularization induces steadiness in the feature space, reducing the rank nearly by half compared to baseline regularizers on NORB, and by large margins on HMDB too. Our regularizer is closely matched to this task, so these gains are expected. Note however that these gains are reported after training to minimize the joint objective, which includes $L_s$ and $R_2$ apart from $R_3$, and with regularization weights tuned for recognition tasks.

Fig 3 shows sequence completion examples from all 3 video datasets. Particularly impressive results are the third NORB example (where despite a difficult viewpoint, the sequence is completed correctly by the top-ranked candidate), and the third HMDB example, where a highly dynamic baseball pitch sequence is correctly completed by the third ranked image. The top-ranked candidate for this example illustrates a common failure mode—the second image of the query pair is itself picked to complete the sequence. This may reflect the fact that HMDB sequences in particular exhibit very little motion (camera motions rare, mostly small object motions). Usually, as in the third KITTI example, even the top-ranked candidates other than the ground truth frame are highly plausible completions.

4.3 Recognition results

Unlabeled video as a prior for supervised recognition:

Now we report results on the 3 unsupervised-to-supervised recognition tasks. Table 2 shows the results. Our ssfa method comprehensively outperforms not only the purely supervised unreg baseline, but also the popular sfa-1 and sfa-2 slow feature learning approaches, beating the best baseline for each task by 9%, 36% and 9% relative, respectively. The results on KITTI→SUN and HMDB→PASCAL-10 are particularly impressive because the unsupervised and supervised dataset domains are mismatched. All KITTI data comes from a single car-mounted road-facing camera driving through the streets of one city, whereas SUN images are downloaded from the Web, captured by different cameras from diverse viewpoints, and cover 397 scene categories mostly unrelated to roads. PASCAL-10 images are bounding-box-cropped and therefore centered on single persons, while HMDB videos, which are mainly clips from movies and Web videos, often feature multiple people, are not as tightly focused on the person performing the action, and are of low quality, sometimes with overlaid text, etc.

Task type   | Objects     | Scenes      | Scenes             | Actions
Datasets    | NORB→NORB   | KITTI→SUN   | KITTI→SUN          | HMDB→PASCAL-10
Methods     | [25 cls]    | [397 cls]   | [397 cls, top-10]  | [10 cls]
random      | 4.00        | 0.25        | 2.52               | 10.00
unreg       | 24.64±0.85  | 0.70±0.12   | 6.10±0.67          | 15.34±0.28
sfa-1 [31]  | 37.57±0.85  | 1.21±0.14   | 8.24±0.25          | 19.26±0.45
sfa-2 [15]  | 39.23±0.94  | 1.02±0.12   | 6.78±0.32          | 19.04±0.24
ssfa (ours) | 42.83±0.33  | 1.65±0.04   | 9.19±0.10          | 20.95±0.13

Table 2: Recognition results (mean ± standard error of accuracy % over 5 repetitions) (Sec 4.3). Our method outperforms both existing slow feature/temporal coherence methods and the unregularized baseline substantially, across three distinct recognition tasks.

Aside from the diversity of tasks (object, scene, and action recognition), our unsupervised datasets also exhibit diverse types of motion. NORB is generated from planned, discrete camera manipulations around a central object of interest. The KITTI camera moves through a real largely static landscape in smooth motions on roads at varying speeds. HMDB videos on the other hand are usually captured from stationary cameras with a mix of large and small foreground and background object motions. Even the dynamic camera videos in HMDB are sometimes captured from hand-held devices leading to jerky motions, where our temporal steadiness assumptions might be stressed.

Pairing unsupervised and supervised datasets:

Thus far, our pairings of unsupervised and supervised datasets reflect our attempt to learn from video that a priori seems related to the ultimate recognition task, e.g., HMDB human action videos are paired with PASCAL-10 Action still images. However, as discussed above, the domains are only roughly aligned. Curious about the impact of the choice of unlabeled video data, we next try swapping out HMDB for KITTI in the PASCAL action recognition task. On this new KITTI→PASCAL-10 task, we still easily outperform our nearest baseline, although our gain drops by 0.9% (sfa-2: 19.06% vs. our ssfa: 20.01%). Despite the fact that the human motion dynamics of HMDB ostensibly match the action recognition task better than the egomotion dynamics of KITTI (where barely any people are visible), we maintain our advantage over the purely slow methods. This indicates that there is reasonable flexibility in the choice of unlabeled videos fed to ssfa.

Increasing supervised training sets:

Thus far, we have kept labeled sets small to simulate the “long tail” of categories with scarce training samples, where priors like ours and the baselines’ have the most impact. In a preliminary study with larger training pools, we now increase SUN training set sizes from 6 to 20 samples per class for KITTI→SUN. Our method retains a roughly 20% relative gain over existing slow methods (ssfa: 3.24% vs. sfa-2: 2.65%). This suggests our approach is valuable even with larger supervised training sets.

Varying unsupervised training set size:

To observe the effect of unsupervised training set size, we now restrict ssfa to use varying-sized subsets of unlabeled video on the HMDB→PASCAL-10 task. Performance scales roughly log-linearly with the duration of video observed: at 3, 12.5, 25, and 100% respectively of the full unlabeled dataset (32k frames), performance is 18.06, 19.74, 20.36, and 20.95% (see Supp). This suggests that even larger gains may be achieved simply by training ssfa with more freely available unlabeled video.
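As a rough sanity check on the log-linear trend, one can regress the four reported accuracies against the log of the data fraction (a minimal numpy sketch; the fractions and accuracies are the values quoted above, and the fit itself is our illustration, not part of the paper's protocol):

```python
import numpy as np

# Fractions of the full 32k-frame unlabeled set, and reported accuracies (%)
fraction = np.array([0.03, 0.125, 0.25, 1.0])
accuracy = np.array([18.06, 19.74, 20.36, 20.95])

# Least-squares fit: accuracy ~ slope * log(fraction) + intercept
slope, intercept = np.polyfit(np.log(fraction), accuracy, 1)
print(f"accuracy gain per log-unit of data: {slope:.2f}")
```

A clearly positive slope with small residuals is what "roughly log-linear" amounts to here.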

Purely unsupervised feature learning:

We now evaluate the usefulness of features trained to optimize the unsupervised ssfa loss (Eq (8)) alone. Features trained on HMDB are evaluated at various stages of training, on the task of k-nearest neighbor classification on PASCAL-10 (k = 5, and 100 training images per action). Starting at 17.8% classification accuracy for randomly initialized networks, unsupervised ssfa training steadily improves the discriminative ability of features to 19.62, 20.32 and 22.14% after 1, 2 and 3 passes respectively over training data (see Supp). This shows that ssfa can train useful image representations even without jointly optimizing a supervised objective.
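This evaluation protocol is plain k-nearest-neighbor classification on frozen features. A minimal numpy sketch (the feature arrays, toy data, and k value here are placeholders, not the paper's exact setup):

```python
import numpy as np

def knn_accuracy(train_feats, train_labels, test_feats, test_labels, k=5):
    """Classify each test feature by majority vote among its k nearest
    training features (Euclidean distance); return accuracy."""
    correct = 0
    for x, y in zip(test_feats, test_labels):
        dists = np.linalg.norm(train_feats - x, axis=1)
        nearest = train_labels[np.argsort(dists)[:k]]
        correct += int(np.argmax(np.bincount(nearest)) == y)
    return correct / len(test_feats)

# Toy usage: two well-separated Gaussian "feature" clusters
rng = np.random.default_rng(0)
tr = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
tr_y = np.array([0] * 50 + [1] * 50)
te = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(5, 1, (20, 8))])
te_y = np.array([0] * 20 + [1] * 20)
print(knn_accuracy(tr, tr_y, te, te_y, k=5))
```

Because the classifier has no trainable parameters, any accuracy improvement over training passes is attributable to the features themselves.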

Comparison to supervised pretraining and finetuning:

Recently, a two-stage supervised pretraining and finetuning strategy (sup-ft) has emerged as the leading approach to solving visual recognition problems with limited training data, where high-capacity models like deep neural networks may not be directly learned [11, 7, 33, 21]. In the first stage (“supervised pretraining”), a neural network “NET1” is trained on a related problem for which large labeled datasets are available. In a second stage (“finetuning”), the weights from NET1 are used to initialize a second network (“NET2”) with similar architecture. NET2 is then trained on the target task, using reduced learning rates to only minimally modify the features learned in NET1.

In principle, completely unsupervised feature learning approaches like ours have important advantages over the sup-ft paradigm. In particular, (1) they can leverage essentially infinite unlabeled data without requiring expensive human labeling effort, potentially allowing higher capacity models to be learned, and (2) they do not require the existence of large “related” supervised datasets from which features may be meaningfully transferred to the target task. While the pursuit of these advantages continues to drive vigorous research, unsupervised feature learning methods still underperform supervised pretraining for image classification tasks, where great effort has gone into curating large labeled databases, e.g., ImageNet [6], CIFAR [22].

Figure 4: Comparison to CIFAR-100 supervised pretraining sup-ft, at various supervised training set sizes. Flat dashed lines reflect that our method (and SFA) always use zero additional labels.

As a final experiment, we examine how the proposed unsupervised feature learning idea competes with the popular supervised pretraining model. To this end, we adopt the CIFAR-100 dataset, consisting of 100 diverse object categories, as a basis for supervised pretraining. (We choose CIFAR-100 for its compatibility with the 32×32 images used throughout our results, which let us leverage standard CNN architectures known to work well with tiny images [1].) The new baseline sup-ft trains NET1 on CIFAR (see Supp), then finetunes NET2 for either the PASCAL-10 action or SUN scene recognition task using the exact same (few) labeled instances given to our method. In parallel, our method “pretrains” only via the ssfa regularizer learned with unlabeled HMDB / KITTI video respectively for the two tasks. Our method uses zero labeled CIFAR data.

Fig 4 shows the results. On PASCAL-10 action recognition (left), our method significantly outperforms sup-ft pretrained with all 50,000 images of CIFAR-100! Gathering image labels from the crowd for large multi-way problems can take on average 1 minute per image [35], meaning we get better results while also saving roughly 830 hours of human effort. On SUN scene recognition (right), ssfa outperforms sup-ft with 5K labels and remains competitive even when the supervised method has a 17,500 label advantage. However, sup-ft-50k’s advantage on the SUN task is more noticeable; its gain is similar to our gain over the best slow-feature method.

The upward trend in accuracy for sup-ft with more CIFAR-100 labeled data indicates that it successfully transfers generic recognition cues to the new tasks. On the other hand, the fact that it fares worse on PASCAL actions than SUN scenes reinforces that supervised transfer depends on having large curated datasets in a strongly related domain. In contrast, our approach successfully “transfers” what it learns from purely unlabeled video. In short, our method can achieve better results with substantially less supervision. More generally, we view it as an exciting step towards unlabeled video bridging the gap between unsupervised and supervised pretraining for visual recognition.

5 Conclusion

We formulated an unsupervised feature learning approach that exploits higher order temporal coherence in unlabeled video, and demonstrated its powerful impact on several recognition tasks. Despite over 15 years of research on slow feature analysis (SFA), its variants, and its applications, to the best of our knowledge we are the first to identify that SFA is only the first order approximation of a more general temporal coherence idea. This basic observation leads to our intuitive approach, which can be easily plugged into applications where first order temporal coherence has already been found useful [31, 3, 47, 13, 42, 15, 46, 45, 32, 28]. To our knowledge, ours are the first results where unsupervised learning from video actually surpasses the accuracy of today’s favored approach, heavily supervised pretraining.

Acknowledgements: We thank Texas Advanced Computing Center for their generous support. This work was supported in part by ONR YIP N00014-15-1-2291.

References

  • [1] Cuda-convnet. https://code.google.com/p/cuda-convnet/.
  • [2] P. Agrawal, J. Carreira, and J. Malik. Learning to see by moving. ICCV, 2015.
  • [3] J. Bergstra and Y. Bengio. Slow, decorrelated features for pretraining complex cell-like networks. In NIPS, 2009.
  • [4] P. Berkes and L. Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of vision, 5(6), 2005.
  • [5] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016.
  • [6] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
  • [7] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013.
  • [8] A. Dosovitskiy, J.T. Springenberg, M. Riedmiller, and T. Brox. Discriminative Unsupervised Feature Learning with Convolutional Neural Networks. NIPS, 2014.
  • [9] M. Everingham, L. Van Gool, C. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. IJCV, 2010.
  • [10] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the KITTI vision benchmark suite. CVPR, 2012.
  • [11] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
  • [12] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. AISTATS, 2010.
  • [13] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised Learning of Spatiotemporally Coherent Metrics. ICCV, 2014.
  • [14] R. Goroshin, M. Mathieu, and Y. LeCun. Learning to linearize under uncertainty. In NIPS, 2015.
  • [15] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality Reduction by Learning an Invariant Mapping. CVPR, 2006.
  • [16] G. Hinton, A. Krizhevsky, and S.D. Wang. Transforming Auto-Encoders. ICANN, 2011.
  • [17] J. Hurri and A. Hyvarinen. Simple-cell-like receptive fields maximize temporal coherence in natural video. Neural Computation, 15(3), 2003.
  • [18] D. Jayaraman and K. Grauman. Learning image representations tied to ego-motion. ICCV, 2015.
  • [19] D. Jayaraman and K. Grauman. Slow and Steady Feature Analysis: Higher Order Temporal Coherence in Video. CoRR, abs/1506.04714, June 2015.
  • [20] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv, 2014.
  • [21] S. Karayev, M. Trentacoste, H. Han, A. Agarwala, T. Darrell, A. Hertzmann, and H. Winnemoeller. Recognizing image style. BMVC, 2014.
  • [22] A. Krizhevsky. Learning multiple layers of features from tiny images, 2009.
  • [23] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
  • [24] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: a large video database for human motion recognition. ICCV, 2011.
  • [25] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. CVPR, 2004.
  • [26] K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. CVPR, 2015.
  • [27] N. Li and J. DiCarlo. Unsupervised natural experience rapidly alters invariant object representation in visual cortex. Science, 321, 2008.
  • [28] S. Liwicki, S. Zafeiriou, and M. Pantic. Incremental slow feature analysis with indefinite kernel for online temporal video segmentation. In ACCV, 2012.
  • [29] R. Memisevic. Learning to relate images. PAMI, 2013.
  • [30] V. Michalski, R. Memisevic, and K. Konda. Modeling Deep Temporal Dependencies with Recurrent Grammar Cells. NIPS, 2014.
  • [31] H. Mobahi, R. Collobert, and J. Weston. Deep Learning from Temporal Coherence in Video. ICML, 2009.
  • [32] F. Nater, H. Grabner, and L. Van Gool. Temporal relations in videos for unsupervised activity analysis. In BMVC, 2011.
  • [33] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. CVPR, 2014.
  • [34] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv, 2014.
  • [35] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge. IJCV, 2015.
  • [36] U. Schmidt and S. Roth. Learning rotation-aware features: From invariant priors to equivariant descriptors. CVPR, 2012.
  • [37] P. Simard, D. Steinkraus, and J.C. Platt. Best practices for convolutional neural networks applied to visual document analysis. ICDAR, 2003.
  • [38] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2014.
  • [39] P. Vincent, H. Larochelle, Y. Bengio, and P.A. Manzagol. Extracting and composing robust features with denoising autoencoders. ICML, 2008.
  • [40] C. Vondrick, H. Pirsiavash, and A. Torralba. Anticipating the future by watching unlabeled video. CoRR, abs/1504.08023, 2015.
  • [41] J. Walker, A. Gupta, and M. Hebert. Dense optical flow prediction from a static image. In ICCV, 2015.
  • [42] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. ICCV, 2015.
  • [43] L. Wiskott and T. J. Sejnowski. Slow feature analysis: unsupervised learning of invariances. Neural computation, 2002.
  • [44] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. CVPR, 2010.
  • [45] L. Zafeiriou, M. Nicolaou, S. Zafeiriou, S. Nikitidis, and M. Pantic. Learning slow features for behaviour analysis. In ICCV, 2013.
  • [46] Z. Zhang and D. Tao. Slow feature analysis for human action recognition. PAMI, 2012.
  • [47] W. Zou, S. Zhu, K. Yu, and A. Ng. Deep learning of invariant features via simulated fixations in video. NIPS, 2012.
  • [48] W. Y. Zou, A. Y. Ng, and K. Yu. Unsupervised learning of visual invariance with temporal coherence. In NIPS Workshop on Deep Learning, 2011.

6 Appendix

We now provide supplementary details on (1) the CNN architecture used in our SUN and PASCAL-10 experiments, (2) the sequence completion task used to quantify steadiness, (3) our experiments with varying sizes of unsupervised training datasets, (4) our experiments with purely unsupervised feature learning, (5) pre-processing steps for the datasets used in our experiments, (6) optimization-related details, and (7) details of the supervised pretraining and finetuning baseline sup-ft from the paper. We also show samples of all the real image datasets used in our experiments.

32×32 image CNN architecture:

The 32×32 CNN architecture [1] representing the learned feature mapping, used for the KITTI→SUN and HMDB→PASCAL-10 tasks, is shown in Fig 5.

Figure 5: 32×32 CNN architecture used for the KITTI→SUN and HMDB→PASCAL-10 tasks

Quantifying steadiness - details

As described in the main paper (Sec 4.2), the candidate set for NORB was straightforward to construct – the entire NORB test image set was used. For the video datasets KITTI and HMDB, however, it would have been practically difficult to include all image frames in the candidate set. To avoid computing features and performing nearest neighbor search over too large a number of frames, we instead formed a randomly sub-sampled candidate set, as follows. Starting from an empty set, we added (1) all the unique images among the query pairs, (2) their corresponding ground truth completion images, and (3) a minimum number of randomly chosen frames from each video represented in the set up to this point. This ensures that the task is non-trivial, by adding distractors from the same video as the ground truth candidate image, which are likely to have similar appearance. We used a minimum of 10 frames per video for KITTI and 5 for HMDB to keep the total numbers of images manageable. Finally, we select from 8,100, 5,000 and 5,000 candidates respectively for NORB, KITTI and HMDB, for each of 20,000, 1,000 and 1,000 query pairs respectively for the three datasets.
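The sub-sampling procedure can be sketched as follows (a sketch with hypothetical data structures; `min_frames_per_video` plays the role of the per-video minimum, 10 for KITTI and 5 for HMDB):

```python
import random

def build_candidate_set(query_pairs, ground_truth, videos,
                        min_frames_per_video, seed=0):
    """query_pairs: list of (frame_a, frame_b) ids; ground_truth: dict mapping
    each query pair to its completion frame id; videos: dict video_id ->
    list of frame ids. Returns the sub-sampled candidate set of frame ids."""
    rng = random.Random(seed)
    candidates = set()
    # (1) all unique images among the query pairs
    for a, b in query_pairs:
        candidates.update((a, b))
    # (2) their corresponding ground truth completion images
    candidates.update(ground_truth.values())
    # (3) top up each represented video with random same-video distractors
    for vid, frames in videos.items():
        present = [f for f in frames if f in candidates]
        if not present:
            continue  # video not represented among queries/answers
        missing = min_frames_per_video - len(present)
        if missing > 0:
            pool = [f for f in frames if f not in candidates]
            candidates.update(rng.sample(pool, min(missing, len(pool))))
    return candidates

# Toy example: one video represented among the queries, one not
videos = {"v1": [f"v1_{i}" for i in range(20)],
          "v2": [f"v2_{i}" for i in range(20)]}
pairs = [("v1_0", "v1_1")]
gt = {("v1_0", "v1_1"): "v1_2"}
cands = build_candidate_set(pairs, gt, videos, min_frames_per_video=5)
print(len(cands))  # 5: two query frames + ground truth + 2 random v1 frames
```

Step (3) is what makes the completion task hard: the distractors come from the same video as the true answer, so they share its appearance.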

Figure 6: ssfa classification accuracy vs. duration of unsupervised video (mean, standard error over 5 runs).

Varying unsupervised training set size:

To observe the effect of unsupervised training set size, we now restrict ssfa to use varying-sized subsets of unlabeled video on the HMDB→PASCAL-10 task. The full HMDB dataset has approximately 1,000 videos, for a total of 32,000 frames. Performance scales roughly log-linearly with the duration of video observed, as shown in Fig 6, suggesting that even larger gains may be achieved simply by training ssfa with more freely available unlabeled video.

Figure 7: ssfa k-NN accuracy improvement with ssfa training (mean, standard error over 5 runs).

Purely unsupervised feature learning:

We evaluate the usefulness of features trained to optimize the unsupervised ssfa loss (main paper Eq (8)) alone. Features trained on HMDB are evaluated at various stages of training, on the task of k-nearest neighbor classification on PASCAL-10 (k = 5, and 100 training images per action). Fig 7 shows the results. Starting at 17.8% classification accuracy for randomly initialized networks, unsupervised ssfa training steadily improves the discriminative ability of features. This shows that ssfa can train useful image representations even without jointly optimizing a supervised objective.

Dataset pre-processing details

For all tasks, images are mean-subtracted and contrast-normalized before being passed to the neural networks. In addition, for KITTI→SUN, full KITTI frames were resized to 32×32 and SUN images were cropped to the KITTI aspect ratio before resizing to the same dimensions. Grayscale images were used in this task. Similarly, for HMDB→PASCAL-10, HMDB frames were cropped to centered squares, and PASCAL-10 bounding boxes were expanded to the closest square before resizing to 32×32. Resizing for KITTI→SUN and HMDB→PASCAL-10 was done to allow fast and thorough experimentation with standard CNN architectures known to work well with tiny images [1]. On the SUN dataset, where apart from resizing we also lose information due to KITTI-aspect-ratio cropping, we verified that our baselines were legitimate by running a simple nearest neighbor baseline in pixel space (the standard approach for tiny images). This achieved 0.61% accuracy compared to unreg’s 0.70%, given the same training data.
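These preprocessing steps can be sketched with numpy (a sketch of the described operations, not the exact pipeline; the nearest-neighbor resize here is our simplification for illustration):

```python
import numpy as np

def preprocess(img, out_size=32):
    """Center-crop a grayscale image to a square, resize to out_size x
    out_size (nearest-neighbor, for illustration), then mean-subtract and
    contrast-normalize to unit standard deviation."""
    h, w = img.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = img[top:top + s, left:left + s]
    # Nearest-neighbor subsampling to out_size x out_size
    idx = (np.arange(out_size) * s / out_size).astype(int)
    small = crop[np.ix_(idx, idx)].astype(np.float64)
    # Mean subtraction and contrast normalization
    small -= small.mean()
    std = small.std()
    return small / std if std > 0 else small

x = preprocess(np.random.rand(48, 64))
print(x.shape)  # (32, 32), zero mean, unit std
```

The cropping step trades away peripheral content for a consistent aspect ratio, which is why the pixel-space nearest neighbor check above matters.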

Optimization details

We initialized network weights according to the scheme proposed in [12], and ran Nesterov accelerated stochastic gradient descent using the open source Caffe [20] package. The base learning rate and regularization weights were selected by greedy cross-validation. (Our validated values for NORB→NORB, KITTI→SUN, and HMDB→PASCAL respectively are (0.1, 0.3), (3, 0.1), and (0.3, 1).) Specifically, for each task, the optimal base learning rate (from 0.1, 0.01, 0.001, 0.0001) was first identified for unreg. Next, the weight on the first order (slowness) regularizer was set through a logarithmic grid search, with the second order (steadiness) weight set to 0, i.e., this parameter was optimized for sfa-2. The margin parameter of the contrastive loss in the slowness term was set to 1.0 for all methods – this affects the objective function only up to a feature scaling operation, and so may be set to any positive value. For ssfa, a similar logarithmic grid search was then performed over the steadiness weight, followed by a small search over the contrastive loss margin in the steadiness term (over 0, 0.1 and 1). Setting the margin to 0 in a contrastive loss reduces it to the simple distance loss over positive samples.
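The margin behavior noted in the last sentence can be illustrated directly. A generic contrastive loss sketch in numpy (in the style of Hadsell et al. [15]; the distances and pair labels are toy values, not from the paper): positives pay d², negatives pay max(0, margin − d)², so at margin 0 the negative term vanishes and only the distance loss over positives remains.

```python
import numpy as np

def contrastive_loss(d, same, margin):
    """d: array of pairwise feature distances; same: boolean array, True for
    positive (e.g., temporally close) pairs. Positives pay d^2, negatives
    pay max(0, margin - d)^2."""
    pos = same * d ** 2
    neg = (~same) * np.maximum(0.0, margin - d) ** 2
    return float(np.mean(pos + neg))

d = np.array([0.2, 0.5, 0.1, 2.0])
same = np.array([True, False, True, False])
# Margin 0: negatives contribute nothing, leaving the mean squared
# distance over positive pairs (zeros elsewhere).
print(contrastive_loss(d, same, margin=0.0))
# Margin 1: negatives closer than the margin are now penalized.
print(contrastive_loss(d, same, margin=1.0))
```

This is why the margin search over {0, 0.1, 1} includes 0: it selects between a full contrastive penalty and a pure positive-pair distance loss.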

On a single Tesla K-40 GPU machine, NORB→NORB training took 30 minutes, KITTI→SUN training took 90 minutes, and HMDB→PASCAL-10 training took 60 minutes. ssfa training took about 2× the training time and 1.5× the training epochs to converge, compared to the sfa baselines, because of its more complex loss function.

Supervised pretraining and finetuning - details

For the supervised pretraining and finetuning comparison experiments in Sec 4.3, we used the same neural network architecture as for our approach and the other baselines on the SUN scene and PASCAL-10 action recognition tasks (shown in Fig 5). A 100-way softmax classifier was trained on the 64-dimensional final layer features to classify CIFAR-100 classes during pretraining; these classifier weights are discarded for supervised transfer. All other weights in the network are used to initialize the corresponding weights of the network trained for the target task. For SUN (397 classes × 5 images per class), we found it beneficial to finetune features by reducing the learning rate for the pretrained layers by a factor of 0.1 relative to the full learning rate used to train the 397-way classifier on top. For PASCAL-10 (10 classes × 5 images per class), only the 10-way action classifier was trained from random weights, while the weights of the lower layers were frozen at their pretrained values, since finetuning was found to hurt classification results.
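The weight-transfer logic above can be sketched abstractly, treating a network as a flat name→weights dict (the layer names, shapes, and LR factor are illustrative; only the 0.1 reduction factor and the discard-the-classifier rule come from the text):

```python
import numpy as np

def transfer_weights(net1, target_arch, classifier_layer):
    """Initialize a target network from pretrained net1, discarding the
    pretraining classifier and randomly initializing the new task head."""
    rng = np.random.default_rng(0)
    net2 = {}
    for name, shape in target_arch.items():
        if name == classifier_layer:
            net2[name] = rng.normal(0, 0.01, shape)  # fresh task classifier
        else:
            net2[name] = net1[name].copy()           # transferred weights
    return net2

def per_layer_lr(base_lr, layer_names, classifier_layer, factor=0.1):
    """Reduced learning rate for pretrained layers; factor=0 freezes them,
    as done for the PASCAL-10 setting."""
    return {n: (base_lr if n == classifier_layer else base_lr * factor)
            for n in layer_names}

# Toy: pretrained 100-way net -> 397-way SUN classifier
net1 = {"conv1": np.ones((5, 5)), "fc": np.ones((64, 100))}
arch = {"conv1": (5, 5), "fc": (64, 397)}
net2 = transfer_weights(net1, arch, classifier_layer="fc")
lrs = per_layer_lr(0.01, arch, classifier_layer="fc", factor=0.1)
```

Setting `factor=0.1` corresponds to the SUN finetuning regime, while `factor=0` corresponds to the frozen-lower-layers regime used for PASCAL-10.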

Dataset sample images

Some sample images of KITTI, SUN, HMDB-51 and PASCAL-10 are shown at the end of this document.