For us human beings, vision, and the way the brain uses visual information, are learned skills. The ultimate goal of computer vision research, meanwhile, is to teach machines to understand the visual world. Obviously, we cannot do it all hand over hand, i.e. via empirically hand-crafted models. It would be more ideal and practicable if we could teach machines to learn vision by themselves. This work focuses on the fundamental problem of establishing 2D-2D correspondences across a pair of consecutive frames, and notably proves that a solution to this low-level vision problem can be achieved in an unsupervised way, relying only on natural video sequences.
Our key insight lies in the understanding that frame interpolation implicitly solves for dense correspondences between the input image pair. It is well known that dense matching can be regarded as a sub-problem of frame interpolation, as the interpolation could be immediately generated by correspondence-based image warping once dense inter-frame matches are available. It then comes as no surprise that if we were able to train a deep neural network for frame interpolation, its application would implicitly also generate knowledge about dense image correspondences. Retrieving this knowledge is known as analysis by synthesis, a paradigm in which learning is described as the acquisition of a measurement-synthesising model, and inference of the generating parameters as model inversion once correct synthesis is achieved. In our context, synthesis simply refers to frame interpolation. For the analysis part, we then show that the correspondences can be recovered from the network through gradient back-propagation, which produces sensitivity maps for each interpolated pixel. The procedure is summarised in Figure 1, which explains how the reciprocal mapping between frame interpolation and dense correspondences is encoded in the forward and backward propagation through one and the same network architecture. We call our approach MIND, which stands for Matching by INverting a Deep neural network. (The term inverting here refers to back-propagation through the given deep neural network.)
The key benefit of MIND lies in the fact that the deep convolutional network for frame interpolation can be trained from ordinary video sequences without any man-made ground truth signals. The training data in our case is given by triplets of images, each one consisting of two input images and one output image that represents the ground-truth interpolated frame. A correct example of a ground truth output image is an image that—when inserted in between the input pair of images—forms a temporally coherent sequence of frames. Such temporal coherency is naturally contained in regular video sequences, which allows us to simply use triplets of sequential images from almost arbitrary video streams for training our network. The first and the third frame of each triplet are used as inputs to the network, and the second frame as the ground truth interpolated frame. Most importantly, since the inversion of our network returns frame-to-frame correspondences, it learns how to do image matching without any requirement for manually designed models or expensive ground truth correspondences. In other words, the presented approach learns image matching by simply “watching videos”.
The paper is organized as follows. Section 2 reviews relevant prior work. Section 3 explains the present analysis-by-synthesis approach, including both the analysis part, i.e. how MIND works, and the synthesis part, i.e. the deep convolutional architecture for frame interpolation. Section 4 demonstrates the surprising performance of the present purely unsupervised learning approach, which is comparable to several traditional empirically designed methods. Section 5 finally discusses our contribution and provides an outlook on future work.
2 Related Work
Deep learning meets image matching Image matching is a classical problem in computer vision. Here we limit the discussion to recent works that address image matching through learning-based approaches. Roughly speaking, there exist two lines of research on this topic: the first one consists of making use of features or representations learned by deep neural networks, which are either originally trained for other tasks such as object recognition [2, 3], or specially designed and trained for the purpose of image matching [4, 5, 6]. The second major line of research employs deep neural networks to compute the similarity between image patches [7, 8, 9]. In contrast to our work, the cited contributions mainly address sub-modules of image matching (feature extraction or matching cost computation) rather than providing end-to-end solutions. An exception is given by FlowNet, which presents an interesting deep learning based approach for dense optical flow computation. It does, however, depend on ground-truth flow for training the network.
It is also worth mentioning that the gated restricted Boltzmann machine model proposed by Memisevic and Hinton, and later extended by Taylor et al., can also be trained in an unsupervised manner and applied to infer constrained image transforms such as flow fields for “shifting pixels”. However, this line of work mainly aims at learning motion features for understanding video data. It is similar to the works on temporal coherence learning mentioned below.
Temporal coherence learning
Unsupervised learning is a broad topic in the field of machine learning. Our discussion here focuses on works that exploit temporal coherency in natural videos, sometimes also called temporal coherence learning [13, 14, 15]. As a recent representative work, Wang et al. exploit temporal coherency by visual tracking in videos, and report that the learned representation achieves competitive performance compared to some supervised alternatives. While temporal coherence learning mostly aims at learning features or representations, some recent works on reconstructing and predicting video frames in an unsupervised setting are closely related to our work as well. Srivastava et al. use an encoder LSTM to map input sequences into a fixed-length representation, and use the latter for reconstructing the input or even predicting future frames. Goroshin et al.
consider videos as one-dimensional, time-parametrized trajectories embedded in a low-dimensional manifold. They train deep feature hierarchies that linearise the transformations observed in natural video sequences for the purpose of frame prediction. Though related to our work, these works do not aim at image matching. It will be interesting to apply our concept of matching by inverting to the above models for temporal coherence learning.
Network inversion Note that inverting a learned network is traditionally defined as reconstructing the input from the output of an artificial neural network. Mahendran et al. and Dosovitskiy et al. apply this concept to understand what information is preserved by a network. In our context, inverting a network means back-propagation through a learned network in order to obtain the gradient map with respect to the input signals. Interestingly, this idea has already been introduced in the work of Simonyan et al., emphasizing that the retrieved sensitivity maps may serve to identify image-specific class saliency. Similarly, Bach et al. employ gradient maps as a measure for the contribution of single pixels to a nonlinear classifier, thus helping to explain how decisions are made.
This section describes the analysis-by-synthesis approach for dense image matching. We first explain the analysis part, i.e. how to obtain correspondences given the trained neural network and the interpolated image. For the synthesis part, we then describe the detailed architecture of the deep convolutional network designed for frame interpolation.
3.1 Matching by Inverting a Deep neural network
Assuming that we have a well-trained deep neural network for frame interpolation at hand, the core technical question behind our work is how to recover the correspondences between the input pair of images from it. As explained previously, dense correspondence matching may be regarded as a sub-problem of frame interpolation, which is why we should be able to trace back the matches starting from the interpolated frame generated during the forward propagation through the trained network. Our task then consists of back-tracking each pixel in the output image to exactly one pixel in each of the two input images. Note that this back-tracking does not mean reconstructing the input images from the output one. Instead, we only need to find the pixels in each input image which have the maximum influence on each pixel of the output image.
We perform back-tracking by applying a technique similar to the one adopted by Simonyan et al. For each pixel in the output image, we compute the gradient of its value with respect to each input pixel, thus telling us how strongly it is influenced by individual pixels at the input. The gradient is computed by back-propagation, and leads to sensitivity or influence maps at the input of the network.
From a more formal perspective, our approach may be explained as follows. Let $f$ denote a non-linear function (i.e. the trained deep neural network) that describes the mapping from two input images $I_1$ and $I_2$ to an interpolated image $I_o$ lying approximately at the “center” of the input frames. Thinking of $f$ as a vectorial mapping, it can be split up into $N$ non-linear sub-functions, each one producing the corresponding pixel in the output image:

$$I_o = f(I_1, I_2) = \big(f_1(I_1, I_2), \ldots, f_N(I_1, I_2)\big)^\top.$$

In order to produce the sensitivity maps, we apply back-propagation to compute the Jacobian matrix with respect to each input image individually. The Jacobian with respect to the first image is given by

$$J_{I_1} = \frac{\partial f}{\partial I_1} = \left(\frac{\partial f_1}{\partial I_1}, \ldots, \frac{\partial f_N}{\partial I_1}\right)^\top,$$

illustrating that this derivative results in one matrix for each one of the $N$ pixels at the output. The Jacobian with respect to $I_2$ is given in a similar way. Let us define the absolute gradients of the $k$-th output point with respect to each one of the input images, evaluated for the concrete inputs $\hat{I}_1$ and $\hat{I}_2$. They are given by

$$G^k_1 = \left| \frac{\partial f_k}{\partial I_1}(\hat{I}_1, \hat{I}_2) \right| \quad \text{and} \quad G^k_2 = \left| \frac{\partial f_k}{\partial I_2}(\hat{I}_1, \hat{I}_2) \right|,$$

where $|\cdot|$ replaces each entry of a matrix by its absolute value. The gradient maps produced in this way notably represent the sought sensitivity or influence maps that may now serve to derive the coordinates of each correspondence. We extract the most responsible point in each gradient map, and connect these two points to form a correspondence.

In the spirit of unsupervised learning, we opt for the simplest possible choice of taking the coordinates of the maximum entry in $G^k_1$ and $G^k_2$, respectively. Let us denote these points by $x^k_1$ and $x^k_2$. By computing the two gradient maps for each point in the output image and extracting each time the most responsible point, we obtain the following two lists of points

$$X_1 = \{x^1_1, \ldots, x^N_1\} \quad \text{and} \quad X_2 = \{x^1_2, \ldots, x^N_2\}.$$

The set of correspondences is then given by combining same-index elements from $X_1$ and $X_2$, eventually resulting in

$$C = \big\{(x^k_1, x^k_2)\big\}_{k=1}^{N}.$$
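The final extraction step can be sketched in a few lines. Below is a minimal numpy illustration; the gradient maps are toy arrays standing in for real back-propagated sensitivity maps:

```python
import numpy as np

def correspondence_from_gradients(G1, G2):
    """Given the absolute gradient (sensitivity) maps of one output pixel
    with respect to each input image, return the matched pixel pair:
    the coordinates of the maximum entry in each map."""
    idx1 = np.unravel_index(np.argmax(np.abs(G1)), G1.shape)
    idx2 = np.unravel_index(np.argmax(np.abs(G2)), G2.shape)
    return tuple(map(int, idx1)), tuple(map(int, idx2))

# Toy maps with a single dominant response each.
G1 = np.zeros((4, 4)); G1[1, 2] = 5.0
G2 = np.zeros((4, 4)); G2[3, 0] = 7.0
print(correspondence_from_gradients(G1, G2))  # ((1, 2), (3, 0))
```

In the full method this step is repeated for every output pixel $k$, yielding the lists $X_1$ and $X_2$ above.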
3.2 Deep neural network for Frame Interpolation
The architecture of our frame-interpolation network is inspired by FlowNetSimple as presented in Fischer et al. . As illustrated in Figure 2, it consists of a Convolutional Part and a Deconvolutional Part. The two parts serve as “encoder” and “decoder” respectively, similar to the auto-encoder architecture presented by Hinton and Salakhutdinov . The basic block within the Convolutional Part—denoted Convolution Block—follows the common pattern of the convolutional neural network architecture:
INPUT -> [CONV -> PRELU] * 3 -> POOL -> OUTPUT.
The Parametric Rectified Linear Unit (PReLU) is adopted in our work. Following the suggestions from VGG-Net, we stack [CONV -> PRELU] three times to better model the non-linearity.
The Deconvolution Part consists of Deconvolution Blocks, each one including a convolution transpose layer  and two convolution layers. The first one has a receptive field of four, a stride of two, and a padding of one. The pattern of the Deconvolution Block follows:
INPUT -> [CONVT -> PRELU] -> [CONV -> PRELU] * 2 -> OUTPUT.
In order to maintain fine-grained image details in the interpolated frame, we make a copy of the output features produced by Convolution Blocks 2, 3, and 4, and concatenate them as additional inputs to Deconvolution Blocks 4, 3, and 2, respectively. This concept is illustrated by the side arrows in Figure 2, and similar ideas have already been used in prior work [10, 29]. Recent works [30, 31] indicate that the ‘side arrows’ may also help to better train the deep network.
It is easy to notice that our network is a fully convolutional one, thus allowing us to feed it with images of different resolutions. This is an important advantage, as different datasets may use different height-to-width ratios. The output blob size for each block in our network is listed in Table 1.
In this section, we first explain the implementation details behind MIND, such as the training data and the loss function. We then introduce qualitative examples as proofs of concept for MIND, followed by a discussion of the generalization ability of the trained CNN. We finally evaluate MIND in terms of quantitative matching performance and compare it to traditional image matching methods.
4.1 Implementation Details
Training Data: Quantity and quality of training data are crucial for training a deep neural network. However, our case is particularly easy, as we can simply use huge amounts of real-world videos. In this work, we focus on training with the KITTI RAW videos and Sintel videos (Sintel, the Durian Open Movie Project: https://durian.blender.org/), and show that the resulting learned network performs reasonably well. The network is first trained with the KITTI RAW video sequences, which are captured by driving around the city of Karlsruhe, through rural areas, and over highways. The dataset contains 56 image sequences with a total of 16,951 frames. For each sequence, we take every three consecutive frames (both in forward and backward direction) as a training triplet, where the first and the third image serve as inputs to the network and the second image as the corresponding output. These images are then augmented by vertical flipping, horizontal flipping, and a combination of both. The total number of sample triplets is 133,921. We then fine-tune the network on examples selected from the original Sintel movie. We manually collected 63 video clips with a total of 5,670 frames from the movie. After grouping and data augmentation we finally obtain 44,352 sample triplets. Note that, compared to the KITTI sequences, which are recorded at a relatively uniform velocity, the Sintel sequences represent more difficult training examples in the context of our work, as they contain a lot of fast and unrealistic motion captured at a frame rate of only 24 fps. A significant portion of the Sintel samples therefore does not contain the required temporal coherence. We will discuss this issue further in Section 4.2.
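The triplet grouping and flip augmentation described above can be sketched as follows. This is a toy illustration with hypothetical helper names; the real pipeline operates on full video frames:

```python
import numpy as np

def make_triplets(frames, backward=True):
    """Group consecutive frames into (input1, target, input2) triplets,
    optionally adding the time-reversed direction as well."""
    triplets = [(frames[i], frames[i + 1], frames[i + 2])
                for i in range(len(frames) - 2)]
    if backward:
        triplets += [(c, b, a) for (a, b, c) in triplets]
    return triplets

def augment(triplet):
    """Vertical flip, horizontal flip, and both: four variants per triplet
    (the original included), as described for the KITTI training data."""
    a, b, c = triplet
    variants = [(a, b, c)]
    for op in (np.flipud, np.fliplr, lambda x: np.flipud(np.fliplr(x))):
        variants.append(tuple(op(img) for img in (a, b, c)))
    return variants

frames = [np.full((2, 2), i, float) for i in range(5)]  # a 5-frame toy clip
samples = [v for t in make_triplets(frames) for v in augment(t)]
print(len(samples))  # 3 forward + 3 backward triplets, x4 augmentations = 24
```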
Loss Function: Several previous works [19, 16] mention that minimizing the L2 loss between the output frame and the training example may lead to unrealistic and blurry predictions. We have not been able to confirm this throughout our experiments, but found that the Charbonnier loss commonly employed for robust optical flow computation leads to an improvement over the L2 loss. We employ it to train our network, with its parameter set to 0.1.
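A sketch of the Charbonnier penalty follows. Whether the 0.1 refers to the smoothing constant ε or to an exponent is our assumption; here it is taken as ε, with the conventional square-root form:

```python
import numpy as np

def charbonnier_loss(pred, target, eps=0.1, alpha=0.5):
    """Generalized Charbonnier penalty: a differentiable approximation of
    the L1 loss, commonly considered less blur-prone than L2 for image
    prediction. With alpha = 0.5 this is sqrt((pred - target)^2 + eps^2)."""
    diff = pred - target
    return np.mean((diff * diff + eps * eps) ** alpha)

# Near zero error, the loss approaches eps; for large errors it grows like |x|.
x = np.array([0.0])
print(round(charbonnier_loss(x, np.zeros_like(x)), 6))  # 0.1
```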
The training is performed using Caffe on a machine with two K40c GPUs. The weights of the network are initialized by Xavier’s approach and optimized by the Adam solver with a fixed momentum of 0.9. The initial learning rate is set to 1e-3 and then manually tuned down once the loss stops decreasing. For training on the KITTI RAW data, the images are scaled to 384×128. For training on the Sintel dataset, the images are scaled to 256×128. The batch size is 16. We run the training on KITTI RAW from scratch for about 20 epochs, and then fine-tune it on the Sintel movie images for 15 epochs. We did not observe over-fitting during training, and terminated the training after 5 days.
Execution time: MIND can be applied to different scenarios (e.g. sparse or dense matching). We focus here on semi-dense image matching in order to obtain a result comparable with other methods. We compute the correspondences across the input images for each corner of a predefined raster grid of 4 pixels width in the interpolated image. Note that MIND currently depends on a large amount of computational resources, as it performs back-propagation through the entire network for every pixel that needs to be matched. For an image of size 384×128, each forward pass through our network takes 40 ms on a PC with a K40c GPU, and each backward pass takes 158 ms. For each image pair, we need to perform one forward pass to first obtain the interpolation. We then need to perform 384×128 / (4×4) = 3072 backward passes to find the correspondences, resulting in a total of about 486 seconds (8 minutes).
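The runtime figure can be verified with simple arithmetic:

```python
# Semi-dense matching cost for one image pair (timings taken from the text).
w, h, grid = 384, 128, 4
forward_ms, backward_ms = 40, 158

passes = (w // grid) * (h // grid)       # one backward pass per grid corner
total_s = (forward_ms + passes * backward_ms) / 1000.0
print(passes, round(total_s, 1))  # 3072 485.4
```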
4.2 Qualitative examples for Interpolation and Matching
We present here visual examples as proofs of concept for how the present approach works on both tasks, frame interpolation and image matching. We further discuss the generalization ability of the trained model.
Examples for frame interpolation: We show examples of frame interpolation in Figure 3. The first two columns show examples on KITTI and Sintel images, taken from the validation datasets originally collected for the purpose of monitoring the network training process. It can be seen that the trained CNN covers the motion correctly for both the KITTI and Sintel image pairs. It can also be noticed that some fine-grained details are not preserved well in either example, even though we took special care when designing the convolutional architecture, cf. Section 3.2. Nevertheless, we would like to remind the readers that the goal of the present work is not to provide a state-of-the-art frame interpolation algorithm. For the goal of image matching, we will see that the preservation of perfect image details is in fact not necessary.
Examples for image matching: Here we present examples to demonstrate how MIND obtains correspondences given the trained CNN for frame interpolation. The examples taken from KITTI and Sintel videos are shown in Figure 4. By computing the gradient of manually marked pixels in the interpolated image, MIND successfully obtains correct correspondences between the two input images. It can be seen that correct correspondences are obtained even in some fast-moving areas where fine-grained image details are missing, e.g. the area of the character’s shaking hand in the Sintel example.
We further show one failure case taken from the Sintel images. In Figure 5, it can be observed that the interpolation fails, as the motion of the small dragon and of the character’s hand has not been covered correctly. It then comes as no surprise that MIND fails to extract correct matches for almost all of the selected points. However, it is worth noting that match No. 4 has better quality than the others, whose corresponding gradient maps are less distinctive. The matching score/confidence returned by MIND is inspired by this behaviour and is defined as the ratio between the maximum gradient intensity and the mean gradient intensity within a small area around the maximal gradient location.
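The confidence score described above can be sketched as follows. This is a minimal numpy version; the 20×20 window size follows the setting used later in the evaluation, and the boundary handling is our assumption:

```python
import numpy as np

def matching_score(grad_map, window=10):
    """Confidence of a MIND match: ratio of the peak gradient magnitude to
    the mean magnitude in a (2*window) x (2*window) patch around the peak.
    Distinctive (peaked) sensitivity maps score high; diffuse ones score low."""
    g = np.abs(grad_map)
    r, c = np.unravel_index(np.argmax(g), g.shape)
    patch = g[max(r - window, 0):r + window, max(c - window, 0):c + window]
    return g[r, c] / patch.mean()

peaked = np.ones((64, 64)); peaked[32, 32] = 100.0   # one dominant response
flat = np.ones((64, 64))                             # no distinctive peak
print(matching_score(peaked) > matching_score(flat))  # True
```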
As illustrated in Section 4.3, the general performance of MIND, especially on KITTI images, is good. The failure example in Figure 5 represents an extreme case in the Sintel sequences, dominated by fast and highly non-rigid motion in the scene.
Generalization ability: It is essential for learning-based approaches to have good generalization ability. Though MIND enjoys the benefit that it can learn image matching by just “watching videos” (i.e. it could first be fine-tuned on the given image sequences and then perform the interpolation and matching), it is important to verify whether the present CNN is indeed learning the ability to interpolate frames and match images, rather than merely “remembering” KITTI- or Sintel-like images.
We demonstrate the generalization ability of the trained CNN by applying it to images taken from the ETH Multi-Person Tracking dataset and the Bonn Benchmark on Tracking, which have not been used for either training or fine-tuning. The results are shown in Figure 3, from which we can see that the trained CNN again covers the motion correctly. This provides evidence about what has been learned by “watching videos”.
The generalization ability is further illustrated by applying MIND to DICOM images of coronary angiograms (taken from a DICOM sample image set, http://www.osirix-viewer.com/datasets/, alias name GRUSELAMBIX). In Figure 6, it can be seen that these images are substantially different from natural ones. Though again failing to preserve perfect image details, the CNN, which was trained on natural images, performs impressively well on the DICOM images. The good generalization ability of the CNN is underlined by the results on both frame interpolation and image matching.
4.3 Quantitative Performance of Image Matching
We compare the matches produced by MIND against those of several empirically designed methods: the classical Kanade–Lucas–Tomasi (KLT) feature tracker, HoG descriptor matching (which is widely employed to boost dense optical flow computation), and the more recent DeepMatching approach, which relies on a multi-layer convolutional architecture and achieves state-of-the-art performance. As observed in , comparing different matching algorithms is delicate because they usually produce different numbers of matches for different parts of the image. For the sake of a fair comparison, we adjust the parameters of each algorithm to make them produce as many matches as possible with a distribution as homogeneous as possible across the input images. For DeepMatching, we use the default parameters. For MIND, we extract correspondences for each corner of a uniform grid of 4 pixels width. For KLT, we set the minEigThreshold to 1e-9 to generate as many matches as possible. For HoG, we again set the pixel sampling grid width to 4. We then sort the matches according to suitable metrics (for DeepMatching, the matching score given by the open source code; for KLT, the error returned by the OpenCV implementation; for HoG, the matching score defined in ; for MIND, the ratio between the maximum gradient intensity and the mean gradient intensity within a 20×20 area around the maximal gradient location) and select the same amount of “best” matches for each algorithm. In this way, the four algorithms produce the same numbers of matches with similar coverage over each input image.
The comparisons are performed on both the KITTI and MPI-Sintel training sets, where ground truth correspondences can be extracted from the available ground truth flow fields. We perform all of our experiments at the same image resolution as the one used by our network. (It would be ideal to evaluate both image matching and optical flow on the KITTI and MPI-Sintel benchmarks; however, since the present MIND is currently designed only for resolution-reduced images, we cannot process the benchmark datasets directly, but apply all algorithms locally to the test datasets, followed by the standard evaluation and error metrics known from prior art.) On KITTI, the images are scaled to 384×128, and for MPI-Sintel, to 256×128. We use the network trained on the KITTI RAW sequences for the matching experiment on the KITTI Flow 2012 training set. We then use the network fine-tuned on Sintel movie clips for the experiments on the MPI-Sintel Flow training set. The four algorithms are evaluated in terms of the Average Point Error (APE) and the Accuracy@T. The latter is defined as the proportion of “correct” matches from the first image with respect to the total number of matches. A match is considered correct if its pixel match in the second image is closer than T pixels to the ground truth.
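The two evaluation metrics are easy to state in code. Below is a small sketch with hypothetical helper names, where `pred` and `gt` are N×2 arrays of matched pixel coordinates:

```python
import numpy as np

def ape(pred, gt):
    """Average Point Error: mean Euclidean distance between predicted
    and ground-truth match locations."""
    return np.mean(np.linalg.norm(pred - gt, axis=1))

def accuracy_at(pred, gt, T):
    """Accuracy@T: fraction of matches whose endpoint lies within
    T pixels of the ground-truth location."""
    return np.mean(np.linalg.norm(pred - gt, axis=1) < T)

gt   = np.array([[0.0, 0.0], [10.0, 10.0]])
pred = np.array([[3.0, 4.0], [10.0, 10.0]])   # errors: 5 and 0 pixels
print(ape(pred, gt), accuracy_at(pred, gt, T=10))  # 2.5 1.0
```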
As can be observed in Table 2 and Table 3, DeepMatching produces matches with the highest quality in terms of all metrics on both the MPI-Sintel and KITTI sets. Notably, MIND performs very close to DeepMatching on KITTI and outperforms KLT tracking and HoG matching by a considerable margin in terms of Accuracy@10 and Accuracy@20. It is surprising to see that MIND, an unsupervised learning based approach, works so well. The performance on MPI-Sintel, however, drops a bit due to the difficulty of the contained unrealistic motion. Though the APE measure indicates better performance than HoG and KLT, it is only safe to conclude that MIND remains competitive in terms of overall performance on MPI-Sintel, as will be seen further in the next section.
4.4 Ability to Initialise Optical Flow Computation
To further understand the matching quality produced by MIND, we replace the DeepMatching part of DeepFlow with MIND to see whether MIND matches are able to boost optical flow performance in a similar way as DeepMatching, HoG, or KLT matches. Similar to the evaluation in , we feed DeepFlow with matches obtained by each matching method in the previous section. The parameters (e.g. the matching weight) of DeepFlow are tuned accordingly to make best use of the pre-obtained matches. Note that we scale down the input images to 384×128 for KITTI and 256×128 for MPI-Sintel. We then up-size the obtained flow field to the original resolution by bilinear interpolation, in order to compare results at full resolution.
The results on the KITTI Flow 2012 training set are given in Table 4. It can be seen that using the matches obtained by any of the four algorithms improves the flow performance compared to the case where we use no matches for initialization. Notably, MIND again comes closest to DeepMatching in terms of all metrics, thus underlining the good matching quality obtained by MIND (better than KLT and HoG, and comparable to DeepMatching). Table 5 shows the results obtained on the MPI-Sintel training dataset. As on KITTI, the pre-obtained matches indeed help to improve the optical flow results, especially in terms of the APE and s40+ metrics, while flow initialized by DeepMatching remains best overall. The results initialized from MIND matches, however, rank behind those initialized by HoG or KLT matches, which again suggests the importance of temporal coherency for training our network. The reason why KLT works better here than in the evaluation presented in  is that we run KLT on the downscaled images rather than the full-resolution ones, which helps KLT to better deal with large displacements.
From the quantitative evaluations of matching and flow performance, we conclude that MIND works well on the KITTI Flow training set and achieves performance comparable to the state of the art defined by DeepMatching. On the MPI-Sintel Flow training set, MIND still obtains performance comparable to the traditional HoG and KLT methods. The latter should still be interpreted as a good result, especially considering that the quality of the training data for the unrealistic Sintel images is insufficient.
The sx+ metric refers to the percentage of pixels for which the flow estimate has an error above x pixels.
We have shown that the present work enables artificial neural networks to learn accurate matching from ordinary videos alone. Though the performance evaluation indicates that MIND works surprisingly well in the expected unsupervised manner, it fails to outperform the existing empirically designed methods, even on resolution-reduced images. However, as stated in the very beginning, the aim of this work is to prove that it is possible to learn image matching without manual supervision, rather than to provide a more practicable algorithm for frame interpolation or image matching. Furthermore, we believe that the present unsupervised learning approach holds great potential for more natural solutions to similar low-level vision problems, such as optical flow, tracking, and motion segmentation.
Our future work focuses on making the present approach more applicable in real-world scenarios, in terms of both computational efficiency and reliability. It is also our hope that the present work helps to promote the concept of analysis by synthesis towards broader acceptance.
Quantitative evaluation of frame interpolation
We provide here a supplemental quantitative evaluation in terms of frame interpolation. The purposes are: 1. verifying that the trained CNN performs quantitatively good frame interpolation; and 2. providing further evidence that the trained CNN has good generalization ability.
For the trained CNN (referred to as MIND below), the interpolated images are simply the outputs of the forward propagation through the CNN. We compare the results to traditional interpolation using state-of-the-art optical flow, i.e. DeepFlow (initialized with DeepMatching). The interpolation algorithm used in the Middlebury benchmark is employed to synthesize the in-between images, given the optical flow fields obtained by DeepFlow. For simplicity, this approach is referred to as DeepFlow below.
The quantitative evaluations are performed on four image sequences: a representative sequence from a KITTI RAW video, one Sintel movie clip (Sintel, the Durian Open Movie Project: https://durian.blender.org/), a DICOM image sequence (taken from a DICOM sample image set, http://www.osirix-viewer.com/datasets/, alias name GRUSELAMBIX), and the RubberWhale sequence from the Middlebury optical flow benchmark. For each image sequence, MIND and DeepFlow are evaluated on each image triplet, where the first and third frames are taken as inputs and the second frame serves as the ground-truth interpolated frame.
5.1 Sample images
We first show some sample results for each image sequence. In Figure 7 and Figure 8, it can be seen that both MIND and DeepFlow work correctly for the task of frame interpolation. Please note that MIND works surprisingly well on DICOM and RubberWhale images, though it has never been trained with similar images. Notably, in the second example of DICOM images shown in Figure 6, DeepFlow fails to cover the motion correctly, while MIND still obtains a good interpolated image.
5.2 Numerical results
Following , the interpolation error (IE) is defined as the root-mean-square (RMS) difference between the ground-truth image $I_{GT}$ and the estimated interpolated image $I$:

$$\mathrm{IE} = \sqrt{\frac{1}{N}\sum_{(x,y)} \big(I(x,y) - I_{GT}(x,y)\big)^2},$$

where $N$ is the number of pixels. For color images, we take the L2 norm of the vector of RGB color differences.
Furthermore, the normalized interpolation error (NE) between an interpolated image $I$ and a ground-truth image $I_{GT}$ is defined as

$$\mathrm{NE} = \sqrt{\frac{1}{N}\sum_{(x,y)} \frac{\big(I(x,y) - I_{GT}(x,y)\big)^2}{\|\nabla I_{GT}(x,y)\|^2 + \epsilon}}.$$
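Both measures can be sketched directly. This is a minimal numpy version; the value of the stabilising constant ε follows the Middlebury convention and is an assumption here:

```python
import numpy as np

def interpolation_error(I, I_gt):
    """Root-mean-square difference between interpolated and ground-truth frames."""
    return np.sqrt(np.mean((I - I_gt) ** 2))

def normalized_interpolation_error(I, I_gt, eps=1.0):
    """RMS error with each squared difference divided by the local gradient
    energy of the ground truth, so errors in textured regions count less."""
    gy, gx = np.gradient(I_gt)
    return np.sqrt(np.mean((I - I_gt) ** 2 / (gx ** 2 + gy ** 2 + eps)))

I_gt = np.tile(np.arange(8.0), (8, 1))   # simple horizontal intensity ramp
I = I_gt + 1.0                           # uniform one-intensity offset
print(round(interpolation_error(I, I_gt), 3),
      round(normalized_interpolation_error(I, I_gt), 3))  # 1.0 0.707
```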
The numerical results of the comparison between MIND and DeepFlow are reported in Table 6 and Figure 9. It can be seen that MIND works better than DeepFlow on KITTI images, while failing to work well on Sintel images even though the CNN is fine-tuned using Sintel movie clips. This is consistent with the evaluations on image matching and optical flow performance reported in our submission, and indicates that the current CNN cannot deal with Sintel images well, mainly due to the presence of fast and complex motion.
Regarding the generalization ability, the quantitative results further illustrate that MIND indeed learns the ability to interpolate and match images, rather than merely ‘remembering’ KITTI- or Sintel-like images. MIND even does a better job than DeepFlow on DICOM images, which encourages us to explore further applications.
-  Yildirim, I., Kulkarni, T., Freiwald, W., Tenenbaum, J.B.: Efficient and robust analysis-by-synthesis in vision: A computational framework, behavioral tests, and modeling neuronal representations. In: Annual Conference of the Cognitive Science Society. (2015)
-  Long, J.L., Zhang, N., Darrell, T.: Do convnets learn correspondence? In: Advances in Neural Information Processing Systems. (2014) 1601–1609
-  Fischer, P., Dosovitskiy, A., Brox, T.: Descriptor matching with convolutional neural networks: a comparison to sift. arXiv preprint arXiv:1405.5769 (2014)
-  Huang, G., Mattar, M., Lee, H., Learned-Miller, E.G.: Learning to align from scratch. In: Advances in Neural Information Processing Systems. (2012) 764–772
-  Agrawal, P., Carreira, J., Malik, J.: Learning to see by moving. arXiv preprint arXiv:1505.01596 (2015)
-  Simo-Serra, E., Trulls, E., Ferraz, L., Kokkinos, I., Fua, P., Moreno-Noguer, F.: Discriminative learning of deep convolutional feature point descriptors. In: Proceedings of the International Conference on Computer Vision (ICCV). (2015)
-  Žbontar, J., LeCun, Y.: Computing the stereo matching cost with a convolutional neural network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 1592–1599
-  Park, M.G., Yoon, K.J.: Leveraging stereo matching with learning-based confidence measures. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2015) 101–109
-  Zagoruyko, S., Komodakis, N.: Learning to compare image patches via convolutional neural networks. CoRR abs/1504.03641 (2015)
-  Fischer, P., Dosovitskiy, A., Ilg, E., Häusser, P., Hazırbaş, C., Golkov, V., van der Smagt, P., Cremers, D., Brox, T.: Flownet: Learning optical flow with convolutional networks. arXiv preprint arXiv:1504.06852 (2015)
-  Memisevic, R., Hinton, G.: Unsupervised learning of image transformations. In: Computer Vision and Pattern Recognition, 2007. CVPR’07. IEEE Conference on, IEEE (2007) 1–8
-  Taylor, G.W., Fergus, R., LeCun, Y., Bregler, C.: Convolutional learning of spatio-temporal features. In: Computer Vision–ECCV 2010, Springer (2010) 140–153
-  Mobahi, H., Collobert, R., Weston, J.: Deep learning from temporal coherence in video. In: Proceedings of the 26th Annual International Conference on Machine Learning, ACM (2009) 737–744
-  Wiskott, L., Sejnowski, T.J.: Slow feature analysis: Unsupervised learning of invariances. Neural computation 14(4) (2002) 715–770
-  Becker, S.: Learning temporally persistent hierarchical representations. In: Advances in neural information processing systems. (1997) 824–830
-  Wang, X., Gupta, A.: Unsupervised learning of visual representations using videos. arXiv preprint arXiv:1505.00687 (2015)
-  Ranzato, M., Szlam, A., Bruna, J., Mathieu, M., Collobert, R., Chopra, S.: Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604 (2014)
-  Srivastava, N., Mansimov, E., Salakhutdinov, R.: Unsupervised learning of video representations using lstms. arXiv preprint arXiv:1502.04681 (2015)
-  Goroshin, R., Mathieu, M., LeCun, Y.: Learning to linearize under uncertainty. arXiv preprint arXiv:1506.03011 (2015)
-  Jensen, C., Reed, R.D., Marks, R.J., El-Sharkawi, M., Jung, J.B., Miyamoto, R.T., Anderson, G.M., Eggen, C.J., et al.: Inversion of feedforward neural networks: Algorithms and applications. Proceedings of the IEEE 87(9) (1999) 1536–1549
-  Mahendran, A., Vedaldi, A.: Understanding deep image representations by inverting them. arXiv preprint arXiv:1412.0035 (2014)
-  Dosovitskiy, A., Brox, T.: Inverting convolutional networks with convolutional networks. arXiv preprint arXiv:1506.02753 (2015)
-  Simonyan, K., Vedaldi, A., Zisserman, A.: Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034 (2013)
-  Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10(7) (2015)
-  Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786) (2006) 504–507
-  He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. arXiv preprint arXiv:1502.01852 (2015)
-  Chatfield, K., Simonyan, K., Vedaldi, A., Zisserman, A.: Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531 (2014)
-  Vedaldi, A., Lenc, K.: MatConvNet: Convolutional neural networks for MATLAB. arXiv preprint arXiv:1412.4564 (2014)
-  Eigen, D., Puhrsch, C., Fergus, R.: Depth map prediction from a single image using a multi-scale deep network. In: Advances in Neural Information Processing Systems. (2014) 2366–2374
-  Srivastava, R.K., Greff, K., Schmidhuber, J.: Highway networks. arXiv preprint arXiv:1505.00387 (2015)
-  He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015)
-  Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: The kitti dataset. International Journal of Robotics Research (IJRR) (2013)
-  Sun, D., Roth, S., Black, M.J.: A quantitative analysis of current practices in optical flow estimation and the principles behind them. International Journal of Computer Vision 106(2) (2014) 115–137
-  Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
-  Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: International conference on artificial intelligence and statistics. (2010) 249–256
-  Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
-  Ess, A., Leibe, B., Schindler, K., Van Gool, L.: Robust multiperson tracking from a mobile platform. Pattern Analysis and Machine Intelligence, IEEE Transactions on 31(10) (2009) 1831–1846
-  Klein, D.A., Schulz, D., Frintrop, S., Cremers, A.B.: Adaptive real-time video-tracking for arbitrary objects. In: IEEE Int. Conf. on Intelligent Robots and Systems (IROS). (Oct 2010) 772–777
-  Bouguet, J.Y.: Pyramidal implementation of the affine Lucas Kanade feature tracker: Description of the algorithm.
-  Brox, T., Malik, J.: Large displacement optical flow: Descriptor matching in variational motion estimation. IEEE Trans. Pattern Anal. Mach. Intell. 33(3) (2011) 500–513
-  Weinzaepfel, P., Revaud, J., Harchaoui, Z., Schmid, C.: Deepflow: Large displacement optical flow with deep matching. In: Computer Vision (ICCV), 2013 IEEE International Conference on, IEEE (2013) 1385–1392
-  Bradski, G., Kaehler, A.: Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc. (2008)
-  Butler, D.J., Wulff, J., Stanley, G.B., Black, M.J.: A naturalistic open source movie for optical flow evaluation. In A. Fitzgibbon et al. (Eds.), ed.: European Conf. on Computer Vision (ECCV). Part IV, LNCS 7577, Springer-Verlag (October 2012) 611–625
-  Revaud, J., Weinzaepfel, P., Harchaoui, Z., Schmid, C.: Deep convolutional matching. arXiv preprint arXiv:1506.07656 (2015)
-  Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M.J., Szeliski, R.: A database and evaluation methodology for optical flow. International Journal of Computer Vision 92(1) (2011) 1–31