1 Introduction
Deep Matching (DM) [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid]
is one of the most popular methods for establishing quasi-dense correspondences between images. An important application of DM is optical flow, where it is used to find an initial set of image correspondences, which are then interpolated and refined by local optimisation.
The reason for the popularity of DM is the quality of the matches that it can extract. However, there is an important drawback: DM, as originally introduced in [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid], is in fact not a deep neural network and does not support training via backpropagation. In order to sidestep this limitation, several authors have recently proposed alternative Convolutional Neural Network (CNN) architectures for dense image matching (Sect. 1.1). However, while several of these trainable models obtain excellent results, they are not necessarily superior to the handcrafted DM architecture in terms of matching performance.
The quality of the matches established by DM demonstrates the strength of the DM architecture compared to alternatives. Thus, a natural question is whether it is possible to obtain the best of both worlds, and construct a trainable CNN architecture which is equivalent to DM. The main contribution of this paper is to carry out such a construction.
In more detail, DM comprises two stages (Fig. 1): In the first stage, DM computes a sequence of increasingly coarse match score maps, integrating information from progressively larger image neighbourhoods in order to remove local match ambiguities. In the second stage, the coarse information is propagated in the reverse direction, resolving ambiguities in the higher-resolution score maps. While the first stage was formulated as a CNN in [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid], the second stage was given as a recursive decoding algorithm. In Sect. 2, we show that this recursive algorithm is equivalent to dynamic programming and that it can be implemented instead by a sequence of new convolutional operators that reverse the ones in the first stage of DM.
The resulting CNN architecture (Fig. 2), which is numerically equivalent to the original DM, has a U-Net-like topology, as popularized in image segmentation [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox], and supports backpropagation. Combined with a structured-output loss (Sect. 2.2), this allows us to perform end-to-end learning of the DM parameters, improving its performance (Sect. 3). Our findings and further potential advantages of the approach are discussed in Sect. 4.
1.1 Related Work
The key reason for the success of CNNs in many computer vision applications is the ability to learn complex systems end-to-end instead of handcrafting individual components. A number of recent works have applied CNN-based systems to pixel-wise labelling problems such as stereo matching and optical flow. In particular, Fischer et al. [Fischer et al.(2015)Fischer, Dosovitskiy, Ilg, Häusser, Hazırbaş, Golkov, van der Smagt, Cremers, and Brox] have shown that it is possible to train a fully convolutional network for optical flow. Žbontar and LeCun [Žbontar and LeCun(2016)] trained a CNN for stereo matching by using a refined stereo matching cost. Zagoruyko and Komodakis [Zagoruyko and Komodakis(2015)] and Han et al. [Han et al.(2015)Han, Leung, Jia, Sukthankar, and Berg] have demonstrated learning local image descriptors with a CNN.

Optical flow estimation has been tackled mostly by variational approaches
[Mèmin and Pèrez(1998), Brox et al.(2004)Brox, Bruhn, Papenberg, and Weickert, Wedel et al.(2009)Wedel, Cremers, Pock, and Bischof] since the work of Horn and Schunck [Horn and Schunck(1981)]. Brox and Malik [Brox and Malik(2011)] developed a system that integrates descriptor matching with a variational approach. Recently, leading optical flow approaches such as Deep Matching [Weinzaepfel et al.(2013)Weinzaepfel, Revaud, Harchaoui, and Schmid, Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid] demonstrated a CNN-like system where feature information is aggregated from fine to coarse using sparse convolutions and max-pooling. However, this approach does not perform learning, and all parameters are hand-tuned. EpicFlow [Revaud et al.(2015a)Revaud, Weinzaepfel, Harchaoui, and Schmid] has focused on refining the sparse matches from DM using a variational method that incorporates edge information. Fischer et al. [Fischer et al.(2015)Fischer, Dosovitskiy, Ilg, Häusser, Hazırbaş, Golkov, van der Smagt, Cremers, and Brox] trained a fully convolutional network, FlowNet, for optical flow prediction on a large-scale synthetic Flying Chairs dataset. However, the results of FlowNet do not match the performance of DM on realistic datasets. This motivates us to reformulate DM [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid] as an end-to-end trainable neural network.

Beyond CNNs, many authors have applied machine learning techniques to matching and optical flow. Sun
et al. [Sun et al.(2008)Sun, Roth, Lewis, and Black] investigate the statistical properties of optical flow and learn regularizers using Gaussian scale mixtures, Rosenbaum et al. [Rosenbaum et al.(2013)Rosenbaum, Zoran, and Weiss] use Gaussian mixture models to model the statistics of optical flow, and Black et al. [Black et al.(1997)Black, Yacoob, Jepson, and Fleet] apply the idea of principal component analysis to optical flow. Kennedy and Taylor [Kennedy and Taylor(2015)] train classifiers to choose different inertial estimators for optical flow. Leordeanu et al. [Leordeanu et al.(2013)Leordeanu, Zanfir, and Sminchisescu] obtain occlusion probabilities by learning classifiers. Menze et al. [Menze et al.(2015)Menze, Heipke, and Geiger] formulate optical flow estimation as a discrete inference problem in a conditional random field, followed by sub-pixel refinement. In these works, tuning feature parameters is mostly done separately and manually. In contrast, our work aims to convert the whole quasi-dense matching pipeline into an end-to-end trainable CNN.

2 Method
Our key contribution is to show that the full DM pipeline can be formulated as a CNN with a U-Net-like topology (Fig. 2). The fine-to-coarse stage of DM was already given as a CNN in [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid]. Here, we complete the construction and show that the DM recursive decoding stage can: (1) be interpreted as dynamic programming and (2) be implemented by convolutional operators which reverse the ones used in the fine-to-coarse stage (Sect. 2.1). The architecture can be trained using backpropagation, for which we propose a structured-output loss (Sect. 2.2).
2.1 Fully-Trainable Deep Matching Architecture
In this section we formulate the complete DM algorithm as a CNN. Consider a reference image $I_1$ and a target image $I_2$. The goal is to estimate a correspondence field $f$ mapping points $p$ in the reference image to corresponding points $q = f(p)$ in the target image. The correspondence field is found as the maximizer

$$f(p) = \operatorname*{argmax}_{q} S(p, q) \qquad (1)$$

of a scoring function $S(p, q)$ that encodes the similarity of point $p$ in $I_1$ with point $q$ in $I_2$ (the score has of course an implicit dependency on $I_1$ and $I_2$). As proposed in DM, matches can be verified by testing whether they maximize the score also when going from the target image $I_2$ back to the reference image $I_1$, i.e. whether $p = \operatorname*{argmax}_{p'} S(p', q)$.
A simple way of defining the scoring function is to compare patch descriptors. Thus, let $\phi_I(p)$ be a visual descriptor of a patch centred at $p$ in image $I$; furthermore, assume that $\phi_I(p)$ is $L^2$-normalised. The score of the match $(p, q)$ can then be defined as the cosine similarity of the local descriptors, given by their inner product:

$$S_0(p, q) = \langle \phi_{I_1}(p), \phi_{I_2}(q) \rangle. \qquad (2)$$
A significant drawback of this scoring function is that it pools information only locally, from the compared patches. Therefore, unless all patches have a highly distinctive local appearance, many of the matches established by eq. (1) are likely to be incorrect.
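To make eqs. (1) and (2) concrete, the following NumPy sketch computes the local score map from dense per-pixel descriptors and decodes it by exhaustive maximization. Descriptor extraction itself is assumed given; the array shapes and function names are illustrative, not taken from the DM code.

```python
import numpy as np

def local_score_map(d1, d2):
    """Eq. (2): S0(p, q) = <phi(p), phi(q)> for L2-normalised descriptors.

    d1, d2: (H, W, C) arrays of per-pixel patch descriptors of the
    reference and target image. Returns S0 with shape (H, W, H, W),
    indexed as S0[p, q] with p = (i, j) and q = (k, l).
    """
    n1 = d1 / np.linalg.norm(d1, axis=-1, keepdims=True)
    n2 = d2 / np.linalg.norm(d2, axis=-1, keepdims=True)
    return np.einsum('ijc,klc->ijkl', n1, n2)

def decode_argmax(S):
    """Eq. (1): for every reference point p, return the q maximising S(p, q)."""
    H2, W2 = S.shape[2:]
    flat = S.reshape(S.shape[0], S.shape[1], -1).argmax(-1)
    return np.stack(np.unravel_index(flat, (H2, W2)), axis=-1)
```

Even on small images the 4D score map has on the order of $H^2 W^2$ entries, which is one motivation for the hierarchical scheme described next.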
Correcting these errors requires integrating global information in the score maps. In order to do so, DM builds a sequence of scoring functions $S_0, S_1, \dots, S_N$ which are increasingly coarse but incorporate information from increasingly larger image neighbourhoods (Fig. 1 top). Given these maps, the direct maximization of equation (1) is replaced by a recursive decoding process that extracts matches by analysing the maps in reverse order.
While the authors of [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid] already showed that the maps $S_l$ can be computed by convolutional operators, they did not formulate the decoding stage of DM as a network supporting end-to-end learning. Here we show that the recursive decoding process can be reformulated as the computation of additional score maps $R_l$ (Fig. 1 bottom) by reversing the convolutional operators used to compute the $S_l$. The two stages, fine to coarse and coarse to fine, are described in detail below.
Stage 1: Fine to coarse.
DM starts with the scoring function $S_0$, computed by comparing local patches as explained above, and builds the other scores by alternating two operations: max pooling and aggregation.
The max-pooling step pools scores with respect to the target location $q$ over a square window of side $\rho$ pixels, where $\rho$ is a parameter. This results in an intermediate scoring function $\tilde S_l$:

$$\tilde S_l(p, q) = \max_{u : \|u\|_\infty \leq \rho/2} S_l(p, q + u). \qquad (3)$$

In the following, the locations of the local maxima, also known as pooling switches, will be denoted as $m_l(p, q)$, where $m_l$ is defined such that $\tilde S_l(p, q) = S_l(p, m_l(p, q))$. Note that max pooling is exactly the same operator as commonly defined in convolutional neural networks. The resulting score $\tilde S_l(p, q)$ can be interpreted as the strength of the best match between $p$ in the reference image and all points within a distance $\rho/2$ from $q$ in the target image.
After max pooling, the scores are aggregated at the four corners of a square patch of side $\delta_l = 2^l \delta$ pixels:

$$S_{l+1}(p, q) = \frac{1}{4} \sum_{j=1}^{4} \tilde S_l\!\left(p + \tfrac{\delta_l}{2} e_j,\; q + \tfrac{\delta_l}{2} e_j\right)^{\lambda}, \qquad (4)$$

where $\lambda \geq 1$ is a parameter, and $e_1, \dots, e_4$ are the unit displacement vectors:

$$e_j \in \{(-1, -1),\, (-1, +1),\, (+1, -1),\, (+1, +1)\}.$$

The exponent $\lambda$ (set to 1.4 in DM) monotonically rescales the scores, emphasising larger ones. As detailed in [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid], the score $S_{l+1}(p, q)$ can be roughly interpreted as the likelihood that a deformable square patch of side $\delta_l$ centered at $p$ in the reference image $I_1$ matches an analogous deformable patch centered at $q$ in the target image $I_2$.
Eq. (4) can be rewritten as the convolution of $\tilde S_l$ with a particular 4D filter. Note that most neural network toolboxes are limited to 2+1D or 3+1D convolutions (with 2 or 3 spatial dimensions plus one spanning feature channels), whereas here there are four spatial dimensions (given by the join of $p$ and $q$) and one feature channel, i.e. the convolution is 4+1D. Hence, while implementing aggregation through convolution is more general, for the particular filter used in DM a direct implementation of (4) is much simpler.
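As a sketch of one fine-to-coarse step, the following NumPy fragment applies the max pooling of eq. (3) and the corner aggregation of eq. (4) directly on a 4D score array. For clarity it keeps full resolution (no subsampling) and wraps around at borders, so it illustrates the operators rather than reproducing the DM implementation.

```python
import numpy as np

def fine_to_coarse_level(S, rho=3, delta=1, lam=1.4):
    """One fine-to-coarse DM step on a 4D score array S[p, q] (eqs. 3-4).

    Eq. (3): max-pool over the target coordinate q in a rho x rho window.
    Eq. (4): average the rectified, exponent-rescaled scores of the four
    corner displacements, shifted jointly in p and q by delta pixels.
    """
    # Eq. (3): pad the q-plane with -inf so border windows are well defined.
    r = rho // 2
    padded = np.pad(S, ((0, 0), (0, 0), (r, r), (r, r)),
                    constant_values=-np.inf)
    windows = np.lib.stride_tricks.sliding_window_view(
        padded, (rho, rho), axis=(2, 3))
    pooled = windows.max(axis=(-2, -1))
    # Eq. (4): gather the four corners e_j, jointly shifted in p and q.
    # Scores are rectified before the power lam; np.roll wraps around at
    # the borders, which a faithful implementation would mask out.
    corners = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    agg = np.zeros_like(S)
    for a, b in corners:
        shifted = np.roll(pooled,
                          shift=(-a * delta, -b * delta, -a * delta, -b * delta),
                          axis=(0, 1, 2, 3))
        agg += np.clip(shifted, 0.0, None) ** lam
    return agg / 4.0
```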
Stage 2: Coarse to fine.
In the original DM, the scores are decoded by a recursive algorithm to obtain the final correspondence field. Here, we give an equivalent algorithm that uses only layer-wise convolutional operators, with the major advantage of turning DM into an end-to-end learnable convolutional network. Another significant advantage is that the final product is a full, refined score map assigning a confidence to all possible matches, rather than finding only the best ones.
Since the last operation in the first stage was to apply aggregation to $\tilde S_{N-1}$ to obtain $S_N$, the first operation in the reverse order is disaggregation. Setting $R_N = S_N$, in general $R_{l+1}$ is disaggregated to obtain $\tilde R_l$ as follows:

$$\tilde R_l(p, q) = \max_{j = 1, \dots, 4} R_{l+1}\!\left(p - \tfrac{\delta_l}{2} e_j,\; q - \tfrac{\delta_l}{2} e_j\right). \qquad (5)$$
Disaggregation is similar to deconvolution [Zeiler et al.(2010)Zeiler, Krishnan, Taylor, and Fergus, Long et al.(2015)Long, Shelhamer, and Darrell, Noh et al.(2015)Noh, Hong, and Han, Ronneberger et al.(2015)Ronneberger, Fischer, and Brox] or convolution transpose [Vedaldi and Lenc(2015)] as it reverses a linear filtering operation. However, a key difference is that overlapping contributions are maxed out rather than summed.
Next, $R_l$ is obtained by unpooling $\tilde R_l$ and adding the result to $S_l$:

$$R_l(p, q) = S_l(p, q) + \max\{\, \tilde R_l(p, q') : m_l(p, q') = q \,\}, \qquad (6)$$

where the maximum over an empty set is defined to be $-\infty$.
Unpooling is also found in architectures such as deconvnets; here, however, 1) the result is infilled with $-\infty$ rather than zeros and 2) overlapping unpooled values are maxed out rather than summed. The result of unpooling is summed to $S_l$ to mix coarse- and fine-grained information.
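The two reverse operators can be sketched in the same 4D-array setting: disaggregation maxes out overlapping contributions (eq. (5), exponent rescaling omitted for clarity), and unpooling scatters coarse scores back onto the pooling switches with a $-\infty$ infill (eq. (6)). This is an illustrative reimplementation, not the authors' code.

```python
import numpy as np

def disaggregate(R_next, delta=1):
    """Eq. (5): propagate a coarse score map back through aggregation.

    Each of the four corner displacements contributed to the coarse score,
    so the coarse value is pushed back along every corner, and overlapping
    contributions are maxed out rather than summed. np.roll wraps at borders.
    """
    corners = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    out = np.full_like(R_next, -np.inf)
    for a, b in corners:
        shifted = np.roll(R_next,
                          shift=(a * delta, b * delta, a * delta, b * delta),
                          axis=(0, 1, 2, 3))
        out = np.maximum(out, shifted)
    return out

def unpool_and_add(R_tilde, S, rho=3):
    """Eq. (6): unpool R_tilde onto the pooling switches of S, then add S.

    Positions that are the argmax of no pooling window keep the -inf
    infill, as described in the text; overlapping values are maxed out.
    """
    H1, W1, H2, W2 = S.shape
    out = np.full_like(S, -np.inf)
    r = rho // 2
    pi, pj = np.meshgrid(np.arange(H1), np.arange(W1), indexing='ij')
    for i in range(H2):
        for j in range(W2):
            i0, j0 = max(0, i - r), max(0, j - r)
            win = S[:, :, i0:i + r + 1, j0:j + r + 1]
            flat = win.reshape(H1, W1, -1).argmax(-1)
            di, dj = np.unravel_index(flat, win.shape[2:])
            # scatter-max the coarse value onto each switch location
            np.maximum.at(out, (pi, pj, di + i0, dj + j0), R_tilde[:, :, i, j])
    return out + S
```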
Next, we discuss the equivalence of these operations to the original DM decoding algorithm. In the fine-to-coarse stage, through pooling and aggregation, the score $S_0(p, q)$ of a putative match contributes to the formation of the coarser scores along certain restricted paths. DM associates to the match $(p, q)$ the sum of the scores along the best of such paths. DM uses recursion and memoization to compute this maximum efficiently; the disaggregation and unpooling steps given above implement a dynamic-programming equivalent of this recursive algorithm. This is easily proved; empirically, the two implementations were found to be numerically equivalent, as expected.
2.2 Training and loss functions
Training DM requires defining a suitable loss function for the computed scoring function $R_0$. One possibility is to minimise the distance between $R_0$ and a smoothed indicator function of the ground-truth correspondence field $\hat q(p)$. While a similar loss is often used to learn keypoint detectors with neural networks [Long et al.(2014)Long, Zhang, and Darrell, Han et al.(2015)Han, Leung, Jia, Sukthankar, and Berg], it has two drawbacks: first, it requires scores to attain prespecified values when only relative values are relevant and, second, the loss must be carefully rebalanced since $q \neq \hat q(p)$ for the vast majority of pairs $(p, q)$.
In order to avoid these issues, we propose to use instead the following structured-output loss:

$$\mathcal{L}(R_0) = \sum_{p} \sum_{q \neq \hat q(p)} \max\{0,\; M(q, \hat q(p)) + R_0(p, q) - R_0(p, \hat q(p))\}.$$

Here, the term $M(q, \hat q(p))$ defines a variable margin for the hinge loss, small when $q$ is close to the ground-truth match $\hat q(p)$ and close to 1 otherwise. This loss looks only at relative scores; in fact, it requires the correct matches to have a larger score than the incorrect ones. Furthermore, it is automatically balanced, as each term in the summation compares the score of a correct match with that of an incorrect one.
Note that DM defines a whole hierarchy of score maps $(R_N, \dots, R_0)$, and a loss can be applied to each level of the hierarchy. In general, we expect application at the last level to be the most important, as this reflects the final output of the algorithm, but combinations are possible. Given $n$ training image pairs $(I_1^i, I_2^i)$, and denoting with $w$ the parameters of DM, learning reduces to optimizing the objective function

$$E(w) = \frac{1}{n} \sum_{i=1}^{n} \mathcal{L}\big(R_0(\cdot\,;\, I_1^i, I_2^i, w)\big),$$

where $\mathcal{L}$ is the structured-output loss above.
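A direct, loop-based sketch of the structured-output loss follows. The exact margin profile $M$ is an assumption here (Chebyshev distance to the ground truth, clipped at 1), chosen only to be small near the correct match and close to 1 far from it.

```python
import numpy as np

def structured_loss(S, gt, margin_radius=8):
    """Hinge loss with a variable margin on a 4D score map S[p, q].

    gt: (H1, W1, 2) array with the ground-truth target coordinates for
    every reference point p. For each p, every other target point q must
    score below S(p, gt(p)) by a margin M that grows with the distance
    from the ground truth (the margin profile is illustrative).
    """
    H1, W1, H2, W2 = S.shape
    qi, qj = np.meshgrid(np.arange(H2), np.arange(W2), indexing='ij')
    loss = 0.0
    for p in np.ndindex(H1, W1):
        gi, gj = gt[p]
        dist = np.maximum(np.abs(qi - gi), np.abs(qj - gj))
        M = np.minimum(dist / margin_radius, 1.0)   # variable margin
        hinge = np.maximum(0.0, M + S[p] - S[p][gi, gj])
        hinge[gi, gj] = 0.0                          # exclude q = gt(p)
        loss += hinge.sum()
    return loss / (H1 * W1)
```

When the correct match scores above every incorrect one by the full margin, each hinge term vanishes and the loss is exactly zero, matching the balancing argument above.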
We follow the standard approach of optimizing the objective using (stochastic) gradient descent
[LeCun et al.(1998)LeCun, Bottou, Bengio, and Haffner]. This requires computing the derivatives of the loss and of the DM function with respect to the parameters, which can be done using backpropagation. Note that, while the derivations are omitted, all layers in the DM architecture are amenable to backpropagation in the usual way.

2.3 Discretization
So far, the variables $p$ and $q$ have been treated as continuous. However, in a practical implementation they are discretized. By choosing the discretization scheme carefully, the implementation can be made simpler and more efficient. We describe such a scheme here.
For efficiency, DM doubles at each layer the sampling stride of the variable $q$ and restricts the match to be within a given maximum distance $R$ of $p$. Hence, $q$ is sampled as:

$$q = p + \eta_l v, \qquad v \in \mathbb{Z}^2, \quad \|\eta_l v\|_\infty \leq R,$$

where $v$ is a discrete index, $\eta_l = 2^l \eta_0$ is the sampling stride (in pixels) at level $l$, and $R$ is the maximum distance to $p$ at level 0; the number of samples per dimension is thus roughly halved with each layer. In this expression, and in the rest of the section, summing a scalar to a vector means adding it to all its components.
For efficiency, DM is usually restricted to a quasi-dense grid of points in the reference image, given by:

$$p = \sigma u + o_l, \qquad u \in \mathbb{Z}^2, \qquad o_l = o + \frac{\delta}{2}\,[l = 0].$$

The parameters $\sigma$ and $o$ are the stride and offset of the patch descriptors extracted from the reference image and they remain constant at all layers; however, there is an additional variable offset $o_l$ to compensate for the effect of discretization in aggregation, as explained below. Here, the symbol $[C]$ is one if the condition $C$ is satisfied and zero otherwise.

From these definitions, the discretized score maps, denoted with a bar, are given by $\bar S_l(u, v) = S_l(\sigma u + o_l,\; \sigma u + o_l + \eta_l v)$, and similarly for $\bar{\tilde S}_l$.
Simplifications arise by assuming that $\eta_l$ divides exactly the pooling window size $\rho$, that $\sigma$ divides $\delta$, and that $\eta_0$ divides $\delta$. Under these assumptions, $\bar{\tilde S}_l$ is obtained from $\bar S_l$ by applying the standard CNN max-pooling operator with a suitable window size and padding. Note in particular that the padding is the same at all layers and usually amounts to zero or one pixels. The discretized aggregation operator is also simple and given by:

$$\bar S_{l+1}(u, v) = \frac{1}{4} \sum_{j=1}^{4} \bar{\tilde S}_l\!\left(u + \frac{\delta_l}{2\sigma}\, e_j,\; v\right)^{\lambda}.$$

Note that, since $q$ is expressed relative to $p$, aggregation reduces to averaging selected slices of the discretized score maps (i.e. there is no shift applied to the index $v$). Note also that for $l \geq 1$, given that $\sigma$ divides $\delta$, the increment $\frac{\delta_l}{2\sigma} = \frac{2^{l-1}\delta}{\sigma}$ applied to the index $u$ is integer as required. For $l = 0$ and $\delta = \sigma$ (as is usually the case), the shift is fractional. In this case, however, the additional offset $o_0$ restores integer coordinates as needed.
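The effect of stride doubling on the sampling of the target coordinate can be sketched as follows; the parameter names (a base stride `eta0` and a search `radius`) and their defaults are illustrative, not DM's actual values.

```python
def level_grid(level, eta0=1, radius=16):
    """Relative target offsets sampled at one pyramid level.

    The stride doubles with each level while the pixel search radius stays
    fixed, so the number of samples per dimension roughly halves.
    """
    stride = eta0 * 2 ** level                    # sampling stride in pixels
    offsets = list(range(-radius, radius + 1, stride))
    return stride, offsets
```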
3 Experiments
The primary goal of this section is to demonstrate the benefit of learning the DM parameters using backpropagation compared to hand-tuning. There are several implementations of DM available online; we base ours on the GPU-based version by the original authors [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid] (http://lear.inrialpes.fr/src/deepmatching/), except for the decoding stage, for which we use their CPU version with memoization removed. We do so because this eliminates a few small approximations found in the original code. This version is the closest, and in fact numerically equivalent, to our implementation using MatConvNet [Vedaldi and Lenc(2015)] and our new convolutional operators.
Datasets.
The MPI Sintel [Butler et al.(2012)Butler, Wulff, Stanley, and Black] dataset contains 1,041 image pairs and correspondence fields obtained from synthetic data (computer graphics). Scenes are carefully engineered to contain challenging conditions. There are two versions: clean and final (with effects such as motion blur and fog). We consider a subset of the Sintel clean training set to evaluate our methodology. This is dubbed Sintel Mini, and consists of 7 sequences (313 images) for training and every 10th frame from a different set of 5 sequences (25 images) for validation. The Flying Chairs dataset by Fischer et al. [Fischer et al.(2015)Fischer, Dosovitskiy, Ilg, Häusser, Hazırbaş, Golkov, van der Smagt, Cremers, and Brox] contains synthetically-generated data like Sintel, but with abstract scenes consisting of “flying chairs”. It consists of 22,232/640 train/val image pairs, respectively, and corresponding flow fields. These images are generated by rendering 3D chair models in front of random background images from Flickr, while the motions of both the chairs and the background are purely planar. The KITTI flow 2012 [Geiger et al.(2012)Geiger, Lenz, and Urtasun, Menze and Geiger(2015)] dataset contains 194/195 training/testing image pairs and correspondence fields for road scenes. The data contains large baselines but only motions arising from driving a car. Ground-truth correspondences are obtained using a 3D laser scanner and hence are not available at all pixels. Furthermore, the flow is improved by fitting 3D CAD models to observed vehicles on the road and using those to compute displacements.
Evaluation metrics.
In order to measure matching accuracy, we adopt the accuracy@T metric of Revaud et al. [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid]. Given the ground-truth and estimated dense correspondence fields from image $I_1$ to image $I_2$, accuracy@T is the fraction of pixels in $I_1$ correctly matched up to an error of $T$ pixels. Following [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid], the quasi-dense DM matches are first filtered by reciprocal verification; correspondences are then propagated to all pixels by assigning to each point the same displacement vector of the most confident available neighbor within a radius of 8 pixels. In addition to accuracy@T, we also consider the end-point error (EPE), obtained as the average correspondence error over pixels. In all cases, scores are averaged over all image pairs to yield the final result for a given dataset. If ground-truth correspondences are available only at a subset of image locations, the evaluation is restricted to this set in the definitions above. For the KITTI dataset, we report in particular results restricted to non-occluded areas (NOC) and all areas (OCC).
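Both metrics can be computed directly from dense flow fields; the following sketch also accepts a validity mask for datasets such as KITTI where ground truth is sparse.

```python
import numpy as np

def accuracy_at_T(flow_est, flow_gt, T, mask=None):
    """Fraction of (valid) pixels whose correspondence error is <= T pixels."""
    err = np.linalg.norm(flow_est - flow_gt, axis=-1)
    if mask is None:
        mask = np.ones(err.shape, dtype=bool)
    return (err[mask] <= T).mean()

def epe(flow_est, flow_gt, mask=None):
    """End-point error: mean Euclidean correspondence error over valid pixels."""
    err = np.linalg.norm(flow_est - flow_gt, axis=-1)
    if mask is None:
        mask = np.ones(err.shape, dtype=bool)
    return err[mask].mean()
```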
Implementation details.
For DM, unless otherwise stated, we use the standard settings for the number of layers and for the pooling, aggregation, and exponent parameters of Sect. 2. Training uses an NVIDIA Titan X GPU with 12 GB of onboard memory. Training uses stochastic gradient descent with momentum, with mini-batches comprising one image pair at a time (note that an image pair can be seen as the equivalent of a very large batch of image patches).
3.1 Results

Table 1: Variants of end-to-end DM training, evaluated on the Sintel Mini validation set.

|     | Patch descr. | Training set  | Learned: expon. | Learned: features | Acc@2  | Acc@5  | Acc@10 | EPE (matches) | EPE (flow) |
|-----|--------------|---------------|-----------------|-------------------|--------|--------|--------|---------------|------------|
| (a) | HOG          | —             |                 |                   | 84.52% | 91.89% | 94.36% | 3.83          | 1.88       |
| (b) | HOG          | Sintel Mini   | ✓               |                   | 84.59% | 92.03% | 94.49% | 3.73          | 1.84       |
| (c) | CNN          | —             |                 |                   | 85.28% | 92.25% | 94.83% | 3.58          | 1.80       |
| (d) | CNN          | Sintel Mini   | ✓               |                   | 85.30% | 92.27% | 94.87% | 3.70          | 1.64       |
| (e) | CNN          | Sintel Mini   |                 | ✓                 | 86.81% | 92.52% | 94.86% | 3.37          | 1.60       |
| (f) | CNN          | Sintel Mini   | ✓               | ✓                 | 86.79% | 92.58% | 94.90% | 3.34          | 1.57       |
| (g) | CNN          | Flying Chairs | ✓               | ✓                 | 86.11% | 92.47% | 94.88% | 3.33          | 1.65       |

Table 2: Comparison with FlowNet and the original DM on the KITTI 2012 and Sintel training sets.

| Method | Training | Test | Acc@2 | Acc@5 | Acc@10 | EPE (matches) | EPE (flow) | Err-OCC (flow 3px) |
|--------|----------|------|-------|-------|--------|---------------|------------|--------------------|
| FlowNet S+v [Fischer et al.(2015)Fischer, Dosovitskiy, Ilg, Häusser, Hazırbaş, Golkov, van der Smagt, Cremers, and Brox] | Flying Chairs | KITTI12 | | | | | 6.50 | |
| DM-HOG | — | KITTI12 | 60.50% | 79.34% | 84.27% | 11.39 | 3.59 | 16.56% |
| DM-CNN | — | KITTI12 | 61.21% | 78.81% | 84.01% | 12.29 | 4.11 | 17.78% |
| DM-CNN | Flying Chairs | KITTI12 | 63.90% | 80.11% | 84.71% | 11.12 | 3.61 | 16.41% |
| FlowNet S+v [Fischer et al.(2015)Fischer, Dosovitskiy, Ilg, Häusser, Hazırbaş, Golkov, van der Smagt, Cremers, and Brox] | Flying Chairs | Sintel Final | | | | | 4.76 | |
| DM [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid] | — | Sintel Final | | | 89.2% | | 4.10 | |
| DM-HOG | — | Sintel Final | 74.37% | 85.26% | 89.39% | 7.08 | 3.72 | 11.44% |
| DM-CNN | — | Sintel Final | 75.15% | 85.42% | 89.48% | 7.03 | 3.63 | 11.52% |
| DM-CNN | Flying Chairs | Sintel Final | 76.55% | 86.22% | 90.03% | 6.77 | 3.50 | 11.10% |
| FlowNet C+v [Fischer et al.(2015)Fischer, Dosovitskiy, Ilg, Häusser, Hazırbaş, Golkov, van der Smagt, Cremers, and Brox] | Flying Chairs | Sintel Clean | | | | | 3.57 | |
| DM-HOG | — | Sintel Clean | 82.51% | 90.18% | 92.70% | 5.26 | 2.32 | 7.00% |
| DM-CNN | — | Sintel Clean | 83.03% | 90.24% | 92.87% | 5.22 | 2.25 | 6.85% |
| DM-CNN | Flying Chairs | Sintel Clean | 84.16% | 90.85% | 93.31% | 4.78 | 2.14 | 6.51% |

End-to-end DM training.
In our first experiment (Table 1) we evaluate several variants of DM training. To do so, we consider the smaller and hence more efficient Sintel Mini dataset, a subset of Sintel described above. In Table 1 (a) vs (b) we compare using the default value of the exponent $\lambda$, which modulates the output of the aggregation layers, against learning a value specific to each layer. Even with this simple change there is a noticeable improvement (+0.13% acc@10). Next, we replace the HOG features with a trainable CNN architecture to extract descriptors from image patches. We use the first four convolutional layers (conv1_1, conv1_2, conv2_1, conv2_2) of the pretrained VGG-VD network [Simonyan and Zisserman(2015)]. Just by replacing the features, we notice a further improvement of DM ((a) vs (c), +0.47% acc@10), which can be increased by learning the DM exponents (d). Most interestingly, in (f) we obtain a further improvement by backpropagating from DM to the feature extraction layers and optimizing the features themselves (hence achieving end-to-end training from the raw pixels to the matching result). The last experiment (g) shows that similar improvements can also be obtained by training on a completely unrelated dataset, namely Flying Chairs, indicating that learning generalizes well.
Standard benchmark comparisons.
To test DM training in realistic scenarios, we evaluate performance on two standard benchmarks, namely the Sintel and KITTI 2012 training sets (Table 2), as these have publicly available ground truth to compute accuracy. For training, we use Flying Chairs, which is designed to be statistically similar to the Sintel target dataset. Compared to the DM-HOG baseline, training the CNN patch descriptors in DM improves accuracy@10 by +0.44% on KITTI and by +0.64% on Sintel Final.
An application of DM is optical flow, where it is usually followed by interpolation and refinement, such as Brox and Malik [Brox and Malik(2011)] or EpicFlow [Revaud et al.(2015a)Revaud, Weinzaepfel, Harchaoui, and Schmid]. We use EpicFlow to interpolate our quasi-dense matches and compare with the EPE results of FlowNet [Fischer et al.(2015)Fischer, Dosovitskiy, Ilg, Häusser, Hazırbaş, Golkov, van der Smagt, Cremers, and Brox]. While there are better methods than FlowNet for optical flow estimation, we choose it for comparison as it was proposed as a fully-trainable CNN for dense image matching; we compare to their results using variational refinement (+v), which is similar to EpicFlow interpolation. We train our method on Flying Chairs to allow a direct comparison with the results reported in [Fischer et al.(2015)Fischer, Dosovitskiy, Ilg, Häusser, Hazırbaş, Golkov, van der Smagt, Cremers, and Brox].
Compared to the pretrained CNN, training further on Flying Chairs gives a notable improvement in EPE, decreasing from 3.63 to 3.50 for Sintel Final and from 4.11 to 3.61 for KITTI. Compared to HOG, the improvement is even greater for Sintel Final, a gap of 0.22 pixels; however, for KITTI the CNN is initially worse than HOG. Training on synthetic data improves most metrics on KITTI, with the exception of EPE (flow). We believe the latter result to be due to the fact that the EpicFlow refinement step, which is not trained, is not optimally tuned to the different statistics of the improved quasi-dense matches. The refinement step is in fact known to be sensitive to the data statistics (for example, in [Revaud et al.(2015a)Revaud, Weinzaepfel, Harchaoui, and Schmid] different tunings are used for different datasets). If we exclude occlusions in the ground truth for KITTI, our trained CNN achieves an EPE-NOC of 1.43 compared to 1.51 for HOG, and Err-NOC falls from 7.84% to 7.41%.
FlowNet EPEs on the KITTI12 and Sintel Final training sets are 6.50 and 4.76 respectively, whereas our trained DM-CNN model achieves EPEs of 3.61 and 3.50 respectively. This confirms the benefit of the DM architecture, which we turn into a CNN in this paper.
4 Summary
In this paper, we have shown that the complete DM algorithm can be equivalently rewritten as a CNN with a U-Net-like topology, involving a number of new CNN layers. This allows the parameters of DM to be learned end-to-end using backpropagation, including the CNN filters that extract the patch descriptors, robustly improving the quality of the correspondences extracted on a number of different datasets.
Once formulated as a modular CNN, components of DM can easily be replaced with new ones. For instance, the max-pooling and unpooling units could be substituted with soft versions, resulting in denser score maps, which could allow easier training and better express the confidence of dense matches. We are currently exploring a number of such extensions.
For the problem of optical flow estimation, EpicFlow is still required as a post-processing step. This type of two-stage approach results in a suboptimal solution; in particular, the parameters of EpicFlow are not optimized by end-to-end training with our DM. We would like to explore a solution that allows end-to-end optical flow estimation.
Acknowledgements.
This work was supported by the AIMS CDT (EPSRC EP/L015897/1) and grants EPSRC EP/N019474/1, EPSRC EP/I001107/2, ERC 321162HELIOS, and ERC 677195IDIU. We gratefully acknowledge GPU donations from NVIDIA.
References
 [Black et al.(1997)Black, Yacoob, Jepson, and Fleet] Michael J. Black, Yaser Yacoob, Allan D. Jepson, and David J. Fleet. Learning parameterized models of image motion. In IEEE CVPR, 1997.
 [Brox and Malik(2011)] Thomas Brox and Jitendra Malik. Large displacement optical flow: Descriptor matching in variational motion estimation. IEEE TPAMI, 33(3):500–513, 2011.
 [Brox et al.(2004)Brox, Bruhn, Papenberg, and Weickert] Thomas Brox, Andres Bruhn, Nils Papenberg, and Joachim Weickert. High accuracy optical flow estimation based on a theory for warping. In ECCV, 2004.
 [Butler et al.(2012)Butler, Wulff, Stanley, and Black] Daniel J. Butler, Jonas Wulff, Garrett B. Stanley, and Michael J. Black. A naturalistic open source movie for optical flow evaluation. In ECCV, 2012.
 [Fischer et al.(2015)Fischer, Dosovitskiy, Ilg, Häusser, Hazırbaş, Golkov, van der Smagt, Cremers, and Brox] Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In IEEE ICCV, 2015.
 [Geiger et al.(2012)Geiger, Lenz, and Urtasun] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In IEEE CVPR, 2012.
 [Han et al.(2015)Han, Leung, Jia, Sukthankar, and Berg] Xufeng Han, Thomas Leung, Yangqing Jia, Rahul Sukthankar, and Alexander C Berg. Matchnet: Unifying feature and metric learning for patchbased matching. In IEEE CVPR, 2015.
 [Horn and Schunck(1981)] Berthold K. P. Horn and Brian G. Schunck. Determining optical flow. Artificial Intelligence, 17(3):185–203, 1981.
 [Kennedy and Taylor(2015)] Ryan Kennedy and Camillo J. Taylor. Optical flow with geometric occlusion estimation and fusion of multiple frames. In EMMCVPR, 2015.
 [LeCun et al.(1998)LeCun, Bottou, Bengio, and Haffner] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradientbased learning applied to document recognition. In Proceedings of the IEEE, 1998.
 [Leordeanu et al.(2013)Leordeanu, Zanfir, and Sminchisescu] Marius Leordeanu, Andrei Zanfir, and Cristian Sminchisescu. Locally affine sparsetodense matching for motion and occlusion estimation. In IEEE ICCV, 2013.
 [Long et al.(2015)Long, Shelhamer, and Darrell] Jon Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In IEEE CVPR, 2015.
 [Long et al.(2014)Long, Zhang, and Darrell] Jonathan Long, Ning Zhang, and Trevor Darrell. Do convnets learn correspondence? In NIPS, 2014.
 [Mèmin and Pèrez(1998)] Etienne Mèmin and Patrick Pèrez. Dense estimation and objectbased segmentation of the optical flow with robust techniques. IEEE TIP, 7(5):703–719, 1998.
 [Menze and Geiger(2015)] Moritz Menze and Andreas Geiger. Object scene flow for autonomous vehicles. In IEEE CVPR, 2015.
 [Menze et al.(2015)Menze, Heipke, and Geiger] Moritz Menze, Christian Heipke, and Andreas Geiger. Discrete optimization for optical flow. In GCPR, 2015.
 [Noh et al.(2015)Noh, Hong, and Han] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. Learning deconvolution network for semantic segmentation. In IEEE ICCV, 2015.
 [Revaud et al.(2015a)Revaud, Weinzaepfel, Harchaoui, and Schmid] Jérôme Revaud, Philippe Weinzaepfel, Zaïd Harchaoui, and Cordelia Schmid. Epicflow: Edgepreserving interpolation of correspondences for optical flow. In IEEE CVPR, 2015a.
 [Revaud et al.(2015b)Revaud, Weinzaepfel, Harchaoui, and Schmid] Jérôme Revaud, Philippe Weinzaepfel, Zaïd Harchaoui, and Cordelia Schmid. Deepmatching: Hierarchical deformable dense matching. IJCV, 2015b.
 [Ronneberger et al.(2015)Ronneberger, Fischer, and Brox] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. UNet: Convolutional networks for biomedical image segmentation. In MICCAI, 2015.
 [Rosenbaum et al.(2013)Rosenbaum, Zoran, and Weiss] Dan Rosenbaum, Daniel Zoran, and Yair Weiss. Learning the local statistics of optical flow. In NIPS, 2013.
 [Simonyan and Zisserman(2015)] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for largescale image recognition. In ICLR, 2015.
 [Sun et al.(2008)Sun, Roth, Lewis, and Black] Deqing Sun, Stefan Roth, J.P. Lewis, and Michael J. Black. Learning optical flow. In ECCV, 2008.
 [Vedaldi and Lenc(2015)] Andrea Vedaldi and Karel Lenc. MatConvNet: Convolutional neural networks for MATLAB. In ACM MM, 2015.
 [Wedel et al.(2009)Wedel, Cremers, Pock, and Bischof] Andreas Wedel, Daniel Cremers, Thomas Pock, and Horst Bischof. Structured motionadaptive regularization for high accuracy optical flow. In IEEE ICCV, 2009.
 [Weinzaepfel et al.(2013)Weinzaepfel, Revaud, Harchaoui, and Schmid] Philippe Weinzaepfel, Jérôme Revaud, Zaïd Harchaoui, and Cordelia Schmid. Deepflow: Large displacement optical flow with deep matching. In IEEE ICCV, 2013.
 [Zagoruyko and Komodakis(2015)] Sergey Zagoruyko and Nikos Komodakis. Learning to compare image patches via convolutional neural networks. In IEEE CVPR, 2015.
 [Žbontar and LeCun(2016)] Jure Žbontar and Yann LeCun. Stereo matching by training a convolutional neural network to compare image patches. The Journal of Machine Learning Research, 17(65):1–32, 2016.
 [Zeiler et al.(2010)Zeiler, Krishnan, Taylor, and Fergus] Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In IEEE CVPR, 2010.