1 Introduction
Optical flow can be seen as an instance of the dense image matching problem, where the goal is to find for each pixel its corresponding match in the other image. One fundamental question in the dense matching problem is how to choose good descriptors or features. Learning with convolutional neural networks (CNNs) has recently shown excellent results for learning task-specific image features, outperforming previous methods based on hand-crafted descriptors. One of the major difficulties in learning features for optical flow is the high dimensionality of the cost function: whereas in stereo the full cost function can be represented as a 3D volume, the matching cost in optical flow is a 4D volume. Especially at high image resolutions, operations on the flow matching cost are expensive both in terms of memory requirements and computation time.
Our method avoids explicit storage of the full cost volume, both in the learning phase and during inference. This is achieved by a splitting (or min-projection) of the 4D cost into two quasi-independent 3D volumes, corresponding to the horizontal and vertical components $u$ and $v$ of the flow. We then formulate CNN learning and CRF inference in this reduced setting. This achieves a space complexity linear in the size of the search range, similar to recent stereo methods, which is a significant reduction compared to the quadratic complexity of the full 4D cost function.
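The min-projection splitting can be sketched in a few lines of NumPy (toy sizes of our own choosing; the layout `cost[y, x, u, v]` is an assumption purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4D matching cost: H x W pixels, D x D search window (hypothetical sizes).
H, W, D = 4, 5, 8
cost = rng.random((H, W, D, D))  # cost[y, x, u, v]

# Min-projection: collapse one flow component, giving two 3D volumes.
cost_u = cost.min(axis=3)  # best v for each (pixel, u)
cost_v = cost.min(axis=2)  # best u for each (pixel, v)

# The per-pixel minimizer of the 4D cost is recovered from the two
# 3D volumes (assuming the minimizer is unique).
u, v = cost_u.argmin(axis=2), cost_v.argmin(axis=2)

# Reference: joint argmin over the full 4D cost.
flat = cost.reshape(H, W, D * D).argmin(axis=2)
u_ref, v_ref = np.divmod(flat, D)
assert (u == u_ref).all() and (v == v_ref).all()

# Storage: the two 3D volumes are linear in D, the 4D volume quadratic.
assert cost.size == (cost_u.size + cost_v.size) * D // 2
```

The last assertion reflects the complexity argument: the two reduced volumes together are a factor of $D/2$ smaller than the full 4D volume.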
Nevertheless, we still have to compute all entries of the 4D cost. This computational bottleneck can be alleviated by using binary descriptors, which give a considerable theoretical speedup. In practice, even larger speedup factors are attained, since binary descriptors need less memory bandwidth and also yield better cache efficiency. Consequently, we aim to incorporate a binarization step into the learning. We propose a novel hybrid learning scheme, with which we circumvent the problem of hard nonlinearities having zero gradient. We show that our hybrid learning performs almost as well as a network without hard nonlinearities, and much better than the previous state of the art in learning binary CNNs.
2 Related Work
In the past, hand-crafted descriptors like SIFT, NCC, FAST etc. have been used extensively with very good results, but recently CNN-based approaches [23, 13] marked a paradigm shift in the field of image matching. To date, all top-performing methods in the major stereo benchmarks rely heavily on features learned by CNNs. For optical flow, many recent works still use engineered features [5, 1], presumably due to the difficulties the high-dimensional optical flow cost function poses for learning. Only very recently do we see a shift towards CNNs for learning descriptors [9, 10, 22]. Our work is most related to [22], who construct the full 4D cost volume and run an adapted version of SGM on it. They perform learning and cost volume optimization at a fraction of the original resolution and compress the cost function in order to cope with the high memory consumption. Our method is memory-efficient thanks to the dimensionality reduction by the min-projection, and we significantly outperform the reported runtime of [22].
Full flow with CRF [5] is a related inference method using TRW-S [12] with an efficient distance transform [8]. Its iterations have quadratic time and space complexity. In practice, this takes about 20 GB of memory (estimated from the cost-volume size based on numbers in [5], corresponding to the resolution of Sintel images), and 10–30 sec. per iteration with a parallel CPU implementation. We use the decomposed model [19] with a better memory complexity and a faster parallel inference scheme based on [18].
Hand-crafted binary descriptors like Census have been shown to work well in a number of applications, including image matching for stereo and flow [14, 15, 20, 4]. However, direct learning of binary descriptors is a difficult task, since the hard thresholding function $\operatorname{sign}(\cdot)$ has gradient zero almost everywhere. In the context of binary CNNs there are several approaches to train networks with binary activations [2] and even binary weights [7, 16]. This is known to give a considerable compression and speedup at the price of a tolerable loss of accuracy. To circumvent the problem of having zero gradient a.e., surrogate gradients are used. The simplest method, called the straight-through estimator [2], is to assume the derivative of $\operatorname{sign}(\cdot)$ is 1, i.e., to simply omit the function in the gradient computation. This approach can be considered the state of the art, as it gives the best results in [2, 7, 16]. We show that in the context of learning binary descriptors for the purpose of matching, alternative strategies are possible which give better results.
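A minimal sketch of the straight-through estimator described above (plain NumPy standing in for a real autograd framework):

```python
import numpy as np

def sign_ste_forward(x):
    # Hard quantization used in the forward pass.
    return np.where(x >= 0, 1.0, -1.0)

def sign_ste_backward(x, grad_out):
    # Straight-through estimator: pretend d sign(x)/dx = 1,
    # i.e. pass the incoming gradient through unchanged.
    return grad_out

x = np.array([-0.5, 0.2, 1.5])
y = sign_ste_forward(x)
g = sign_ste_backward(x, np.array([0.1, -0.3, 0.7]))

assert (y == np.array([-1.0, 1.0, 1.0])).all()
assert (g == np.array([0.1, -0.3, 0.7])).all()
```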
3 Method
We define two models for optical flow: a local model, known as Winner-Takes-All (WTA), and a joint model, which uses CRF inference. Both models use CNN descriptors, learned in § 3.1.1. The joint model has only a few extra parameters that are fit separately, and the inference is solved with a parallel method, see § 3.2. For CNN learning, we optimize the performance of the local model. While learning by optimizing the performance of the joint model is possible [11], the resulting procedures are significantly more difficult.
We assume color images $I^1$ and $I^2$, defined on a set of pixels $\Omega$. Let $\mathcal{W} = \{0, \dots, D-1\}^2$ be a window of discrete 2D displacements, with the search window size $D$ an even number. The flow $w \colon \Omega \to \mathcal{W}$ associates a displacement to each pixel $x \in \Omega$, so that the displaced position of $x$ is given by $x + w(x)$. For convenience, we denote $w = (u, v)$, where $u$ and $v$ are mappings $\Omega \to \{0, \dots, D-1\}$, the components of the flow in horizontal and vertical directions, respectively. The per-pixel descriptors $\varphi^1_x, \varphi^2_x \in \mathbb{R}^m$ are computed by a CNN with parameters $\theta$. Let $\varphi^1$ and $\varphi^2$ be the descriptor fields of images $I^1$ and $I^2$, respectively. The local matching cost for a pixel $x$ and displacement $w$ is given by
$c(x, w) = d(\varphi^1_x, \varphi^2_{x+w})$    (1)
where $d$ is a distance function in $\mathbb{R}^m$. "Distance" is used in a loose sense here; since we want to pose matching as a minimization problem, we consider the negative scalar product $d(\varphi^1, \varphi^2) = -\langle \varphi^1, \varphi^2 \rangle$. We call
$w(x) \in \operatorname*{argmin}_{w \in \mathcal{W}} c(x, w)$    (2)
the local optical flow model, which finds, independently for each pixel, a displacement that optimizes the local matching cost. The joint optical flow model finds the full flow field optimizing the coupled CRF energy:
$E(w) = \sum_{x \in \Omega} c(x, w(x)) + \sum_{(x,y) \in \mathcal{N}} \omega_{x,y} \big( \rho(u(x) - u(y)) + \rho(v(x) - v(y)) \big)$    (3)
where $\mathcal{N}$ denotes a 4-connected pixel neighborhood, $\omega_{x,y}$ are contrast-sensitive weights, and $\rho$ is a robust penalty function shown in Fig. 2(a).
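A brute-force version of the local model (1)-(2) can be sketched as follows (NumPy, with hypothetical descriptor fields and sizes; a real implementation would use a correlation layer rather than Python loops):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy descriptor fields: an m-dimensional descriptor per pixel
# (sizes chosen for illustration only).
H, W, m, D = 6, 7, 16, 3  # D x D search window
phi1 = rng.standard_normal((H, W, m))
phi2 = rng.standard_normal((H + D, W + D, m))  # second image, padded

def wta(phi1, phi2, D):
    """Winner-takes-all flow: for each pixel, pick the displacement
    minimizing the negative scalar product (Eqs. 1 and 2)."""
    H, W, m = phi1.shape
    flow = np.zeros((H, W, 2), dtype=int)
    for y in range(H):
        for x in range(W):
            # cost[u, v] = -<phi1(x), phi2(x + w)> over the search window
            patch = phi2[y:y + D, x:x + D]       # (D, D, m)
            cost = -(patch @ phi1[y, x])         # (D, D)
            u, v = np.unravel_index(cost.argmin(), cost.shape)
            flow[y, x] = (u, v)
    return flow

flow = wta(phi1, phi2, D)
assert flow.shape == (H, W, 2)
assert flow.min() >= 0 and flow.max() < D
```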
3.1 Learning Descriptors
A common difficulty of models (2) and (3) is that they need to process the 4D cost (1), which involves computing distances in $\mathbb{R}^m$ per entry. Storing such a cost volume takes $O(|\Omega| D^2)$ space and evaluating it $O(|\Omega| D^2 m)$ time. We can reduce the space complexity to $O(|\Omega| D)$ by avoiding explicit storage of the 4D cost function. This facilitates memory-efficient end-to-end training on high resolution images, without a patch sampling step [22, 13]. Towards this end we write the local optical flow model (2) in the following way:
$u(x) \in \operatorname*{argmin}_u \min_v c(x, (u, v))$    (4a)
$v(x) \in \operatorname*{argmin}_v \min_u c(x, (u, v))$    (4b)
The inner step in (4a) and (4b), called the min-projection, minimizes out one component of the flow vector. This can be interpreted as a decoupling of the full 4D flow problem into two simpler, quasi-independent 3D problems on the reduced cost volumes $c_u(x, u) = \min_v c(x, (u, v))$ and $c_v(x, v) = \min_u c(x, (u, v))$. Assuming the minimizer of (2) is unique, (4a) and (4b) find the same solution as the original problem (2). Using this representation, CNN learning can be implemented within existing frameworks. We point out that this approach has the same space complexity as recent methods for learning stereo matching, since we only need to store the 3D cost volumes $c_u$ and $c_v$. As an illustrative example, consider a search range of $D = 256$: the full 4D cost function has $D/2 = 128$ times more entries than the two reduced volumes combined, so a volume that would be prohibitively large at high image resolution shrinks by two orders of magnitude.

3.1.1 Network
Fig. 1 shows the network diagram of the local flow model (2). The structure is similar to recent methods proposed for learning stereo matching [13, 23, 6, 11]. It is a siamese network consisting of two convolutional branches with shared parameters, followed by a correlation layer.
The filter size of the convolutions differs between the first layer and all subsequent layers. The nonlinearity keeps feature values in a defined range, which works well with the scalar product as distance function. We do not use striding or pooling. The last convolutional layer uses 64 filter channels, all other layers have 96 channels. This fixes the dimensionality of the feature space to $m = 64$.

Loss  Given the ground-truth flow field $w^* = (u^*, v^*)$, we pose the learning objective as follows: we define a probabilistic softmax model of the local prediction $u(x)$ (resp. $v(x)$) as $p(u \mid x) \propto \exp(-c_u(x, u))$, then we consider a naive model that treats pixels and flow components as independent and apply the maximum likelihood criterion. The negative log likelihood is given by
$L(\theta) = -\sum_{x \in \Omega} \big( \log p(u^*(x) \mid x) + \log p(v^*(x) \mid x) \big)$    (5)
This is equivalent to a cross-entropy loss with the target distribution concentrated at the single point $(u^*(x), v^*(x))$ for each $x$. Variants of the cross-entropy loss, where the target distribution is spread around the ground-truth point, are also used in the literature [13] and can be easily incorporated.
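The loss (5) can be sketched as follows (NumPy; `cost_u`/`cost_v` stand in for the reduced volumes $c_u$ and $c_v$, filled with random values purely for illustration):

```python
import numpy as np

def nll_loss(cost_u, cost_v, u_gt, v_gt):
    """Negative log-likelihood of Eq. 5: a softmax over each reduced
    cost volume, evaluated at the ground-truth flow components."""
    def nll(cost, gt):
        neg = -cost
        mx = neg.max(-1, keepdims=True)
        # log p(k | x) = neg[..., k] - logsumexp(neg), computed stably
        lse = np.log(np.exp(neg - mx).sum(-1)) + mx[..., 0]
        logp = np.take_along_axis(neg, gt[..., None], axis=-1)[..., 0] - lse
        return -logp.sum()
    return nll(cost_u, u_gt) + nll(cost_v, v_gt)

rng = np.random.default_rng(2)
H, W, D = 3, 4, 5
cu, cv = rng.random((H, W, D)), rng.random((H, W, D))
u_gt = rng.integers(0, D, (H, W))
v_gt = rng.integers(0, D, (H, W))

loss = nll_loss(cu, cv, u_gt, v_gt)
assert loss > 0 and np.isfinite(loss)
```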
3.1.2 Learning Quantized Descriptors
The computational bottleneck in scheme (4) is computing the min-projections, with time complexity $O(|\Omega| D^2 m)$. This operation arises during the learning as well as in the CRF inference step, where it corresponds to the message exchange in the dual decomposition. It is therefore desirable to accelerate this step. We achieve a significant speedup by quantizing the descriptors and evaluating the Hamming distance of binary descriptors.
Let us define the quantization: we call $\bar\varphi = \operatorname{sign}(\varphi)$ the quantized descriptor field. The distance between quantized descriptors is given by the negative scalar product $-\langle \bar\varphi^1, \bar\varphi^2 \rangle$, which is equivalent to the Hamming distance up to a scaling and an offset. Let the quantized cost function, defined similarly to (1), be denoted $\bar c$. We can then compute the quantized min-projections $\bar c_u(x, u) = \min_v \bar c(x, (u, v))$ and $\bar c_v(x, v) = \min_u \bar c(x, (u, v))$.
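A small numerical check of the stated equivalence (NumPy; the packed-`uint64` layout is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 64
a = rng.standard_normal(m)
b = rng.standard_normal(m)

# Sign quantization to {-1, +1}.
qa = np.where(a >= 0, 1, -1)
qb = np.where(b >= 0, 1, -1)

# Hamming distance between the corresponding bit strings.
hamming = int((qa != qb).sum())

# Negative scalar product equals the Hamming distance up to
# scaling and offset: -<qa, qb> = 2 * hamming - m.
assert -int(qa @ qb) == 2 * hamming - m

# Packed-bit evaluation: XOR + popcount, the fast path used for the
# min-projection (assumed layout: one uint64 word per descriptor).
pa = np.packbits(qa > 0).view(np.uint64)[0]
pb = np.packbits(qb > 0).view(np.uint64)[0]
assert bin(int(pa) ^ int(pb)).count("1") == hamming
```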
However, learning model (2) with quantized descriptors is difficult due to the gradient of the sign function being zero almost everywhere. We introduce a new technique specific to the matching problem and compare it to the baseline method that uses the straight-through estimator of the gradient [2]. Consider the following variants of the model (4a):
$u(x) \in \operatorname*{argmin}_u c\big(x, (u, \hat v(x, u))\big)$, where $\hat v(x, u) \in \operatorname*{argmin}_v \bar c(x, (u, v))$    (FQ)
$u(x) \in \operatorname*{argmin}_u \min_v \bar c(x, (u, v))$    (QQ)
The respective variants of (4b) are symmetric. The second letter in the naming scheme indicates whether the inner problem, i.e., the min-projection step, is performed on the (Q)uantized or (F)ull cost, whereas the first letter refers to the outer problem on the smaller 3D cost volume. The initial model (4a) is thus also denoted as the FF model. While models FF and QQ correspond, up to non-uniqueness of solutions, to the joint minimum in $(u, v)$ of the cost $c$ and $\bar c$, respectively, the model FQ is a mixed one. This hybrid model is interesting because the minimization in $v$ can be computed efficiently on the binarized cost with the Hamming distance, while the minimization in $u$ has a nonzero gradient in $\theta$. We thus consider the model FQ as an efficient variant of the local optical flow model (2). In addition, it is a good learning proxy for the model QQ: let $u(x)$ be a minimizer of the outer problem FQ. Then the derivative of FQ is defined by the indicator of the pair $(u(x), \hat v(x, u(x)))$. This is the same as the derivative of FF, except that the pair is computed differently. Learning the model QQ involves a hard quantization step, and we apply the straight-through estimator to compute a gradient. Note that the exact gradient for the model FQ can be computed at approximately the same reduced computational cost as the straight-through gradient in the model QQ.
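The difference between FQ and QQ at a single pixel can be sketched as follows (NumPy; the rounded cost is only a crude stand-in for the Hamming-based quantized cost):

```python
import numpy as np

rng = np.random.default_rng(4)
D = 8
c_full = rng.random((D, D))          # full cost c(x, (u, v)) at one pixel
c_quant = (c_full * 4).round() / 4   # stand-in for the quantized cost

# QQ: both the inner min-projection and the outer argmin use the
# quantized cost.
u_qq = c_quant.min(axis=1).argmin()

# FQ: the inner min over v uses the quantized cost (cheap, Hamming-based
# in practice), while the outer argmin is evaluated on the full cost,
# which keeps a nonzero gradient w.r.t. the descriptors.
v_hat = c_quant.argmin(axis=1)       # v_hat(u) from the quantized cost
c_fq = c_full[np.arange(D), v_hat]   # outer objective on the full cost
u_fq = c_fq.argmin()

assert c_fq.shape == (D,)
assert 0 <= u_fq < D and 0 <= u_qq < D
```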
3.2 CRF
The baseline model, which we call the product model, has variables with the state space $\mathcal{W} = \{0, \dots, D-1\}^2$. It has been observed in [8] that max-product message passing in the CRF (3) can be computed in $O(D^2)$ time per variable for separable interactions using a fast distance transform. However, storing the messages for a 4-connected graph requires $O(|\Omega| D^2)$ memory. Although such an approach was shown feasible even for large displacement optical flow [5], we argue that the more compact decomposed model [19] gives comparable results and is much faster in practice. The decomposed model is constructed by observing that the regularization in (3) is separable over $u$ and $v$. Then the energy (3) can be represented as a CRF with variables $(u(x), v(x))$ with the following pairwise terms: the in-plane terms $\omega_{x,y}\rho(u(x)-u(y))$ and $\omega_{x,y}\rho(v(x)-v(y))$, and the cross-plane term $c(x, (u(x), v(x)))$, forming the graph shown in Fig. 2(b). In this formulation there are no unary terms, since the costs $c$ are interpreted as pairwise terms. The resulting linear programming (LP) dual is more economical, because it has only $O(|\Omega| D)$ variables. The message passing for edges inside planes and across planes has complexity $O(D)$ and $O(D^2)$, respectively.
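For the in-plane chains, the fast distance transform of [8] computes the min-convolution with a truncated linear penalty in linear time; a minimal 1D sketch (unit slope and a hypothetical truncation value, verified against the brute-force quadratic computation):

```python
import numpy as np

def dt_message(h, tau):
    """Min-convolution of h with the truncated linear penalty
    rho(t) = min(|t|, tau) in O(D), after Felzenszwalb-Huttenlocher [8]."""
    m = h.copy()
    # Forward and backward passes handle the |t| part (unit slope).
    for u in range(1, len(m)):
        m[u] = min(m[u], m[u - 1] + 1.0)
    for u in range(len(m) - 2, -1, -1):
        m[u] = min(m[u], m[u + 1] + 1.0)
    # Truncation caps every entry at min(h) + tau.
    return np.minimum(m, h.min() + tau)

h = np.array([3.0, 0.0, 4.0, 6.0, 1.0])
msg = dt_message(h, tau=2.0)

# Brute-force O(D^2) reference.
ref = np.array([min(h[v] + min(abs(u - v), 2.0) for v in range(5))
                for u in range(5)])
assert np.allclose(msg, ref)
```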





Fig. 2: (a) the robust penalty function $\rho$; (b) the graph of the decomposed model; (c) the decomposition into chain subproblems coupled by Lagrange multipliers.
We apply the parallel inference method [18] to the dual of the decomposed model [19] (see Fig. 2(b)). Although different dual decompositions reach different objective values in a fixed number of iterations, it is known that all decompositions with trees covering the graph are equivalent in the optimal value [21]. The decomposition in Fig. 2(b) is into horizontal and vertical chains in each of the $u$- and $v$-planes, plus a subproblem containing all cross-layer edges. We introduce Lagrange multipliers $\lambda$ enforcing equality constraints between the subproblems, as shown in Fig. 2(c). The Lagrange multipliers are identified with modular functions $\lambda_u$ and $\lambda_v$ over the $u$- and $v$-planes. Let us also introduce the shorthand $C^{\mathrm{H}}_u(u)$ for the sum of pairwise terms over horizontal chains in the $u$-plane, and a symmetric definition $C^{\mathrm{V}}_u(u)$ for the sum over the vertical chains. The lower bound corresponding to the decomposition in Fig. 2(c) is given by:
$\Psi(\lambda) = \Psi_u(\lambda) + \Psi_v(\lambda) + \Psi_c(\lambda)$    (6a)
$\Psi_u(\lambda) = \min_u \big[ C^{\mathrm{H}}_u(u) + \lambda^{\mathrm{H}}_u(u) \big] + \min_u \big[ C^{\mathrm{V}}_u(u) + \lambda^{\mathrm{V}}_u(u) \big]$    (6b)
$\Psi_v(\lambda) = \min_v \big[ C^{\mathrm{H}}_v(v) + \lambda^{\mathrm{H}}_v(v) \big] + \min_v \big[ C^{\mathrm{V}}_v(v) + \lambda^{\mathrm{V}}_v(v) \big]$    (6c)
$\Psi_c(\lambda) = \sum_{x \in \Omega} \min_{(u,v) \in \mathcal{W}} \big[ c(x, (u, v)) - \lambda_u(x, u) - \lambda_v(x, v) \big]$    (6d)
where $\lambda_u = \lambda^{\mathrm{H}}_u + \lambda^{\mathrm{V}}_u$ and $\lambda_v = \lambda^{\mathrm{H}}_v + \lambda^{\mathrm{V}}_v$.
Our Lagrangian dual to (3) is to maximize $\Psi(\lambda)$ in $\lambda$, which enforces consistency between minimizers of the subproblems. The general theory [21] applies; in particular, when the minimizers of all subproblems are consistent, they form a global minimizer. In (6b), there is a sum of horizontal and vertical chain subproblems in the $u$-plane. When $\lambda$ is fixed, $\Psi_u$ is the lower bound corresponding to the relaxation of the energy in $u$ with the unary terms given by $\lambda_u$. It can be interpreted as a stereo-like problem with 1D labels. Similarly, $\Psi_v$ is a lower bound for the $v$-plane with unary terms $\lambda_v$. Subproblem $\Psi_c$ is simple: it contains both variables, but the minimization decouples over the individual pairs $(u(x), v(x))$. It connects the two stereo-like problems through the 4D cost volume.
Updating messages inside planes can be done at a different rate than across planes. The optimal rate for fast convergence depends on the time complexity of the message updates. [19] reported an optimal rate of updating in-plane messages 5 times as often, using the TRW-S solver [12]. The decomposition (6a) facilitates this kind of strategy and allows us to use the implementation [18] designed for stereo-like problems. We therefore use the dual solver of [18], denoted Dual Minorize-Maximize (DMM), to perform the in-plane updates. When applied to the problem of maximizing $\Psi_u$ in $\lambda$, it has the following properties: a) the bound does not decrease, and b) it computes a modular minorant $m_u$ of the remaining $u$-plane energy. The modular minorant is an excess of costs, called slacks, which can be subtracted from the $u$-plane while keeping its energy nonnegative. The associated update of the $u$-plane can be denoted as:
$m_u := \mathrm{DMM}(\lambda_u)$    (7a)
$\lambda_u := \lambda_u - m_u$    (7b)
The slack is then passed to the $v$-plane by the following update, i.e., message passing:
$\lambda_v(x, v) := \min_u \big[ c(x, (u, v)) - \lambda_u(x, u) \big]$    (8)
The minimization in (8) has time complexity $O(|\Omega| D^2)$, assuming the 4D costs are available in memory. As discussed above, we can compute the costs efficiently on the fly and avoid their storage. The update of the $u$-plane is symmetric to (8):
$\lambda_u(x, u) := \min_v \big[ c(x, (u, v)) - \lambda_v(x, v) \big]$    (9)
The complete method is summarized in Algorithm 1. It starts by collecting the slacks in the $u$-plane: when initialized with $\lambda = 0$, the update (9) simplifies to $\lambda_u(x, u) := \min_v c(x, (u, v))$, i.e., it exactly matches the min-projection (4). The problem solved with DMM in Algorithm 1 in the first iteration is thus a stereo-like problem with cost $c_u$. The dual solution redistributes the costs and determines which values of $u$ are worse than others, expressing this cost offset in $\lambda_u$ as specified in (7). The optimization of the $v$-plane then continues with information about good solutions for $u$ propagated via the cost offsets using (8).
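The on-the-fly min-projection with dual offsets used in updates (8) and (9) can be sketched per pixel (NumPy; toy cost and multipliers, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(5)
D = 6
c = rng.random((D, D))   # 4D cost at one pixel: c[u, v]
lam_v = np.zeros(D)      # dual variables on the v-plane

# Update (9) at one pixel: pass slack from the v-plane back to the
# u-plane by minimizing over v on the fly (no 4D storage needed).
lam_u = (c - lam_v[None, :]).min(axis=1)

# With lam_v = 0 this is exactly the min-projection c_u of Eq. (4).
assert np.allclose(lam_u, c.min(axis=1))

# Once the v-plane has accumulated slack, the same update
# redistributes it.
lam_v = rng.random(D)
lam_u2 = (c - lam_v[None, :]).min(axis=1)
assert lam_u2.shape == (D,)
```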
4 Evaluation
We compare different variants of our own model on the Sintel optical flow dataset [3]. In total, the benchmark consists of 1064 training images and 564 test images. For CNN learning we use a subset of the training images, sampled evenly from all available scenes. For evaluation, we use a different subset of the training images.
Comparison of our models
To investigate the performance of our model, we conduct the following experiments: first, we investigate the influence of the size of the CNN, and second, we investigate the effect of quantizing the learned features. Additionally, we evaluate both the WTA solution (2) and the CRF model (3). To assess the effect of quantization, we evaluate the local flow model a) as it was trained, and b) as QQ, i.e., with quantized descriptors both in the min-projection step and in the outer problem on $c_u$ and $c_v$, respectively. In CRF inference, the updates (8) and (9) amount to solving a min-projection step with additional cost offsets; F and Q indicate how this min-projection step is computed. The CRF parameters (Fig. 2) are fixed for all experiments and we run 8 inner and 5 outer iterations.
Table 1 summarizes the comparison of different variants of our model.
Train | #Layers | WTA as trained, noc (all) | WTA QQ, noc (all) | CRF F, noc (all) | CRF Q, noc (all)
FF | 5 | 5.25 (10.38) | 10.45 (15.67) | 1.58 (4.48) | 1.64 (4.87)
FF | 7 | 4.72 (10.04) | 9.43 (14.93) | 1.53 (4.32) | 1.61 (4.70)
FF | 9 | –¹ | –¹ | –¹ | –¹
FQ | 5 | 6.15 (11.36) | 11.43 (16.78) | –² | 1.63 (4.62)
FQ | 7 | 5.62 (10.98) | 10.15 (15.70) | –² | 1.65 (4.62)
FQ | 9 | 5.62 (11.13) | 9.87 (15.52) | –² | 1.64 (4.69)
QQ | 5 | same as QQ | 9.63 (14.80) | –² | 1.72 (4.91)
QQ | 7 | same as QQ | 9.75 (15.23) | –² | 1.66 (4.78)
QQ | 9 | same as QQ | 9.72 (15.31) | –² | 1.72 (4.85)

¹ Omitted due to very long training time.  ² Not applicable.
We see that the WTA solution of model FQ performs similarly to FF, while being much faster to train and evaluate. In particular, model FQ performs better than QQ, which was trained with the straight-through estimator of the gradient. If we switch to QQ for evaluation, we see a drop in performance for models FF and FQ. This is to be expected, because we now evaluate costs differently than during training. Interestingly, our joint model yields similar performance regardless of whether we use F or Q for computing the costs.
4.0.1 Runtime
The main reason for quantizing the descriptors is speed. In CRF inference, we need to compute the min-projection on the 4D cost function twice per outer iteration, see Alg. 1. We show an exact breakdown of the timings for full resolution images in Table 3, computed on an Intel i7 6700K and an Nvidia Titan X.
Method | Feature Extraction | WTA | Full Model
FF | 0.04 – 0.08 | 4.25 | 24.8
FQ | 0.04 – 0.08 | 1.82 | –
QQ | 0.04 – 0.08 | 0.07 | 3.2
[22] (reduced res.) | 0.02 | 0.06 | 3.4
QQ (reduced res.) | 0.004 – 0.008 | 0.007 | 0.32
Method | noc | all
EpicFlow [17] | 1.360 | 4.115
FullFlow [5] | 1.296 | 3.601
FlowFields [1] | 1.056 | 3.748
DCFlow [22] | 1.103 | 3.537
Ours QQ | 2.470 | 8.972
The column WTA refers to computing the solution of the local model on the cost volumes $c_u, c_v$, see Eq. 4. Full Model is the CRF inference, see § 3.2. We see that we reach a significant speedup by using binary descriptors and the Hamming distance for the compute-intensive calculations. For comparison, we also report the runtime of [22], who, at the time of writing, report the fastest execution time on Sintel. We point out that our CRF inference on full resolution images takes about the same time as their method, which constructs and optimizes the cost function at a reduced resolution.
Test performance
We compare our method on the Sintel clean images.
In contrast to the other methods, we do not use a sophisticated post-processing pipeline, because the main focus of this work is to show that learning and inference on high resolution images are feasible. Therefore we cannot compete with the highly tuned methods. Fig. 3 shows that we are able to recover fine details; but since we do not employ a forward-backward check and local planar inpainting, we make large errors in occluded regions.
5 Conclusion
We showed that both learning and CRF inference of the optical flow cost function on high resolution images are tractable. We circumvent the excessive memory requirements of the full 4D cost volume by a min-projection, which reduces the space complexity from quadratic to linear in the search range. To efficiently compute the cost function, we learn binary descriptors with a new hybrid learning scheme that outperforms the previous state-of-the-art straight-through estimator of the gradient.
Acknowledgements
We acknowledge grant support from Toyota Motor Europe HS, the ERC starting grant "HOMOVIS" No. 640156, and the research initiative Intelligent Vision Austria with funding from the AIT and the Austrian Federal Ministry of Science, Research and Economy HRSM programme (BGBl. II Nr. 292/2012).
References

[1] Bailer, C., Taetz, B., Stricker, D.: Flow fields: Dense correspondence fields for highly accurate large displacement optical flow estimation. In: International Conference on Computer Vision (ICCV) (2015)
[2] Bengio, Y., Léonard, N., Courville, A.C.: Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR abs/1308.3432 (2013), http://arxiv.org/abs/1308.3432
[3] Butler, D.J., Wulff, J., Stanley, G.B., Black, M.J.: A naturalistic open source movie for optical flow evaluation. In: European Conference on Computer Vision (ECCV) (2012)
[4] Calonder, M., Lepetit, V., Strecha, C., Fua, P.: BRIEF: Binary robust independent elementary features. In: European Conference on Computer Vision (ECCV) (2010)
[5] Chen, Q., Koltun, V.: Full flow: Optical flow estimation by global optimization over regular grids. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
[6] Chen, Z., Sun, X., Wang, L., Yu, Y., Huang, C.: A deep visual correspondence embedding model for stereo matching costs. In: International Conference on Computer Vision (ICCV) (2015)
[7] Courbariaux, M., Bengio, Y.: BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR abs/1602.02830 (2016), http://arxiv.org/abs/1602.02830
[8] Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient belief propagation for early vision. International Journal of Computer Vision 70(1), 41–54 (2006)
[9] Gadot, D., Wolf, L.: PatchBatch: A batch augmented loss for optical flow. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
[10] Güney, F., Geiger, A.: Deep discrete flow. In: Asian Conference on Computer Vision (ACCV) (2016)
[11] Knöbelreiter, P., Reinbacher, C., Shekhovtsov, A., Pock, T.: End-to-end training of hybrid CNN-CRF models for stereo. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2017), http://arxiv.org/abs/1611.10229
[12] Kolmogorov, V.: Convergent tree-reweighted message passing for energy minimization. Transactions on Pattern Analysis and Machine Intelligence 28(10) (October 2006)
[13] Luo, W., Schwing, A., Urtasun, R.: Efficient deep learning for stereo matching. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
[14] Ranftl, R., Bredies, K., Pock, T.: Non-local total generalized variation for optical flow estimation. In: European Conference on Computer Vision (ECCV) (2014)
[15] Ranftl, R., Gehrig, S., Pock, T., Bischof, H.: Pushing the limits of stereo using variational stereo estimation. In: IEEE Intelligent Vehicles Symposium (IV) (2012)
[16] Rastegari, M., Ordonez, V., Redmon, J., Farhadi, A.: XNOR-Net: ImageNet classification using binary convolutional neural networks. In: European Conference on Computer Vision (ECCV) (2016)
[17] Revaud, J., Weinzaepfel, P., Harchaoui, Z., Schmid, C.: EpicFlow: Edge-preserving interpolation of correspondences for optical flow. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
[18] Shekhovtsov, A., Reinbacher, C., Graber, G., Pock, T.: Solving dense image matching in real-time using discrete-continuous optimization. ArXiv e-prints (Jan 2016)
[19] Shekhovtsov, A., Kovtun, I., Hlaváč, V.: Efficient MRF deformation model for non-rigid image matching. CVIU 112 (2008)
[20] Trzcinski, T., Christoudias, M., Fua, P., Lepetit, V.: Boosting binary keypoint descriptors. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2013)
[21] Wainwright, M., Jaakkola, T., Willsky, A.: MAP estimation via agreement on (hyper)trees: Message-passing and linear-programming approaches. IEEE Transactions on Information Theory 51(11) (November 2005)
[22] Xu, J., Ranftl, R., Koltun, V.: Accurate optical flow via direct cost volume processing. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2017)
[23] Žbontar, J., LeCun, Y.: Computing the stereo matching cost with a convolutional neural network. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2015)