Deep Optical Flow Estimation Via Multi-Scale Correspondence Structure Learning

by Shanshan Zhao et al.
Zhejiang University

As an important and challenging problem in computer vision, learning based optical flow estimation aims to discover the intrinsic correspondence structure between two adjacent video frames through statistical learning. Therefore, a key issue to solve in this area is how to effectively model the multi-scale correspondence structure properties in an adaptive end-to-end learning fashion. Motivated by this observation, we propose an end-to-end multi-scale correspondence structure learning (MSCSL) approach for optical flow estimation. In principle, the proposed MSCSL approach is capable of effectively capturing the multi-scale inter-image-correlation correspondence structures within a multi-level feature space from deep learning. Moreover, the proposed MSCSL approach builds a spatial Conv-GRU neural network model to adaptively model the intrinsic dependency relationships among these multi-scale correspondence structures. Finally, the above procedures for correspondence structure learning and multi-scale dependency modeling are implemented in a unified end-to-end deep learning framework. Experimental results on several benchmark datasets demonstrate the effectiveness of the proposed approach.




1 Introduction

Optical flow estimation aims to perceive the motion information across consecutive video frames, and has a wide range of vision applications such as human action recognition and abnormal event detection. Despite the significant progress in the literature, optical flow estimation is still confronted with a number of difficulties in discriminative feature representation, correspondence structure modeling, computational flexibility, etc. In this paper, we focus on how to set up an effective learning pipeline that is capable of performing multi-scale correspondence structure modeling with discriminative feature representation in a flexible end-to-end deep learning framework.

Due to their effectiveness in statistical modeling, learning based approaches have emerged as an effective tool for optical flow estimation [8, 11, 1, 18]. Usually, these approaches either consider image matching at only a single scale, or take a divide-and-conquer strategy that copes with image matching at multiple scales layer by layer. In complicated situations (e.g., large inter-image displacement or complex motion), they are often incapable of effectively capturing the interaction or dependency relationships among the multi-scale inter-image correspondence structures, which play an important role in robust optical flow estimation. Furthermore, their matching strategies typically follow one of two schemes: 1) set a fixed range of correspondence at a single scale in the learning process [8, 11, 18]; or 2) update the matching range dynamically with a coarse-to-fine scheme [1, 13]. In practice, since videos have time-varying dynamic properties, selecting an appropriate fixed range for matching is difficult when adapting to various complicated situations. Besides, the coarse-to-fine scheme may cause matching errors to propagate or accumulate from coarse scales to fine scales. Therefore, for the sake of robust optical flow estimation, correspondence structure modeling ought to be performed in an adaptive multi-scale collaborative way. Moreover, it is crucial to effectively capture the cross-scale dependency information while preserving spatial self-correlations for each individual scale in a totally data-driven fashion.

Motivated by the above observations, we propose a novel unified end-to-end optical flow estimation approach called Multi-Scale Correspondence Structure Learning (MSCSL) (as shown in Fig. 1), which jointly models the dependency of multi-scale correspondence structures by a Spatial Conv-GRU neural network model based on multi-level deep learning features. To summarize, the contributions of this work are twofold:

  • We propose a multi-scale correspondence structure learning approach, which captures the multi-scale inter-image-correlation correspondence structures based on the multi-level deep learning features. As a result, the task of optical flow estimation is accomplished by jointly learning the inter-image correspondence structures at multiple scales within an end-to-end deep learning framework. Such a multi-scale correspondence structure learning approach is innovative in optical flow estimation to the best of our knowledge.

  • We design a Spatial Conv-GRU neural network model to model the cross-scale dependency relationships among the multi-scale correspondence structures while preserving spatial self-correlations for each individual scale in a totally data-driven manner. As a result, adaptive multi-scale matching information fusion is enabled to make optical flow estimation adapt to various complicated situations, resulting in robust estimation results.

Figure 1: The proposed CNN framework of Multi-Scale Correspondence Structure Learning (MSCSL). The network consists of three parts: (1) Multi-Scale Correspondence Structure Modelling, which uses a Siamese network to extract robust multi-level deep features for the two images and then constructs the correspondence structures between the feature maps at different scales; (2) Correspondence Maps Encoding, which employs the Spatial Conv-GRU presented in this work to encode the correspondence maps at different scales; (3) Prediction, which uses the encoded feature representation to predict the optical flow map.

2 Our Approach

2.1 Problem Formulation

Let $\mathcal{S} = \{\langle (I_1^i, I_2^i), F^i \rangle\}_{i=1}^N$ be a set of training samples, where $(I_1^i, I_2^i)$ and $F^i$ represent an RGB image pair and the corresponding optical flow, respectively. In this paper, our objective is to learn a model $f(\cdot;\theta)$ parameterized by $\theta$ to predict the dense motion $F$ of the first image $I_1$. For ease of exposition, we drop the sample index $i$ in the remaining parts.

In this paper, we focus on two factors: (1) computing the correlation maps between image representations at different scales and adaptively setting up the correspondence structure in a data-driven way; (2) encoding the correspondence maps into a high-level feature representation for regressing the optical flow.

2.2 Multi-Scale Correspondence Structure Modelling

Multi-Scale Image Representations.

To represent the input image at multiple scales, we first employ convolutional neural networks (CNNs), parameterized by $W_f$, to extract deep features at a single scale for the image $I$, as illustrated in Fig. 1:

    $f = \mathrm{CNN}(I;\, W_f)$    (1)

and then model the multi-level feature representations $\{f^{l}\}_{l=1}^{L}$, parameterized by $W_m$, with $f$ as the input, as depicted in Fig. 1:

    $\{f^{l}\}_{l=1}^{L} = \mathcal{M}(f;\, W_m)$    (2)

where $l$ represents the $l$-th level and the size of $f^{l+1}$ is larger than that of $f^{l}$. From top to bottom (or coarse to fine), the feature representations at small scales (in this paper, small scale means small size and large scale means large size) tend to learn the semantic components, which help to find the correspondences of semantic parts with large displacements; furthermore, the large-scale feature maps tend to learn local representations, which can distinguish patches with small displacements. In this paper, we use $\{f_1^{l}\}_{l=1}^{L}$ and $\{f_2^{l}\}_{l=1}^{L}$ to denote the multi-scale representations of $I_1$ and $I_2$ respectively.
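The coarse-to-fine hierarchy above can be sketched as follows, with plain 2x2 max-pooling standing in for the paper's learned pooling and convolution stages (a simplifying assumption; the actual operators are trained end to end):

```python
import numpy as np

def max_pool2x2(feat):
    """Downsample a (C, H, W) feature map with 2x2 max-pooling (H, W even)."""
    c, h, w = feat.shape
    return feat.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def build_pyramid(feat, num_levels):
    """Return feature maps ordered coarse-to-fine, so that level l+1
    is larger than level l, matching the ordering used in the paper."""
    levels = [feat]
    for _ in range(num_levels - 1):
        levels.append(max_pool2x2(levels[-1]))
    return levels[::-1]

f = np.random.rand(8, 32, 32)       # single-scale CNN features (Eq. 1 output)
pyramid = build_pyramid(f, 3)
print([p.shape for p in pyramid])   # [(8, 8, 8), (8, 16, 16), (8, 32, 32)]
```

The coarsest level aggregates context over the largest receptive field, which is what lets it match semantic parts under large displacements.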

Correspondence Structure Modelling. Given an image pair $(I_1, I_2)$ from a video sequence, we first extract their multi-level feature representations $\{f_1^{l}\}$ and $\{f_2^{l}\}$ using Eq. 1 and Eq. 2. In order to learn the correspondence structures between the image pair, we calculate the similarity between the corresponding feature representations instead. Firstly, we discuss the correlation computation proposed in [8]:

    $c(\mathbf{x}_1, \mathbf{x}_2) = \sum_{\mathbf{o} \in [-k,k] \times [-k,k]} \langle f_1(\mathbf{x}_1 + \mathbf{o}),\, f_2(\mathbf{x}_2 + \mathbf{o}) \rangle$, $\quad m(\mathbf{x}_1) = \big[\, c(\mathbf{x}_1, \mathbf{x}_2) \,\big]_{\mathbf{x}_2 \in \mathcal{N}_d(\mathbf{x}_1)}$    (3)

where $f_1(\mathbf{x}_1)$ and $f_2(\mathbf{x}_2)$ denote the feature vectors at location $\mathbf{x}_1$ of $f_1$ and location $\mathbf{x}_2$ of $f_2$ respectively, $[\cdot]$ denotes concatenating the elements in the set into a vector, and $\mathcal{N}_d(\mathbf{x}_1)$ denotes the neighborhood of location $\mathbf{x}_1$ with maximum displacement $d$. The meaning of Eq. 3 is that, given a maximum displacement $d$, the correlations between location $\mathbf{x}_1$ in $f_1$ and locations $\mathbf{x}_2$ in $f_2$ can be obtained by computing the similarities between the square patch of size $(2k+1) \times (2k+1)$ centered at location $\mathbf{x}_1$ in $f_1$ and square patches of the same size centered at all locations of $\mathcal{N}_d(\mathbf{x}_1)$ in $f_2$.

To model the correspondence between location $\mathbf{x}_1$ in $f_1$ and its corresponding location in $f_2$, we can (1) calculate Eq. 3 in a small neighbourhood of $\mathbf{x}_1$ in $f_2$, or (2) calculate it in a large enough neighbourhood, or even over the whole feature map $f_2$. However, the former cannot guarantee that the similarity between $\mathbf{x}_1$ and its true corresponding location is computed at all, while the latter leads to low computational efficiency, because the complexity of Eq. 3 grows quadratically with $d$. To address this problem, we apply the correlation computation at each scale of the multi-scale feature representations $\{f_1^{l}\}$ and $\{f_2^{l}\}$:

    $m^{l}(\mathbf{x}_1) = \big[\, c(\mathbf{x}_1, \mathbf{x}_2) \,\big]_{\mathbf{x}_2 \in \mathcal{N}_{d^{l}}(\mathbf{x}_1)}$, $\quad l = 1, \dots, L$    (4)

where the maximum displacement $d^{l}$ varies from bottom to top.
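The correlation computation can be sketched in NumPy as follows. This is a simplified instance with 1x1 patches (i.e., $k = 0$), not the exact layer of [8], which additionally sums over a square patch around each location; running it per scale with a scale-specific maximum displacement `d` yields the multi-scale correspondence maps:

```python
import numpy as np

def correlation(f1, f2, d):
    """FlowNet-style correlation between two (C, H, W) feature maps.
    For each location x1 in f1, the dot product with f2 is taken at every
    displacement in [-d, d]^2, giving a (2d+1)^2-channel correspondence map
    (patch size 1x1 here, i.e. k = 0 in Eq. 3)."""
    c, h, w = f1.shape
    n = 2 * d + 1
    out = np.zeros((n * n, h, w))
    f2p = np.pad(f2, ((0, 0), (d, d), (d, d)))   # zero padding at the borders
    for i, dy in enumerate(range(-d, d + 1)):
        for j, dx in enumerate(range(-d, d + 1)):
            shifted = f2p[:, d + dy:d + dy + h, d + dx:d + dx + w]
            # channel (dy, dx) holds <f1(x), f2(x + (dy, dx))> for every x
            out[i * n + j] = (f1 * shifted).sum(axis=0)
    return out

f1 = np.random.rand(16, 12, 12)
f2 = np.roll(f1, shift=2, axis=2)   # frame 2: frame 1 shifted right by 2 px
corr = correlation(f1, f2, d=3)     # 49 displacement channels per location
```

The quadratic cost in `d` is visible in the doubly nested loop, which is exactly why a small `d` per scale, rather than one large `d` at full resolution, keeps the computation tractable.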

In order to give the network more flexibility in dealing with the correspondence maps, we add three convolutional layers, parameterized by $W_h$, on top of the outputs of the correlation operation (the same as proposed in [8]) to extract high-level representations, as described in Fig. 1:

    $r^{l} = \mathrm{Conv}(m^{l};\, W_h^{l})$, $\quad l = 1, \dots, L$    (5)
2.3 Correspondence Maps Encoding Using Spatial Conv-GRU

Cross-Scale Dependency Relationships Modelling. For the sake of combining the correlation representations $\{r^{l}\}_{l=1}^{L}$ while preserving the spatial structure to estimate dense optical flow, we consider the representations as a feature-map sequence, and then apply Convolutional Gated Recurrent Unit networks (Conv-GRUs) to model the cross-scale dependency relationships among the multi-scale correspondence structures. Conv-GRUs have been used to model the temporal dependencies between frames of a video sequence [4, 15]. A key advantage of Conv-GRUs is that they can not only model the dependencies within a sequence, but also preserve the spatial location of each feature vector. One of the significant differences between a Conv-GRU and a traditional GRU is that inner-product operations are replaced by convolution operations.

However, because of the employed coarse-to-fine-like scheme, the size of the $(l+1)$-th input in the sequence is larger than that of the $l$-th input, so the standard Conv-GRU cannot be applied to our problem directly. Instead, we propose a Spatial Conv-GRU, in which each layer's output is upsampled to serve as the hidden input of the next layer. For the input sequence $\{r^{l}\}_{l=1}^{L}$, the formulation of the Spatial Conv-GRU is:

    $z^{l} = \sigma(W_z * r^{l} + U_z * \hat{h}^{l-1})$    (6)
    $s^{l} = \sigma(W_s * r^{l} + U_s * \hat{h}^{l-1})$    (7)
    $\tilde{h}^{l} = \tanh(W * r^{l} + U * (s^{l} \odot \hat{h}^{l-1}))$    (8)
    $h^{l} = (1 - z^{l}) \odot \hat{h}^{l-1} + z^{l} \odot \tilde{h}^{l}$    (9)
    $\hat{h}^{l} = \mathrm{deconv}(h^{l})$    (10)

where $*$ and $\odot$ denote a convolution operation and an element-wise multiplication respectively, $\sigma(\cdot)$ is an activation function (e.g., the sigmoid function), and $\mathrm{deconv}$ denotes the transposed convolution used to upsample $h^{l}$ to the size of the next input. The Spatial Conv-GRU can model the transition from coarse to fine and recover the spatial topology, outputting the intra-level dependency maps $\{h^{l}\}_{l=1}^{L}$.
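The recurrence can be illustrated with the structural sketch below. Two simplifying assumptions are made: 1x1 channel-mixing matrices stand in for the learned spatial convolutions, and nearest-neighbour upsampling stands in for the learned transposed convolution, so this shows the gating and coarse-to-fine flow of the Spatial Conv-GRU rather than the trained model:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def upsample2x(h):
    """Nearest-neighbour 2x upsampling of a (C, H, W) map; the paper uses a
    learned transposed convolution here instead."""
    return h.repeat(2, axis=1).repeat(2, axis=2)

def mix(M, x):
    """Apply a CxC channel-mixing matrix at every location (a 1x1 'conv')."""
    return np.einsum('oc,chw->ohw', M, x)

def spatial_conv_gru(inputs, Wz, Uz, Ws, Us, W, U):
    """Run the gated recurrence over a coarse-to-fine sequence of (C, H, W)
    correspondence maps: each hidden state is upsampled before being combined
    with the next, larger input."""
    h = np.zeros_like(inputs[0])
    outputs = []
    for l, x in enumerate(inputs):
        if l > 0:
            h = upsample2x(h)                    # match the finer scale
        z = sigmoid(mix(Wz, x) + mix(Uz, h))     # update gate
        s = sigmoid(mix(Ws, x) + mix(Us, h))     # reset gate
        h_tilde = np.tanh(mix(W, x) + mix(U, s * h))
        h = (1 - z) * h + z * h_tilde            # gated state update
        outputs.append(h)
    return outputs

rng = np.random.default_rng(0)
C = 4
mats = [0.1 * rng.standard_normal((C, C)) for _ in range(6)]
seq = [rng.standard_normal((C, 8 * 2 ** l, 8 * 2 ** l)) for l in range(3)]
outs = spatial_conv_gru(seq, *mats)   # hidden maps at 8x8, 16x16, 32x32
```

Because each hidden state is upsampled before meeting the next input, coarse-scale matching evidence is carried down to fine scales while the fine-scale map keeps its own spatial self-correlations.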

Table 1: Comparison of average endpoint errors (EPE) to the state-of-the-art on the Sintel Clean, Sintel Final, KITTI 2012, Middlebury, and Flying Chairs datasets (train and test splits), together with per-frame runtimes in seconds. Runtimes marked with a superscript indicate methods run on a CPU, while the rest run on a GPU. The numbers in parentheses are the results of networks on the dataset they were fine-tuned on, and methods marked with +ft were fine-tuned on the MPI Sintel training dataset (both versions together) after being trained on the Flying Chairs training dataset.

Intra-Level Dependency Maps Combination. After obtaining the hidden outputs $\{h^{l}\}_{l=1}^{L}$, we upsample them to the same size, written as $\{\hat{o}^{l}\}_{l=1}^{L}$:

    $\hat{o}^{l} = \mathrm{deconv}(h^{l};\, W_u^{l})$    (11)

where $\{W_u^{l}\}$ are the parameters to be optimized. Furthermore, we concatenate the upsampled hidden outputs with the 2nd convolutional output $f_1^{c2}$ of $I_1$ to get the final encoded feature representation $o$ for optical flow estimation, as depicted in Fig. 1:

    $o = \big[\hat{o}^{1}, \dots, \hat{o}^{L}, f_1^{c2}\big]$    (12)

where $[\cdot]$ represents the concatenation operation.

Finally, the proposed framework learns a function $g$, parameterized by $W_p$, to predict the optical flow:

    $F' = g(o;\, W_p)$    (13)
2.4 Unified End-to-End Optimization

As the image representation, correspondence structure learning and correspondence maps encoding are highly related, we construct a unified end-to-end framework to optimize the three parts jointly. The loss function used in the optimization framework consists of two parts, namely a supervised loss and an unsupervised loss (or reconstruction loss). The former is the endpoint error (EPE), which measures the Euclidean distance between the predicted flow $F'$ and the ground truth $F$, while the latter is based on the brightness constancy assumption, which measures the Euclidean distance between the first image $I_1$ and the warped second image $\tilde{I}_2$:

    $\mathcal{L} = \mathcal{L}_{epe} + \lambda\, \mathcal{L}_{rec}$    (14)
    $\mathcal{L}_{epe} = \frac{1}{|\Omega|} \sum_{\mathbf{x} \in \Omega} \sqrt{(u'(\mathbf{x}) - u(\mathbf{x}))^2 + (v'(\mathbf{x}) - v(\mathbf{x}))^2}$    (15)
    $\mathcal{L}_{rec} = \frac{1}{|\Omega|} \sum_{\mathbf{x} \in \Omega} \| I_1(\mathbf{x}) - \tilde{I}_2(\mathbf{x}) \|_2$, $\quad \tilde{I}_2(\mathbf{x}) = I_2(\mathbf{x} + F'(\mathbf{x}))$    (16)

where $u$ and $v$ denote the displacements in the horizontal and vertical directions respectively, and $\lambda$ is the balance parameter. The warped image $\tilde{I}_2$ can be calculated via bilinear sampling according to $F'$, as proposed in Spatial Transformer Networks [10].
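The two ingredients of the loss, the endpoint error and the bilinear backward warping behind the reconstruction term, can be sketched as follows. Single-channel images are used for brevity, and `warp_bilinear` and `epe` are illustrative helpers rather than the paper's implementation:

```python
import numpy as np

def warp_bilinear(img, flow):
    """Backward-warp img (H, W) by flow (2, H, W): out(x) = img(x + flow(x)),
    with bilinear interpolation, as in a spatial transformer."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xq = np.clip(xs + flow[0], 0, w - 1)   # horizontal displacement u
    yq = np.clip(ys + flow[1], 0, h - 1)   # vertical displacement v
    x0, y0 = np.floor(xq).astype(int), np.floor(yq).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = xq - x0, yq - y0
    top = (1 - wx) * img[y0, x0] + wx * img[y0, x1]
    bot = (1 - wx) * img[y1, x0] + wx * img[y1, x1]
    return (1 - wy) * top + wy * bot

def epe(flow, flow_gt):
    """Average endpoint error between predicted and ground-truth (2, H, W) flow."""
    return np.sqrt(((flow - flow_gt) ** 2).sum(axis=0)).mean()

img = np.tile(np.arange(8.0), (8, 1))        # img[y, x] = x
flow = np.zeros((2, 8, 8)); flow[0] = 1.0    # uniform shift right by 1 px
warped = warp_bilinear(img, flow)            # warped[y, x] = img[y, x + 1]
                                             # (away from the clipped border)
```

Because bilinear sampling is piecewise differentiable in the flow, gradients of the reconstruction loss can flow back into the flow predictor, which is what makes the unsupervised term trainable end to end.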


Table 2: Comparison of FlowNet, SPyNet and our proposed methods on the MPI Sintel test datasets (Clean and Final) for different velocities and for different distances from motion boundaries.

Because the raw images $I_1$ and $I_2$ contain noise and illumination changes and are less discriminative, the brightness constancy assumption is not satisfied in some cases; furthermore, the assumption also breaks down in highly saturated or very dark regions [11]. Therefore, applying Eq. 16 to the raw images directly makes the network harder to train. To address this issue, we apply the brightness constancy assumption to the 2nd convolutional outputs $f_1^{c2}$ and $f_2^{c2}$ of $I_1$ and $I_2$ instead of the raw images. The training and test stages are shown in Alg. 1.

Input: A set of training samples $\{\langle (I_1^i, I_2^i), F^i \rangle\}_{i=1}^N$
Output: The deep model parameterized by $\theta$
/* The training stage */
1   repeat
2       for each mini-batch do
3           for each image pair in the mini-batch do
4               Extract the image representations using Eq. 1;
5               Model the multi-scale feature representations using Eq. 2;
6               Compute the correlations between the feature representations using Eq. 3 and Eq. 4;
7               Extract the high-level representations using Eq. 5;
8               Encode the correspondence representations using Eq. 6;
9               Concatenate the encoded representations with the 2nd convolutional outputs to obtain the final representation using Eq. 12;
10              Regress the optical flow estimation using Eq. 13;
11              Minimize the objective function Eq. 14;
12          end for
13          Update the network parameters using Adam;
14      end for
15  until the maximum number of iterations is reached;
16  return $\theta$;
/* The test stage */
Input: An image pair and the trained deep model
Output: The predicted optical flow
17  return the flow predicted by the trained model;
Algorithm 1 Deep Optical Flow Estimation Via MSCSL

3 Experiments

3.1 Datasets

Flying Chairs [8] is a synthetic dataset created by applying affine transformations to a real image dataset and a rendered set of 3D chair models. It contains 22,872 image pairs, split into 22,232 training and 640 test pairs.

MPI Sintel [7] is created from an animated movie, contains many large displacements, and provides dense ground truth. It consists of two versions: the Final version and the Clean version. The former contains motion blur and atmospheric effects, while the latter does not. There are 1,041 training image pairs for each version.

KITTI 2012 [9] is created from real-world scenes by using a camera and a 3D laser scanner. It consists of 194 training image pairs with sparse optical flow ground truth.

Middlebury [3] is a very small dataset, containing only 8 image pairs for training, and the displacements are typically limited to about 10 pixels.

Figure 2: Examples of optical flow estimation using FlowNetC, MSCSL/wosr, MSCSL/wor and MSCSL on the MPI Sintel dataset (Clean version). Note that our proposed methods perform well for both small and large displacements.

3.2 Implementation Details

3.2.1 Network Architecture

In this part, we introduce the network architecture briefly. The first convolutional layer uses a larger kernel than the second and third convolutional layers. We then use max-pooling and convolutional operations to obtain the multi-scale representations, as illustrated in Fig. 1. The correlation layer is the same as that proposed in [8], with the maximum displacement $d^{l}$ set per scale from top to bottom (or from coarse to fine). Small kernels are employed for the remaining convolutional layers and the deconvolutional layers.

3.2.2 Data Augmentation

To avoid overfitting and improve the generalization ability of the network, we augment the training data by performing random online transformations, including scaling, rotation and translation, as well as additive Gaussian noise, contrast changes, multiplicative color changes to the RGB channels per image, gamma changes and additive brightness changes.
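A purely photometric slice of such an augmentation pipeline might look as follows. The parameter ranges are illustrative assumptions rather than the paper's values, and geometric transforms (scaling, rotation, translation) are omitted since they additionally require transforming the ground-truth flow:

```python
import numpy as np

def photometric_augment(img1, img2, rng):
    """Apply shared random photometric changes to both frames of a pair, plus
    independent per-frame additive noise. Photometric transforms leave the
    ground-truth flow unchanged, unlike geometric ones. Images are (H, W, 3)
    floats in [0, 1]; all ranges below are illustrative, not the paper's."""
    gamma = rng.uniform(0.7, 1.5)
    brightness = rng.normal(0.0, 0.1)
    contrast = rng.uniform(0.8, 1.25)
    colour = rng.uniform(0.9, 1.1, size=3)   # multiplicative, per RGB channel
    out = []
    for img in (img1, img2):
        x = img ** gamma
        x = (x - 0.5) * contrast + 0.5 + brightness
        x = x * colour
        x = x + rng.normal(0.0, 0.02, size=x.shape)  # additive Gaussian noise
        out.append(np.clip(x, 0.0, 1.0))
    return out

rng = np.random.default_rng(0)
img1 = rng.uniform(size=(64, 64, 3))
img2 = rng.uniform(size=(64, 64, 3))
aug1, aug2 = photometric_augment(img1, img2, rng)
```

Sharing the photometric parameters across the two frames preserves the brightness constancy relation that the reconstruction loss relies on.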

3.2.3 Training Details

We implement our architecture using Caffe [12] and use an NVIDIA TITAN X GPU to train the network. To verify our proposed framework, we conduct three comparison experiments: (1) MSCSL/wosr, which contains neither the proposed Spatial Conv-GRU nor the reconstruction loss, and uses the refinement network proposed in [8] to predict dense optical flow; (2) MSCSL/wor, which employs the Spatial Conv-GRU (implemented by unfolding the recurrent model in the prototxt file) to encode the correspondence maps for dense optical flow estimation, and demonstrates the effectiveness of the Spatial Conv-GRU in comparison to MSCSL/wosr; (3) MSCSL, which contains all the parts mentioned above (Spatial Conv-GRU and reconstruction loss).

For MSCSL/wosr and MSCSL/wor, we train the networks on the Flying Chairs training dataset using Adam optimization. To tackle exploding gradients, we adopt the same strategy as proposed in [8]: we first use a low learning rate for the initial iterations, then increase the learning rate for the following iterations, and afterwards divide it at fixed intervals until the training terminates.

For MSCSL, we first train MSCSL/wor using the training strategy above. After that, we add the reconstruction loss with the balance parameter $\lambda$, and then fine-tune the network with a small fixed learning rate.

After training the three networks on the Flying Chairs training dataset respectively, we fine-tune them on the MPI Sintel training dataset for tens of thousands of iterations with a small fixed learning rate until the networks converge. Specifically, we fine-tune on the Clean version and the Final version together, holding out part of the image pairs for validation. Since the KITTI 2012 and Middlebury datasets are small and only contain sparse ground truth, we do not conduct fine-tuning on these two datasets.

3.3 Comparison to State-of-the-Art

In this section, we compare our proposed methods to recent state-of-the-art approaches, including traditional methods such as EpicFlow [14], DeepFlow [16], FlowFields [2], EPPM [5], LDOF [6] and DenseFlow [17], and deep learning based methods such as FlowNetS [8], FlowNetC [8] and SPyNet [13]. Table 1 shows the performance comparison between our proposed methods and the state-of-the-art using average endpoint error (EPE). Since we mainly focus on deep learning based methods, we compare our proposed methods primarily with learning-based frameworks such as FlowNet and SPyNet.

Flying Chairs. For all three comparison experiments, we train our networks on this dataset first, and then employ the MPI Sintel dataset to fine-tune them further. Table 1 shows that MSCSL outperforms the other two variants, MSCSL/wosr and MSCSL/wor. Furthermore, our proposed methods achieve performance comparable with the state-of-the-art methods. After fine-tuning, most learning based methods suffer from performance decay on this dataset, mostly because of the disparity between the Flying Chairs and MPI Sintel datasets. Some visual estimation results on this dataset are shown in Fig. 3.

MPI Sintel. After training on Flying Chairs, we fine-tune the trained models on this dataset. The models trained only on Flying Chairs are evaluated on the Sintel training dataset; the results in Table 1 demonstrate that MSCSL and MSCSL/wor generalize better than MSCSL/wosr and other learning based approaches. To further verify our proposed methods, we compare them with FlowNetS, FlowNetC and SPyNet on the MPI Sintel test dataset for different velocities and distances from motion boundaries, as described in Table 2. As shown in Table 1 and Table 2, our proposed methods perform better than other deep learning based methods. However, in the regions with the largest velocities the proposed methods are less accurate than FlowNetC, and in those with the smallest velocities they are less accurate than SPyNet. Some visual results are shown in Fig. 2.

KITTI 2012 and Middlebury. These two datasets are small, so we do not fine-tune the models on them. We evaluate the trained models on the KITTI 2012 training dataset, the KITTI 2012 test dataset and the Middlebury training dataset respectively. Table 1 shows that our proposed methods outperform other deep learning based approaches remarkably on the KITTI 2012 dataset (both training and test sets). However, on the Middlebury training dataset, which mainly contains small displacements, our proposed methods in most cases do not perform as well as SPyNet.

Analysis. The results of our framework are smoother and more fine-grained. Specifically, our framework is capable of capturing the motion information of fine-grained object parts, as well as preserving edge information. Meanwhile, our Spatial Conv-GRU can suppress the noise present in the results of the model without it. All these insights can be observed in Fig. 3 and Fig. 2. However, our proposed frameworks are incapable of effectively capturing the correspondence structure, and are unstable, in regions where the texture is uniform (e.g., on the Middlebury dataset).

Timings. In Table 1, we show the per-frame runtimes of different approaches. Traditional methods are often implemented on a single CPU, while deep learning based methods tend to run on a GPU, so we only compare runtimes with FlowNetS, FlowNetC and SPyNet. The results in Table 1 demonstrate that our proposed methods (run on an NVIDIA TITAN X GPU) improve accuracy at a speed comparable to the state-of-the-art.

4 Conclusion

In this paper, we propose a novel end-to-end multi-scale correspondence structure learning approach based on deep learning for optical flow estimation. The proposed MSCSL learns the correspondence structure and models the multi-scale dependency in a unified end-to-end deep learning framework, and it outperforms state-of-the-art deep learning based approaches while remaining computationally efficient. The experimental results on several benchmark datasets demonstrate the effectiveness of the proposed framework.

Figure 3: Examples of optical flow prediction on the Flying Chairs dataset. Compared with MSCSL/wosr, the results of MSCSL/wor and MSCSL are smoother and finer.


This work was supported in part by the National Natural Science Foundation of China under Grant U1509206 and Grant 61472353, in part by the Alibaba-Zhejiang University Joint Institute of Frontier Technologies.


  • [1] Ahmadi, A., and Patras, I. Unsupervised convolutional neural networks for motion estimation. In Image Processing (ICIP), 2016 IEEE International Conference on (2016), IEEE, pp. 1629–1633.
  • [2] Bailer, C., Taetz, B., and Stricker, D. Flow fields: Dense correspondence fields for highly accurate large displacement optical flow estimation. In Proceedings of the IEEE International Conference on Computer Vision (2015), pp. 4015–4023.
  • [3] Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M. J., and Szeliski, R. A database and evaluation methodology for optical flow. International Journal of Computer Vision 92, 1 (2011), 1–31.
  • [4] Ballas, N., Yao, L., Pal, C., and Courville, A. Delving deeper into convolutional networks for learning video representations. arXiv preprint arXiv:1511.06432 (2015).
  • [5] Bao, L., Yang, Q., and Jin, H. Fast edge-preserving patchmatch for large displacement optical flow. In

    Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition

    (2014), pp. 3534–3541.
  • [6] Brox, T., and Malik, J. Large displacement optical flow: descriptor matching in variational motion estimation. IEEE transactions on pattern analysis and machine intelligence 33, 3 (2011), 500–513.
  • [7] Butler, D. J., Wulff, J., Stanley, G. B., and Black, M. J. A naturalistic open source movie for optical flow evaluation. In European Conference on Computer Vision (2012), Springer, pp. 611–625.
  • [8] Dosovitskiy, A., Fischery, P., Ilg, E., Hazirbas, C., Golkov, V., van der Smagt, P., Cremers, D., Brox, T., et al. Flownet: Learning optical flow with convolutional networks. In 2015 IEEE International Conference on Computer Vision (ICCV) (2015), IEEE, pp. 2758–2766.
  • [9] Geiger, A., Lenz, P., and Urtasun, R. Are we ready for autonomous driving? the kitti vision benchmark suite. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on (2012), IEEE, pp. 3354–3361.
  • [10] Jaderberg, M., Simonyan, K., Zisserman, A., et al. Spatial transformer networks. In Advances in Neural Information Processing Systems (2015), pp. 2017–2025.
  • [11] Jason, J. Y., Harley, A. W., and Derpanis, K. G. Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness. In Computer Vision–ECCV 2016 Workshops (2016), Springer, pp. 3–10.
  • [12] Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., and Darrell, T. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia (2014), ACM, pp. 675–678.
  • [13] Ranjan, A., and Black, M. J. Optical flow estimation using a spatial pyramid network. arXiv preprint arXiv:1611.00850 (2016).
  • [14] Revaud, J., Weinzaepfel, P., Harchaoui, Z., and Schmid, C. Epicflow: Edge-preserving interpolation of correspondences for optical flow. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1164–1172.
  • [15] Siam, M., Valipour, S., Jagersand, M., and Ray, N. Convolutional gated recurrent networks for video segmentation. arXiv preprint arXiv:1611.05435 (2016).
  • [16] Weinzaepfel, P., Revaud, J., Harchaoui, Z., and Schmid, C. Deepflow: Large displacement optical flow with deep matching. In Proceedings of the IEEE International Conference on Computer Vision (2013), pp. 1385–1392.
  • [17] Yang, J., and Li, H. Dense, accurate optical flow estimation with piecewise parametric model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015), pp. 1019–1027.
  • [18] Zhu, Y., Lan, Z., Newsam, S., and Hauptmann, A. G. Guided optical flow learning. arXiv preprint arXiv:1702.02295 (2017).